OpenAI gives developers better control over ChatGPT responses
OpenAI announced changes to its File Search system last week, giving developers more control over how its artificial intelligence (AI) chatbots retrieve answers. The improvement has been added to ChatGPT's Application Programming Interface (API) and allows developers both to inspect the chatbot's answer retrieval process and to fine-tune its behaviour. This way, developers can ensure that only the answers they want are returned. Notably, an earlier report had claimed that the company is planning to launch another AI model, dubbed 'Strawberry', which could improve ChatGPT's mathematics and logical reasoning.
OpenAI Improves ChatGPT API for Developers
The AI company announced the changes to the API in a post on X (formerly known as Twitter). Essentially, the upgrade improves the File Search controls in the Assistants API, allowing developers to review the results selected by the chatbot and make further adjustments based on their requirements.
The API differs from the consumer-facing ChatGPT website and apps. While the interface that end users see is refined by OpenAI and set to behave in a certain way, developers building internal tools for businesses or integrating the chatbot into other apps and software need more freedom.
This is because the public version of ChatGPT is configured for general-purpose use, while an API integration typically serves one specific function. To excel at that function, developers need the AI to make as few mistakes as possible and return the highest-quality answers.
Until now, developers had no way to tune the API so that the chatbot generated answers relevant to their specific use cases. With the new control options, this changes. OpenAI, in its support page, explained how this will work.
Developers can now inspect File Search responses. The File Search tool in the Assistants API chooses the pieces of information it deems relevant for a given query. Developers can now audit the results the AI chose and review the information it retrieved in previous runs, which should give them more insight into how the tool works.
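As a rough illustration, the snippet below sketches how such an inspection might look with OpenAI's Python SDK, based on the run-steps endpoint described in the announcement. The thread and run IDs are placeholders, and the exact `include` path and result fields should be verified against OpenAI's current documentation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assume an assistant run that used the File Search tool has already finished;
# the IDs below are placeholders for that thread and run.
run_steps = client.beta.threads.runs.steps.list(
    thread_id="thread_abc123",
    run_id="run_abc123",
    include=["step_details.tool_calls[*].file_search.results[*].content"],
)

# Walk the steps and print the chunks File Search actually retrieved,
# so their relevance to the query can be audited.
for step in run_steps.data:
    details = step.step_details
    if details.type != "tool_calls":
        continue
    for call in details.tool_calls:
        if call.type != "file_search":
            continue
        for result in call.file_search.results:
            print(result.file_name, result.score)
            for chunk in result.content or []:
                print(chunk.text[:200])  # preview the retrieved text
```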
Furthermore, developers can customize the settings of the result ranker that is used to sift through the information before responses are generated. By setting a score threshold between 0.0 and 1.0, they can determine which retrieved information the AI keeps and which it ignores.
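A minimal sketch of what configuring the ranker might look like is shown below, assuming the `ranking_options` fields described in OpenAI's File Search documentation at the time; the vector store ID is a placeholder and field names should be checked against the current API reference.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical assistant configured with a stricter File Search ranker.
# "vs_abc123" is a placeholder vector store ID.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Answer questions using only the attached documents.",
    tools=[{
        "type": "file_search",
        "file_search": {
            "ranking_options": {
                "ranker": "auto",        # let OpenAI choose the ranker version
                "score_threshold": 0.6,  # drop chunks scoring below 0.6
            }
        },
    }],
    tool_resources={"file_search": {"vector_store_ids": ["vs_abc123"]}},
)
```

In this setup, a higher threshold trades recall for precision: fewer chunks reach the model, but the ones that do are more likely to be relevant to the query.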