OpenAI unveils Realtime API and other improvements for developers
OpenAI hosted its annual DevDay conference in San Francisco on Tuesday and announced several new upgrades to the application programming interface (API) version of ChatGPT, which can be customised and fine-tuned to power other applications and software. Key introductions include the Realtime API, prompt caching, and vision fine-tuning for GPT-4o. The company is also making the model distillation process easier for developers. At the event, OpenAI also announced the completion of its latest funding round, in which it raised $6.6 billion (approximately Rs. 55,000 crore).
OpenAI announces new features for developers
In several blog posts, the AI company highlighted the new features and tools for developers. The first is the Realtime API, which will be available to paying subscribers of the ChatGPT API. This new capability provides a low-latency, multimodal experience and enables speech-to-speech conversations, similar to ChatGPT's Advanced Voice Mode. Developers can also use the six preset voices previously added to the API.
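For illustration, the following is a minimal sketch of how a developer might open a Realtime API session over WebSocket from Python; the endpoint URL, model name, beta header, and event shapes shown here are assumptions and should be checked against OpenAI's Realtime API documentation.

```python
# Minimal sketch of a Realtime API session over WebSocket.
# Endpoint, model name, headers and event fields are assumptions for illustration.
import asyncio
import json
import os
import websockets

async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"  # assumed endpoint
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # assumed beta header
    }
    # Note: newer releases of the websockets library name this kwarg `additional_headers`.
    async with websockets.connect(url, extra_headers=headers) as ws:
        # Ask the model to produce a spoken + text reply (assumed event shape).
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"modalities": ["audio", "text"], "instructions": "Say hello."},
        }))
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"))  # server streams events: audio deltas, transcripts, etc.
            if event.get("type") == "response.done":
                break

asyncio.run(main())
```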
Another new introduction is prompt caching in the API. OpenAI is introducing this feature as a way for developers to save costs on frequently used prompts. The company observed that developers often resend the same input prompts when editing a codebase or holding a multi-turn conversation with the chatbot. With prompt caching, recently used input tokens are now billed at a discounted rate and processed faster. The new rates are listed on OpenAI's pricing page.
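In practice, this means keeping the long, unchanging part of a prompt identical across calls so it can be served from the cache. The sketch below shows the idea with the official openai Python client; the caching threshold and discount are not spelled out here and the file name is hypothetical.

```python
# Sketch: reuse a long, stable prompt prefix across calls so repeated input
# tokens can be served from the prompt cache. Caching is applied automatically
# by the API for sufficiently long repeated prefixes (exact threshold and
# discount are listed on OpenAI's pricing page).
from openai import OpenAI

client = OpenAI()

# A long system prompt / codebase context that stays identical between calls (hypothetical file).
system_prompt = "You are a code reviewer. Here is the project context:\n" + open("context.txt").read()

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},  # identical prefix, eligible for caching
            {"role": "user", "content": question},         # only this part changes per call
        ],
    )
    return response.choices[0].message.content

print(ask("Summarise the main module."))
print(ask("Are there any obvious bugs in the request handler?"))
```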
The GPT-4o model can now also be fine-tuned for vision-related tasks. Developers can customise the large language model (LLM) by training it on a custom set of visual data to improve its performance on such tasks. According to the blog post, GPT-4o's performance on vision tasks can be improved with as few as 100 images.
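A rough sketch of that workflow is shown below: prepare a small JSONL dataset whose examples include images, upload it, and start a fine-tuning job. The JSONL message format, example URLs, and model snapshot name are assumptions for illustration; the exact format is described in OpenAI's fine-tuning guide.

```python
# Sketch: fine-tuning GPT-4o on a small set of labelled images.
import json
from openai import OpenAI

client = OpenAI()

# Each training example is a short chat where the user message contains an image
# and the assistant message contains the desired label or description.
examples = [
    {
        "messages": [
            {"role": "user", "content": [
                {"type": "text", "text": "What street sign is shown?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/sign_001.jpg"}},  # hypothetical image
            ]},
            {"role": "assistant", "content": "A stop sign."},
        ]
    },
    # ... roughly 100 such examples, per the blog post
]

with open("vision_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

training_file = client.files.create(file=open("vision_train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed snapshot name that supports vision fine-tuning
)
print(job.id)
```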
Finally, the company is also making the process of model distillation easier for developers. Model distillation is the process of building smaller, refined AI models based on a larger language model. Previously, the process was complicated and required a multi-step approach. OpenAI now offers new tools such as Stored Completions (to easily generate distillation datasets), Evals (to run custom evaluations and measure performance), and fine-tuning (to refine the smaller models right after running an Eval).
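The sketch below illustrates the overall idea of that workflow: capture the large model's outputs as stored completions, then fine-tune a smaller model on them. The `store` and `metadata` parameters, the file ID, and the smaller model snapshot name are assumptions for illustration; dataset selection and Evals would typically be handled from the OpenAI dashboard.

```python
# Sketch of a distillation workflow: store "teacher" completions from the large
# model, then fine-tune a smaller "student" model on the exported dataset.
from openai import OpenAI

client = OpenAI()

# 1) Generate teacher outputs with the large model and keep them server-side.
for question in ["Explain TCP slow start.", "What is a B-tree?"]:
    client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
        store=True,                               # keep the completion for later reuse (assumed flag)
        metadata={"purpose": "distillation-v1"},  # tag so the batch is easy to filter later
    )

# 2) After exporting the stored completions to a training file (via the dashboard),
#    fine-tune a smaller model on that data.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",      # hypothetical file ID of the exported dataset
    model="gpt-4o-mini-2024-07-18",   # assumed smaller model snapshot
)
print(job.status)
```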
Notably, all of these features are currently in beta and will be rolled out to all developers on the paid version of the API at a later date. Furthermore, the company said it will take steps to further reduce the cost of input and output tokens.