OpenAI may struggle to significantly improve its next AI model

OpenAI is rumored to be working on the next generation of its flagship large language model (LLM), but the effort may have hit a sticking point. According to a report, the San Francisco-based AI company is struggling to significantly upgrade the capabilities of its next AI model, internally codenamed Orion. The model is said to outperform older models on language-based tasks, but it is underwhelming at others, such as coding. The company is also reportedly struggling to collect enough training data to properly train its models.

OpenAI’s Orion AI model reportedly shows no significant improvements

The Information reported that the AI company’s next big LLM, Orion, is not performing up to expectations on coding-related tasks. Citing unnamed employees, the report claimed that the model shows a significant upgrade on language-based tasks, but remains underwhelming in other areas.

This is considered a major problem, as Orion is reportedly more expensive to run in OpenAI’s data centers than older models such as GPT-4 and GPT-4o. The upcoming LLM’s cost-to-performance ratio could make it harder for the company to position the model attractively to businesses and subscribers.

Furthermore, the report claimed that the overall quality jump from GPT-4 to Orion is smaller than the jump from GPT-3 to GPT-4. This is a worrying development for OpenAI, but the same trend has also been noted in recently released AI models from competitors such as Anthropic and Mistral.

For example, the benchmark scores of Claude 3.5 Sonnet show that quality gains are becoming more iterative with each new foundation model. However, competitors have largely avoided scrutiny by shifting the focus to new capabilities such as agentic AI.

The report also highlights that, to address this challenge, the industry is opting to improve AI models after their initial training is complete. This can be done by fine-tuning the model or by adding additional filters to its output. However, these are workarounds that do not compensate for the limitations of the underlying architecture or the lack of sufficient training data.
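As a loose illustration only, the Python sketch below shows what rule-based output filtering after training can look like. All names, patterns, and functions here are hypothetical and are not drawn from OpenAI’s actual pipeline.

```python
import re

# Hypothetical sketch of post-training output filtering: the base
# model's text is generated first, then screened by rule-based
# filters before being returned to the user.

BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bplaceholder\b"),  # hypothetical pattern to flag
    re.compile(r"(?i)\bas an ai\b"),     # hypothetical boilerplate phrase
]

def generate_raw(prompt: str) -> str:
    """Stand-in for a call to the underlying LLM (hypothetical)."""
    return f"Model output for: {prompt}"

def filtered_generate(prompt: str) -> str:
    """Generate text, then apply post-hoc filters to the raw output."""
    text = generate_raw(prompt)
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            # A real system might re-sample or edit; this sketch
            # simply withholds the response for brevity.
            return "[response withheld by output filter]"
    return text

if __name__ == "__main__":
    print(filtered_generate("Write a sorting function"))
```

The point of such filters is that they act on the model’s output rather than its weights, which is why they can be bolted on after training but cannot fix shortcomings baked into the model itself.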

While the architectural limitation is largely a technological and research challenge, the data shortage comes down to the availability of free and licensed data. To solve the latter, OpenAI has reportedly created a foundation team tasked with finding ways to deal with the lack of training data. However, it remains unclear whether this team will be able to source more data in time to further train and improve Orion’s capabilities.
