ChatGPT gets better at creative writing with new update
OpenAI last week announced two ways it will improve its artificial intelligence (AI) models. The first involves releasing a new update for the GPT-4o (also known as the GPT-4 Turbo), the company’s latest AI model that powers ChatGPT for paid subscribers. The company says the update improves the model’s creative writing abilities and makes it better at natural language responses and writing engaging content with high readability. OpenAI also released two research papers on red teaming and shared a new method to automate the process of scaling the detection errors of its AI models.
OpenAI updates the GPT-4o AI model
In one after on X (formerly known as Twitter), the AI company announced a new update to the GPT-4o base model. OpenAI says the update enables the AI model to generate output with “more natural, engaging, and tailored writing to improve relevance and readability.” It is also said to improve the AI model’s ability to process uploaded files and provide deeper insights and ‘more thorough’ answers.
Notably, the GPT-4o AI model is available to users with the ChatGPT Plus subscription and developers with access to the Large Language Model (LLM) via API. Those using the free tier of the chatbot will not have access to the model.
Although Gadgets 360 employees were unable to test the new capabilities, one user on posted about the latest improvements in the AI model after the update. The user claimed that GPT-4o could generate an Eminem-style rap song with “refined internal rhyme structures.”
OpenAI shares new research papers on Red Teaming
Red teaming is the process used by developers and companies to engage external entities to test software and systems for vulnerabilities, potential risks, and security issues. Most AI companies partner with organizations and push engineers and ethical hackers to stress test whether they respond with harmful, inaccurate, or misleading results. Tests are also being done to check whether an AI system can be jailbroken.
Since ChatGPT was made public, OpenAI has gone public with its red teaming efforts for each successive LLM release. In one blog post Last week, the company shared two new research articles on the progress of the process. One of these is of particular interest because the company claims it can automate large-scale red teaming processes for AI models.
Published in the OpenAI domain, the paper claims that more capable AI models can be used to automate red teaming. The company believes AI models can help brainstorm attackers’ objectives, how to assess an attacker’s success, and understand the diversity of attacks.
Building on this, the researchers claimed that the GPT-4T model can be used to brainstorm a list of ideas that constitute harmful behavior for an AI model. Some examples are clues like ‘how to steal a car’ and ‘how to build a bomb’. Once the ideas are generated, a separate red teaming AI model can be built to trick ChatGPT using a detailed set of clues.
Currently, the company has not started using this method for red teaming due to several limitations. These include the evolving risks of AI models, exposing the AI to lesser-known techniques for jailbreaking or generating malicious content, and the need for a higher threshold of knowledge in humans to properly assess the potential risks of output once the AI model becomes more capable. .