
OpenAI may have overlooked safety and security protocols for GPT-4o

OpenAI has been at the forefront of the artificial intelligence (AI) boom with its ChatGPT chatbot and advanced large language models (LLMs), but the company's safety record has raised concerns. A new report alleges that the AI company rushed through and neglected safety and security protocols while developing new models. The report highlights that this negligence occurred ahead of the launch of OpenAI's latest GPT-4 Omni (GPT-4o) model.

Several anonymous OpenAI employees recently signed an open letter expressing their concerns about the lack of oversight in the building of AI systems. Notably, the AI company has also created a new Safety and Security Committee, made up of select board members and directors, to evaluate and develop new protocols.

OpenAI allegedly neglects security protocols

Three anonymous OpenAI employees told The Washington Post that the team felt pressured to speed through a new testing protocol designed to "prevent the AI system from causing catastrophic damage" in order to meet the May launch date set by OpenAI leaders.

These protocols are specifically designed to ensure that AI models do not provide harmful information, such as details on developing chemical, biological, radiological, and nuclear (CBRN) weapons, and do not help carry out cyberattacks.

The report further noted that this incident occurred before the launch of GPT-4o, which the company touted as its most advanced AI model. "They planned the launch afterparty before they knew if it was safe to launch. We basically failed the process," the report quoted an unnamed OpenAI employee as saying.

This isn’t the first time that OpenAI employees have signaled an apparent disregard for safety and security protocols at the company. Last month, several former and current employees of OpenAI and Google DeepMind signed an open letter raising concerns about the lack of oversight in building new AI systems that could pose significant risks.

The letter called for government intervention and regulatory mechanisms, as well as strong whistleblower protections from employers. Two of the three godfathers of AI, Geoffrey Hinton and Yoshua Bengio, signed the open letter.

In May, OpenAI announced the creation of a new Safety and Security Committee, tasked with evaluating and further developing the AI company's processes and safeguards regarding "critical safety and security decisions for OpenAI projects and operations." The company also recently shared new guidelines for building a responsible and ethical AI model, called Model Spec.
