OpenAI, Google DeepMind employees demand right to warn about AI risks
OpenAI and Google DeepMind are among the top tech companies leading the way in building artificial intelligence (AI) systems and capabilities. However, several current and former employees of these organizations have signed an open letter alleging that there is little to no oversight in the building of these systems and that not enough attention is paid to the significant risks the technology carries. The open letter, which has been endorsed by two of the three “godfathers” of AI, Geoffrey Hinton and Yoshua Bengio, calls on their employers to adopt stronger whistleblower protection policies.
OpenAI and Google DeepMind Employees Demand Right to Warn About AI
The open letter states that it was written by current and former employees at major AI companies who believe in the potential of AI to deliver unprecedented benefits to humanity. It also points out the risks the technology poses, including exacerbating societal inequalities, spreading misinformation and enabling manipulation, and even the loss of control of AI systems, which could lead to human extinction.
The open letter highlights that the self-governance structure implemented by these tech giants is not effective in ensuring oversight of these risks. It also claims that “strong financial incentives” further encourage companies to overlook the potential danger that AI systems can pose.
The open letter argues that AI companies are already aware of the capabilities, limitations, and risk levels of various types of harm from AI, and questions their willingness to take corrective action. “They currently have only weak obligations to share some of this information with governments, and none with civil society. We do not believe we can trust them to share it all voluntarily,” the letter reads.
The signatories make four demands of their employers. First, the employees want companies not to enter into or enforce agreements that prohibit criticism of risk-related concerns. Second, they ask for a verifiably anonymous process through which current and former employees can report risk-related concerns to the company’s board of directors, to regulators, and to an appropriate independent organization.
The employees also urge the organizations to foster a culture of open criticism. Finally, the open letter emphasizes that employers should not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.
In total, 13 current and former employees of OpenAI and Google DeepMind have signed the letter. In addition to the two “godfathers” of AI, British computer scientist Stuart Russell has also endorsed the move.
Former OpenAI employee speaks out about AI risks
One of the former OpenAI employees who signed the open letter, Daniel Kokotajlo, also published a series of posts on X (formerly known as Twitter) describing his experience at the company and the risks of AI. He alleged that when he resigned, he was asked to sign a non-disparagement clause that would prevent him from saying anything critical about the company. He also alleged that the company threatened him with the forfeiture of his vested equity when he refused to sign the agreement.
Kokotajlo claimed that the neural networks underlying AI systems are growing rapidly in capability as they are trained on ever-larger datasets. He added that there are no adequate measures in place to monitor the risks.
“There is still much we do not understand about how these systems work and whether they will remain aligned with human interests as they become smarter and potentially surpass human-level intelligence in all domains,” he added.
Notably, OpenAI is building Model Spec, a document the company hopes will provide better guidance on building ethical AI technology. It also recently established a Safety and Security Committee. Kokotajlo applauded these commitments in one of his posts.