
The fear and tension that led to Sam Altman’s ouster from OpenAI


Over the past year, Sam Altman led OpenAI to the technology industry’s adult table. Thanks to its wildly popular ChatGPT chatbot, the San Francisco startup found itself at the center of an artificial intelligence boom, and OpenAI CEO Mr. Altman had become one of the most recognizable people in tech.

But that success caused tensions within the company. Ilya Sutskever, a respected AI researcher who co-founded OpenAI with Mr. Altman and nine other people, became increasingly concerned that OpenAI’s technology could be dangerous and that Mr. Altman was not paying enough attention to that risk, according to three people familiar with his thinking. Mr. Sutskever, a member of the company’s board of directors, also took issue with what he saw as his diminished role within the company, according to two of the people.

That conflict between rapid growth and AI safety came into focus Friday afternoon, when Mr. Altman was ousted from his job by four of OpenAI’s six board members, led by Mr. Sutskever. The move shocked OpenAI employees and the rest of the tech industry, including Microsoft, which has invested $13 billion in the company. Some industry insiders said the split was as big as when Steve Jobs was forced out of Apple in 1985.

The ouster of the 38-year-old Mr. Altman drew attention to a longstanding divide in the AI community between people who believe AI is the greatest business opportunity in a generation and others who worry that moving too quickly could be dangerous. And the ouster showed how a philosophical movement built on the fear of AI had become an unavoidable part of tech culture.

Since ChatGPT was released almost a year ago, artificial intelligence has captured the public’s imagination, with hopes that it can be used for important work such as drug research or to help teach children. But some AI scientists and political leaders worry about its risks, such as automated job losses or autonomous warfare that goes beyond human control.

The fear that AI researchers could be building something dangerous has been a fundamental part of OpenAI’s culture. Its founders believed that because they understood those risks, they were the right people to build the technology.

OpenAI’s board did not give a specific reason for pushing Mr. Altman out, other than to say in a blog post that it did not believe he was communicating honestly with it. OpenAI employees were told Saturday morning that his removal had nothing to do with “malfeasance or anything related to our financial, business, safety or security/privacy practices,” according to a report from The New York Times.

Greg Brockman, another co-founder and president of the company, quit in protest Friday evening. OpenAI’s research director did the same. By Saturday morning, the company was in chaos, according to half a dozen current and former employees, and its roughly 700 employees struggled to understand why the board was taking the step.

“I’m sure you’re all feeling confusion, sadness, and perhaps some fear,” Brad Lightcap, OpenAI’s chief operating officer, said in a memo to OpenAI employees. “We are fully focused on handling this, striving for resolution and clarity, and getting back to work.”

Mr. Altman was asked to participate in a board meeting via video at noon on Friday in San Francisco. There, Mr. Sutskever, 37, read from a script that closely resembled the blog post the company published minutes later, according to a person familiar with the matter. The post stated that Mr. Altman “was not consistently forthcoming in his communications with the board, hindering the board’s ability to carry out its responsibilities.”

But in the hours that followed, OpenAI employees and others focused not only on what Mr. Altman may have done, but also on the way the San Francisco startup is structured and the extreme views on the dangers of AI that have been embedded in the company’s work since it was founded in 2015.

Mr. Sutskever and Mr. Altman could not be reached for comment on Saturday.

In recent weeks, Jakob Pachocki, who helped oversee GPT-4, the technology at the heart of ChatGPT, was promoted to director of research at the company. He had previously worked under Mr. Sutskever but was elevated to a position alongside him, two people familiar with the matter said.

Mr. Pachocki quit the company late Friday, the people said, shortly after Mr. Brockman. Earlier in the day, OpenAI said Mr. Brockman had been removed as chairman of the board and would report to the new interim chief executive, Mira Murati. Other allies of Mr. Altman, including the senior researchers Szymon Sidor and Aleksander Madry, have also left the company.

Mr. Brockman said in a message on X, formerly Twitter, that although he was chairman of the board, he was not part of the board meeting where Mr. Altman was ousted. That left Mr. Sutskever and three other board members: Adam D’Angelo, CEO of the question-and-answer site Quora; Tasha McCauley, deputy senior management scientist at the RAND Corporation; and Helen Toner, director of strategy and fundamental research grants at Georgetown University’s Center for Security and Emerging Technology.

They could not be reached for comment on Saturday.

Ms. McCauley and Ms. Toner have ties to the Rationalist and Effective Altruist movements, a community deeply concerned that AI could one day destroy humanity. Today’s AI technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will emerge.

In 2021, a researcher named Dario Amodei, who also has ties to this community, and about 15 other OpenAI employees left the company to form a new AI company called Anthropic.

Mr. Sutskever increasingly aligned himself with those beliefs. Born in the Soviet Union, he spent his formative years in Israel and emigrated to Canada as a teenager. As a student at the University of Toronto, he helped pioneer a breakthrough in an AI technology called neural networks.

In 2015, Mr. Sutskever left a job at Google and helped found OpenAI alongside Mr. Altman, Mr. Brockman and Tesla’s chief executive, Elon Musk. They built the lab as a nonprofit and said that, unlike Google and other companies, it would not be driven by commercial incentives. They vowed to build what is called artificial general intelligence, or A.G.I., a machine that can do anything the brain can do.

Mr. Altman transformed OpenAI into a for-profit company in 2018 and negotiated a $1 billion investment from Microsoft. Such huge sums of money are essential for building technologies like GPT-4, which was released earlier this year. Since the initial investment, Microsoft has injected another $12 billion into the company.

The company continued to be governed by the nonprofit’s board of directors. Investors like Microsoft do receive profits from OpenAI, but their profits are limited. Any money over the limit is returned to the nonprofit.

Seeing the power of GPT-4, Mr. Sutskever helped create a new Super Alignment team within the company that would explore ways to ensure future versions of the technology would not cause harm.

Mr. Altman was receptive to these concerns, but he also wanted OpenAI to stay ahead of its much larger competitors. In late September, Mr. Altman flew to the Middle East to meet with investors, according to two people familiar with the matter. He sought as much as $1 billion in funding from SoftBank, the Japanese tech investor led by Masayoshi Son, for a potential OpenAI venture that would build a hardware device for running AI technologies like ChatGPT.

OpenAI is also in talks for a “tender offer” financing that would allow employees to sell their shares in the company. That deal would value OpenAI at more than $80 billion, nearly triple its valuation of about six months ago.

But the company’s success appears only to have fueled more concerns that something could go wrong with AI.

“It doesn’t seem at all unlikely that we will have computers — data centers — that are much smarter than people,” Mr. Sutskever said on a podcast on November 2. “What would such AIs do? We don’t know.”

Kevin Roose and Tripp Mickle contributed reporting.
