
Many details of Sam Altman’s ouster are obscure. But some things are clear.


Phones all over Silicon Valley lit up Friday with the same question: What the hell happened to Sam Altman?

The sudden, mysterious ouster of OpenAI CEO Mr. Altman by the company’s board of directors sent shockwaves through the tech world and set off a frantic guessing game over what brought down one of the industry’s biggest stars at a time when everything seemed to be going his way.

I’ll start by saying: I don’t know all the details about why Mr. Altman was pushed out. Neither, it appears, do OpenAI’s shocked employees, investors, and business partners, many of whom learned of the move at the same time as the general public. In a blog post on Friday, the company said Mr. Altman “was not consistently forthcoming in his communications” with the board, but gave no other details.

An all-hands meeting for OpenAI employees on Friday afternoon did not yield much more. Ilya Sutskever, the company’s chief scientist and member of its board of directors, defended the ouster, according to a person briefed on his comments. He rejected employee suggestions that pushing out Mr. Altman amounted to a “hostile takeover” and asserted that it was necessary to protect OpenAI’s mission to make artificial intelligence beneficial to humanity, the person said.

It appears that Mr. Altman was blindsided as well. He recorded an interview for the podcast I co-host, “Hard Fork,” on Wednesday, two days before his firing. During our conversation, he gave no indication that anything was wrong, and he spoke at length about the success of ChatGPT, his plans for OpenAI, and his vision for the future of AI.

Mr. Altman remained silent about the precise circumstances of his departure on Friday. But Greg Brockman — co-founder and president of OpenAI, who quit on Friday in solidarity with Mr. Altman — released a statement saying they were both “shocked and saddened by what the board did today.” Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, Mr. Brockman said.

There will be plenty of palace intrigue in the coming days as the full story emerges. But a few things are already clear.

First, the ouster was possible only because of the unusual nature of OpenAI’s corporate governance structure. OpenAI started as a nonprofit in 2015 and created a capped-profit subsidiary in 2019 — a novel arrangement in which investors’ returns are limited to a certain multiple of their initial investment. But it preserved the nonprofit’s mission and gave the nonprofit’s board the power to govern the capped-profit entity’s operations, including firing its CEO. Unlike some other tech founders, who maintain control of their companies through dual-class share structures, Mr. Altman does not directly own shares in OpenAI.

There are a few more quirks about the OpenAI board. It is small (six members before Friday, and four without Mr. Altman and Mr. Brockman) and includes several AI experts who own no shares in the company. The directors do not have the responsibility to maximize shareholder value, as most corporate boards do, but are instead bound by a fiduciary duty to create “safe AGI” — artificial general intelligence — “that is broadly beneficial.”

At least two of the board members, Tasha McCauley and Helen Toner, have ties to the Effective Altruism movement, a utilitarian-inspired group that has pushed for AI safety research and raised alarms that powerful AI systems could one day lead to human extinction. Another board member, Adam D’Angelo, is the CEO of Quora, a question-and-answer website.

Some of Mr. Altman’s friends and allies accused these board members of staging a “coup” on Friday. But it is still not clear which board members voted to oust Mr. Altman or what their motivations were.

We also know that Mr. Altman’s removal has the potential to roil the entire tech industry. Mr. Altman was one of the best-connected executives in Silicon Valley, thanks to the years he spent running the startup accelerator Y Combinator. Those connections helped OpenAI forge strong ties with other technology companies.

Microsoft in particular has thrown in its lot with OpenAI, investing more than $10 billion in the company and providing much of the technical infrastructure on which products like ChatGPT depend. Earlier this month, Microsoft CEO Satya Nadella appeared on stage at a developer conference with Mr. Altman and said it was “just fantastic to work with you.”

Typically, ties that close would entitle a partner to advance warning of a CEO’s sudden ouster. But Microsoft’s top executives learned of Mr. Altman’s firing only a minute — yes, one minute — before the news went public, according to Axios. On Friday, Mr. Nadella reassured customers that the company’s deal with OpenAI remained intact, but it’s clear the company wants answers about why one of its key strategic partners fired its top executive so abruptly.

The fate of OpenAI also matters to the thousands of developers who build AI products on top of its language models and rely on the company to maintain stable infrastructure. Those developers may not defect to a rival overnight, but if more OpenAI employees quit — at least three senior OpenAI researchers announced they were leaving on Friday, according to The Information — they may be tempted to shop around.

Finally, Mr. Altman’s defenestration will almost certainly fuel the culture war in the AI industry between those who believe AI should be allowed to move faster and those who believe it should be slowed down to prevent potentially catastrophic harm.

This argument, sometimes framed as a fight between “accelerationists” and “doomers,” has flared up in recent months as regulators have begun to circle the AI industry and the technology has grown more powerful. Some prominent accelerationists have argued that big AI companies are lobbying for rules that could make it harder for small startups to compete with them. They accuse the industry’s safety advocates of inflating the risks of AI in order to entrench the incumbents.

Safety advocates, on the other hand, have raised alarms that OpenAI and other companies are moving too quickly to build powerful AI systems and ignoring voices of caution. And some skeptics have accused these companies of stealing copyrighted works from artists, writers, and others to train their models.

Mr. Altman was always careful to walk the line between optimism and worry — he made it clear that he believed AI would ultimately be beneficial to humanity, while also agreeing that guardrails and thoughtful design were needed to keep it safe.

Some version of this argument has been playing out among OpenAI’s staff for years. In 2020, a group of OpenAI employees quit over concerns that the company was becoming too commercial and was sidelining safety research in favor of lucrative deals. (They went on to found the rival AI lab Anthropic.) And several current and former OpenAI employees told me that some staffers believed Mr. Altman and Mr. Brockman could be too aggressive when it came to launching new products.

None of this is necessarily related to why Mr. Altman was kicked out. But it’s certainly a harbinger of a battle likely to come.

During our interview on Wednesday, Mr. Altman said he considered himself something of a centrist in the AI debate.

“I believe this will be the most important and useful technology humanity has ever invented. And I also believe that if we’re not careful with it, it can be quite disastrous, and that’s why we have to handle it with care.”

He added, “I think you want the CEO of this company to be somewhere in the middle, and I think I am.”
