
Before Altman’s ouster, OpenAI’s board was divided and feuding

Before Sam Altman was kicked out of OpenAI last week, he and the company’s board of directors had been bickering for more than a year. The tension worsened when OpenAI became a mainstream name thanks to the popular ChatGPT chatbot.

Mr. Altman, the CEO, recently attempted to kick out one of the board members because he believed a research paper she co-wrote was critical of the company.

Another member, Ilya Sutskever, who is also OpenAI’s chief scientist, felt that Mr. Altman was not always honest in his conversations with the board. And board members worried that Mr. Altman was too focused on expansion while trying to balance that growth with AI safety.

The news that he was being kicked out came during a video conference on Friday afternoon, when Mr. Sutskever, who had worked closely with Mr. Altman at OpenAI for eight years, read him a statement from the board. While the decision stunned OpenAI employees and exposed board members to tough questions about their qualifications to lead such a high-profile company, it was the culmination of long-simmering tension in the boardroom.

The divide also showed how building new AI systems is a test of whether businesspeople looking to make money with artificial intelligence can work with researchers who worry that what they are building could ultimately eliminate jobs or become a threat to humanity if things like autonomous weapons got out of hand.

OpenAI started in 2015 with an ambitious plan to one day create a super-intelligent automated system that could do everything a human brain can do. But OpenAI’s board has long been plagued by friction. It could not even agree on replacements for members who had resigned.

Now the company’s survival is in doubt, largely because of that dysfunction. Nearly all of OpenAI’s 800 employees have threatened to follow Mr. Altman to Microsoft, which asked him to run an AI lab with Greg Brockman, who stepped down from his role as president and chairman of OpenAI in solidarity with Mr. Altman.

The board had told Mr. Brockman that he would no longer be OpenAI’s chairman but invited him to remain with the company, although he was not invited to the meeting where the decision was made to remove him from the board and Mr. Altman from the company.

The board did not say what it thought Mr. Altman had been dishonest about.

There were indications that the board was still open to his return as the board and Mr. Altman held talks that stretched into Tuesday, two people familiar with the talks said. But there was a sticking point: Mr. Altman rejected some of the guardrails proposed to improve his communications with the board. It was not clear what exactly those guardrails would be.

Mr. Sutskever did not respond to a request for comment on Tuesday.

OpenAI’s governance problems can be traced back to the startup’s nonprofit inception. In 2015, Mr. Altman teamed up with Elon Musk and others, including Mr. Sutskever, to create a nonprofit to build AI that was safe and beneficial to humanity. They planned to raise money from private donors for their mission. But within a few years they realized that their computing needs required much more funding than they could obtain from individuals.

After Mr. Musk left in 2018, they founded a for-profit subsidiary that began raising billions of dollars from investors, including $1 billion from Microsoft. They said the subsidiary would be controlled by the nonprofit’s board and that each director’s fiduciary duty would be to “humanity, not to OpenAI investors,” according to OpenAI’s website.

After Mr. Altman was forced out and Mr. Brockman left, the four remaining board members are Mr. Sutskever; Adam D’Angelo, the CEO of Quora, the question-and-answer site; Helen Toner, director of strategy at Georgetown University’s Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and computer scientist.

A few weeks before Mr. Altman’s ouster, he met with Ms. Toner to discuss an article she had recently co-written for Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained that the research paper appeared to criticize OpenAI’s efforts to keep its AI technologies secure while praising Anthropic’s approach, according to an email Mr. Altman wrote to colleagues that was seen by The New York Times.

In the email, Mr. Altman said he had reprimanded Ms. Toner over the paper and that it was dangerous for the company, especially at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended the paper as an academic work that analyzed the challenges the public faces in trying to understand the intentions of the countries and companies developing AI. But Mr. Altman disagreed.

“I didn’t feel like we were on the same page about the harm from all of this,” he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.”

Senior OpenAI leaders, including Mr. Sutskever, who is deeply concerned that AI could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the talks said.

But shortly after those discussions, Mr. Sutskever did the unexpected: He sided with board members to oust Mr. Altman, according to two people familiar with the board’s deliberations. He read Mr. Altman the board’s public statement explaining that he was being fired because he was not “always frank in his communication with the board.”

Mr. Sutskever’s frustration with Mr. Altman echoed what happened in 2021, when another senior AI scientist left OpenAI to found the company Anthropic. That scientist and other researchers had gone to the board to try to push Mr. Altman out. After they failed, they gave up and left, according to three people familiar with the effort.

“After a series of fairly amicable negotiations, Anthropic’s co-founders were able to negotiate their departure on mutually agreeable terms,” Anthropic spokeswoman Sally Aldous said.

Vacancies exacerbated the board’s problems. This year there was disagreement over how to replace three departing board members: Reid Hoffman, the founder of LinkedIn and a Microsoft board member; Shivon Zilis, chief operating officer at Neuralink, a company founded by Mr. Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.

After vetting four candidates for one position, the remaining directors couldn’t agree on who should fill it, the two people familiar with the board’s deliberations said. The standoff hardened the rift between Mr. Altman and Mr. Brockman and other board members.

Hours after Mr. Altman was ousted, OpenAI executives confronted the remaining board members on a video call, according to three people on the call.

During the call, OpenAI Chief Strategy Officer Jason Kwon said the board was jeopardizing the company’s future by pushing Mr. Altman out. Doing so, he said, was contrary to the board members’ responsibilities.

Ms. Toner disagreed. The board’s mission is to ensure that the company creates artificial intelligence that “benefits all humanity,” and if the company were destroyed, she said, that mission could still be fulfilled. In the board’s view, OpenAI would be stronger without Mr. Altman.

On Sunday, at the OpenAI office, Mr. Sutskever was urged to change course by Mr. Brockman’s wife, Anna, according to two people familiar with the exchange. Hours later, he signed a letter with other employees demanding that the independent directors resign. The confrontation between Mr. Sutskever and Ms. Brockman was previously reported by The Wall Street Journal.

He posted a message at 5:15 a.m. on Monday on X, formerly Twitter, saying, “I deeply regret my participation in the board’s actions.”
