The paradox at the heart of Elon Musk’s OpenAI lawsuit

It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Mr. Musk sued OpenAI this week, accusing the company of breaching the terms of its founding agreement and violating its founding principles. In his telling, OpenAI was founded as a nonprofit that would build powerful AI systems for the good of humanity and give its research away freely to the public. But Mr. Musk claims OpenAI broke that promise by starting a for-profit subsidiary that took on billions of dollars in investment from Microsoft.

An OpenAI spokeswoman declined to comment on the lawsuit. In a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Mr. Musk’s claims, saying: “We believe the allegations in this lawsuit may arise from Elon’s regret that he is not involved with the company today,” according to a copy of the memo I reviewed.

On one level, the lawsuit smacks of a personal beef. Mr. Musk, who co-founded OpenAI with a group of other tech heavyweights in 2015 and provided much of its initial funding before leaving in 2018 amid disputes with its leadership, is upset at being sidelined in conversations about AI. His own AI projects haven’t gotten as much traction as ChatGPT, OpenAI’s flagship chatbot. And Mr. Musk’s feud with OpenAI’s chief executive, Sam Altman, is well documented.

But amid all the animus, there’s a point worth taking seriously, because it illustrates a paradox at the heart of much of today’s AI conversation, and a place where OpenAI has been talking out of both sides of its mouth: insisting both that its AI systems are incredibly powerful and that they come nowhere close to matching human intelligence.

The claim centers on a term known as AGI, or “artificial general intelligence.” Defining AGI is notoriously difficult, although most people agree it means an AI system that can do most or all of the things the human brain can do. Mr. Altman has described AGI as “the equivalent of an average human you could hire as a colleague,” while OpenAI itself defines AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”

Most leaders of AI companies argue that not only is it possible to build AGI, but that it is imminent. Demis Hassabis, the CEO of Google DeepMind, told me in a recent podcast interview that he thought AGI could arrive as soon as 2030. Mr. Altman said that AGI may be only four or five years away.

Building AGI is the explicit goal of OpenAI, and there are many reasons to want to get there first. A true AGI would be an incredibly valuable resource, capable of automating vast amounts of human labor and making a lot of money for its creators. It’s also the kind of shiny, audacious goal that investors like to fund, and that helps AI labs recruit top engineers and researchers.

But AGI could also be dangerous if it outsmarts people, or if it becomes deceptive or no longer aligned with human values. The people who started OpenAI, including Mr. Musk, worried that an AGI would be too powerful to be owned by a single entity, and that if they ever got close to building one, they would need to change the control structure around it to prevent it from causing harm or concentrating too much wealth and power in the hands of one company.

That’s why when OpenAI partnered with Microsoft, it specifically gave the tech giant a license that only applied to “pre-AGI” technologies. (The New York Times has sued Microsoft and OpenAI for using copyrighted work.)

Under the terms of the deal, Microsoft’s license would no longer apply if OpenAI ever built anything that met the definition of AGI – as determined by OpenAI’s nonprofit board. OpenAI’s board could decide to do whatever it wanted to ensure that OpenAI’s AGI benefited all of humanity. That could mean many things, including open sourcing the technology or shutting it down completely.

Most AI commentators believe that today’s advanced AI models do not qualify as AGI because they lack sophisticated reasoning skills and often make stupid errors.

But in his legal filing, Mr. Musk makes an unusual argument: he claims that OpenAI has already achieved AGI with its GPT-4 language model, released last year, and that the company’s future technology will qualify as AGI even more clearly.

“Based on information and belief, GPT-4 is an AGI algorithm, and therefore expressly beyond the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint reads.

What Mr. Musk is arguing here is a bit complicated. In short, he says that because OpenAI has achieved AGI with GPT-4, it should no longer license it to Microsoft, and the board should make the technology and research more freely available.

His complaint cites the now infamous “Sparks of AGI” paper from a Microsoft research team last year, which argued that GPT-4 showed early indications of general intelligence, including signs of human-level reasoning.

But the complaint also notes that OpenAI’s board is unlikely to decide that its AI systems actually qualify as AGI, because once it does, it will have to make major changes to the way it deploys and profits from the technology.

Moreover, he notes that Microsoft — which now has a non-voting observer seat on OpenAI’s board, following an upheaval last year that resulted in Mr. Altman’s temporary resignation — has a strong incentive to deny that OpenAI’s technology qualifies as AGI. That would end its license to use that technology in its products, potentially jeopardizing huge profits.

“Given Microsoft’s enormous financial interest in keeping the gates closed to the public, OpenAI, Inc.’s new captured, conflicted, and compliant board will have every reason to delay ever finding that OpenAI has attained AGI,” the complaint reads. “To the contrary, OpenAI’s attainment of AGI, like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

Considering his track record of questionable lawsuits, it’s easy to question Mr. Musk’s motives here. And as the head of a competing AI startup, it’s not surprising that he wants to drag OpenAI into messy lawsuits. But his lawsuit highlights a real conundrum for OpenAI.

Like its competitors, OpenAI is eager to be seen as a leader in the race to build AGI, and it has a vested interest in convincing investors, business partners and the public that its systems are improving at breakneck speed.

But because of the terms of the deal with Microsoft, OpenAI’s investors and executives may not want to admit that the technology actually qualifies as AGI, if and when it does.

That has put Mr. Musk in the strange position of having to ask a jury to rule on what constitutes AGI, and to decide whether OpenAI’s technology has reached the threshold.

The lawsuit has also put OpenAI in the strange position of downplaying the capabilities of its own systems while continuing to fuel expectations that a major AGI breakthrough is just around the corner.

“GPT-4 is not AGI,” OpenAI’s Mr. Kwon wrote in the memo to employees on Friday. “It is capable of solving small tasks in many jobs, but the ratio of the work done by a human to the work done by GPT-4 in the economy remains staggeringly high.”

The personal vendetta fueling Mr. Musk’s complaint has led some people to view it as a frivolous lawsuit (one commentator compared it to “suing your ex for renovating the house after your divorce”) that will quickly be dismissed.

But even if it’s dismissed, Mr. Musk’s lawsuit points to important questions: Who gets to decide when something qualifies as AGI? Are tech companies exaggerating or sandbagging (or both) when it comes to describing how capable their systems are? And what incentives underlie the different claims about how close or far from AGI we might be?

A lawsuit from a resentful billionaire is probably not the right way to resolve these questions. But they’re worth asking, especially as progress in AI continues to accelerate.
