
Why Elon Musk’s OpenAI lawsuit relies on Microsoft AI research


When Elon Musk sued OpenAI and its CEO, Sam Altman, for breach of contract on Thursday, he turned claims from the startup’s closest partner, Microsoft, into a weapon.

He repeatedly cited a controversial but highly influential article written by researchers and top executives at Microsoft about the power of GPT-4, the groundbreaking OpenAI artificial intelligence system released last March.

In the article, “Sparks of AGI,” Microsoft’s research lab said that, although it did not understand how, GPT-4 had shown “sparks” of “artificial general intelligence,” or AGI: a machine that can do everything the human brain can do.

It was a bold claim, and came as the world’s largest tech companies rushed to introduce AI into their own products.

Mr. Musk turned the paper against OpenAI, saying it showed how the company had gone back on its promise not to commercialize truly powerful products.

Microsoft and OpenAI declined to comment on the lawsuit. (The New York Times has sued both companies for copyright infringement over GPT-4 training.) Mr. Musk did not respond to a request for comment.

A team of Microsoft researchers led by Sébastien Bubeck, a 38-year-old French expatriate and former Princeton professor, began testing an early version of GPT-4 in the fall of 2022, months before the technology was released to the public. Microsoft has committed $13 billion to OpenAI and negotiated exclusive access to the underlying technologies that power its AI systems.

As they talked to the system, they were amazed. It wrote a complex mathematical proof in the form of a poem, generated computer code that could draw a unicorn, and explained the best way to stack a random and eclectic collection of household items. Dr. Bubeck and his fellow researchers began to wonder if they were witnessing a new form of intelligence.

“I started out feeling very skeptical, and that evolved into a feeling of frustration, annoyance and maybe even fear,” says Peter Lee, head of research at Microsoft. “You think: where the hell did this come from?”

Mr. Musk argued that OpenAI had breached its contract because it had agreed not to commercialize any product that its board had deemed AGI.

“GPT-4 is an AGI algorithm,” Mr. Musk’s lawyers wrote. They said this meant the system should never have been licensed to Microsoft.

Mr. Musk’s complaint repeatedly cited the Sparks article to claim that GPT-4 was AGI. His lawyers noted that Microsoft’s own scientists had acknowledged that GPT-4 “achieves a form of general intelligence” and that, given “the breadth and depth of GPT-4, we believe that it can reasonably be viewed as an early (but still incomplete) version of an artificial general intelligence (AGI) system.”

The article has had enormous influence since it was published a week after the release of GPT-4.

Thomas Wolf, co-founder of the high-profile AI startup Hugging Face, wrote on X the next day that the study contained “completely stunning examples” of GPT-4.

Microsoft’s research has since been cited by more than 1,500 other articles, according to Google Scholar. It is one of the most cited articles on AI in the past five years, according to Semantic Scholar.

It has also drawn criticism from experts, including some within Microsoft, who worried that the 155-page article was not rigorous enough to support its claim and fueled an AI marketing frenzy.

The paper has not been peer-reviewed and its results cannot be reproduced because it was run on early versions of GPT-4 that were tightly controlled by Microsoft and OpenAI. As the authors noted in the article, they did not use the GPT-4 version that was later released to the public, so anyone who replicated the experiments would get different results.

Some outside experts said it was not clear whether GPT-4 and similar systems exhibited behavior anything like human reasoning or common sense.

“When we see a complicated system or machine, we anthropomorphize it; everyone does that – people who work in the field and people who don’t,” says Alison Gopnik, a professor at the University of California, Berkeley. “But thinking about this as a constant comparison between AI and humans – as some kind of game show competition – is just not the right way to think about it.”

In the article’s introduction, the authors initially defined “intelligence” by quoting a 30-year-old Wall Street Journal op-ed that, in defending the book “The Bell Curve,” claimed that “Jews and East Asians” were likely to have higher IQs than “blacks and Hispanics.”

Dr. Lee, who is listed as an author on the article, said in an interview last year that when the researchers wanted to define AGI, “we took it from Wikipedia.” He said that when they later learned of the Bell Curve connection, “we were really shocked by it and immediately made the change.”

Microsoft chief scientist Eric Horvitz, one of the main contributors to the article, wrote in an email that he personally took responsibility for inserting the reference. He said he had seen it cited in a paper by a co-founder of Google’s DeepMind AI lab and had failed to notice the racist references. When the team learned about this, via a post on X, “we were shocked because we were simply looking for a fairly broad definition of intelligence from psychologists,” he said.

When the Microsoft researchers initially wrote the article, they called it “First Contact with an AGI System.” But some members of the team, including Dr. Horvitz, disagreed with the characterization.

He later told The Times that they were not seeing something he would call “artificial general intelligence,” but rather probes that sometimes produced surprisingly powerful results.

GPT-4 does not do everything the human brain can do.

In a message sent to OpenAI employees Friday afternoon and seen by The Times, OpenAI Chief Strategy Officer Jason Kwon explicitly said that GPT-4 was not AGI.

“It is capable of solving small tasks in many jobs, but the ratio of the work done by a human to the work done by GPT-4 in the economy remains staggeringly high,” he wrote. “Importantly, an AGI will be a highly autonomous system capable of coming up with new solutions to long-standing challenges – GPT-4 cannot do that.”

Still, the article fueled claims from some researchers and experts that GPT-4 marked an important step toward AGI and that companies like Microsoft and OpenAI would continue to improve the technology’s reasoning skills.

The AI field is still bitterly divided over how intelligent the technology is today or soon will be. If Musk gets his way, a jury could settle the debate.
