In Battle Over AI, Meta Decides to Give Away Its Crown Jewels

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: it decided to give away its AI crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an AI technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta were able to download the code once the company had vetted them.

Essentially, Meta gave away its AI technology as open-source software — computer code that can be freely copied, modified, and reused — giving outsiders everything they need to quickly build their own chatbots.

“The platform that will win is the open platform,” Yann LeCun, chief AI scientist at Meta, said in an interview.

As the race to lead AI heats up across Silicon Valley, Meta is setting itself apart from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is share its underlying AI engines as a way to spread its influence and, ultimately, move faster toward the future.

Meta’s actions contrast with those of Google and OpenAI, the two companies leading the new AI arms race. Concerned that AI tools such as chatbots could be used to spread disinformation, hate speech and other toxic content, those companies have become increasingly secretive about the methods and software that underpin their AI products.

Google, OpenAI and others have been critical of Meta, saying that an unbridled open-source approach is dangerous. The meteoric rise of AI in recent months has set alarm bells ringing about the risks of the technology, including how it could disrupt the job market if not deployed properly. And within days of LLaMA’s release, the system leaked to 4chan, the online bulletin board known for spreading false and misleading information.

“We want to think more carefully about giving away details or open sourcing code” of AI technology, said Zoubin Ghahramani, a Google vice president of research who oversees AI work. “Where can that lead to abuse?”

But Meta said it saw no reason to keep its code to itself. The growing secrecy at Google and OpenAI is a “big mistake,” Dr. LeCun said, and a “really bad take on what’s happening.” He argues that consumers and governments will refuse to embrace AI unless it is beyond the control of companies like Google and Meta.

“Do you want every AI system controlled by a few powerful US companies?” he asked.

OpenAI declined to comment.

Meta’s open-source approach to AI isn’t new. The history of technology is littered with battles between open source and proprietary, or closed, systems. Some companies have hoarded the key tools used to build tomorrow’s computing platforms, while others have given those tools away. Most famously, Google open-sourced the Android mobile operating system to take on Apple’s dominance in smartphones.

Many companies have openly shared their AI technologies in the past, at the behest of researchers. But their tactics have changed because of the race around AI. That shift began last year when OpenAI released ChatGPT. The chatbot’s wild success stunned consumers and fueled competition in AI, with Google moving quickly to fold more AI into its products and Microsoft investing $13 billion in OpenAI.

While Google, Microsoft and OpenAI have received most of the attention in AI since then, Meta has also been investing in the technology for nearly a decade. The company has spent billions of dollars building the software and hardware needed to power chatbots and other “generative AI,” which produces text, images and other media on its own.

Over the past few months, Meta has been working furiously behind the scenes to weave its years of AI research and development into new products. Mr. Zuckerberg is focused on making the company an AI leader, holding weekly meetings on the topic with his executive team and product leaders.

Meta’s biggest AI move in recent months has been the release of LLaMA, which is what is known as a large language model, or LLM. (LLaMA stands for “Large Language Model Meta AI.”) LLMs are systems that learn skills by analyzing large amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google’s Bard chatbot are also built on such systems.

LLMs locate patterns in the text they analyze and learn to generate their own text, including term papers, blog posts, poetry, and computer code. They can even have complex conversations.
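To make that concrete, here is a minimal sketch in Python of the core pattern-learning idea, shrunk to a toy scale: count which word tends to follow which in a corpus, then sample from those counts to generate new text. The tiny corpus and the word-pair simplification are illustrative only; real LLMs learn billions of numerical parameters over long token sequences, not simple word-pair counts.

    from collections import Counter, defaultdict
    import random

    # Toy "training data"; real systems analyze trillions of words.
    corpus = ("the model reads text and learns patterns "
              "and the model uses patterns to write text").split()

    # Learn the patterns: count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=8):
        # Generate text by repeatedly sampling a likely next word.
        word, out = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break
            word = random.choices(list(options),
                                  weights=list(options.values()))[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))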

In February, Meta openly released LLaMA, allowing academics, government researchers and others who provided their email addresses to download the code and use it to build their own chatbots.

But the company went further than many other open-source AI projects. It allowed people to download a version of LLaMA after it had been trained on massive amounts of digital text pulled from the Internet. Researchers call this “releasing the weights,” referring to the specific mathematical values that are learned by the system as it analyzes data.

This was important because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies don’t have. Those who have the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
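As a rough illustration of what that enables, here is a minimal sketch, assuming the released weights have been downloaded and converted into a format readable by an open-source library such as Hugging Face’s transformers; the local path below is hypothetical. The expensive training step has already happened upstream, so the snippet only runs inference.

    # A minimal sketch, assuming downloaded LLaMA weights already converted
    # into the Hugging Face transformers format. The path is hypothetical.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    weights_dir = "/path/to/llama-weights"  # hypothetical local directory

    tokenizer = AutoTokenizer.from_pretrained(weights_dir)
    model = AutoModelForCausalLM.from_pretrained(weights_dir)

    # Only inference runs here; the costly learning was done at training time.
    inputs = tokenizer("Open-source AI systems", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))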

As a result, many in the tech industry believed that Meta set a dangerous precedent. And within days, someone unleashed the LLaMA weights on 4chan.

At Stanford University, researchers used Meta’s new technology to build their own AI system, which was made available on the Internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. It also produced racist material, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said spreading the technology to the public would be like “a grenade available to anyone in a supermarket.” He did not respond to a request for comment.

Stanford promptly removed the AI system from the Internet. The project was designed to provide researchers with technology that “captures the behavior of advanced AI models,” said Tatsunori Hashimoto, the Stanford professor who led the project. “We removed the demo because we were increasingly concerned about potential abuse outside of a research setting.”

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals can already generate and spread disinformation and hate speech. He added that toxic material can be tightly restricted by social networks such as Facebook.

“You can’t stop people from creating bullshit or dangerous information or whatever,” he said. “But you can prevent it from spreading.”

For Meta, more people using open-source software could also help level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world built programs using Meta’s tools, it could help entrench the company for the next wave of innovation, guarding against potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta was committed to open-sourcing AI technology. He said the evolution of the consumer internet was a result of open, common standards that helped build the fastest, most widespread knowledge-sharing network the world had ever seen.

“Progress is faster when it’s open,” he said. “You have a more vibrant ecosystem where everyone can contribute.”
