
Google is giving away some of the AI that powers chatbots


When Meta shared the raw computer code needed to build a chatbot last year, competing companies said Meta was putting poorly understood and perhaps even dangerous technology into the world.

Now, in a sign that critics of sharing AI technology are losing ground to their peers, Google is making a similar move. On Wednesday, Google released the computer code that powers its online chatbot, after keeping this type of technology under wraps for months.

Like Meta, Google said the benefits of freely sharing the technology – called a large language model – outweighed the potential risks.

The company said in a blog post that it is releasing two AI language models that can help third-party companies and independent software developers build online chatbots similar to Google's own chatbot. Called Gemma 2B and Gemma 7B, they aren't Google's most powerful AI technologies, but the company argued they could rival many of the industry's leading systems.

“We hope to re-engage the third-party developer community and ensure that” Google-based models become an industry standard for how modern AI is built, Tris Warkentin, a director of product management at Google DeepMind, said in an interview.

Google said it currently has no plans to release its flagship AI model, Gemini, for free. Because it is more capable, Gemini could also do more harm.

This month, Google started charging for access to the most powerful version of Gemini. By offering the model as an online service, the company can control the technology more tightly.

Concerned that AI technologies will be used to spread disinformation, hate speech and other toxic content, some companies, such as OpenAI, the maker of the online chatbot ChatGPT, have become increasingly secretive about the methods and software underlying their products.

But others, such as Meta and French start-up Mistral, have argued that sharing code freely – called open sourcing – is the safer approach because it allows outsiders to identify problems with the technology and propose solutions.

Yann LeCun, Meta's chief AI scientist, has argued that consumers and governments will refuse to embrace AI unless it is beyond the control of companies like Google, Microsoft and Meta.

“Do you want every AI system to be controlled by a few powerful American companies?” he told The New York Times last year.

In the past, Google open-sourced many of its leading AI technologies, including the foundational technology behind AI chatbots. But under competitive pressure from OpenAI, the company became more secretive about how its technologies were built.

The company decided to make its AI more freely available again because of developer interest, Jeanine Banks, Google's vice president of developer relations, said in an interview.

As it prepared to release its Gemma technologies, the company said it had worked to ensure they were safe, and that using them to spread disinformation and other harmful material would violate its software license.

“We are making sure that we release as many completely safe approaches as possible, both in the private sphere and in the open sphere,” Mr Warkentin said. “With the introduction of these 2B and 7B models, we are relatively confident that we have taken an extremely safe and responsible approach to ensure their successful entry into the industry.”

But bad actors can still use these technologies to cause problems.

Google allows people to download systems trained on vast amounts of digital text pulled from the internet. Researchers call this “releasing the weights,” referring to the specific mathematical values the system learns as it analyzes data.

Analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars. Those are resources that most organizations – let alone individuals – do not have.
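To make the term concrete, here is a toy illustration – not Google's code, and nothing like the scale of a real language model – of what “weights” are: the numeric parameters a model adjusts as it learns from data. A released model is, at heart, a file full of such values; a system like Gemma stores billions of them, while this sketch learns just one.

```python
# Toy illustration of "weights": the learned numbers a trained model stores.
# We fit a single weight w so that w * x approximates y, using gradient descent.

def train_weight(pairs, steps=2000, lr=0.01):
    """Return a weight w minimizing the error of the prediction w * x."""
    w = 0.0
    for _ in range(steps):
        for x, y in pairs:
            error = w * x - y      # how far the current prediction is off
            w -= lr * error * x    # nudge the weight to shrink the error
    return w

data = [(1, 2), (2, 4), (3, 6)]    # examples of y = 2x, so the ideal weight is 2
w = train_weight(data)
print(round(w, 3))                 # converges to roughly 2.0
```

“Releasing the weights” means publishing the final learned values like `w` – at full scale, billions of them – so others can run or adapt the model without repeating the costly training described above.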
