Dozens of top scientists sign effort to prevent AI bioweapons
Dario Amodei, CEO of the high-profile AI startup Anthropic, told Congress last year that new AI technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxins that cause widespread illness and death.

Senators from both parties were alarmed, while AI researchers in industry and academia debated the severity of the threat.

Now more than 90 biologists and other scientists who specialize in AI technologies used to design new proteins – the microscopic mechanisms that drive all creation in biology – have signed an agreement that seeks to ensure that their AI-enabled research will move forward without exposing the world to serious harm.

The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than harms, including new vaccines and drugs.

“As scientists engage in this work, we believe that the benefits of current AI technologies for protein design far outweigh the potential harms, and we are eager to ensure that our research continues to benefit everyone,” the agreement said.

The agreement does not aim to suppress the development or distribution of AI technologies. Instead, the biologists want to regulate the use of equipment needed to produce new genetic material.

This DNA production equipment is ultimately what makes bioweapons development possible, said David Baker, director of the Institute for Protein Design at the University of Washington, who helped broker the deal.

“Protein design is just the first step in making synthetic proteins,” he said in an interview. “You then have to actually synthesize DNA and move the design from the computer into the real world – and that is the right place to regulate.”

The agreement is one of many efforts to weigh the risks of AI against its potential benefits. While some experts warn that AI technologies could help spread disinformation, replace jobs at an unprecedented rate and perhaps even destroy humanity, tech companies, academic labs, regulators and lawmakers are struggling to understand these risks and find ways to mitigate them.

Dr. Amodei’s company, Anthropic, is building large language models, or LLMs, the new breed of technology that powers online chatbots. Testifying before Congress, he argued that the technology could soon help attackers build new bioweapons.

But he acknowledged that this was not possible today. Anthropic recently published a detailed study showing that if someone were trying to acquire or design biological weapons, LLMs are only marginally more useful than a regular internet search engine.

Dr. Amodei and others worry that as companies improve LLMs and combine them with other technologies, a serious threat will emerge. He told Congress that this might be only two to three years away.

OpenAI, maker of the online chatbot ChatGPT, later conducted a similar study that found LLMs were not significantly more dangerous than search engines. Aleksander Mądry, a professor of computer science at the Massachusetts Institute of Technology and head of preparedness at OpenAI, said he expected researchers would continue to improve these systems, but that he had not yet seen any evidence that they could create new bioweapons.

Today’s LLMs are created by analyzing vast amounts of digital text pulled from the Internet. This means they regurgitate or recombine what is already available online, including existing information about biological attacks. (The New York Times has sued OpenAI and its partner Microsoft, accusing them of copyright infringement involving this process.)

But in an effort to accelerate the development of new drugs, vaccines and other useful biological materials, researchers are beginning to build similar AI systems that can generate new protein designs. Biologists say such technology could also help attackers design biological weapons, but they point out that actually building the weapons would require a multimillion-dollar laboratory, including DNA production equipment.

“There is a risk that doesn’t require millions of dollars in infrastructure, but those risks have been around for a while and are not related to AI,” said Andrew White, co-founder of the nonprofit Future House and one of the biologists who signed the agreement.

The biologists called for the development of safeguards that would prevent DNA production equipment from being used with harmful materials – although it is unclear how those measures would work. They also called for safety and security assessments of new AI models before they are released.

They did not argue that the technologies should be bottled up.

“These technologies should not only be in the hands of a small number of people or organizations,” said Rama Ranganathan, a professor of biochemistry and molecular biology at the University of Chicago, who also signed the agreement. “The community of scientists should be able to explore and contribute to these freely.”
