AI poses a ‘risk of extinction’, industry leaders warn

A group of industry leaders plans to warn Tuesday that the artificial intelligence technology they are building could one day pose an existential threat to humanity and should be considered a societal risk akin to pandemics and nuclear war.

“Reducing the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement expected to be released by the Center for AI Safety, a non-profit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in AI.

Signatories included top executives from three of the leading AI companies: Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and who are often considered “godfathers” of the modern AI movement, signed the statement, along with other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s AI research effort, had not yet signed as of Tuesday.)

The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent developments in so-called large language models – the type of AI system used by ChatGPT and other chatbots – have raised fears that AI could soon be widely used to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

Some believe AI could eventually become powerful enough to cause societal disruption within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.

These fears are shared by numerous industry leaders, putting them in the unusual position of claiming that a technology they are building — and in many cases racing furiously to build faster than their competitors — carries serious risks and needs to be more tightly regulated.

This month, Mr. Altman, Mr. Hassabis and Mr. Amodei met with President Biden and Vice President Kamala Harris to discuss AI regulation. In Senate testimony after the meeting, Mr. Altman warned that the risks posed by advanced AI systems were serious enough to warrant government intervention and called for regulation of AI to address its potential harms.

Dan Hendrycks, the executive director of the Center for AI Safety, said in an interview that the open letter was a “coming out” for some industry leaders who had expressed concerns – but only privately – about the risks of the technology they were developing.

“There’s a common misconception, even in the AI community, that there are only a handful of doomers,” said Mr. Hendrycks. “But in fact, a lot of people would privately voice their concerns about these things.”

Some skeptics argue that AI technology is still too immature to pose an existential threat. When it comes to current AI systems, they are more concerned about short-term problems, such as biased and incorrect responses, than longer-term dangers.

But others have argued that AI is improving so fast that it has already surpassed human performance in some areas and will soon surpass it in others. They say the technology has shown signs of advanced capability and understanding, prompting fears that “artificial general intelligence,” or AGI, a type of artificial intelligence that can match or exceed human-level performance at a wide variety of tasks, may not be far away.

In a blog post last week, Mr. Altman and two other OpenAI executives suggested several ways in which high-performance AI systems could be managed responsibly. They called for collaboration among the leading AI makers, more technical research into large language models, and the formation of an international AI safety organization, similar to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

Mr. Altman has also expressed his support for rules that would require makers of large, advanced AI models to register for a government-issued license.

In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause in the development of the biggest AI models, citing concerns about “an out-of-control race” to develop and deploy increasingly powerful digital minds.

That letter, which was written by another AI-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but didn’t have many signatures from the leading AI labs.

The brevity of the Center for AI Safety’s new statement — just 22 words in all — was intended to unite AI experts who may disagree about the nature of specific risks or the steps needed to prevent them, but who share general concerns about powerful AI systems, Mr. Hendrycks said.

“We didn’t want to insist on a very large menu of 30 possible interventions,” said Mr. Hendrycks. “When that happens, it dilutes the message.”

The statement was initially shared with a few high-profile AI experts, including Mr. Hinton, who this month quit his job at Google so that he could speak more freely, he said, about the potential dangers of artificial intelligence. From there, it made its way to several of the big AI labs, where some employees then signed on.

The urgency of AI leaders’ warnings has increased as millions of people turn to AI chatbots for entertainment, companionship and increased productivity, and as the underlying technology advances at a rapid pace.

“I think if this technology goes wrong, it could go pretty wrong,” Mr. Altman told the Senate subcommittee. “We want to work with the government to prevent that.”
