The Next Fear on AI: Hollywood’s Killer Robots Become the Military’s Tools

When President Biden announced sharp restrictions on the sale of the most advanced computer chips to China in October, he did so in part to give American industry a chance to regain competitiveness.

But at the Pentagon and the National Security Council, there was a second agenda: arms control.

If the Chinese military can’t get the chips, the theory goes, it could slow down its efforts to develop weapons powered by artificial intelligence. That would give the White House, and the world, time to work out some rules for using artificial intelligence in sensors, missiles, and cyberweapons, and ultimately to guard against some of the nightmares Hollywood has conjured up — autonomous killer robots and computers that lock out their human creators.

Now, the fog of fear surrounding the popular ChatGPT chatbot and other generative AI software has made the restriction of chips to Beijing seem like only a temporary solution. When Mr. Biden stopped by a White House gathering on Thursday of tech executives grappling with how to mitigate the technology’s risks, his first comment was: “What you’re doing has tremendous potential and tremendous danger.”

It was a reflection, his national security officials say, of recent secret briefings on the new technology’s potential to upend war, cyber conflict and — in the most extreme case — decision-making about the use of nuclear weapons.

But even as Mr. Biden issued his warning, Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in the development of next-generation ChatGPT and similar software was a bad idea: the Chinese won’t wait, and neither will the Russians.

“If we stop, guess who won’t stop: potential overseas adversaries,” John Sherman, the Pentagon’s chief information officer, said Wednesday. “We have to keep moving.”

His blunt statement underlined the tension felt throughout the defense community today. No one really knows what these new technologies are capable of when it comes to developing and controlling weapons, and no one has any idea what kind of arms control regime, if any, might work.

The forebodings are vague but deeply worrying. Could ChatGPT enable bad actors who previously wouldn’t have had easy access to destructive technology? Could it accelerate confrontations between superpowers, leaving little time for diplomacy and negotiation?

“The industry isn’t stupid here, and you’re already seeing attempts at self-regulation,” said Eric Schmidt, the former Google chairman who served as the inaugural chairman of the advisory Defense Innovation Board from 2016 to 2020.

“So there’s a series of informal conversations taking place in the industry right now – all informal – about what the rules for AI safety would look like,” said Mr. Schmidt, who, with former Secretary of State Henry Kissinger, has written a series of articles and books about the potential of artificial intelligence to disrupt geopolitics.

The preliminary effort to build guardrails into the system is obvious to anyone who has tested the first iterations of ChatGPT. The bots won’t answer questions about how to harm someone with, say, a concoction of drugs, or how to blow up a dam or cripple nuclear centrifuges, all operations the United States and other countries have conducted without the benefit of artificial intelligence tools.

But those blacklists of actions will only delay the abuse of these systems; few think they can completely stop such efforts. There’s always a hack to get around safety limits, as anyone who’s tried to turn off the urgent beeps on a car’s seatbelt warning system can attest.

While the new software has popularized the problem, it’s not new to the Pentagon. The first rules for developing autonomous weapons were published ten years ago. The Pentagon’s Joint Artificial Intelligence Center was established five years ago to research the use of artificial intelligence in combat.

Some weapons already operate on autopilot. Patriot missiles, which shoot down missiles or aircraft entering protected airspace, have long had an “automatic” mode. It allows them to fire without human intervention when overwhelmed by incoming targets, faster than a human could react. But they are supposed to be supervised by humans who can abort attacks if necessary.

The assassination of Mohsen Fakhrizadeh, Iran’s top nuclear scientist, was carried out by Israel’s Mossad using an autonomous machine gun assisted by artificial intelligence, although there appears to have been a high degree of remote control. Russia recently said it has begun producing — but has not yet deployed — its Poseidon undersea nuclear torpedo. If it lives up to Russian hype, the weapon would be capable of traveling autonomously across an ocean, evading existing missile defenses, to deliver a nuclear weapon days after launch.

So far, there are no treaties or international agreements dealing with such autonomous weapons. At a time when arms control agreements are abandoned faster than negotiated, there is little prospect of such an agreement. But the kind of challenges that ChatGPT and its ilk pose are different and in some ways more complicated.

In the military, AI-infused systems can accelerate the pace of battlefield decisions to the point where they create entirely new risks of accidental strikes, or of decisions made on the basis of misleading or deliberately false warnings of incoming attacks.

“A core problem with AI in military and national security is how do you defend against attacks that are faster than human decision-making, and I think that problem is unsolved,” Schmidt said. “In other words, the missile comes in so fast that it should be automatically responded to. What happens if it’s a false signal?”

The Cold War was littered with stories of false warnings — once because a training tape, intended to be used for nuclear response practice, was somehow placed in the wrong system and set off an alarm of a massive incoming Soviet attack. (Good judgment led everyone to stand down.) Paul Scharre, of the Center for a New American Security, noted in his 2018 book “Army of None” that “from 1962 to 2002 there were at least 13 near-use nuclear incidents,” which “affirms the view that near misses are normal, if frightening, conditions of nuclear weapons.”

For that reason, when tensions between the superpowers were much lower than they are today, a series of presidents tried to negotiate building more time into nuclear decision-making on all sides so that no one would rush into conflict. But generative AI threatens to push countries in the other direction, towards faster decision-making.

The good news is that the great powers will probably be cautious – because they know what an opponent’s reaction would look like. But so far there are no agreed rules.

Anja Manuel, a former State Department official and now director of the Rice, Hadley, Gates and Manuel advisory group, recently wrote that even if China and Russia are not ready for arms control talks on AI, meetings on the subject would lead to discussions of what uses of AI are seen as “out of bounds.”

Of course, the Pentagon will also worry about agreeing to too many limits.

“I fought really hard to get a policy that if you have autonomous elements of weapons, you need a way to disable them,” said Danny Hillis, a computer scientist who was a pioneer of parallel computers used for artificial intelligence. Mr. Hillis, who also served on the Defense Innovation Board, said Pentagon officials pushed back, saying, “If we can take them out, the enemy can take them out too.”

The greater risks could come from lone actors, terrorists, ransomware groups, or smaller countries with advanced cyber skills, such as North Korea, learning how to clone a smaller, less restricted version of ChatGPT. And they may discover that generative AI software is perfect for accelerating cyberattacks and spreading disinformation.

Tom Burt, who leads trust and security operations at Microsoft, which is speeding up the use of the new technology to revamp its search engines, said at a recent forum at George Washington University that he thought AI systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But he said he feared artificial intelligence could “drive” the spread of targeted disinformation.

All this portends a new era of arms control.

Some experts say that since it would be impossible to stop the spread of ChatGPT and similar software, the best hope is to limit the special chips and other computing power required to advance the technology. That will no doubt be one of many different arms control plans to be proposed in the coming years, at a time when the major nuclear powers seem uninterested in negotiating over old weapons, let alone new ones.
