Under pressure from Biden, AI companies agree to guardrails for new tools

Seven leading AI companies in the United States have agreed to voluntary safeguards on the development of the technology, the White House announced Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally made their commitment to new standards of safety, security and trust during a meeting with President Biden at the White House on Friday afternoon.

“We need to be clear-eyed and vigilant about the threats emerging technologies can pose – they don’t have to, but they can – to our democracy and our values,” Mr. Biden said in brief remarks from the Roosevelt Room at the White House.

“This is a serious responsibility; we have to get it right,” he said, flanked by the companies’ executives. “And there’s also huge, huge potential.”

The announcement comes as the companies race to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human intervention. But the technological leaps have led to fears about the spread of disinformation and dire warnings of a “risk of extinction” as artificial intelligence becomes increasingly sophisticated and humanlike.

The voluntary safeguards are just an early, tentative step as Washington and governments around the world try to put in place legal and regulatory frameworks for artificial intelligence development. The agreements include testing products for security vulnerabilities and using watermarks to ensure consumers can recognize AI-generated material.

But lawmakers have struggled to regulate social media and other technologies in ways that keep up with rapid technological change.

The White House did not provide details about an upcoming presidential executive order that aims to address another issue: how to control the ability of China and other competitors to acquire the new artificial intelligence programs, or the components used to develop them.

The order is expected to bring new restrictions on advanced semiconductors and on the export of large language models. These are difficult to secure – much of the software can fit, compressed, on a USB stick.

An executive order could elicit more industry opposition than Friday’s voluntary commitments, which experts say were already reflected in the practices of the companies involved. The pledges will not constrain the AI companies’ plans or slow the development of their technologies. And as voluntary commitments, they will not be enforced by government regulators.

“We are excited to make these voluntary commitments along with others in the industry,” Nick Clegg, the president of global affairs at Meta, Facebook’s parent company, said in a statement. “They are an important first step in ensuring that responsible guardrails are put in place for AI and they create a model for other governments to follow.”

As part of the safeguards, the companies agreed to security testing of their products, carried out in part by independent experts; research into bias and privacy issues; sharing information about risks with governments and other organizations; developing tools to tackle societal challenges such as climate change; and transparency measures to identify AI-generated material.

In a statement announcing the agreements, the Biden administration said the companies must ensure that “innovation does not come at the expense of the rights and safety of Americans.”

“Companies developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said in a statement.

Brad Smith, Microsoft’s president and one of the executives who attended the White House meeting, said his company endorsed the voluntary safeguards.

“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks,” Mr. Smith said.

Anna Makanju, OpenAI’s vice president of global affairs, described the announcement as “part of our ongoing partnership with governments, civil society organizations and others around the world to advance AI governance.”

For the companies, the standards described Friday serve two purposes: as an attempt to forestall legislative and regulatory moves through self-policing, and as a signal that they are approaching the new technology thoughtfully and proactively.

But the rules they agreed on are largely the lowest common denominator, and can be interpreted differently by each company. For example, the companies have committed to strict cybersecurity measures around the data used to create the language models on which generative AI programs are developed. But there’s no specificity as to what that means, and the companies would have an interest in protecting their intellectual property anyway.

And even the most cautious companies are vulnerable. Microsoft, one of the firms attending the White House event with Mr. Biden, scrambled last week to counter a Chinese government-organized hack of the private emails of American officials dealing with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate emails – one of the company’s most closely guarded pieces of code.

Given such risks, the agreement is unlikely to slow down efforts to pass legislation and impose regulations on the emerging technology.

Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University, said more needs to be done to protect against the dangers artificial intelligence poses to society.

“The voluntary commitments announced today are not enforceable. Therefore, it is vital that Congress, along with the White House, quickly enact legislation requiring transparency, privacy protections and more research on the broad range of risks posed by generative AI,” Barrett said in a statement.

European regulators are poised to pass AI laws later this year, prompting many of the companies to push for US regulation. Several lawmakers have introduced bills that would require licenses for AI companies to release their technologies, create a federal agency to oversee the industry, and impose data privacy requirements. But members of Congress are far from agreeing on rules.

Lawmakers have struggled with how to handle the rise of AI technology, with some focusing on the risks to consumers and others acutely concerned about falling behind adversaries, especially China, in the race for dominance in the field.

This week, the House select committee on competition with China sent bipartisan letters to US-based venture capital firms demanding an accounting of the investments they had made in Chinese AI and semiconductor companies. For months, various House and Senate panels have questioned the AI industry’s most influential entrepreneurs and critics to determine what kind of legislative guardrails and incentives Congress should explore.

Many of those witnesses, including OpenAI’s Sam Altman, have pleaded with lawmakers to regulate the AI industry, citing the new technology’s potential to cause undue harm. But that regulation has been slow to get off the ground in Congress, where many legislators are still struggling to grasp what exactly AI technology is.

In an effort to improve lawmakers’ understanding, Senator Chuck Schumer, New York Democrat and Majority Leader, began a series of sessions this summer to hear from government officials and experts about the merits and dangers of artificial intelligence in a number of areas.

Karoun Demirjian contributed reporting from Washington.