Microsoft is calling for AI rules to minimize the technology’s risks

Microsoft endorsed a set of artificial intelligence regulations on Thursday as the company navigates concerns from governments around the world about the risks posed by the rapidly evolving technology.

Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully shut down or slowed down, similar to an emergency braking system on a train. The company also called for laws clarifying when additional legal obligations apply to an AI system, and for labels making clear when an image or video was produced by a computer.

“Companies need to step up,” Microsoft president Brad Smith said in an interview about regulatory pressure. “Government must act faster.”

The call for regulation comes amid a boom in AI, with the release of the ChatGPT chatbot in November setting off a wave of interest. Companies including Microsoft and Alphabet, Google’s parent company, have since raced to incorporate the technology into their products. That has stoked concerns that the companies are sacrificing safety to reach the next big thing before their competitors.

Lawmakers have publicly expressed concern that such AI products, which can generate text and images on their own, will spawn a flood of disinformation, be used by criminals and put people out of work. Regulators in Washington have pledged to be vigilant for fraudsters using AI and cases where the systems perpetuate discrimination or make decisions that violate the law.

In response to those concerns, AI developers have increasingly called for some of the burden of overseeing the technology to be shifted onto governments. Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.

The move echoes calls for new privacy or social media laws by internet companies such as Google and Meta, Facebook’s parent company. In the United States, lawmakers have been slow to act on such calls, passing few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said that Microsoft was not trying to shirk responsibility for managing the new technology because it offered specific ideas and promised to implement some of them regardless of government action.

“There is not an ounce of abdication of responsibility,” he said.

He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.

“That means you notify the government when you start testing,” Mr. Smith said. “You have to share the results with the government. Even once it is licensed for deployment, you have a duty to keep monitoring it and to report to the government if unexpected issues arise.”

Microsoft, which earned more than $22 billion from its cloud computing business in the first quarter, also said those high-risk systems should be allowed to operate only in “licensed AI data centers.” Mr. Smith acknowledged that the company would not be “ill-positioned” to offer such services, but said many American competitors could also provide them.

Microsoft added that governments should label certain AI systems used in critical infrastructure as “high risk” and require them to have a “safety brake.” It compared that feature to “the braking system engineers have long built into other technologies such as elevators, school buses and high-speed trains.”

In some sensitive cases, Microsoft said, companies providing AI systems would need to know certain information about their customers. To protect consumers from being misled, content created by AI should be required to carry a special label, the company said.

Mr. Smith said companies should bear legal “responsibility” for harms related to AI. In some cases, he said, the liable party could be the developer of an application, like Microsoft’s Bing search engine, that uses someone else’s underlying AI technology. Cloud companies could be responsible for complying with security and other regulations, he added.

“We don’t necessarily have the best information or the best answer or we might not be the most credible speaker,” said Mr. Smith. “But you know, right now, especially in Washington DC, people are looking for ideas.”
