
AI experts agree that regulations are needed. That’s the easy part.


This article is part of our special section on the DealBook Summit that brought together business and policy leaders from around the world.


  • The rise of generative artificial intelligence, such as ChatGPT, signals a radical change in the way AI will be used in every area of society, but it should still be seen as a tool that people can use and control – not as something that controls us.

  • Some form of regulation of AI is needed, but opinions vary widely on the scope and enforceability of such rules.

  • To realize AI’s potential and manage its risks, technology companies cannot act alone. There must be real partnerships with other sectors, such as universities and government.


Put seven artificial intelligence experts together in one room and there will be a lot of discussion about almost everything: from legislation to transparency to best practices. But they agreed on at least one thing.

It’s not extraterrestrial.

“AI is not something that came from Mars. It’s something we shape,” said Francesca Rossi, an IBM fellow and IBM AI Ethics Global Leader. Ms. Rossi, along with other representatives from industry, academia and the European Parliament, took part last week in the DealBook Summit task force on how to harness the potential of AI while regulating its risks.

Recognizing that AI didn’t come from outer space was the easy part. Working out how it should take shape – not just in the United States but globally – has been much harder. What role should governments play in controlling AI? How transparent should tech companies be about their AI research? Should the adoption of AI be slowed in some areas, even when there is an opportunity to push ahead?

Although AI has been around for decades, ChatGPT became a global phenomenon almost immediately after the company OpenAI released it a year ago; Kevin Roose, a technology writer for The New York Times and the task force’s moderator, wrote, “ChatGPT is simply the best artificial intelligence chatbot ever released to the general public.”

These new types of chatbots can communicate in an eerily human way and in countless languages. And they’re all still in their infancy. While ChatGPT is the best known, there are others, including Google’s Bard and, most recently, Amazon’s Q.

“We all know that this particular phase of AI is in its very, very early stages,” said John Roese, president and global chief technology officer of Dell Technologies. No one, he added, can afford to be complacent or view AI “just as a commodity.”

“It’s not that,” he said. “This is not something you just consume. This is something you navigate through.”

Although AI has made a giant leap forward — and is evolving so quickly that it’s difficult to keep up — it’s important not to overhype it, said Fei-Fei Li, professor of computer science at Stanford University and co-director of the Stanford Institute for Human-Centered AI. “Somehow we’re too excited about this. It’s a tool. Human civilization begins with the use of tools and the invention of tools, from fire to stone, from steam to electricity. They are becoming more and more complex, but it is still a relationship between tools and people.”

While it’s true that some of the ways AI works are inexplicable even to its developers, Professor Li noted that the same is true of things like pharmaceuticals: paracetamol, for example. Yet part of the reason most people don’t hesitate to take such drugs, she said, is that a federal agency — the Food and Drug Administration — regulates them.

That raises the question: Should there be an equivalent of the FDA for AI?

Some regulation is needed, participants agreed, but the trick is deciding what that should look like.

Vice President Kamala Harris, who was interviewed separately at the DealBook Summit, also weighed in on the issue.

“I know there can and must be a balance between what we need to do in terms of oversight and regulation, and being intentional about not stifling innovation,” she said.

Finding that balance, however, is difficult.

The European Parliament is pushing ahead with the first major law to regulate artificial intelligence, something the rest of the world is watching closely.

Part of the law calls for reviews of AI used in designated high-risk areas such as health care, education and criminal justice. It would require makers of AI systems to disclose, among other things, what data is used to train their systems – to avoid bias and other problems – how they handle sensitive information, and what impact the systems have on the environment. It would also severely limit the use of facial recognition software.

Brando Benifei, a member of the European Parliament and a task force participant, said he hoped the law would be adopted early next year, with a grace period before it takes effect.

In October, the White House released a lengthy executive order on AI, but without an enforcement mechanism, something Mr. Benifei sees as necessary. “It’s obviously a sensitive subject,” he said. “There is a lot of concern from the business community, I think rightly, that we don’t over-regulate until we fully understand all the challenges.” But, he said, “we cannot rely solely on self-regulation.” The development and use of AI, he added, must be “enforceable and explainable to our citizens.”

Other task force members were much more reluctant to embrace such broad regulations. There are many questions, such as who is responsible if something goes wrong: the original developer? An external supplier? The end user?

“You can’t regulate AI in a vacuum,” Mr. Roese said. “AI depends on the software ecosystem, on the data ecosystem. If you try to regulate AI without considering the upstream and downstream effects on adjacent industries, you will get it wrong.”

For that reason, he said, it makes more sense to have an AI office or department within each relevant government agency – perhaps with an overarching AI coordinator – than to create a single centralized AI agency.

Transparency is crucial, all agreed, and so are partnerships among government, industry and university researchers. “If you’re not very transparent, academia will be left behind and researchers won’t come out of academia anymore,” said Rohit Prasad, senior vice president and chief scientist at Amazon Artificial General Intelligence.

Professor Li, the only academic representative in the room, noted that companies often say they want partnerships but don’t follow through.

Moreover, she said, “It’s not just about regulations. It really has to do with public sector investments in a profound way,” noting that she has directly implored Congress and President Biden to support universities in this area. Academia, she said, can serve as a trusted, neutral platform, but “right now we have completely starved the public sector.”

Much has been said about AI becoming an existential threat to humanity – possibly through its use in surveillance that undermines democracy or in automated weapons that could be deadly on a large scale. But such much-discussed warnings distract from the more mundane yet more immediate problems of AI, Mr. Benifei said.

“Today we have problems with algorithmic bias and with misuse of AI in people’s daily lives, not with a catastrophe for humanity,” he said.

All these issues concern Lila Ibrahim, chief operating officer of Google DeepMind. But one key point, she noted, the group hadn’t had time to explore in more depth: “How can we actually equip today’s youth with AI skills and do it with diversity and inclusion?” she asked. “How do we not leave people further behind?”

Moderator: Kevin Roose, technology writer, The New York Times

Attendees: Brando Benifei, member of the European Parliament; Lila Ibrahim, chief operating officer, Google DeepMind; Fei-Fei Li, professor of computer science, Stanford University, and co-director, Stanford Institute for Human-Centered AI; Rohit Prasad, senior vice president and chief scientist, Amazon Artificial General Intelligence; David Risher, chief executive, Lyft; John Roese, president and global chief technology officer, Dell Technologies; Francesca Rossi, IBM fellow and IBM AI Ethics Global Leader
