
In a big election year, the architects of AI move against its misuse


Artificial intelligence companies are at the forefront of developing the transformative technology. Now they're also racing to set limits on how AI is used in a year full of major elections around the world.

Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent misuse of its tools in elections, in part by prohibiting their use to create chatbots that masquerade as real people or institutions. In recent weeks, Google also said it would limit its AI chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, which owns Facebook and Instagram, promised to better label AI-generated content on its platforms so that voters could more easily distinguish which information was real and which was fake.

On Friday, Anthropic, another leading AI startup, joined its peers in banning its technology from being used for political campaigning or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who break the rules. It added that it used tools trained to automatically detect and block disinformation and influence operations.

“The history of AI deployment is also one full of surprises and unexpected effects,” the company said. “We expect that by 2024 there will be surprising applications of AI systems – applications that were not anticipated by their own developers.”

The moves are part of a push by AI companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections are expected around the world this year, the largest concentration for at least the next 24 years, according to the consultancy Anchor Change. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted, while India, the world's largest democracy, will hold general elections in the spring.

How effective the restrictions on AI tools will be is unclear, especially as tech companies continue to push more and more advanced technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sound and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell which content is real.

AI-generated content has already emerged in US political campaigns, prompting regulatory and legal pressure. Some state lawmakers are drafting bills to regulate AI-generated political content.

Last month, New Hampshire residents received robocall messages discouraging them from voting in the state's primary, delivered in a voice that was most likely artificially generated to sound like President Biden. The Federal Communications Commission banned such calls last week.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters,” FCC Chairman Jessica Rosenworcel said at the time.

AI tools have also produced misleading or deceptive portrayals of politicians and political issues in Argentina, Australia, Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan's elections, used an AI-generated voice to declare victory while he was in jail.

In one of the most consequential election cycles in history, the disinformation and deception that AI can create could be devastating for democracy, experts say.

“We are behind the eight ball here,” said Oren Etzioni, a professor at the University of Washington who specializes in artificial intelligence and founder of True Media, a nonprofit that works to identify disinformation online in political campaigns. “We need tools to respond to this in real time.”

Anthropic said in its announcement Friday that it planned tests to identify how its Claude chatbot could produce biased or misleading content related to political candidates, political issues and election administration. These “red team” tests, which are often used to break through a technology's safeguards to better identify its vulnerabilities, will also examine how the AI responds to malicious questions, such as requests for voter suppression tactics.

In the coming weeks, Anthropic is also rolling out a pilot that aims to direct US users with voting-related questions to authoritative information sources such as TurboVote from Democracy Works, a nonpartisan nonprofit organization. The company said its AI model was not trained frequently enough to reliably provide real-time facts about specific elections.

Similarly, OpenAI said last month that it planned to direct people to voting information via ChatGPT and label AI-generated images.

“Like any new technology, these tools bring benefits and challenges,” OpenAI said in a blog post. “They are also unprecedented, and we will continue to evolve our approach as we learn more about how our tools are used.”

(The New York Times sued OpenAI and its partner Microsoft in December for copyright infringement of news content related to AI systems.)

Synthesia, a startup whose AI video generator has been linked to disinformation campaigns, also bans the use of its technology for “news-type content,” including false, polarizing, divisive or misleading material. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia's head of corporate affairs and policy.

Stability AI, a startup that makes image-generation tools, said it banned the use of its technology for illegal or unethical purposes, worked to block the generation of unsafe images and applied an imperceptible watermark to all images.

The biggest tech companies have also done their part. Last week, Meta said it was working with other companies on technology standards to help recognize when content was generated with artificial intelligence. In the run-up to the European Union parliamentary elections in June, TikTok said in a blog post Wednesday that it would ban potentially misleading manipulated content and require users to label realistic AI creations.

Google said in December that YouTube creators and all election advertisers would be required to disclose digitally altered or generated content. The company said it was preparing for the 2024 elections by restricting its AI tools, such as Bard, from answering certain election-related questions.

“Like any emerging technology, AI presents both new opportunities and challenges,” Google said. AI can help combat abuse, the company added, “but we are also preparing for how it could change the disinformation landscape.”
