
Hackers from China, Russia and others used OpenAI systems, the report said


Hackers working for nation states have used OpenAI's systems to create their cyberattacks, according to research released Wednesday by OpenAI and Microsoft.

The companies believe their research, published on their websites, documents for the first time how hackers with ties to foreign governments are using generative artificial intelligence in their attacks.

But instead of using AI to generate exotic attacks, as some in the tech industry feared, the hackers have used it in everyday ways such as composing emails, translating documents and debugging computer code, the companies said.

“They are just using it like everyone else, to try to be more productive in what they do,” said Tom Burt, who oversees Microsoft's efforts to detect and disrupt major cyberattacks.

Microsoft has committed $13 billion to OpenAI, and the tech giant and the startup are close partners. They shared threat intelligence to document how five hacking groups with ties to China, Russia, North Korea and Iran used OpenAI's technology. The companies did not say which OpenAI technology was used. The startup said it cut off the groups' access after learning of the activity.

Since OpenAI released ChatGPT in November 2022, tech experts, the press and government officials have worried that adversaries could weaponize ever more powerful AI tools, looking for new and creative ways to exploit vulnerabilities. As with much else in AI, the reality may be more subdued.

“Does it offer something new and novel that accelerates an adversary beyond what a better search engine could do? I haven't seen any evidence of that,” said Bob Rotsted, head of cybersecurity threat intelligence for OpenAI.

He said OpenAI limited where customers could sign up for accounts, but that sophisticated actors could evade detection through various techniques, such as masking their location.

“They sign up just like anyone else,” Mr. Rotsted said.

Microsoft said a hacking group linked to Iran's Islamic Revolutionary Guards Corps had used the AI systems to explore ways of evading antivirus scanners and to generate phishing emails. The emails included “one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism,” the company said.

In another case, a Russia-affiliated group trying to influence the war in Ukraine used OpenAI's systems to conduct research into satellite communications protocols and radar imaging technology, OpenAI said.

Microsoft tracks more than 300 hacking groups, including cybercriminals and nation states, and OpenAI's proprietary systems made their activity easier to track and disrupt, the executives said. They said that while there were ways to identify whether hackers were using open-source AI technology, the proliferation of open systems made the task harder.

“When the work is open source, you can't always know who is deploying that technology, how they are deploying it and what their policies are for responsible and safe use of the technology,” Mr. Burt said.

Microsoft found no use of generative AI in the Russian hack of senior Microsoft executives that the company disclosed last month, Mr. Burt said.

Cade Metz contributed reporting from San Francisco.
