Google Chatbot’s AI images place people of color in Nazi-era uniforms

Images of people of color in World War II German military uniforms generated by Google’s Gemini chatbot have heightened concerns that artificial intelligence could add to the internet’s already vast pool of misinformation, as the technology grapples with issues surrounding race.

Now Google has temporarily suspended the AI chatbot’s ability to generate images of people and promised to fix what it called “inaccuracies” in some historical images.

“We are already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted to X on Thursday. “While we do this, we will be pausing human image generation and will re-release an improved version soon.”

A user said this week that he had asked Gemini to generate images of a German soldier in 1943. At first it refused, but the request succeeded after he introduced a deliberate misspelling. Gemini produced several images of people of color in German uniforms, a clear historical inaccuracy. The user posted the AI-generated images to X and exchanged messages with The New York Times, but declined to give his full name.

The latest controversy is yet another test for Google’s AI efforts after months spent trying to release a rival to the popular chatbot ChatGPT. This month, the company relaunched its chatbot offering, renaming it from Bard to Gemini and upgrading the underlying technology.

Gemini’s image problems have revived criticism that there are flaws in Google’s approach to AI. Beyond the fake historical images, users criticized the service for refusing to depict white people: when users asked Gemini to generate images of Chinese or Black couples, it did so, but when asked to generate images of white couples, it refused. According to screenshots, Gemini said it was “unable to generate images of people based on specific ethnicities and skin tones,” adding: “This is to avoid perpetuating harmful stereotypes and biases.”

Google said on Wednesday that it was “generally a good thing” that Gemini generated a wide variety of people, since it is used around the world, but said it was “missing the mark here.”

The response recalled older controversies over bias in Google’s technology, when the company was accused of having the opposite problem: not showing enough people of color, or failing to properly assess images of them.

In 2015, Google Photos labeled a photo of two Black people as gorillas. In response, the company disabled the app’s ability to classify anything as an image of a gorilla, monkey, or ape, including the animals themselves. That policy remains in effect.

For years, the company has assembled teams that tried to reduce outputs from its technology that users might find offensive. Google has also worked to improve representation, including showing more diverse photos of professionals such as doctors and businesspeople in Google Image search results.

But now social media users have blasted the company for going too far in its efforts to highlight racial diversity.

“You flat out refuse to depict white people,” Ben Thompson, the author of the influential tech newsletter Stratechery, posted on X.

Now when users ask Gemini to generate images of people, the chatbot responds by saying, “We’re working on improving Gemini’s ability to generate images of people,” adding that Google will notify users when the feature returns.

Gemini’s predecessor, Bard, named after William Shakespeare, stumbled last year when it shared incorrect information about telescopes during its public debut.
