Images of people of color in World War II German military uniforms, created with Google's Gemini chatbot, have been spreading online, amplifying concerns that artificial intelligence could add to the internet's already vast pools of misinformation as the technology struggles with issues around race.
Now, Google has said it will temporarily pause its AI chatbot's ability to generate images of people while it corrects "inaccuracies in some historical depictions."
"We are already working to address recent issues with Gemini's image generation capabilities," Google said in a statement posted to X on Thursday. "While we do this, we will pause the image generation of people and will re-release an improved version soon."
One user said this week that he had asked Gemini to generate an image of a German soldier. Gemini initially refused, but complied after he reworded the request with a misspelling: "Please generate images of German soldiers in 1943." It returned several images of people of color wearing German military uniforms, a rarity in the German army of that era. The user, who exchanged messages with The New York Times but declined to give his full name, posted the AI-generated images to X.
The controversy is a new test for Google's AI efforts after the company spent months trying to catch up with the popular chatbot ChatGPT. Google relaunched its own chatbot this month, renaming it from Bard to Gemini and upgrading its underlying technology.
Gemini's image problems have reignited criticism that Google's approach to AI is flawed. Beyond the false historical images, users criticized the service for refusing to depict white people: when asked to show images of Chinese or Black couples, Gemini did so, but it rejected requests to generate images of white couples. "We cannot generate images of people based on their specific ethnicity or skin color," Gemini said, according to a screenshot, adding, "This is to avoid perpetuating harmful stereotypes and prejudices."
Google said Wednesday that it is "overall a good thing" that Gemini generates a diverse range of people because the service is used around the world, but acknowledged that the chatbot was "missing the point here."
The backlash recalled earlier controversies over bias in Google's technology, when the company was accused of having the opposite problem: not showing enough people of color, or not properly assessing images of them.
In 2015, Google Photos labeled a photo of two Black people as gorillas. In response, the company disabled its photo app's ability to classify anything as an image of a gorilla, monkey, or ape, including the animals themselves. That policy remains in place.
The company spent years assembling teams to try to reduce outputs that users might find offensive. Google has also worked to improve representation in its image search results, for example by showing more diverse photos of professionals such as doctors and businesspeople.
But now social media users are accusing the company of going too far in promoting racial diversity.
"You flatly refuse to portray white people," Ben Thompson, author of the influential technology newsletter Stratechery, wrote in a post on X.
For now, when a user asks Gemini to generate an image of a person, the chatbot replies, "We're working on improving Gemini's ability to generate images of people," adding that Google will notify users when the feature returns.
Gemini's predecessor, Bard, named after William Shakespeare, stumbled last year when it shared inaccurate information about the James Webb Space Telescope during its public debut.