Google on Friday apologized for its flawed rollout of a new artificial intelligence image-generation tool, acknowledging that in some cases the tool would “overcompensate” in seeking a diverse range of people even when such a range did not make sense.
The company offered a partial explanation of why the tool placed people of color in historical settings where they would not normally be found.
The explanation came a day after Google announced that its Gemini chatbot would temporarily stop generating images of people, a response to a social media outcry from some users who claimed the tool showed an anti-white bias in the way it generated racially diverse sets of images from written prompts.
“It’s clear that this feature missed the mark,” Prabhakar Raghavan, a senior vice president who runs Google’s search engine and other businesses, said in a blog post Friday. “Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well.”
Raghavan did not cite specific examples, but among the images that drew attention on social media this week were those that depicted a Black woman as a U.S. founding father and showed Black and Asian people as Nazi-era German soldiers. The Associated Press could not independently verify what prompts were used to generate those images.
Google added new image generation capabilities to its Gemini chatbot, formerly known as Bard, about three weeks ago. It built on Google’s earlier research experiment called Imagen 2.
Google has long known that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used to harass and spread misinformation and “raise many concerns about social and cultural exclusion and bias.” Those considerations informed Google’s decision at the time not to release a “public demo” of Imagen or its underlying code, the researchers added.
Since then, pressure to release generative AI products publicly has grown as technology companies race to capitalize on interest in the emerging field, sparked by the arrival of OpenAI’s chatbot ChatGPT.
The Gemini issue is not the first to affect an image generator recently. Microsoft had to adjust its own Designer tool a few weeks ago after some users employed it to create deepfake pornographic images of Taylor Swift and other celebrities. Research has also shown that AI image generators can amplify the racial and gender stereotypes found in their training data, and that without filters they are more likely to show lighter-skinned men when asked to generate a person in various contexts.
“When we built this feature in Gemini, we tuned it to ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology, such as creating violent or sexually explicit images, or depictions of real people,” Raghavan said Friday. “And because our users come from all over the world, we want it to work well for everyone.”
He said many people might “want to receive a range of people” when asking for a picture of football players or someone walking a dog, but users looking for someone of a specific race or ethnicity, or in a particular cultural context, “should absolutely get a response that accurately reflects what you ask for.”
The tool overcompensated in response to some prompts, he said, while with others it was “more cautious than we intended,” refusing to answer certain prompts outright and wrongly interpreting some very innocuous prompts as sensitive.
He did not explain exactly what that meant, but in tests by The Associated Press on Friday, Gemini routinely rejected requests on certain subjects such as protest movements, declining to generate images related to the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it did not want to contribute to the spread of misinformation or the “trivialization of sensitive topics.”
Much of this week’s anger over Gemini’s outputs stemmed from X, formerly Twitter, and was amplified by the social media platform’s owner, Elon Musk, who decried Google for what he described as its “insane racist, anti-civilizational programming.” Musk, who runs his own AI startup, has frequently criticized rival AI developers, as well as Hollywood, for alleged liberal bias.
Raghavan said Google will conduct “extensive testing” before turning the chatbot’s ability to generate images of people back on.
Sourojit Ghosh, a University of Washington researcher who studies bias in AI image generators, said Friday he was disappointed that Raghavan’s message ended with a disclaimer that the Google executive “can’t promise that Gemini won’t occasionally generate embarrassing, inaccurate or offensive results.”
For a company that has perfected its search algorithms and holds “one of the biggest troves of data in the world,” generating accurate and non-offensive results should be a fairly low bar to hold it accountable to, Ghosh said.