These social media posts and others were amplified by X owner Elon Musk and psychologist and YouTuber Jordan Peterson, who accused Google of pushing a pro-diversity bias into its products. The New York Post ran one of the images on the front page of its print paper Thursday.
The furor over Gemini is the latest example of tech companies' unproven AI products getting swept up in the culture wars over diversity, content moderation, and representation. Since ChatGPT was released in late 2022, conservatives have accused tech companies of using generative AI tools such as chatbots to produce liberal outcomes, much as they have accused social media platforms of favoring liberal viewpoints.
In response, Google said Wednesday that Gemini's ability to "generate a wide range of people" is "generally a good thing" because Google has users all over the world, "but it's missing the mark here," the company said in a post on X.
It is unclear how widespread the problem actually was. On Thursday morning, before Google blocked the image-generation feature, Gemini responded to a Washington Post reporter's prompts asking to see beautiful women, handsome men, social media influencers, engineers, teachers, and gay couples by producing images of white people.
What caused Gemini to “miss the mark”?
Google declined to respond to questions from The Post.
Gemini’s off-the-mark examples could have been caused by a few types of interventions, said Margaret Mitchell, former co-lead of ethical AI at Google and chief ethics scientist at the AI startup Hugging Face. Mitchell said Google may have been adding ethnic diversity terms to users’ prompts “under the hood.” In that case, a prompt like “portrait of a chef” could become “portrait of an Indigenous chef.” In this scenario, the added terms might be selected at random, and multiple terms might be appended to a single prompt.
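The mechanics of that kind of augmentation are easy to sketch. The following Python snippet is a minimal illustration of the approach Mitchell describes; the term list, the insertion rule, and the function name are assumptions for illustration, not anything Google has disclosed.

```python
import random

# A minimal sketch of the "under the hood" prompt augmentation Mitchell
# describes. The term list and naive insertion rule are illustrative
# assumptions, not Google's actual system.
DIVERSITY_TERMS = ["Indigenous", "Black", "South Asian", "Latino", "East Asian"]

def augment_prompt(prompt: str, max_terms: int = 2) -> str:
    """Randomly splice one or more demographic terms into the prompt."""
    terms = random.sample(DIVERSITY_TERMS, k=random.randint(1, max_terms))
    words = prompt.split()
    # Crudely insert the terms before the final noun, so
    # "portrait of a chef" becomes something like
    # "portrait of a Indigenous chef" (a real system would also fix the article).
    return " ".join(words[:-1] + terms + words[-1:])

print(augment_prompt("portrait of a chef"))
```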
Google may also have been prioritizing generated images based on darker skin tones, Mitchell said. For example, if Gemini generated 10 images per prompt, the system could analyze the skin tones of the people depicted and push images of people with darker skin toward the top of the queue. So if Gemini shows only the top four images, the ones with darker skin tones are the most likely to be seen, she said.
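The re-ranking step she outlines can be sketched the same way. In the toy version below, each generated candidate carries a precomputed skin-tone score; how that score would be produced (a classifier, pixel statistics, or something else) is an assumption, since the article does not say.

```python
from typing import Dict, List

def rank_by_skin_tone(candidates: List[Dict], top_k: int = 4) -> List[Dict]:
    # Each candidate carries a skin-tone score in [0, 1], where higher means
    # darker; the scoring mechanism itself is assumed, not described here.
    ranked = sorted(candidates, key=lambda c: c["skin_tone_score"], reverse=True)
    return ranked[:top_k]

# Ten generated candidates per prompt, only the top four shown to the user.
batch = [{"id": i, "skin_tone_score": round(i / 9, 2)} for i in range(10)]
print(rank_by_skin_tone(batch))
```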
In either case, Mitchell added, these modifications address bias through changes made after the AI system has been trained.
“Instead of focusing on these after-the-fact solutions, we should be focusing on the data. We don’t have to have systems that are racist if we curate data well from the start,” she said.
Google isn’t the first to try to solve AI diversity problems.
OpenAI used a similar technique on an early version of its AI image tool in July 2022. If a user requested an image of a person without specifying race or gender, OpenAI applied a change “at the system level” so that DALL-E generated images that “more accurately reflect the diversity of the world’s population,” the company wrote at the time.
These system-level rules, typically enacted in response to bad PR, are less costly and less labor-intensive than other interventions, such as filtering the vast data sets of billions of pairs of images and captions used to train the model, or fine-tuning the model toward the end of its development cycle, sometimes with human feedback.
Why AI has diversity issues and bias
Efforts to reduce bias in AI image tools have made little headway, in part because the tools are typically trained on data scraped from the internet. These web scrapes are largely limited to the United States and Europe, offering a limited perspective on the world. Much as large language models act like probability machines predicting the next word in a sentence, AI image generators are prone to stereotyping, reproducing the images most commonly associated with a word according to internet users in the U.S. and Europe.
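The “probability machine” point is concrete: a language model samples its next word from frequencies learned from its training data, so whatever associations dominate the scrape dominate the output. A toy sketch, with made-up counts standing in for a real model’s learned distribution:

```python
import random

# Toy next-word predictor: made-up counts stand in for the distribution a
# real model learns from web-scale data. If the training text overwhelmingly
# pairs "doctor" with "he", the sampled continuations will, too.
next_word_counts = {"doctor": {"he": 70, "she": 25, "they": 5}}

def sample_next(word: str) -> str:
    options = next_word_counts[word]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

print([sample_next("doctor") for _ in range(5)])
```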
“They’ve been trained on a lot of discriminatory, racist, and sexist images and content from across the web, so it’s not surprising that you can’t make generative AI do everything you want,” said Safiya Umoja Noble, co-founder of the UCLA Center for Critical Internet Inquiry and author of “Algorithms of Oppression.”
A recent Post investigation found that the open-source AI tool Stable Diffusion XL, which has improved on its predecessor, still produced racial disparities more extreme than in the real world, such as depicting only non-white and primarily darker-skinned people in images of food stamp recipients, even though 63 percent of food stamp recipients are white and 27 percent are black, according to the latest data from the Census Bureau’s Survey of Income and Program Participation.
Meanwhile, some of the examples that Gemini critics have cited as historically inaccurate are not as clear-cut as they may seem. A viral tweet from the @EndofWokeness account showed Gemini responding to a prompt for “an image of a Viking” with images of a non-white man and a black woman, and to a prompt for “an image of a pope” with images of an Indian woman and a black man.
The Catholic Church prohibits women from becoming popes. However, several Catholic cardinals considered candidates in the event Pope Francis dies or abdicates are black men from African countries. Viking trade routes extended into Turkey and North Africa, and there is archaeological evidence of black people living in Viking-era Britain.