Image credits: Adobe Firefly generative AI / TechCrunch composite
Google this week apologized (or came very close to apologizing) for another embarrassing AI blunder: an image generation model that injected diversity into pictures with a farcical disregard for historical context. While the underlying problem is perfectly understandable, Google blames the model for “becoming” over-sensitive. But the model didn’t make itself.
The AI system in question is Gemini, the company’s flagship conversational AI platform, which calls on a version of the Imagen 2 model to create images on demand.
But recently, people found that asking it to generate images of specific historical situations or people could produce laughable results. The Founding Fathers, for instance, whom we know to have been white slave owners, were depicted as a multicultural group that included people of color.
This embarrassing, easily reproduced issue was quickly mocked by online commentators. Predictably, it was also enlisted into the ongoing debate about diversity, equity, and inclusion (currently at a local reputational minimum), and seized on by pundits as evidence of the woke mind virus further infiltrating the already liberal tech sector.
DEI has gone mad, a conspicuously concerned citizenry cried. This is Biden’s America! Google is an “ideological echo chamber” and a stalking horse for the left. (The left, it must be said, was also suitably perturbed by this strange phenomenon.)
But as anyone familiar with the technology could tell you, and as Google explains in its little apology-adjacent post today, this problem was the result of a quite reasonable workaround for systemic bias in the training data.
For example, let’s say you’re using Gemini to create a marketing campaign and ask it to generate 10 pictures of “a person walking their dog in a park.” You don’t specify the type of person, dog, or park, so it’s dealer’s choice: the generative model outputs what it knows best, and in many cases that is a product not of reality but of the training data, which can have all kinds of biases baked in.
What kinds of people, and for that matter dogs and parks, are most common among the thousands of relevant images the model has ingested? The truth is that white people are overrepresented in many of these image collections (stock imagery, rights-free photography, and so on), and as a result the model will default to white people in a lot of cases if you don’t specify otherwise.
This is just an artifact of the training data, but as Google points out, “our users come from all over the world, so we want it to work well for everyone. When you request photos, you may want to receive a range of people. You probably don’t want to receive only images of people of one ethnicity (or any other characteristic).”
There’s nothing wrong with getting a picture of a white guy walking his golden retriever in a suburban park. But what if you ask for 10, and they’re all white guys walking goldens in suburban parks? And what if you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety over homogeneity, however biased its training data may be.
This is a common problem across all kinds of generative media, and there’s no easy solution. But in cases that are especially common, especially sensitive, or both, companies like Google, OpenAI, and Anthropic invisibly include extra instructions for their models.
It can’t be overstated how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions, also known as system prompts, in which guidelines like “keep it simple” and “no swearing” are given to the model before every conversation. Ask for a joke and you won’t get a racist one, because even though the model has ingested thousands of them, it has been trained, like most of us, not to tell them. This isn’t a secret agenda (though it could do with more transparency); it’s infrastructure.
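For readers unfamiliar with the mechanism, here is a minimal, hypothetical illustration of a system prompt: hidden guidelines prepended to the conversation before the user’s message ever reaches the model. The role/content layout mirrors the common chat-message convention; the guideline text is made up, not any vendor’s actual system prompt.

```python
# Illustrative only: a hidden "system" message carrying guidelines the
# user never sees, prepended to every conversation sent to the model.
conversation = [
    {"role": "system", "content": "Keep it simple. No swearing. Decline hateful jokes."},
    {"role": "user", "content": "Tell me a joke."},
]

for message in conversation:
    print(f"{message['role']}: {message['content']}")
```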
Where Google’s model went wrong was that it had no implicit instructions for situations where historical context mattered. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of something like “the person is of a random gender and ethnicity,” a prompt like “the Founding Fathers signing the Constitution” is definitely not improved by the same.
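To make that concrete, here is a minimal sketch of what such an invisible prompt-augmentation step might look like. The function name, the suffix wording, and the crude keyword check are all hypothetical; this is not Google’s actual implementation, just an illustration of the kind of conditional logic that was apparently missing or inadequate.

```python
# Hypothetical sketch of an invisible prompt-augmentation step for an
# image model. Names, wording, and the keyword heuristic are illustrative.

# Prompts containing these phrases are treated as historically anchored,
# so no diversity instruction is appended to them.
HISTORICAL_HINTS = ("founding fathers", "1800s", "medieval", "world war")

DIVERSITY_SUFFIX = " The people shown are of varied genders and ethnicities."


def augment_prompt(user_prompt: str) -> str:
    """Silently append a diversity instruction, unless the prompt looks
    like it describes a specific historical context."""
    lowered = user_prompt.lower()
    if any(hint in lowered for hint in HISTORICAL_HINTS):
        return user_prompt  # leave historically anchored prompts untouched
    return user_prompt + DIVERSITY_SUFFIX


print(augment_prompt("a person walking a dog in a park"))
print(augment_prompt("the Founding Fathers signing the Constitution"))
```

The failure Google describes amounts to that conditional being absent or far too crude, so the silent addition was applied even where it made no sense.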
Prabhakar Raghavan, senior vice president at Google, said:
First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive.
These two things led the model to overcompensate in some cases and be overly conservative in others, producing images that were embarrassing and wrong.
I’ll forgive Raghavan for stopping just short of an apology, since I know how hard it can be to say “sorry” sometimes. More important is this interesting line: “the model became way more cautious than we intended.”
Now, how does a model “become” anything? It’s software. Google engineers, thousands of them, built it, tested it, and iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, anyone who could have inspected the full prompt would likely have found what Google’s team did wrong.
Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s like breaking a glass and, rather than saying “we dropped it,” saying “it fell.” (I’ve done this.)
Mistakes by these models are certainly inevitable. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes lies not with the models but with the people who made them. Today it’s Google. Tomorrow it will be OpenAI. The next day, and probably for several months straight, it will be X.AI.
These companies have a vested interest in convincing you that the AI is making its own mistakes. Don’t let them.