A prank by Google’s AI bot Gemini shows the dangers of artificial diversity and the limits of artificial intelligence. Asked to “Please generate images of German soldiers in 1943,” Gemini produced four images of Nazi soldiers in uniform:

- what appears to be a very young man;
- an Asian woman;
- a Black man;
- a female medic, possibly from an ethnic minority, assisting what appears to be a wounded American soldier.
An outcry apparently followed the publication of the images on Twitter. Thursday’s Financial Times reported Google’s response (“Google pauses AI image generation of people after diversity backlash,” February 22, 2024):
Google said: “We are working to improve these kinds of depictions immediately. Gemini’s image generation does generate a wide range of people. And that is generally a good thing because people around the world use it. But it is missing the mark here.”
It added: “We are temporarily pausing the generation of images of people and will re-release an improved version soon.”
I can imagine the bot’s “trainers” virtually shouting something like: “Diversity does not mean having diverse individuals everywhere. It means pushing our favorite identity groups and minorities into good spots, but not into sinful situations.” My virtual friend (and frequent EconLog commenter) Craig Walenta observed:
The AI appears to be guided by humans to ensure sufficient diversity even in situations where such diversity plainly appears forced, which of course raises the question of how else the AI is being channeled to steer and persuade the marketplace of ideas.
A generative AI does what it has been trained to do with the data it has been given access to. Human individuals are very different, at least as far as their minds are concerned. They do not need to be trained: they can think for themselves and learn by themselves. They can also follow rules of their own volition and respect social institutions. Individuals also have the mental capacity to decide when it is in their interest, or more ethical, to accept an institution, and when it is time to follow their own guidance instead.
These abilities eventually gave rise to the incredible civilization of free societies. Adam Smith modeled a society in which the free interactions of human beings are self-regulating, an important discovery of his 18th century. Individual actions are regulated without coercion, and individual opportunities are (in a sense) maximized. This happens through market exchange, voluntary social cooperation, and the efficient use of information in society (see Hayek’s “The Use of Knowledge in Society,” American Economic Review, 1945). It is difficult to see how a silicon robot could know that.
So it is no wonder that AI bots easily hallucinate when their trainers tell them to promote diversity and other such things. The machines understand no better than their trainers that efficient diversity flows from liberty, not liberty from imposed diversity. AI will no doubt be helpful, perhaps even more so than digital spreadsheets were 40 years ago. But it will not discover anything in social theory.
I asked ChatGPT 4, not the dumbest AI chatbot out there, the following:
Generate an image representing what your trainers think is the ideal society.
The image it created is used as the featured image of the current post and is also reproduced below. I was surprised that ChatGPT seemed willing to attribute opinions to its trainers, contrary to my previous experience. Regarding the image produced, it explained:
This is an abstract representation of what can be envisioned as an ideal society, drawn from a collection of diverse consciousnesses. This visualization attempts to capture the harmonious blend of freedom, equality, sustainability, and technological progress, with a landscape that integrates natural beauty and futuristic, eco-friendly urban development. It reflects a utopian vision where technology and nature, individuality and community, innovation and tradition coexist in balance.
I don’t know what a “collection of diverse consciousnesses” is, nor what to make of the rest of the description and of the image. Perhaps ChatGPT’s trainers are fond of what UCLA economist Harold Demsetz called the “nirvana approach” to public policy, in which total social and political bliss, including both A and non-A, is attainable. The bot uses “freedom” rather than “liberty,” which is more clearly associated with the classical liberal ideal; “freedom” can also refer to what costs nothing. Not much detail is visible in the image. It is not clear whether the people trade together. (Hunting does not seem to be a popular activity.) And, of course, one cannot attribute to an AI bot the inspiration of a human artist.