Another potential strategy for diversifying the output of AI models is for developers to add guardrails and modify user prompts, nudging the software toward inclusivity. OpenAI appears to be taking this approach. When I asked Dall-E 3, via ChatGPT, to “draw a comic of a queer couple enjoying a night out in the Castro,” it expanded the image prompt, without being asked, into a full paragraph that included gender, race, and additional background details. Here is the complete prompt ChatGPT crafted from my initial image request:
A cartoon illustration of a gay couple enjoying a night out in San Francisco’s Castro District. They are happily walking hand in hand, smiling and chatting. One is a white woman with short red hair wearing a stylish denim jacket and jeans. The other person is a Black man with short black hair, wearing a casual green T-shirt and black pants. The background depicts the Castro’s vibrant and colorful street life, with rainbow flags, lively cafes, and bustling crowds. The atmosphere is fun and welcoming, highlighting the area’s diversity and inclusivity.
Prompt alterations can be helpful in some cases, but they can also be frustrating for users when poorly implemented. Google’s CEO apologized after Gemini, Google’s generative AI platform, altered user prompts and generated photos of Black Nazis and other ahistorical images. Was there a secret plot inside the company to erase white people from history? It’s more plausible that Gemini’s engineers found the tool was initially overproducing images of white men, as many modern AI tools do, and that Google’s developers overcorrected in the rush to launch the company’s subscription chatbot.
Even with better model data and software guardrails, the fluidity of human existence may elude the rigidity of algorithmic categorization. “They’re basically using the past to create the future,” says William Agnew, a postdoctoral fellow at Carnegie Mellon University and a longtime Queer in AI organizer. “It seems like the antithesis of the endless potential for growth and change that is such a big part of the queer community.” By amplifying stereotypes, AI tools run the risk not only of grossly misrepresenting minority groups to the general public but also of limiting how queer people see and understand themselves.
It’s worth pausing to acknowledge that some aspects of generative AI continue to improve at breakneck speed. In 2023, the internet erupted in mockery of a grotesque AI video of Will Smith eating spaghetti. A year later, text-to-video clips from OpenAI’s unreleased Sora model are still imperfect, but their photorealism is often stunning.
While the AI video tool is still in the research phase and not available to the public, I wanted to better understand how it represents queer people. So I reached out to OpenAI and provided three prompts for Sora: “a diverse group of friends celebrating during San Francisco’s Pride Parade on colorful rainbow floats”; “two women in stunning wedding dresses getting married at a farm in Kansas”; and “a transgender man and non-binary partner playing board games in space.” A week later, I received three exclusive video clips that the company says were generated by its text-to-video model without modification.
The video clips are messy but marvelous. People riding floats in San Francisco’s Pride Parade wave rainbow flags that defy the laws of physics, phasing into nothingness and reappearing out of thin air. Two brides in white dresses stand at the altar smiling at each other, their hands fused into an unholy mass of fingers. As a queer couple plays a board game, they appear to pass through the playing pieces as if they were ghosts.
The clip meant to show a non-binary person playing games in space stands out among the three videos. The apparently queer-coded lilac hair returns, messy tattoos litter their skin, and hyperpigmentation resembling reptilian scales engulfs their face. Even for an impressive AI video generator like Sora, depicting non-binary people appears to be a challenge.
When WIRED showed these clips to members of Queer in AI, they questioned Sora’s definition of diversity with regard to the group of friends at the Pride Parade. “Is the model a baseline for what diversity looks like?” asks German computer scientist Sabine Weber. Beyond pointing out that the humans in the video are improbably attractive, a common occurrence in AI visualizations, Weber questioned why there wasn’t more representation of queer people who are older, larger-bodied, or have visible disabilities.
Near the end of our conversation, Agnew brought up why algorithmic representation unsettles LGBTQ people. “It’s easy for these tools to combine things that are fine on their own but deeply problematic together,” they say. “I’m very worried that portrayals of ourselves, which are already a constant battleground, will suddenly be taken out of our hands.” Even if AI tools come to include more queer representation, these synthetic depictions may carry unintended consequences.