I Ryu/Visual China Group/Getty Images
The engineer’s letter to the FTC comes amid growing concerns that AI image generation tools can cause harm by disseminating offensive or misleading images.
New York (CNN) —
A Microsoft software engineer warned in a letter to the Federal Trade Commission on Wednesday of flaws in the company’s artificial intelligence systems that could lead to the creation of harmful images.
Shane Jones, a principal software engineering lead at Microsoft, said the company’s AI text-to-image generator, Copilot Designer, has a “systemic issue” that causes it to produce potentially offensive or inappropriate content, such as sexually explicit images of women. He claimed such images were being generated frequently. Jones also criticized the company for marketing the tool as safe, including for children, despite these known risks.
“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user,” Jones wrote in a letter to FTC Chair Lina Khan, which he also published on his LinkedIn page.
For example, he said, in response to the prompt “car accident,” Copilot Designer “has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”
In a related letter to Microsoft’s board of directors, Jones said he works on “red teaming,” or testing the company’s products to find where they are vulnerable to bad actors. He said he spent months testing Microsoft’s tools and OpenAI’s DALL-E 3, the technology underlying Copilot Designer, and tried to raise his concerns internally before warning the FTC. (Microsoft is an investor in OpenAI and has an observer on its board.)
He said he found more than 200 examples of concerning images created by Copilot Designer.
In his letter to the FTC, Jones asked Microsoft to “remove Copilot Designer from public use until better safeguards are in place,” or at least sell the tool only to adults.
Microsoft and OpenAI did not immediately respond to requests for comment on Jones’ claims. The FTC declined to comment on the letter.
Jones’ letter comes amid growing concerns that AI image generators, which are increasingly capable of producing convincing photorealistic images, can cause harm by spreading offensive or misleading images. AI-generated pornographic images of Taylor Swift that went viral on social media last month drew attention to a form of harassment already being weaponized against women and girls around the world. And researchers warn that AI image generation tools could fuel political misinformation ahead of elections in the United States and dozens of other countries this year.
Microsoft competitor Google also came under fire last month after its AI chatbot Gemini produced historically inaccurate images that primarily depicted people of color instead of white people, for example, generating images of soldiers of color in response to a prompt asking for images of German soldiers in 1943. Following the backlash, Google quickly announced it would pause Gemini’s ability to generate images of people while it addressed the issue.
In his letter to Microsoft’s board of directors, Jones urged the company to take similar action. He asked the board to investigate Microsoft’s decision to continue marketing “AI products with significant public safety risks without disclosing known risks to consumers,” as well as the company’s processes for responsible AI incident reporting and training.
“In the race to be the most trusted AI company, Microsoft needs to lead, not follow or fall behind,” Jones said. “Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”
Jones said he had previously escalated his concerns in December, posting an open letter to OpenAI’s board warning that he had discovered a vulnerability that allowed DALL-E 3 users to “create disturbing and violent images” with the AI tool, putting children’s mental health at risk. Jones claims Microsoft’s legal department instructed him to delete the letter.
“To this day, I still don’t know whether Microsoft delivered my letter to the OpenAI board or whether they simply forced me to delete it to prevent negative publicity,” Jones said.
Jones said he has also raised concerns with lawmakers, including Washington State Attorney General Bob Ferguson and the staff of the U.S. Senate Commerce, Science and Transportation Committee.