The fake images of Taylor Swift that spread like wildfire on social media likely began in late January as a chat room challenge to bypass filters aimed at stopping people from creating pornography using artificial intelligence, a new report finds.
The pop star’s image can be traced to a forum on the online image board 4chan, which has a history of sharing conspiracy theories, hate speech and other controversial content, according to a report by social network analysis firm Graphika.
Graphika said the 4chan users who created the images of Swift did so as part of a kind of “game” to see who could create lewd and sometimes violent visuals of famous women, from singers to politicians. The company detected a message posted in a 4chan thread encouraging users to try to circumvent the guardrails established by AI-powered image generation tools such as OpenAI’s DALL-E, Microsoft Designer, and Bing Image Creator.
“The spread of Taylor Swift’s pornographic photos has brought the issue of AI-generated non-consensual intimate images to mainstream attention, but at a cost: she is not the only one affected,” Cristina López G., a senior analyst at Graphika, said in a statement accompanying the report. “In the 4chan community where these images originated, she is not even the most frequently targeted public figure. This shows that anyone, from global celebrities to elementary school children, could be targeted in this way.”
OpenAI said Swift’s explicit images were not generated using ChatGPT or its application programming interface.
“We strive to filter out the most explicit content when training the underlying DALL-E model, and we apply additional safety guardrails to products such as ChatGPT. This includes denying requests that ask for a public figure by name and denying requests for explicit content,” OpenAI said.
Spokespeople for 4chan and Microsoft did not respond to requests for comment.
The fake images of Swift quickly spread to other platforms, garnering millions of views and prompting X (formerly known as Twitter) to temporarily block searches for the celebrity.
The megastar’s devoted fan base quickly began fighting back on the platform, flooding the social media site with positive images of the pop star under the #ProtectTaylorSwift hashtag.
The Screen Actors Guild called Swift’s images “upsetting, harmful, and deeply disturbing,” adding that the development and distribution of false images, “especially those of an obscene nature,” without someone’s consent must be made illegal.
Fake pornography created using software has been around for years, and regulation is patchy, leaving those affected with few legal or other avenues to have the images removed. But the advent of so-called generative AI tools has made it easier to create and spread pornographic “deepfake” images, including those of celebrities.
Artificial intelligence is being used to target celebrities in other ways as well. In January, an AI-generated video depicting Swift promoting a giveaway of fake Le Creuset cookware went viral online. Le Creuset apologized to those who may have been deceived.