Elon Musk’s X has blocked searches for Taylor Swift after sexually explicit images of the pop star created using artificial intelligence were widely distributed on the platform.
The move is the latest example of social media companies grappling with so-called deepfakes: realistic images or audio generated using AI to depict celebrities in compromising or misleading situations without their consent.
Searches for terms such as “Taylor Swift” and “Taylor AI” on X returned error messages for several hours over the weekend, after AI-generated pornographic images of the singer proliferated online in recent days. The block means that even legitimate content about one of the world’s most popular stars is difficult to find on the site.
“This is a temporary measure and was taken out of an abundance of caution as safety is our priority in this matter,” said Joe Benarroch, X’s director of business operations.
Swift has not commented publicly on the matter.
X was acquired in October 2022 for $44 billion by billionaire entrepreneur Musk, who has championed free speech ideals, cut back the resources devoted to policing content, and relaxed its moderation policies.
The weekend’s blunt intervention comes as X and rivals Meta, TikTok and Google’s YouTube face mounting pressure to tackle abuse of deepfake technology, which has become increasingly realistic and accessible. A thriving market of generative AI tools has emerged that allows anyone to create videos and images resembling celebrities and politicians in a few clicks.
Deepfake technology has been available for several years, but recent advances in generative AI have made such images easier to create and more realistic. Experts warn that fake pornographic imagery is among the most common abuses of the technology, and that its use in political disinformation campaigns is rising in a year of elections around the world.
In response to questions about the Swift images on Friday, White House press secretary Karine Jean-Pierre said the circulation of the false images was “alarming,” adding that social media platforms “have an important role to play in enforcing their own rules.” She also called on Congress to legislate on the issue.
On Wednesday, social media executives including X’s Linda Yaccarino, Meta’s Mark Zuckerberg and TikTok’s Shou Zi Chew will be questioned at a US Senate Judiciary Committee hearing on online child sexual exploitation, amid growing concerns that their platforms are not doing enough to protect children.
On Friday, X’s official safety account said in a statement that posting “non-consensual nudity (NCN) images” is “strictly prohibited” on the platform, and that X has a “zero-tolerance policy toward such content.”

It added: “Our teams are actively removing all identified images and taking appropriate action against the accounts that posted them. We’re closely monitoring the situation to ensure that any further violations are immediately addressed and the content is removed.”
But X’s depleted content-moderation resources failed to stop the fake Swift images from being viewed millions of times before they were taken down, leaving the company with little choice but to block searches for one of the world’s biggest stars.
According to a report by technology news site 404 Media, the images appeared to originate from a group on the anonymous bulletin board 4chan and the messaging app Telegram that specializes in sharing abusive AI-generated images of women, often created using Microsoft tools. Telegram and Microsoft did not respond to requests for comment.