This week, an AI deepfake of Taylor Swift went viral on social media. Fernando Leon/TAS23/TAS Copyright Management Getty Images
The White House said Friday that AI-generated sexually explicit images of pop star Taylor Swift are alarming and that Congress should consider legislation to address fake and abusive images circulating online.
White House press secretary Karine Jean-Pierre said social media networks also need to do more to prevent the images from spreading.
“It’s alarming,” Jean-Pierre told reporters. “So while social media companies make their own independent decisions about content moderation, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual intimate images of real people.”
In recent weeks, sexually explicit fake images of Swift have flooded social media platform X, formerly known as Twitter, racking up millions of views and tens of thousands of reposts. There have also been instances where images of Swift and other celebrities have been manipulated to appear as if they were endorsing commercial products.
X said it has removed the images and is taking action against the accounts involved in spreading them. The controversy has also sparked bipartisan calls from members of Congress for new safeguards.
Jean-Pierre said President Joe Biden is working with artificial intelligence companies on a voluntary effort to watermark AI-generated images so they can more easily be identified as fake. The Biden administration has also appointed a task force to combat online harassment and abuse, and the Justice Department has established a hotline for victims of image-based sexual abuse.