Jenna Ortega said she deleted her account on X, then known as Twitter, after the “horrifying” experience of encountering AI-generated pornographic images of herself on the platform when she was a minor.
In an interview with The New York Times published on Saturday, the 21-year-old actress reflected on how she feels about artificial intelligence.
“I hate AI,” she told the Times, noting that while AI “has the potential to be used for amazing things,” it is also being used for malicious purposes online.
“Did I enjoy, at 14, creating a Twitter account because I had to and seeing salacious edited content of myself as a child? No,” she said. “It’s horrifying. It’s corrupting. It’s wrong.”
The “Beetlejuice” star said that when she was 12 years old, the first direct message she opened on her Twitter account was a photo of a man’s genitals.
“And that was just the beginning of what was to come,” she said. “I only had a Twitter account because they said, ‘Oh, you have to do it, you have to build your image.'”
After the release of “Wednesday” in 2022, the influx of “absurd images and photos” became so severe that Ortega decided to delete her account altogether.
“It was gross, it made me feel bad, it made me uncomfortable,” she said. “I just couldn’t say anything without seeing it. Then I woke up one day and I was like, ‘Oh, I don’t need this anymore,’ so I deleted it.”
A spokesperson for Ortega did not immediately respond to a request for comment, nor did X.
Ortega told the Times that she is still learning how to protect herself.
The actress’s concerns are driven in large part by the rapidly growing number of AI tools now available to the public, which have fueled a widening pattern of non-consensual AI-generated deepfakes being created and distributed online.
Sophisticated apps and programs that “undress” or “nudify” photos, as well as “face swap” tools that superimpose a victim’s face onto pornographic content, overwhelmingly target women and girls.
According to an independent study by deepfake analyst Genevieve Oh and deepfake victim advocacy group MyImageMyChoice, the number of non-consensual sexually explicit deepfake videos posted online in 2023 exceeded all other years combined. The same study found that Ortega was one of the 40 female celebrities most targeted on the largest deepfake websites.
Earlier this year, an app that claimed to use AI to strip women of their clothes ran several ads online featuring doctored and blurred images of Ortega taken when she was 16.
The ads showed the app, called “Perky AI,” changing Ortega’s outfit in photos based on text prompts such as “latex costume,” “Batman underwear” and finally “no clothing.”
Xochitl Gomez, the teen actress known for her role as America Chavez in “Doctor Strange in the Multiverse of Madness,” said in January, when she was 17, that she had found a sexually explicit deepfake video of herself on social media, posted without her consent, and that her team was unable to get it removed.
That same month, Taylor Swift was also targeted by the technology, after deepfakes of the pop star in nude or sexually explicit scenes went viral on X, forcing the platform to temporarily make her name unsearchable.
But it’s not just celebrities who are being targeted: Over the past year, teenage girls in the US have increasingly become victims of fake nude photos created with AI. While some states have enacted laws targeting deepfake pornography, the path to legal recourse remains uneven across the country.
A California middle school expelled five students in March after they used generative AI to create and share fake nude images of their classmates, sparking fear among families across the school district. A similar uproar occurred at a New Jersey high school last year after AI-generated deepfake images of students were circulated.