Lawmakers and activists are calling for federal legislation to criminalize AI-generated pornography, which they say is being used to ruin the lives of its victims, many of whom are women and girls.
“In the absence of clear laws at the federal and state levels, when victims go to police, they are often told there is nothing they can do,” Andrea Powell, executive director of the advocacy group Image-Based Sexual Violence Initiative, said at a recent online roundtable on the issue hosted by the nonprofit National Organization for Women (NOW).
“These individuals have subsequently received offline threats of sexual violence and harassment, and unfortunately, for some [victims] it’s not survivable,” added Powell, who called AI deepfake nude apps a “virtual gun” for men and boys.
The term “deepfake” dates to late 2017, when open-source face-swapping technology built on Google’s tools was first used to create pornographic videos. Since ChatGPT brought generative AI into the mainstream, AI-generated sexually explicit content has proliferated. Tech companies are racing to develop better AI photo and video tools, and some people are misusing them. According to Powell, a Google search surfaces some 9,000 websites hosting explicit deepfake abuse, and between 2022 and 2023, deepfake sexual content online increased by over 400%.
“We’re starting to see 11- and 12-year-old girls being afraid to use the internet,” she said.
Deepfake regulation varies by state. So far, ten states have enacted deepfake laws, including some with criminal penalties. Deepfake bills are also pending in Florida, Virginia, California, and Ohio, and this week in San Francisco, a groundbreaking lawsuit was filed against 16 deepfake porn websites.
But advocates say that the lack of consistency across state laws creates problems, that federal regulation is long overdue, and that platforms, not just individuals, should be held liable for non-consensual deepfakes.
Some federal policymakers are working on the issue. Rep. Joe Morelle (D-NY) introduced the Preventing Deepfakes of Intimate Images Act in 2023, a bill that would criminalize the non-consensual distribution of deepfakes. Shortly after deepfake nudes of Taylor Swift spread across the internet, lawmakers introduced the DEFIANCE Act, which would strengthen victims’ civil right to sue; and a bipartisan bill called the Intimate Privacy Protection Act would hold tech companies liable if they fail to address deepfake nudes on their platforms.
Meanwhile, victims and advocates are taking matters into their own hands. Breeze Liu was working as a venture capitalist when she became the target of deepfake sexual abuse in 2020. She went on to build an app called Alecto AI, which helps people track down and remove deepfake content that uses their likeness online.
Reflecting on her experience as a victim of deepfake abuse, Liu said in an online meeting with supporters: “It was such a horrible experience that I thought it would be better if I were dead.”
“We have struggled with the violation of our image online for far too long,” she added, “and I founded this company in the hope that one day we can all, and our future generations, take for granted that no one will lose their life to online violence.”
In addition to Alecto AI, Liu has also advocated for federal policy changes to criminalize non-consensual AI deepfake pornography, such as Rep. Morelle’s 2023 bill; however, the Preventing Deepfakes of Intimate Images Act has not made progress since being introduced last year.
Notably, some tech companies, facing pressure to mitigate non-consensual deepfake content, are already taking steps to address the issue. Google updated its policies on July 31st, and in late July, Meta’s Oversight Board said the company needed to do more to address explicit AI-generated content on its platforms.