Students at a Beverly Hills middle school used artificial intelligence technology to create fake nude photos of their classmates, according to school administrators. Now the community is grappling with the fallout.
School officials at Beverly Vista Middle School were informed last week of students’ “AI-generated nude photos,” the district’s superintendent said in a letter to parents. The superintendent told NBC News that the photos showed students’ faces superimposed on nude bodies. The district did not say how it determined the photos were created using artificial intelligence.
“It’s very scary because people don’t feel safe coming to school,” a student at Beverly Vista Middle School, who did not want to be identified, told KNBC in Los Angeles. “They’re afraid of other people showing off things like explicit photos.”
Lt. Andrew Myers of the Beverly Hills Police Department told NBC News that police responded to a call late last week from the Beverly Hills Unified School District regarding the incident. A non-criminal investigation is currently underway, Myers said. Because the investigation involves a juvenile, Myers said he could not share any further information.
The Beverly Vista incident follows a series of similar episodes at schools around the world in which students created and shared AI-generated nude photos of female classmates. In January, a teenage victim from New Jersey told her story to members of Congress in Washington, D.C., advocating for a federal law that would criminalize all nonconsensual sexually explicit deepfakes. No such federal law currently exists.
In a letter to parents obtained by NBC News, Beverly Hills Unified School District Superintendent Dr. Michael Breggie characterized the deepfake incident as part of the “disturbing and unethical use of AI that is plaguing the nation.”
“We urge Congress, the federal government, and state governments to take immediate and decisive action to protect children from the potential dangers of unregulated AI technology,” Breggie wrote. “We call for the creation and enforcement of laws that not only punish perpetrators to deter future actions, but also tightly regulate evolving AI technologies to prevent abuse.”
Breggie told NBC News that the school district will discipline the offending students according to district policy. For now, he said, those students have been removed from the school pending the results of the district’s investigation. Once the investigation concludes, Breggie said, the students would face punishments ranging from suspension to expulsion, depending on their level of involvement in the creation and distribution of the images. Outside the school district, however, the path to redress for student victims is less clear.
In 2020, California passed a law allowing victims of nonconsensual sexually explicit deepfakes to sue those who created and distributed the material. If the perpetrator is found to have acted maliciously, the plaintiff can seek damages of up to $150,000. It is unclear whether any victim has ever been awarded damages under the law.
Mary Ann Franks, director of the Cyber Civil Rights Initiative and a professor at George Washington University Law School, said that based on the information currently available about the case, it is not clear that California law explicitly covers the incident at Beverly Vista Middle School. Not all nude depictions of children are legally considered pornographic, she said, so without detailed information about what the photos depict, their legal status is uncertain.
“California civil litigation may apply here, but it’s always difficult for victims to identify the perpetrator, get the legal help they need, and actually pursue a lawsuit,” Franks said.
“It’s hard for students to think about what justice means to them,” she continued. “The problem with image-based abuse is that once the material is created and out into the world, these images can continue to circulate forever, even if we punish those who created the material.”
The technology for creating fake nude images has rapidly become more sophisticated and accessible over the past few years, as high-profile incidents involving celebrities, such as the Taylor Swift deepfakes that made headlines in January, have shown. Apps and websites now let users superimpose a victim’s face onto pornographic content or digitally “undress” a photo.
The law is not always enforced in deepfake sexual abuse cases involving underage perpetrators and victims.
Digital news organization 404 Media investigated a 2023 incident at a high school in Washington state, reporting, based on police documents, that administrators learned students had created AI-generated nude photos of classmates using their Instagram photos. A Washington police report described the incident as a possible sex crime against children, but administrators attempted to handle the situation internally before multiple parents filed police reports. After police investigated the incident, prosecutors declined to file charges against the perpetrators.
“I hope that lawmakers begin to recognize that while civil penalties may be helpful for certain victims, they are only a partial solution,” Franks said. “What we should focus on is deterrence. This is unacceptable behavior and should be punished accordingly.”