The CEOs of five social media companies, including Meta, TikTok and X (formerly Twitter), were grilled by senators on Wednesday about how they prevent online child sexual exploitation.
The Senate Judiciary Committee convened to hold the CEOs accountable for failing to prevent abuse of minors and to ask whether they support legislation proposed by committee members to address the issue.
The problem is only getting worse: reports of child sexual abuse material (CSAM) reached an all-time high of more than 36 million last year, according to the National Center for Missing and Exploited Children, as reported by the Washington Post. The center’s CyberTipline, the U.S.’s centralized system for reporting online CSAM, was alerted to more than 88 million files in 2022, with almost 90% of reports coming from outside the country.
Meta’s Mark Zuckerberg, TikTok’s Shou Zi Chew, and X’s Linda Yaccarino appeared alongside Snap’s Evan Spiegel and Discord’s Jason Citron to answer questions from the Senate Judiciary Committee. Zuckerberg and Chew appeared voluntarily, but the committee had to subpoena Spiegel, Citron, and Yaccarino.
Sen. Richard Durbin, a Democrat from Illinois and the committee chairman, opened the hearing with a video showing victims of online sexual exploitation. Among them were families of children who committed suicide after being targeted by online predators.
Sen. Lindsey Graham of South Carolina, the committee’s Republican ranking member, told attendees about Gavin Guffey, the 17-year-old son of South Carolina state Rep. Brandon Guffey. Gavin committed suicide after being sexually extorted on Instagram. “You’ve got blood on your hands,” Graham told the CEOs, specifically mentioning Zuckerberg.
Many members of Congress expressed frustration that social media companies are not doing enough to address the issue and signaled a desire to take action themselves. Over the past year, the Judiciary Committee has held numerous hearings and reported to the Senate floor a number of bills aimed at protecting children online, including the EARN IT Act, which would strip technology companies of civil and criminal immunity from liability under child sexual abuse material laws.
In their testimony, the CEOs explained the steps they take to prevent harm to children online. However, when asked whether they supported the bills reported by the Judiciary Committee, many expressed reluctance.
At one point, Missouri Republican Sen. Josh Hawley asked Zuckerberg if he wanted to apologize to the parents of children affected by online CSAM who were attending the hearing. “I’m sorry for everything you’ve been through,” Zuckerberg said. “It’s terrible. No one should have to go through what your family went through.”
The CEOs emphasized multiple times that their companies are using artificial intelligence to address the problem of online CSAM. In his testimony, Citron highlighted Discord’s acquisition of Sentropy, a company that developed an AI-based content moderation solution. Zuckerberg said 99% of the content Meta removes is automatically detected by AI tools. However, lawmakers and technology leaders did not discuss the role AI is playing in fueling the creation of CSAM.
AI-generated child abuse images
The advent of generative artificial intelligence has raised concerns about harm to children online. Law enforcement officials around the world are scrambling to deal with an onslaught of cases involving AI-generated child sexual abuse material, a phenomenon that is unprecedented in many courts.
On January 10, 17-year-old Marvel actress Xochitl Gomez, who plays teenage superhero America Chavez in the 2022 film Doctor Strange in the Multiverse of Madness, spoke about how difficult it was to scrub AI-generated pornographic images of her from X.
Gomez said on a podcast with actor Taylor Lautner and his wife that her mother and her team have been trying to remove the images without success. “She had a lot of emails coming her way, a lot of things, and she dealt with everything,” she said. “For me, it wasn’t something that was daunting, it was more like, ‘Why is it so hard to take down?’”
Authorities face a complex challenge in containing the spread of AI-generated CSAM as the technology rapidly evolves and tools such as so-called “nudify” apps become easier to access, even by children themselves.
Dan Sexton, chief technology officer at the UK-based Internet Watch Foundation (IWF), said that as AI models become better and more accessible, it will also become harder to crack down on their use for illegal purposes, such as creating CSAM. He says the world urgently needs to agree on a solution. “For each of these potential problems that could happen tomorrow, the longer it takes to find a solution, the more likely it is that it has already happened, and then we’re all left behind, trying to undo the harm that has already been done.”
A growing problem
In most cases, the creation of CSAM of any kind, including with AI, is already widely criminalized. In its latest report, the International Centre for Missing & Exploited Children found that 182 out of 196 countries have legislation sufficient to specifically address or combat CSAM. U.S. federal law, for example, defines CSAM as any visual depiction of sexually explicit conduct involving a minor, including “digital or computer-generated images that are indistinguishable from actual minors” and images that are “created, altered, or modified” to appear to depict an actual, identifiable minor. The laws are even stricter in Ireland, where CSAM is illegal whether or not it depicts a real child.
Some offenders have already been convicted under such laws. In September, a South Korean court sentenced a man in his 40s to two and a half years in prison for using AI to create hundreds of realistic-looking pornographic images of children. Last April, a Quebec judge sentenced a 61-year-old man to three years in prison for using deepfake technology to create a synthetic child sexual abuse video. That same month, in New York, a 22-year-old man pleaded guilty to several charges related to the creation and distribution of sexually explicit images of more than a dozen underage women and was sentenced to six months in prison and 10 years of probation as a registered sex offender.
However, resolving these cases is not always easy. In September, Spain was rocked by an incident in which AI-generated nudes of more than 20 girls aged between 11 and 17 were circulated online. But it took time for law enforcement authorities to determine the criminal liability of the alleged perpetrators, who were themselves minors. Manuel Cancio, a professor of criminal law at the Universidad Autónoma de Madrid, told Time magazine: “If it was a clear case, where everyone knew in which section of the [Spanish] criminal code it fell, they would already have been indicted.” David Wright, director of the UK Safer Internet Centre, also told Time magazine that child protection organizations have received reports of schoolchildren creating and disseminating nude AI-generated images of their peers.
Today’s AI technology can generate sexual abuse material in just a few clicks, whether by using the likeness of an unsuspecting child or by creating images not based on any actual child, even though many developers prohibit the use of their tools for such material. The Stanford Internet Observatory found that some AI models were trained on datasets containing at least 3,000 known images of CSAM, taken from mainstream platforms such as X and Reddit, despite both platforms’ policies prohibiting the posting of such content. Reddit and X did not respond to requests for comment on the report. Sexton said the IWF has also received reports that images of past child victims are being reused to create more CSAM.
David Thiel, chief engineer at the Stanford Internet Observatory, said AI-generated CSAM outpaces the solutions used to track and remove such content. “The visual fingerprinting part becomes very difficult because there’s just a constant flow of new material instead of the known material being recirculated,” Thiel said.
How can we stop the spread of AI CSAM?
AI model developers say their tools have special guardrails to prevent abuse. OpenAI prohibits the use of its image generator DALL-E for sexual images, while Midjourney requires content to be PG-13. Stability AI has updated its software to make it more difficult to create adult content. However, some users have found a way to jailbreak these models, according to Internet safety group ActiveFence. OpenAI leadership called on policymakers to intervene and set parameters for the use of AI models.
Cleaning up all the abusive content that exists on the internet requires massive amounts of computing power and time, which is why tech companies and anti-sex-trafficking organizations like Thorn have developed machine learning technologies to detect, remove, and report CSAM. One such technique is hash matching, a process that lets platforms tag files that are visually similar to known abuse images. Another is the use of classifiers, machine learning tools that indicate the likelihood that a piece of content is CSAM.
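To make the hash-matching idea concrete, here is a minimal illustrative sketch in Python using the open-source Pillow and imagehash libraries. It is not any platform’s or Thorn’s actual pipeline; the stored hash value, the KNOWN_HASHES set, and the MATCH_THRESHOLD constant are hypothetical placeholders.

```python
# Illustrative only: a rough sketch of perceptual hash matching, not any
# platform's or Thorn's actual system. Uses the open-source Pillow and
# imagehash libraries; KNOWN_HASHES and MATCH_THRESHOLD are hypothetical.
from PIL import Image
import imagehash

# Hypothetical set of perceptual hashes of previously confirmed material
# (real systems rely on hash lists maintained by clearinghouses).
KNOWN_HASHES = {imagehash.hex_to_hash("d1c4f0e2a9b37855")}

# Maximum number of differing bits (Hamming distance) still counted as a match.
MATCH_THRESHOLD = 6

def is_visually_similar_to_known(path: str) -> bool:
    """Return True if an image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)
```

Unlike cryptographic hashes, perceptual hashes change only slightly when an image is resized or re-encoded, which is why a small Hamming distance can still flag altered copies of known material. That property is also why, as Thiel notes, entirely new AI-generated images slip past such systems.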
Another solution being explored is restricting access to the technology itself. Proponents of open-source AI models argue that open sourcing fosters collaboration among users, promotes transparency by taking control away from the companies running the models, and democratizes access. Sexton said that while giving everyone access may sound good in principle, it carries risks. “The reality is that the effect we’re seeing of putting such extremely powerful technology in the hands of everyone is putting it in the hands of child sex offenders, putting it in the hands of organized crime. And they will and are creating harmful content out of it.”
But Rebecca Portnoff, head of data science at Thorn, said the debate over access has created a false dichotomy between open-source and closed models, and suggests that the greatest opportunity to prevent the widespread creation of AI-generated CSAM lies with developers. She says developers should focus on building models that are “safety by design” to reduce harm to children, rather than reacting to existing threats after the fact.
Portnoff emphasizes that time is of the essence. “There’s no slowing down,” she says. “It goes back to the concept of using all the tools at hand to properly address this issue. And those tools include the tools that we actually build, as well as collaboration with both regulatory agencies and technology companies.”