A charity that supports people struggling with their own thoughts and behaviour says a growing number of callers feel confused about the ethics of viewing AI-generated images of child abuse.
The Lucy Faithfull Foundation (LFF) says AI images are acting as a gateway.
The charity warns that even if the children are not real, it is still illegal to create or view such images.
Neil (not his real name) contacted the helpline after being arrested for creating AI images.
The 43-year-old denied any sexual attraction to children.
An IT worker, he had used AI software to create indecent images of children from text prompts, and said he would never look at such images of real children because he did not find them appealing. He claimed to be simply fascinated by the technology.
He called the LFF to argue his case, but helpline staff reminded him that his actions were illegal, regardless of whether the children depicted were real or not.
The charity said it had received similar calls from others expressing confusion.
Another caller contacted the charity after learning that her 26-year-old partner had been looking at indecent AI images of children. He insisted the images were not serious because they were “not real”, but then asked for help.
A teacher sought the charity’s advice because her 37-year-old partner had been viewing images that she feared might be illegal, though neither of them was sure whether they were.
Donald Findlater of the LFF said some callers to its confidential Stop It Now helpline see AI images as blurring the line between what is illegal and what is morally wrong.
“This is a dangerous view. Some criminals believe that because children are not being harmed, it is somehow okay for them to create or view this material. But this is wrong,” he says.
In some cases, real abuse images can be mislabelled or promoted as AI-generated, and the growing realism of AI images is making it increasingly difficult to tell the difference.
Findlater says that for people convicted of sex crimes, deviant sexual fantasies are the strongest predictor of recidivism.
“Fuelling such deviant fantasies increases the potential for harm to children,” he says.
The charity says the number of callers citing AI images as a factor in their offending is still small but growing. It is urging society to recognise the problem and calling on lawmakers to make it harder for child sexual abuse material (CSAM) to be created and published online.
The charity would not name the specific sites where the images were found, but one popular AI art website has been accused of allowing users to publish sexually graphic images of very young-looking models. When the BBC approached Civit.ai about the issue in November, the company said it took the possibility of CSAM on its site “very seriously” and that it asks its community to report images believed to depict underage characters or real people in a photorealistic context.
The LFF also warned that young people are creating CSAM without realising the seriousness of the offence. One caller, for example, was concerned about her 12-year-old son, who had used an AI app to create inappropriate topless images of a friend and had searched online for terms such as “naked teen”.
Criminal proceedings have recently begun in Spain and the United States against boys who used “undressing” apps to create naked images of their school friends.
In the UK, Graeme Biggar, head of the National Crime Agency, said in December that he wanted tougher penalties for offenders in possession of child abuse images, because viewing such images, whether real or AI-generated, materially increases the risk that offenders will go on to sexually abuse children themselves.
Some contributors have asked that their names not be used in this article.