In commemoration of Safer Internet Day in February 2024, we must reiterate that without freedom there is no safety, online or offline. This is especially true now that the adversaries of both are equipped with the most powerful tool of cyber repression ever created: artificial intelligence (AI).
AI as a tool of oppression and deception
Internet freedom has declined globally for 13 consecutive years, according to the nonprofit organization Freedom House’s annual report. The title of the report’s latest edition, “The Repressive Power of Artificial Intelligence,” captures its central finding: governments around the world have been using AI to restrict free speech and suppress dissent.
This oppression is both direct and indirect. Directly, AI models greatly enhance the detection and removal of prohibited speech online: dissenting opinions cannot spread if they are blocked the moment they appear. AI-based facial recognition can also identify protesters, making it risky for them to appear in images shared on social media.
AI indirectly advances repressive goals by spreading misinformation. Two factors play an important role here. First, chatbots and other AI-based tools can cheaply automate the distribution of large amounts of false information across platforms. Second, AI tools can generate fake images, videos, and audio that distort reality. Even when identified as false, these fabrications foster a general distrust of publicly available information, and that mistrust prevents people from acting collectively.
Threats to a safer internet
Government powers to monitor and suppress online activity, enhanced by AI, also directly threaten personal safety. Opposition leaders and ordinary citizens who voice dissenting views may be subjected to cyberbullying and vilification. And by automating the tracking and identification of people online, repressive regimes can make them disappear with chilling efficiency.
Moreover, individuals and organizations, whether private or public, that become caught up in conflicts are targeted by nation-state cyberattacks. Such attacks will be further enhanced by new advances in AI and could become even more dangerous and harmful. It is thus easy to see how AI-powered surveillance undermines freedom and security at the same time.
However, threats to online safety do not come only from powerful actors. The Safer Internet Day initiative addresses the many ways individuals threaten each other on the internet, from cyberbullying to identity theft. AI tools are now readily available to virtually all internet users, and some of their uses are particularly alarming.
CSAM is on the rise
It is bad enough that AI technology is used to create sexually explicit deepfakes of adults, whether by governments and individuals seeking to discredit and damage their targets or for personal gratification. It is far worse when the technology is used to create child sexual abuse material (CSAM).
AI-generated CSAM and other explicit content is already circulating online. The fact that such material can now be created with a simple prompt poses an unprecedented challenge to law enforcement and other agencies fighting for a safer internet. First, resources to remove such content from websites are already insufficient, and its expected spread will only worsen the situation.
Second, investigating genuinely new cases of child abuse and tracking active abusers becomes more complex. The difficulty of distinguishing known fake or manipulated content from newly surfaced depictions of actual child exploitation adds yet another hurdle. And when the material does not depict a real child, it raises unresolved legal questions about how its creation and possession should be treated.
Finally, manipulating photos of fully clothed minors to create hyper-realistic sexualized versions opens an entirely new avenue of child exploitation. This is a devastating blow to campaigns for a safer internet.
Reversing the tide: Improving the Internet with AI
Concerns about a flood of AI-generated CSAM are fueling support for an EU bill that would require messaging platforms to scan private messages for CSAM and grooming activity. The proposal has also drawn criticism from those who fear that such measures would push the EU toward the kind of repressive surveillance seen in other regions.
Although solutions that balance privacy and security in this area are still up for debate, organizations can already take protective measures on the public internet. What makes AI dangerous here is its speed and scale: it automates content creation and many other tasks that would otherwise require significant time and resources. The answer is to turn that same AI-driven automation to our advantage. In fact, it has already been done.
Before the wave of AI-generated CSAM threatened the internet, the Lithuanian Communications Regulatory Authority (RRT) was already using AI-powered tools to remove genuine CSAM from websites. As part of Project 4β, Oxylabs developed this tool free of charge to automate the RRT’s tasks and improve its results.
Surfshark researchers used data from this project to estimate that more than 1,700 websites in the EU may contain unreported CSAM. Their analysis shows how much work remains for automated scanning solutions on the public internet.
This is where AI can be used to improve both the freedom and safety of the internet. To promote its use as a tool for good, we as a society can:
- Continue to improve AI-based web scraping tools so they detect and accurately identify CSAM.
- Invest in training convolutional neural networks (CNNs) to build AI models that efficiently distinguish real imagery from AI-generated fakes (see the sketch after this list).
- Provide investigative journalists with AI-based data collection and analysis tools to extract and report information hidden by repressive governments.
- Explore the potential of AI as a cybersecurity tool, with a focus on exposing fake news while protecting data that can be used to identify individuals.
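To make the second point concrete, below is a minimal sketch of such a real-vs-fake image classifier in PyTorch. Everything here is illustrative rather than a description of any deployed system: the class name RealVsFakeCNN, the real/ and fake/ training-data layout, and all hyperparameters are assumptions, and a production-grade detector would need a large curated dataset and a far more capable architecture.

```python
# Minimal sketch: a small CNN that classifies images as real or AI-generated.
# Assumes PyTorch and torchvision are installed, and that training images are
# laid out as <root>/real/*.jpg and <root>/fake/*.jpg (a hypothetical layout
# that torchvision's ImageFolder maps to class indices automatically).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class RealVsFakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Three conv/pool stages: 128x128 input shrinks to 16x16 feature maps.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, 2),  # two logits: real vs. AI-generated
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train(root: str, epochs: int = 5) -> RealVsFakeCNN:
    tfm = transforms.Compose([
        transforms.Resize((128, 128)),
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder(root, transform=tfm)
    loader = DataLoader(data, batch_size=32, shuffle=True)
    model = RealVsFakeCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
    return model
```

Binary classification like this is only a starting point; real deepfake detectors typically also exploit compression artifacts, frequency-domain features, and provenance metadata.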
Of course, this is just the beginning. Other ways in which AI can enhance cybersecurity will become apparent as the field continues to evolve.
Summary
In the face of these threats, we tend to forget that AI itself is neither good nor bad. It does not have to oppress or endanger us. We can develop it to protect us both online and offline.
Similarly, internet freedom doesn’t have to make us any less secure. Safety and freedom are not mutually exclusive. Therefore, there is no need to sacrifice one for the other. When the balance is right, freedom makes us safer and security liberates us.
Julius Černiauskas is the CEO of Oxylabs