When you read a product review on Amazon, browse the comments section of a CNN article, or get angry at a provocative tweet, can you be sure that the person on the other side of the screen is actually a real, living human being?
Absolutely not.
According to a recent report from Imperva, bots account for 47% of all internet traffic, with “bad bots” alone making up roughly 30% of all traffic – a staggering statistic that threatens the trust on which the open web is built.
But even if the user is human, there’s a good chance the account is being operated under a fake identity, meaning “fake users” are now just as prevalent online as real users.
Israel is no stranger to the risk of bot campaigns: since October 7, a massive misinformation campaign orchestrated by bots and fake accounts has manipulated public opinion and policymakers.
Monitoring online activity during the war, The New York Times found that “one day after the conflict began, roughly a quarter of accounts on Facebook, Instagram, TikTok and X (formerly Twitter) that posted about the conflict appeared to be fake… Within 24 hours of the Al Ahli Arab Hospital explosion, more than a third of accounts posting about it on X were fake.”
With 82 countries holding elections in 2024, the risk of bots and fake users has reached crisis levels: last week, OpenAI disabled the accounts of an Iranian group that was using ChatGPT to generate content aimed at influencing the US elections.
Election Influence and the Wider Impact of Bots
As Rwanda prepares for elections in July, researchers at Clemson University found 460 accounts spreading AI-generated messages in support of incumbent President Paul Kagame on X. And in the past six months alone, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) has identified influence campaigns targeting protesters in Georgia and spreading confusion about the death of an Egyptian economist, both run by fake X accounts.
While bots and fake users pose a threat to national security, online businesses also pay a heavy price.
Imagine a business where 30-40% of overall digital traffic is generated by bots and fake users. The result is a cascade of problems: distorted data that leads to poor decision-making, a muddled view of customer funnels and website analytics, sales teams chasing false leads, and developers building products for fictitious demand.
The impact is enormous: a study by CHEQ.ai, a go-to-market security platform and Key1 portfolio company, revealed that more than $35 billion in ad spend was wasted in 2022 alone, resulting in more than $140 billion in lost potential revenue.
Ultimately, fake users and bots undermine the very foundation of modern business, creating distrust in data, results, and in some cases, across teams.
The introduction of generative AI will only add fuel to the fire of the fake web. The technology will “democratize” the ability to create bots and fake identities, lowering the barriers to attack, increasing attackers’ sophistication, and vastly expanding their reach.
The seriousness of this problem cannot be overstated, but can anything be done to minimize the enormous economic, geopolitical and social damage?
Now is the time for a global response to take back control of the internet and rebuild trust.
Education is critical in combating the fake-web epidemic. Raising awareness of the tactics of bots and fake accounts helps society recognize and mitigate their impact. Understanding the distinctive signs of fake users – incomplete profiles, generic information, repetitive phrases, abnormally high activity levels, shallow content, and limited engagement – is a critical first step. But as bots grow more sophisticated, spotting them grows harder, highlighting the need for continued education and vigilance.
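To make those signals concrete, here is a minimal, purely illustrative sketch in Python of how such heuristics might be combined into a simple risk score. Every signal name, weight, and threshold below is a hypothetical assumption for the sake of the example, not a description of any particular detection tool.

```python
# Illustrative only: a toy rule-based "fake account" risk score built from the
# signals described above. Signal names, weights, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    profile_completeness: float       # 0.0 (empty) to 1.0 (fully filled in)
    repeated_phrase_ratio: float      # share of posts that reuse the same phrasing
    posts_per_day: float              # average posting rate
    avg_post_length: int              # characters per post (proxy for shallow content)
    replies_received_per_post: float  # proxy for genuine engagement

def fake_account_risk(s: AccountSignals) -> float:
    """Return a 0-1 risk score; higher means more bot-like."""
    score = 0.0
    if s.profile_completeness < 0.3:       # incomplete or generic profile
        score += 0.25
    if s.repeated_phrase_ratio > 0.5:      # repetitive phrasing
        score += 0.25
    if s.posts_per_day > 100:              # abnormally high activity
        score += 0.25
    if s.avg_post_length < 40:             # consistently shallow content
        score += 0.15
    if s.replies_received_per_post < 0.1:  # little genuine engagement
        score += 0.10
    return min(score, 1.0)

# Example: a sparse profile posting the same short message hundreds of times a day
suspect = AccountSignals(0.1, 0.8, 400, 25, 0.02)
print(f"risk score: {fake_account_risk(suspect):.2f}")  # -> risk score: 1.00
```

In practice, no single threshold like these is reliable on its own; real detection systems weigh many more signals and must adapt them constantly as bot behavior changes.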
Moreover, public policies and regulations must be put in place to restore trust in the digital environment. For example, governments can and should mandate that large social networks implement best-of-breed bot mitigation tools to crack down on fake accounts.
Striking the right balance between the freedom of these networks, the integrity of the information posted, and the harm that may be caused is not easy, but establishing these boundaries is essential to maintaining the longevity of these networks.
Businesses have developed a variety of tools to mitigate and block invalid traffic, ranging from basic bot mitigation solutions that prevent distributed denial-of-service (DDoS) attacks to specialized software that protects APIs from bots attempting to steal data.
More advanced bot mitigation solutions employ sophisticated algorithms that run real-time tests to verify traffic integrity: they analyze account behavior, interaction levels, hardware characteristics, and the use of automation tools, detect non-human behavior such as abnormally rapid typing, and scrutinize email and domain history.
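As a rough illustration of what one such real-time test could look like, the sketch below (again in Python, with hypothetical field names, thresholds, and automation markers) flags a session that self-identifies as an automation tool, types at machine speed, or shows no pointer activity at all. It is a simplified, assumption-laden example, not a description of how any specific vendor’s product works.

```python
# Illustrative only: a toy real-time session check inspired by the signals above.
# Field names, thresholds, and the list of automation markers are hypothetical.
from dataclasses import dataclass, field

KNOWN_AUTOMATION_MARKERS = {"headlesschrome", "phantomjs", "selenium", "puppeteer"}

@dataclass
class SessionEvent:
    user_agent: str
    keystroke_intervals_ms: list[float] = field(default_factory=list)  # time between keystrokes
    mouse_moves: int = 0          # pointer events observed during the session
    webdriver_flag: bool = False  # browsers expose navigator.webdriver when automated

def looks_automated(ev: SessionEvent) -> bool:
    """Return True if the session shows non-human behavior."""
    ua = ev.user_agent.lower()
    if ev.webdriver_flag or any(marker in ua for marker in KNOWN_AUTOMATION_MARKERS):
        return True                       # self-identified automation tooling
    if ev.keystroke_intervals_ms:
        avg_gap = sum(ev.keystroke_intervals_ms) / len(ev.keystroke_intervals_ms)
        if avg_gap < 15:                  # humans rarely type this fast and this evenly
            return True
    if ev.mouse_moves == 0 and len(ev.keystroke_intervals_ms) > 50:
        return True                       # lots of typing, no pointer activity at all
    return False

# Example: a "user" that fills a form at machine speed with no mouse movement
bot_session = SessionEvent("Mozilla/5.0 HeadlessChrome", [5.0] * 60, mouse_moves=0)
print(looks_automated(bot_session))  # -> True
```

Commercial tools layer many such checks and score them together, precisely because any single signal can be spoofed by a determined attacker.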
While AI is contributing to the bot problem, it is also proving to be a powerful tool in fighting bots. AI’s enhanced pattern recognition capabilities can distinguish legitimate bots from malicious bots more accurately and quickly. Companies like CHEQ.ai are leveraging AI to help marketers ensure their ads reach human users and are placed in safe, bot-free environments, effectively combating the growing threat of bots in digital advertising.
From national security to corporate integrity, the impacts of the fake internet are far-reaching and dire. But there are effective ways to mitigate the problem that deserve renewed attention from public and private organizations. By raising awareness, strengthening regulations, and implementing proactive protections, we can all contribute to a more accurate and much safer internet.
The author is Co-founder and Partner at Key1 Capital.