LONDON — The rise of artificial intelligence is amplifying the threat of election disinformation around the world, making it possible for anyone with a smartphone and a devious imagination to create false but convincing content aimed at deceiving voters.
This represents a quantum leap from just a few years ago, when creating fake photos, videos or audio clips required teams of people with time, technical skill and money. Today, free and low-cost generative artificial intelligence services from companies like Google and OpenAI allow anyone to create high-quality “deepfakes” with just a simple text prompt.
A wave of AI deepfakes related to elections in Europe and Asia has been circulating on social media for months, serving as a warning to the more than 50 countries heading to the polls this year.
“You don’t have to look very far to see that there are people who are clearly confused about whether something is real or not,” said Henry Ajder, a leading generative AI expert based in Cambridge, England.
Ajder, who runs a consulting firm called Latent Space Advisory, says the question is no longer whether AI deepfakes will influence elections, but how much.
As the US presidential election heats up, FBI Director Christopher Wray recently warned of a growing threat, saying generative AI will make it easier for “foreign adversaries to engage in malign influence.”
AI deepfakes can be used to smear or soften a candidate’s image, steer voters toward or away from candidates, or even persuade them to avoid voting altogether. But perhaps the biggest threat to democracy, experts say, is that the proliferation of AI deepfakes could erode the public’s trust in what it sees and hears.
Recent examples of AI deepfakes include:
— A video of Moldova’s pro-Western president supporting a Russia-friendly party.
— An audio clip of Slovakia’s liberal party leader discussing voter fraud and beer price increases.
— A video of an opposition lawmaker in conservative, Muslim-majority Bangladesh wearing a bikini.
The novelty and sophistication of this technology make it difficult to trace who is behind an AI deepfake. Experts say governments and companies have so far been unable to stop the deluge and are not moving quickly enough to fix the problem.
As the technology advances, “it’s going to be difficult to get clear answers about a lot of fake content,” Ajder said.
Trust is damaged
Some AI deepfakes aim to sow doubts about a candidate’s loyalty.
In Moldova, an Eastern European country that borders Ukraine, pro-Western President Maia Sandu is a frequent target. One AI deepfake that circulated shortly before local elections depicted her endorsing a pro-Russian party and announcing plans to resign.
Moldovan authorities believe the Russian government is behind the activity. Ahead of this year’s presidential election, Sandu adviser Olga Rosca said the purpose of the deepfakes was to “undermine trust in electoral processes, candidates and institutions, but also trust among people.” The Russian government declined to comment on the matter.
China has also been accused of weaponizing generative AI for political purposes.
In Taiwan, a self-governed island that China claims as its own, an AI deepfake drew attention earlier this year by raising concerns about U.S. interference in local politics.
The fake video, which circulated on TikTok, showed U.S. Rep. Rob Wittman, vice chairman of the U.S. House Armed Services Committee, promising stronger U.S. military support for Taiwan if the incumbent party’s candidates were elected in January.
Wittman said TikTok, a Chinese-owned company, was being used to spread “propaganda” and accused the Chinese Communist Party of trying to interfere in Taiwan’s politics.
Chinese Foreign Ministry spokesperson Wang Wenbin said the Chinese government does not comment on the fake video and opposes interference in other countries’ internal affairs. He emphasized that Taiwan’s elections are “a local issue in China.”
Blurred reality
Audio-only deepfakes are particularly difficult to verify because, unlike photos and videos, there are no obvious signs of manipulated content.
In Slovakia, another country under the shadow of Russian influence, an audio clip resembling the voice of the liberal party leader was widely shared on social media just days before parliamentary elections. The clip appeared to capture him talking about beer price hikes and voter fraud.
Ajder said it was natural for voters to fall for the deception because humans are “much more accustomed to judging with our eyes than we are with our ears.”
In the United States, a robocall impersonating U.S. President Joe Biden urged voters in New Hampshire to abstain from voting in the January primary. The call was later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes.
In poorer countries, where media literacy lags, even low-quality AI fakes can be effective.
That was the case in Bangladesh last year, when opposition lawmaker Rumeen Farhana, a vocal critic of the ruling party, was falsely depicted wearing a bikini. The video went viral, sparking outrage in the conservative, Muslim-majority country.
“They trust everything they see on Facebook,” Farhana said.
Experts are particularly concerned about the upcoming elections in India, the world’s largest democracy, where social media platforms have been a breeding ground for disinformation.
Challenge to democracy
Some political campaigns are using generative AI to enhance candidates’ images.
In Indonesia, the team behind Prabowo Subianto’s presidential campaign introduced a simple mobile app to forge deeper connections with supporters across the vast island nation. The app lets voters upload a photo and create an AI-generated image of themselves alongside Subianto.
As the variety of AI deepfakes increases, authorities around the world are scrambling to develop guardrails.
The European Union already requires social media platforms to reduce the risk of spreading disinformation and “election manipulation.” Special labeling of AI deepfakes will be mandated starting next year, too late for June’s EU parliamentary elections. Still, the rest of the world lags far behind.
The world’s largest technology companies recently signed a voluntary agreement to prevent AI tools from interfering with elections. For example, the company that owns Instagram and Facebook has announced that it will begin labeling deepfakes that appear on its platforms.
But deepfakes are harder to rein in on apps like the chat service Telegram, which did not sign the voluntary agreement and uses encrypted chats that are difficult to monitor.
Some experts worry that efforts to curb AI deepfakes could have unintended consequences.
Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington, said even well-intentioned governments and businesses risk trampling the “very thin” line between political commentary and “an illegal attempt to smear a candidate.”
Major generative AI services have rules meant to limit political disinformation. But experts say it remains too easy to circumvent those restrictions or to use alternative services that lack similar safeguards.
Even without malicious intent, the increasing use of AI is problematic. Many popular AI-powered chatbots are still spewing out false and misleading information that can disenfranchise voters.
And software isn’t the only threat. Candidates could also try to mislead voters by claiming that real events casting them in an unflattering light were fabricated by AI.
“A world in which everything is questionable, a world in which everyone can choose what to believe, is also a very difficult world for a thriving democracy,” said Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.
__
Swenson reported from New York. Associated Press writers Julhas Alam in Dhaka, Bangladesh; Krutika Pathi in New Delhi; Huizhong Wu in Bangkok; Edna Tarigan in Jakarta, Indonesia; Dake Kang in Beijing; and Stephen McGrath in Bucharest, Romania, contributed to this report.
__
The Associated Press receives support from several private foundations to enhance our explanatory coverage of elections and democracy. Learn more about AP’s Democracy Initiative. AP is solely responsible for all content.