Instead, the document amounts to a manifesto stating that AI-generated content, much of which is created with the companies' own tools and posted on their platforms, poses a risk to fair elections, and it outlines steps to reduce that risk: labeling suspected AI content and educating the public about the dangers of AI.
“The intentional and undisclosed generation and distribution of deceptive AI election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the agreement states.
AI-generated media, or “deepfakes,” have been around for several years. But their quality has improved rapidly over the past year, to the point where some fake videos, images, and audio recordings are difficult to distinguish from the real thing. The tools to create them are also now widely available, making such fakes far easier to produce.
AI-generated content is already appearing in election campaigns around the world. Last year, an ad supporting Republican presidential candidate Ron DeSantis used AI to imitate former President Donald Trump’s voice. In Pakistan, former Prime Minister Imran Khan used AI to deliver a campaign speech from prison. In January, robocalls featuring an AI-generated imitation of President Biden’s voice urged people not to vote in the New Hampshire primary.
Tech companies are under pressure from regulators, AI researchers, and political activists to curb the spread of fake election content. The new agreement resembles a voluntary pledge that the same companies, along with several others, signed after a White House meeting in July, promising to work to identify and label fake AI content on their sites. Under the new agreement, the companies also pledge to educate users about deceptive AI content and to be transparent about their efforts to identify deepfakes.
Tech companies already have their own policies on AI-generated political content. TikTok prohibits AI-generated fakes of public figures when they are used for political or commercial endorsements. Meta, the parent company of Facebook and Instagram, requires political advertisers to disclose whether they used AI in ads on its platforms. YouTube requires creators to label realistic-looking AI-generated content when they post it to the Google-owned video site.
Still, attempts to build widespread systems to identify and label AI content across social media have yet to materialize. Google has showcased its “watermarking” technology but does not require customers to use it. Adobe, the maker of Photoshop, has positioned itself as a leader in curbing deceptive AI content, yet its stock photo site was recently found to be filled with fake images of the war in Gaza.