As the US presidential election approaches, the internet is awash with photos of Donald Trump and Kamala Harris, from a perfectly timed shot of an assassination attempt to mundane images of crowds at rallies to surprisingly bizarre fakes of the candidates burning flags and holding guns. Some of it actually happened, of course, but generative AI imaging tools are now so good and so readily available that we can no longer trust our own eyes.
Several major players in the digital media industry are working to sort out this mess, and the solution so far is more data, specifically metadata, attached to photos that records what's real, what's fake, and how the fakes were made. One of the best-known systems for this, C2PA authentication, is already backed by companies like Microsoft, Adobe, Arm, OpenAI, Intel, Truepic, and Google. The technical standard provides important information about the provenance of an image, allowing viewers to identify whether it has been manipulated.
“Provenance technologies like Content Credentials offer a promising solution by acting like a nutrition label for digital content, allowing official event photos and other content to carry verifiable metadata like dates and times, and, where appropriate, indicating whether AI was used,” Andy Parsons, C2PA steering committee member and senior director of the Content Authenticity Initiative (CAI) at Adobe, told The Verge. “This level of transparency helps dispel doubts, especially during breaking news and election cycles.”
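To picture what that “nutrition label” holds, here is a minimal sketch, in Python, of the kind of provenance record such a standard attaches to an image. The structure and field names are illustrative stand-ins, not the actual C2PA schema:

```python
# A simplified, illustrative sketch of the provenance data a C2PA-style
# manifest carries. Field names are hypothetical, not the real C2PA schema.
manifest = {
    "claim_generator": "ExampleCamera/1.0",  # hypothetical producer string
    "assertions": [
        {"label": "capture", "data": {"when": "2024-07-13T18:11:00Z",
                                      "camera": "Example Alpha"}},
        {"label": "actions", "data": {"edits": ["cropped", "color_adjusted"]}},
        {"label": "ai_usage", "data": {"generative_ai": False}},
    ],
    # In the real standard this is a cryptographic signature over the
    # assertions, so any later tampering can be detected.
    "signature": "<signature bytes>",
}

def summarize(m: dict) -> None:
    """Print a human-readable summary of the provenance record."""
    for assertion in m["assertions"]:
        print(f"{assertion['label']}: {assertion['data']}")

summarize(manifest)
```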
But if all the information needed to authenticate an image is already embedded in the file, where is that information? And why doesn’t the photo show some kind of “verified” mark when it’s posted online?
The problem is interoperability. There are still big gaps in how the system is implemented, and it will take years to get all the necessary parties on board to make it work. And without unanimous buy-in, the effort may be doomed to fail.
The Coalition for Content Provenance and Authenticity (C2PA) is one of the largest groups trying to address this confusion, along with the Content Authenticity Initiative, which Adobe launched in 2019. The technical standard they developed, which uses cryptographic digital signatures to verify the authenticity of digital media, is already well established. But the benefits of that progress remain frustratingly out of reach for ordinary people who encounter questionable images online.
“It’s important to realize we’re still in the early stages of adoption,” Parsons said. “The specification is final and robust. It’s been reviewed by security experts. There are very few implementations, but that’s the natural progression of a standard being adopted.”
The problem starts with where the image comes from: the camera. Some camera brands, like Sony and Leica, embed a cryptographic digital signature based on the C2PA open technology standard into the photo the moment it’s taken. This signature provides information like the camera settings and the date and location the image was taken.
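Conceptually, this works like any public-key signature scheme: the camera signs the image and its metadata with a private key at capture time, and anyone can later check the result against the corresponding public key. Below is a minimal sketch using Python's cryptography package; real C2PA signing relies on certificate chains and secure hardware, not the bare key pair assumed here:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# In a real camera the private key lives in secure hardware and the public
# key is distributed via a certificate chain; a bare key pair is generated
# here only to illustrate the principle.
camera_private_key = ec.generate_private_key(ec.SECP256R1())
camera_public_key = camera_private_key.public_key()

image_bytes = b"...raw image data..."
metadata = b'{"taken": "2024-07-13T18:11:00Z", "camera": "Example Alpha"}'

# Camera side: sign the image and its metadata together at capture time.
signature = camera_private_key.sign(image_bytes + metadata,
                                    ec.ECDSA(hashes.SHA256()))

# Verifier side: any later change to the pixels or metadata breaks the check.
try:
    camera_public_key.verify(signature, image_bytes + metadata,
                             ec.ECDSA(hashes.SHA256()))
    print("Signature valid: image and metadata unchanged since capture.")
except InvalidSignature:
    print("Signature invalid: the file was modified after signing.")
```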
It is currently supported by only a handful of cameras, both in new models such as the Leica M11-P, and through firmware updates for existing models such as Sony’s Alpha 1, Alpha 7S III, and Alpha 7 IV. Other brands such as Nikon and Canon have also committed to adopting the C2PA standard, but most have not yet done so in any meaningful way. C2PA is also missing from the most accessible cameras for most people: smartphones. Neither Apple nor Google responded to inquiries about implementing C2PA support or a similar standard on iPhones or Android devices.
Even if the camera itself doesn't record this valuable data, important information can be attached during the editing process. Software like Adobe's Photoshop and Lightroom, two of the photography industry's most widely used image editors, can automatically embed it in the form of C2PA-backed Content Credentials. These record when and how an image has been altered, including whether generative AI tools were used, to help flag deceptively manipulated images.
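One way to picture Content Credentials is as an append-only edit log that travels with the file, with each new entry re-signed. The sketch below is a hypothetical illustration of that idea, not Adobe's actual implementation:

```python
from datetime import datetime, timezone

# Hypothetical provenance record carried with the image; the structure is
# illustrative, not the real C2PA manifest format.
provenance = [
    {"action": "captured", "tool": "ExampleCamera/1.0",
     "time": "2024-07-13T18:11:00Z"},
]

def record_edit(history: list, tool: str, action: str, used_ai: bool) -> None:
    """Append an edit entry; in C2PA each addition would be re-signed."""
    history.append({
        "action": action,
        "tool": tool,
        "generative_ai": used_ai,
        "time": datetime.now(timezone.utc).isoformat(),
    })

# An editor records a crop and an AI-assisted fill as separate, auditable steps.
record_edit(provenance, "ExampleEditor/25.0", "cropped", used_ai=False)
record_edit(provenance, "ExampleEditor/25.0", "generative_fill", used_ai=True)

for entry in provenance:
    print(entry)
```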
However, many applications, including Affinity Photo and GIMP, do not yet support any uniform, interoperable metadata solution that could help resolve the authenticity problem, though some members of those software communities have expressed a desire for one, which may push it into consideration. Phase One, developer of the popular professional photo editing software Capture One, told The Verge it is “committed to supporting photographers” affected by AI and is “exploring tracking features such as C2PA.”
Even if a camera supports authenticity data, that doesn't mean viewers will ever see it. The now-iconic photo of Trump pumping his fist after the assassination attempt, and the image that appears to show a bullet streaking through the air toward him, were both taken with C2PA-compliant Sony cameras. But that metadata isn't readily accessible to the public, because the platforms where the images circulated, such as X and Reddit, don't display it when images are uploaded and shared. Even The New York Times does not visibly flag verification credentials after using them to authenticate a photo.
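One mundane reason the data vanishes: most platforms re-encode uploads, and a plain re-save with a typical imaging library silently discards embedded metadata unless the pipeline deliberately preserves it. A small demonstration with the Pillow library, using hypothetical file names:

```python
from PIL import Image

# Open a signed JPEG and re-save it, as a platform's upload pipeline might.
# Pillow writes a fresh file and does not copy EXIF (or C2PA's embedded
# manifest segments) unless told to, so the provenance data is simply gone.
original = Image.open("signed_photo.jpg")   # hypothetical input file
original.save("reencoded_photo.jpg", quality=85)

stripped = Image.open("reencoded_photo.jpg")
print("EXIF bytes in re-encoded copy:", len(stripped.info.get("exif", b"")))
# Prints 0: the re-encode silently dropped the metadata, signature and all.
```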
Part of the hurdle is getting platforms to come on board in the first place, as well as figuring out the best way to present that information to users. Facebook and Instagram are two of the biggest platforms that check content for markers like the C2PA standard, but they only flag images that have been manipulated using generative AI tools, and don’t present any information to verify “real” images.
These labels can also cause problems when they're unclear: Meta's “Made with AI” label angered photographers because it was applied too aggressively, flagging images that had only received minor edits. The label has since been updated to de-emphasize the role of AI. Meta didn't say whether it will expand the system, but said it believes “broader adoption of content authentication” is necessary to establish trust.
Truepic, an authenticity infrastructure provider and another C2PA member, says these digital markers already contain enough information to support far richer disclosures than platforms currently offer. “The architecture is there, but we need to research the best way to display these visual indicators so that everyone on the internet can actually see them and use them to make better decisions, rather than simply saying it's either all generative AI or all real,” Mounir Ibrahim, chief communications officer at Truepic, told The Verge.
A key part of the plan is getting online platforms to adopt the standard. X, which has come under scrutiny from regulators as a hotbed of misinformation, is not a member of the C2PA initiative and does not appear to have put forward an alternative. But X owner Elon Musk has signaled his support for the initiative. “It’s a good idea, we should probably do it,” Musk said when Parsons suggested it at the 2023 AI Safety Summit. “It would be good to have some kind of authentication method.”
Even if, by some miracle, we woke up tomorrow to a technology world in which every platform, camera, and creative application supported the C2PA standard, denialism is a powerful, pervasive, and potentially insurmountable obstacle. Providing people with documented, evidence-based information doesn't help if they simply ignore it. Misinformation doesn't need evidence to take hold, as shown by the ease with which Trump supporters believed the accusation that Harris faked her rally crowds, despite extensive evidence to the contrary. People believe what they want to believe.
But cryptographic labeling systems are likely the best approach currently available for reliably identifying real, manipulated, or artificially generated content at scale. Alternative pattern analysis methods, such as online AI detection services, are notoriously unreliable. “Detection is probabilistic at best, and we don’t see a detection mechanism where you can upload images, videos, or digital content and achieve 99.99 percent accuracy in real time and at scale,” Ibrahim says. “And while watermarking is robust and very effective, our view is that it’s not interoperable.”
However, no system is perfect, and even the strongest options, like the C2PA standard, can only do so much. Image metadata can be removed simply by taking a screenshot, for example, and there is currently no solution for that; the standard's effectiveness also depends on how many platforms and products support it.
“None of these are panaceas,” Ibrahim said. “They reduce the risk of harm, but there will always be bad actors using generators to try to deceive people.”