Putting invisible markers on artificial intelligence-generated images is a move the federal government hopes will help people identify fake images, avoid fraud, and stem a worrying wave of misinformation in the election. But it may not be a surefire solution.
Watermarking, as it is known, has been championed by the White House and AI developers as a tool for combating disinformation in the 2024 election and the creation of fake images, such as those disseminated by the DeSantis presidential campaign depicting former President Donald Trump hugging Dr. Anthony Fauci.
Facebook’s parent company Meta announced this week that it will start labeling AI-generated images on its platforms, using built-in watermark detection tools to determine whether an image was created by AI. OpenAI has also added watermarks to images from its DALL-E image generator to make them easier to identify. The goal is to prevent “deepfake” images from deceiving the public. But industry experts say these tools have their limitations.
Soheil Feizi, an associate professor of computer science at the University of Maryland, told the Washington Examiner that watermarks “can actually be very fragile and unreliable, meaning that it is very difficult to extract watermark signals from AI-generated text or image content.” That also means they can be effectively erased.
Leading AI developers, including Meta, OpenAI, and Adobe, are working together to adopt a common watermarking standard that will allow users to quickly identify whether an image was generated by AI. These standards, defined by the Coalition for Content Provenance and Authenticity, add “content credentials” to images that provide information about an image’s origin, editing history, and other details. The data is invisible to the human eye but can be detected by software.
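To give a rough sense of how software can spot these credentials where a human eye cannot, here is a minimal, illustrative sketch in Python. It only scans a JPEG’s header segments for the APP11 marker and the “c2pa” label that content-credential manifests use; real verification parses the full JUMBF box structure and checks cryptographic signatures per the C2PA specification, and the file name is hypothetical.

```python
# Illustrative sketch: a naive check for embedded C2PA "content credentials"
# in a JPEG file. This is not a validator; it only looks for the telltale
# APP11 segment (0xFFEB) and the ASCII "c2pa" label inside it.

def has_content_credentials(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    pos = 2  # skip the SOI marker (0xFFD8)
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:
            break  # not a marker; stop scanning header segments
        marker = data[pos + 1]
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            pos += 2  # standalone markers carry no length field
            continue
        if marker == 0xDA:  # start of scan: compressed image data begins
            break
        length = int.from_bytes(data[pos + 2:pos + 4], "big")
        segment = data[pos + 4:pos + 2 + length]
        # C2PA manifests live in JUMBF boxes carried in APP11 segments.
        if marker == 0xEB and b"c2pa" in segment:
            return True
        pos += 2 + length
    return False

if __name__ == "__main__":
    print(has_content_credentials("example.jpg"))  # hypothetical file
```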
Anne Neuberger, the White House’s deputy national security adviser for cyber and emerging technologies, said at an event last week on building defenses against AI voice cloning that the Biden administration is considering ways to incorporate watermarking into AI-generated content.
Some companies are trying to add data to photos taken by cameras to prove that the images were not generated by AI.
But Feizi and other researchers have found ways around such technology. In October, Feizi published research in which his team was able to remove most watermarks from AI-generated images using relatively simple techniques.
Feizi said “adversaries” such as China and Iran could easily strip AI watermarks from AI-generated images and videos. “You could also inject some kind of signal into the actual images and the watermark detector would detect those images as watermarked images,” he said.
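To illustrate why such attacks are hard to stop, here is a crude sketch of the general “perturb and clean up” idea researchers describe, not the specific method from Feizi’s paper. Adding noise and then smoothing it away can wash out a low-amplitude watermark pattern while leaving the picture looking largely unchanged. It assumes Pillow and NumPy are installed, and the file names are hypothetical.

```python
# Illustrative only: a simple noise-then-denoise pass that can degrade
# imperceptible watermark signals hidden in pixel values.

import numpy as np
from PIL import Image, ImageFilter

def noise_and_denoise(path: str, out_path: str, sigma: float = 8.0) -> None:
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)

    # Step 1: perturb every pixel with Gaussian noise, drowning out any
    # faint watermark pattern embedded in the pixel values.
    noisy = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0, 255)

    # Step 2: "denoise" with a light blur so the result still looks normal
    # to a human viewer, though not necessarily to a watermark detector.
    cleaned = Image.fromarray(noisy.astype(np.uint8))
    cleaned = cleaned.filter(ImageFilter.GaussianBlur(radius=1.5))
    cleaned.save(out_path)

if __name__ == "__main__":
    noise_and_denoise("watermarked.png", "stripped.png")  # hypothetical files
```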
Watermarks can also be lost when images, video, or audio are transferred or copied, according to Vijay Balasubramaniyan, CEO of voice authentication service Pindrop. His company was one of the first to identify the technology behind a series of robocalls in which AI-generated audio recordings of President Joe Biden discouraged New Hampshire Democrats from voting in the primary.
Balasubramaniyan told the Washington Examiner that the more images and sounds are copied, the more the original watermark is diluted until it becomes too faint to detect. “When audio is added, music is added, [the recording] is re-recorded or sent through a different channel, many of the watermarks are lost,” he said.
There are still few alternatives to watermarking for identifying AI-generated images. For audio, Balasubramaniyan said his company’s detection software works better than watermarking.
Feizi also recommended that social platforms link to the source of images so users can determine whether the source is malicious.
Researchers may eventually find a way to add watermarks that cannot be removed through copying or editing, but as of January 2024 the technology is not ready.