AI providers and government agencies have announced a series of initiatives aimed at strengthening the internet’s defenses against AI-generated misinformation.
Last week, major AI players announced new transparency and detection tools for AI-generated content. Hours after Meta detailed plans to label AI images from external platforms, OpenAI announced it would begin including metadata in images generated with DALL-E through ChatGPT and its API. A few days later, Google announced it was joining the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), a leading group that sets provenance standards for digital content, including AI-generated media. Google will also begin supporting Content Credentials, a type of “nutrition label” for AI content created by C2PA and the Content Authenticity Initiative (CAI). Adobe, which founded the CAI in 2019, released a major update to Content Credentials in October.
The latest moves are notable in several ways, particularly for bringing major distribution platforms into the standardization process. Platform-level participation could help drive mainstream adoption of AI content standards and help people better determine whether content is real or fake. Andy Parsons, senior director of the CAI, said support from giants like Google provides the “snowball effect” needed to improve the internet’s information ecosystem, an effort that also requires collaboration among companies, researchers, and government agencies.
The adoption of C2PA standards by major AI model providers will also help drive uniform use across both content creation and distribution platforms. Parsons noted that Adobe’s own Firefly generative AI platform was already C2PA-compliant when it launched last year.
“Model providers want to disclose what model was used, so that if there’s a need to determine whether that model produced something (is it newsworthy, a celebrity, etc.), we’re able to do that,” Parsons told Digiday.
Government agencies are also looking for ways to curb AI-generated misinformation. Last week, the Federal Communications Commission banned AI-generated voices in robocalls, making them illegal under the Telephone Consumer Protection Act, following recent deepfake robocalls that imitated President Joe Biden’s voice. Meanwhile, the White House announced that more than 200 participants, including universities, businesses, and other organizations, have joined a new AI safety consortium. The European Commission is also collecting comments on its Digital Services Act (DSA) guidelines on election integrity.
AI-powered political micro-targeting is another major concern. Several state legislatures have passed new laws addressing AI in political advertising, and lawmakers have introduced additional bills, though none have gained traction so far. According to a recent study reported by Tech Policy Press, large language models can be used to easily and effectively develop micro-targeted political ad campaigns on platforms like Facebook. Last week, Meta’s semi-independent Oversight Board also called on the company to “urgently reconsider” its manipulated media policies, which cover content created with AI as well as content manipulated without it.
While authenticating AI content helps promote trust and transparency, experts say it’s even more important to stop bad actors from spreading misinformation across social media and search. However, accurately detecting AI deepfakes and text-based fraud is not easy.
Josh Lawson, director of the Aspen Institute’s AI and Democracy Initiative, said it’s important to curb the distribution of AI misinformation. AI content standards are “very sanitary” for major platforms, he said, but they do not stop bad actors from using open-source or jailbroken AI models to create questionable content. He compared the supply and demand of misinformation to an hourglass.
“We see generative AI as a force that increases supply, but it still needs to get to people,” Lawson said. “If you can’t reach the people, you can’t influence the election.”
Concerns about AI could also distract from ongoing concerns about online privacy. In a post on X last week, Meredith Whittaker, president of the privacy-focused messaging app Signal, said the election-year focus on deepfakes is “distracting and conveniently ignores the documented role of surveillance advertising.” She also noted that companies like Meta and Google, which have rolled back policies on political advertising in recent years, could benefit from the distraction.
“In other words, without the platforms and tools to strategically disseminate them, deepfakes wouldn’t even be there,” Whittaker wrote.
Prompts and Products: AI News and Announcements
- Google rebranded its Bard chatbot as Gemini as part of a major expansion of its flagship large language model. It also announced new AI capabilities across various Google products and services.
- While technology companies used the Super Bowl’s mainstream audience to tout new AI capabilities, non-tech advertisers also used generative AI to create campaigns for the big game and stand out. Super Bowl advertisers whose commercials marketed AI capabilities include Microsoft, Google, Samsung, CrowdStrike, and Etsy.
- Mixed reality apps powered by generative AI are already available on Apple Vision Pro headsets. Early examples include ChatGPT, Wayfair, and Adobe Firefly.
- A new report from William Blair examines the impact of generative AI on businesses.
- Advertising, social, and cloud companies continued to tout generative AI in earnings and investor calls. Examples from last week include Omnicom, IPG, Snap Inc., Pinterest, Cognizant, and Confluent. However, Amazon’s cloud CEO warned that generative AI hype could reach the scale of the dot-com bubble.