“We hope this is a big step forward,” a Meta executive told “GMA.”
Meta will begin labeling images created with OpenAI, Midjourney and other artificial intelligence products, Nick Clegg, the company’s president of global affairs, announced in an interview on ABC’s “Good Morning America” on Tuesday.
Clegg said the labels will be rolled out in the coming months and will identify AI-generated images posted to Facebook, Instagram and Threads. Clegg added that images created with Meta’s own tools will also be labeled.
“We hope this is a big step forward in our efforts to ensure that people understand what they see and understand where what they see comes from,” Clegg told “GMA.” “As the distinction between human and synthetic content becomes blurred, people want to know where the line is.”
However, Clegg acknowledged that labels are not a “perfect solution” due to the scale and complexity of AI-generated content on the platform.
Clegg said in a blog post on Tuesday that Meta currently cannot identify AI-generated audio and video produced using external technology platforms. To address this issue, Clegg said Meta plans to add the ability for users to voluntarily label audio or video as AI-generated when they upload it to the platform.
The risks posed by AI-generated content have sparked widespread concern in recent weeks.
A fake, AI-generated sexually explicit image of pop star Taylor Swift went viral on social media late last month, garnering millions of views. In response, the White House called on Congress and tech companies to take action.
“While social media companies make their own decisions about content moderation, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and intimate images of real people shared without their consent,” White House Press Secretary Karine Jean-Pierre told ABC News White House Correspondent Karen Travers.
Another incident last month focused attention on the election risks posed by AI. A fake robocall posing as President Joe Biden’s voice deterred individuals from voting in the New Hampshire primary.
As Americans head to the polls in 2024, Clegg said technology companies must take action to ensure users can reliably identify whether online content is authentic.
“In an election year like this, it’s our responsibility as an industry to make sure the technology provides as much visibility as possible, so people can tell the difference between what’s synthetic and what’s not,” Clegg added.
In September, a bipartisan group of senators proposed a bill that would ban the use of deceptive AI content that falsely portrays candidates for Congress in political ads.
Asked if Meta would support the bill, Clegg said he would support legislation to regulate AI, but did not comment specifically on the Senate bill.
“I think it’s right to put in guardrails to ensure that we have proper transparency about how these big AI models are built, and that they are properly stress tested so they are as secure as possible,” Clegg said. “Yes, I think there is definitely a role for the government.”
With so many important elections taking place around the world, Meta will continue labeling AI-generated images through the next year, Clegg said in a blog post. The extended period will give Meta an opportunity to evaluate its efforts.
“What we learn will inform industry best practices and our own approach going forward,” Clegg said.