Gaps in the federal ban on AI-generated voice robocalls are highlighting concerns about the lack of regulation covering other forms of digitally altered content in political campaigns.
Last week, the Federal Communications Commission (FCC) unanimously voted to ban the use of AI-generated voices in robocalls, recognizing them as “artificial” under the Telephone Consumer Protection Act. The vote came shortly after a phone call featuring an AI-generated voice impersonating President Biden went viral across New Hampshire ahead of the state’s primary election.
Experts say the FCC’s ban is a welcome step toward curbing deceptive AI-generated content, but it’s not enough.
“Audio content is, of course, very important, but it’s just one type of content,” says Julia Stoyanovich, an associate professor at New York University’s Tandon School of Engineering.
“We need to understand more about AI-generated media and how to regulate their use, including whether these media should be banned outright or restricted only in certain circumstances. We need to think holistically about how we hold ourselves accountable.”
Under the Telephone Consumer Protection Act, which restricts the use of artificial or prerecorded voice messages in telemarketing calls, the FCC can fine robocallers and block calls from telephone carriers that facilitate illegal robocalls.
“AI-generated voice clones and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” FCC Chairwoman Jessica Rosenworcel said in a statement.
“No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”
“That’s why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at state attorneys general offices across the country new tools they can use to crack down on these scams and protect consumers,” she added.
The FCC and other federal regulators face continued pressure from the nonprofit Public Citizen and other advocacy groups calling for AI guardrails ahead of the 2024 election.
“This rule meaningfully protects consumers from rapidly spreading AI fraud and deception,” said Robert Weissman, president of Public Citizen.
“Unfortunately, through no fault of the FCC, this action does not go far enough to protect our people and our elections.”
Because of the FCC’s limited jurisdiction, AI-generated images and videos remain unregulated in political campaigns, even as campaigns and their supporters increasingly use such material ahead of elections.
“We are talking about political advertising here,” Stoyanovich added. “And of course, this is probably the most important issue facing us this year, an election year.”
Experts and advocates are now ramping up pressure on the Federal Election Commission (FEC) to fill the gap left by the FCC’s robocall ban and step up efforts to regulate AI. Public Citizen, which led the petition asking the FEC to clarify the rules, said the agency is moving too slowly.
In August, the FEC voted to consider clarifying rules against fraudulently misrepresenting candidates or political parties to explicitly cover deceptive AI in election campaigns.
The FEC has yet to issue an update on the rule since the public comment period ended in October, and a spokesperson previously told The Hill there was no update on timing.
FEC Commissioner Sean J. Cooksey (R) told The Washington Post last month that the commission and its staff are “diligently considering the thousands of public comments submitted” and that he expects the rule to be resolved by early summer.
Nick Penniman, founder and CEO of Issue One, a nonpartisan political reform group, said in a statement that the FCC’s rules are “a positive step, but they are not enough.” Penniman called on Congress to take action to “prohibit fraudsters from using deceptive AI to interfere with elections” and urged the FEC to clarify language prohibiting the use of deceptive AI in campaign communications.
“The unregulated use of AI as a means to target, manipulate, and deceive voters is an existential threat to democracy and election integrity. This is not a future possibility. It is a current reality that requires decisive action,” Penniman said.
The FCC may have a hard time enforcing the new ban as the FEC and Congress weigh their next steps.
Jessica Furst Johnson, an elections attorney at Holtzman Vogel and general counsel for the Republican Governors Association, said the challenges of identifying AI-generated content could hinder the rule’s effectiveness.
Because the FCC’s rules rely on reports from recipients of robocalls, Furst Johnson said, voters are likely to file complaints alleging the use of AI based on calls from political parties or candidates they don’t support.
Stoyanovich also warned that it would be “very difficult” to enforce the rules without some automation.
“If it is very difficult for humans to discern whether a robocall is using a machine-generated voice or human speech, it’s also going to be very difficult for automated detection,” Stoyanovich told The Hill.
“And if you can’t automate this, it’s going to be very difficult in general,” she continued.
Some social media companies are also taking steps to curb the spread of AI-generated political content ahead of the election.
Meta, the parent company of Facebook, Instagram, and Threads, is ramping up efforts to detect and label AI-generated images.