Two days before the New Hampshire primary, Gail Huntley was just one of thousands of people who received a phone call purportedly from President Joe Biden telling them not to vote.
“It’s important that you save your vote for the November election,” he said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”
But the call wasn’t from Biden. It was an AI-generated deepfake, created by Texas-based Life Corporation, that imitated the president’s voice.
“I was hoping that the people who received the call would know to ignore that message and go vote,” Huntley said.
Following the New Hampshire election interference, the Federal Communications Commission made it illegal for robocalls to use artificial intelligence-generated voices.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” said FCC Chair Jessica Rosenworcel. “State attorneys general will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”
What is a deepfake?
First, let’s define the term. The AI-generated deepfakes in question are videos, images, and audio that digitally manipulate the appearance, voice, and behavior of political candidates and election officials. The concern is that such content, whether used intentionally or inadvertently in political ads, can mislead voters about when, where, and how to vote.
What is going on at the federal level?
To date, there is no federal law banning deepfakes.
Congressional efforts to rein in AI-generated content have lagged. The White House AI Council convened in January, three months after Biden signed an executive order aimed at reducing AI’s risks to national security and consumer rights.
House Speaker Mike Johnson of Louisiana and House Minority Leader Hakeem Jeffries of New York recently announced a bipartisan task force of 12 Democrats and 12 Republicans that will explore, among other things, regulating the use of artificial intelligence in politics.
The task force, however, will focus on future political campaigns, not 2024.
“There will be a tsunami of disinformation in 2024. We’re already seeing it, and it’s only going to get worse,” Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, said in February. “People are expecting this election to be close, and a swing of 50,000 votes in three or four states could be decisive.”
What are states doing to limit AI deepfakes?
Five states have already enacted laws restricting AI in political communications: Minnesota, Michigan, California, Washington, and Texas. But what about the rest of the country?
Since January, more than 30 states have introduced more than 50 bills to regulate deepfakes in elections, focusing on disclosure requirements and prohibitions, according to Public Citizen. It remains to be seen whether these bills can neutralize deepfakes this election cycle.
Here are the highlights of the state measures:
Minnesota
In 2023, the North Star State passed a bipartisan bill with near-unanimous support that criminalizes multiple forms of deepfakes. The law makes it a crime to disseminate, without consent, a deepfake intended to influence an election within 90 days of that election. Penalties for creating a deepfake range from fines of thousands of dollars to up to five years in prison.
Colorado
In Colorado, a candidate election deepfake disclosure bill has been introduced that, if passed, would require deepfake AI communications about political candidates to carry disclosures in the same way political ads do. The upshot: candidates could sue deepfake creators for actual and punitive damages.
New Hampshire
Similarly, a bill introduced in New Hampshire would require disclosure of the use of AI in political ads. The bill would ban deepfakes and deceptive AI within 90 days of an election unless there is full disclosure.
Hawaii
In the Aloha State, a proposed bill would task Hawaii’s campaign spending commission with investigating deceptive AI-generated material and imposing fines. As in Colorado and New Hampshire, one bill would require AI disclaimers and give the commission the power to impose penalties within 90 days after an election.
California
Among the five latest bills introduced in the statehouse in February is the California AI Accountability Act, submitted by State Senator Bill Dodd, which would require state agencies to notify users when they are interacting with AI.
Rep. Gail Pellerin, who represents an area southwest of San Jose, introduced a bill that would ban “substantially deceptive” political deepfakes from four months before Election Day until two months after.
Nebraska
The Nebraska Legislature is considering a bill that would ban the distribution of AI-generated deepfakes in the 60 days before an election and would explicitly prohibit deepfakes that impersonate the secretary of state or election officials in order to mislead voters.
Virginia
The Virginia General Assembly is working with Governor Glenn Youngkin to address the challenges of artificial intelligence. Recommendations include creating a task force to assess the impact of deepfakes and misinformation, as well as the implications for data privacy.
Other AI-focused bills include one that would regulate developers’ use of the technology, one that would require impact assessments before public agencies can use it, and one that would ban the creation and use of deepfakes.
How aware will voters be of AI in the 2024 election and what can they do about it?
Craig Holman, a Capitol Hill lobbyist who works on government ethics at the nonprofit Public Citizen, believes 2024 will be the first deepfake election cycle, with AI swaying voters and potentially shaping election results.
“Artificial intelligence has been around for a while, but it’s only in this election cycle that it has advanced to the point where most people can’t tell the difference between deepfakes and reality. The advances in AI are breathtaking,” Holman said.

Public Citizen wasn’t always focused on election deepfakes; this is the first year the nonprofit is tackling the issue head-on. Holman said what changed for him was seeing a Republican National Committee ad shortly after Biden announced he would seek re-election in 2024. What he saw shocked him.

The ad included imagery of Biden and Vice President Kamala Harris laughing in a room, China bombing Taiwan, thousands of people crossing the border into the United States, and San Francisco locked down over the fentanyl crisis. None of it was real. And even though he knew the images were fake, Holman said, he couldn’t visually tell the difference.
Holman’s prediction is already coming true, as evidenced by the Biden deepfake in New Hampshire. Is it too late to put the AI genie back in the bottle?
No, says Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals.
“It’s not just about understanding how [AI technologies] are being used, then how they are being used, sometimes maliciously, and then what kinds of different mitigations we need to put in place. … We need really strict laws,” Casovan said.
“While these acts continue to occur and the technology becomes more and more pervasive, it is not too late to put in place proper rules, proper training, and other types of safety measures.”
Additional reporting by Elizabeth Beyer, Melissa Cruz, Margie Cullen, Sarah Gleason, Maya Marchel Hoff, Kathryn Palmer, Sam Woodward and Jeremy Yurow