A multistate task force is also preparing for possible civil litigation against the company, and the Federal Communications Commission has ordered Lingo Telecom to stop allowing illegal robocall traffic after an industry traceback group identified the Texas-based company as the source of the calls.
Formella said the actions are meant to send a warning that New Hampshire and other states will take action against anyone who uses AI to interfere in elections.
“Don’t try it,” he said. “If you do, we will work together to investigate, we will work with our partners across the country to locate you, and we will take every enforcement action available to us under the law. The consequences of your actions will be severe.”
New Hampshire has issued subpoenas to Life Corporation, Lingo Telecom, and other individuals and entities that may have been involved in the calls, Formella said.
Life Corporation, its owner Walter Monk, and Lingo Telecom did not respond to requests for comment.
The announcement comes as increasingly advanced AI tools create new opportunities to interfere in elections around the world by generating fake audio recordings, photos, and even videos of candidates, muddying the waters of reality and posing new challenges for state regulators.
The robocalls were an early test for a patchwork of state and federal enforcers, who are relying primarily on election and consumer protection laws enacted before generative AI tools were widely available to the public.
The criminal investigation was announced more than two weeks after reports of the calls first surfaced, highlighting the challenge state and federal enforcement officials face in responding quickly to potential election interference.
“When the stakes are this high, you don’t have hours or weeks to spare,” said Hany Farid, a professor at the University of California, Berkeley, who studies digital propaganda and misinformation. “The reality is that by then the damage will likely have been done.”
In late January, 5,000 to 20,000 people received calls from an AI-generated voice impersonating Biden that told them not to vote in the state’s primary. The call urged voters: “It’s important that you save your vote for the November election.” Formella said it was still unclear how many people decided not to vote as a result of the calls.
The day after reports surfaced, Formella’s office announced it would investigate the matter. “These messages appear to be an unlawful attempt to disrupt the New Hampshire presidential primary and to suppress New Hampshire voters,” he said in a statement. “New Hampshire voters should disregard this message entirely.”
Even so, Formella did not say which company’s software was used to create the AI-generated Biden robocall.
Farid said the audio recordings were likely created by software from AI voice cloning company Eleven Labs, according to an analysis he conducted with researchers at the University of Florida.
Eleven Labs, which was recently valued at $1.1 billion after raising $80 million in a funding round co-led by venture capital firm Andreessen Horowitz, lets anyone sign up for a paid tool that can clone a voice from an existing audio sample.
Eleven Labs has been criticized by AI experts for not having enough guardrails in place to keep its technology from being weaponized by scammers looking to defraud voters, elderly people and others.
The company has suspended the account that created the robocall deepfake of Biden, according to news reports.
“We are dedicated to preventing the misuse of our audio AI tools and take any incidents of misuse extremely seriously,” said Matti Staniszewski, CEO of Eleven Labs. “While we cannot comment on specific incidents, we will take appropriate action when cases are reported or detected, and we have mechanisms in place to assist authorities and relevant parties in taking steps to address them.”
AI experts say the robocall incident is one of several that highlight the need for better policies at technology companies to ensure their AI services are not used to distort elections.
In late January, OpenAI, the maker of ChatGPT, banned the developer of a bot that imitated Democratic presidential candidate Dean Phillips. Phillips’ campaign had supported the bot, but after The Washington Post reported on it, OpenAI determined that it violated rules against using its technology in political campaigns.
Experts say technology companies have tools to rein in AI-generated content, such as watermarking audio to create a digital fingerprint and building guardrails that prevent cloned voices from saying certain things. Experts also say companies can join a coalition aimed at preventing the spread of misleading information online by developing technical standards that establish the provenance of media content.
But Farid said many tech companies are unlikely to implement safeguards anytime soon, whatever threat their tools pose to democracy.
“Twenty years of history has taught us that technology companies don’t want to put guardrails around their technology,” he said. “It’s bad for business.”