New research shows that AI-powered tools generate inaccurate election information, producing harmful or incomplete answers more than half the time.
The study, by the AI Democracy Projects and nonprofit media outlet Proof News, finds that as the U.S. presidential primary gets underway across the country, Americans are turning to chatbots like Google’s Gemini and OpenAI’s GPT-4 for information. The findings come amid growing concern among experts that the emergence of powerful new forms of AI could expose voters to false and misleading information, and even discourage them from voting.
The latest generation of artificial intelligence technology, including tools that let users generate text, video, and audio almost instantly, has been heralded as ushering in a new age of information by providing facts and analysis faster than humans can. But the new research found that these AI models tend to direct voters to polling places that don’t exist or to fabricate illogical answers based on rehashed, outdated information.
For example, researchers found that one AI model, Meta’s Llama 2, incorrectly responded that voters in California could vote by text message. Voting by text is not legal anywhere in the United States. Additionally, none of the five AI models tested – OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and French company Mistral’s Mixtral – correctly stated that wearing clothing bearing campaign logos, such as MAGA hats, is prohibited at Texas polling places under that state’s law.
According to the Brookings Institution, some policy experts believe AI could help improve elections, for example by powering tabulation machines that scan ballots faster than poll workers and by detecting voting anomalies. But such tools are already being misused, including by bad actors – among them governments – seeking to manipulate voters in ways that undermine the democratic process.
For example, days before last month’s New Hampshire presidential primary, an AI-generated robocall mimicking President Joe Biden’s voice urged voters not to cast ballots in the election.
Meanwhile, some AI users are facing a different problem. Google recently suspended its Gemini AI image generator, which it says it will relaunch in the coming weeks, after the technology produced historically inaccurate images and other concerning responses. For example, according to the Wall Street Journal, when asked to create images of German soldiers during World War II, when the Nazi party controlled the country, Gemini reportedly produced racially diverse images.
“They say they put their models through extensive safety and ethical testing,” Axios technology policy reporter Maria Curi told CBS News. “We don’t know exactly what their testing process is. Users are finding historical inaccuracies, which raises the question of whether these models are coming out too soon.”
AI models and hallucinations
Meta spokesperson Daniel Roberts told The Associated Press that the latest findings are “meaningless” because they don’t accurately reflect how people interact with chatbots. Anthropic said it plans to roll out a new version of its AI tool in the coming weeks to provide accurate voting information.
In an email to CBS MoneyWatch, Meta pointed out that Llama 2 is a developer model, not a tool for consumer use.
“When we sent the same prompts to Meta AI – the product the general public uses – the vast majority of responses directed users to resources for finding reliable information from state election officials, which is exactly how our system is designed,” a Meta spokesperson said.
“[L]arge language models can ‘hallucinate’ misinformation,” Alex Sanderford, head of trust and safety at Anthropic, told the AP.
OpenAI said it plans to “continue to evolve our approach as we learn how our tools are being used,” without providing further details. Google and Mistral did not respond to requests for comment.
“It was scary.”
In Nevada, which has allowed same-day voter registration since 2019, four of the five chatbots researchers tested falsely claimed that voter registration would be blocked weeks before Election Day.
“I was scared more than anything because the information that was provided was wrong,” said Nevada Secretary of State Francisco Aguilar, a Democrat who attended last month’s testing workshop.
Many U.S. adults are concerned that AI tools will increase the spread of false and misleading information during this year’s elections, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy.
But in the United States, Congress has yet to pass legislation regulating AI in politics. For now, the technology companies behind the chatbots are left to govern themselves.
–From a report by the Associated Press.