Oren Etzioni, founder of TrueMedia, said a big reason campaigns and others use AI-generated disinformation is to provoke immediate emotional reactions.
COLUMBUS, Ohio — Artificial intelligence has been around for a long time through means like Google search, spell check, and predictive text. Generative AI, on the other hand, is a different animal.
“Those are fake images, images that have been manipulated, images that make it look like something is happening that never actually happened,” said Kelly Jones, a reporter and researcher on 10TV’s national VERIFY team.
In the early days, it was fairly easy to spot AI-generated images. As the technology advances, however, the images are becoming more convincing. For now, Jones said, there are still telltale signs, or “artifacts,” to look for when determining whether an image is real or generated by AI.
“Artificial intelligence technologies are evolving, but they’re not 100% good at creating things like fingers or facial features. They’re not very good at capturing how a person actually looks,” she said.
AI generators are still not very good at capturing humanity, especially in audio and video. The technology often flattens the natural pauses and inflections of human speech.
Despite this technology’s flaws, it is still used to try to influence people’s opinions during election years.
A robocall before the New Hampshire primary featured a voice resembling President Biden’s, and an AI-generated photo of former President Donald Trump on a plane with convicted sex offender Jeffrey Epstein went viral on social media.
“Disinformation is not new. What is new is how easy and cheap it has become to create in 2024. This is not to say that you should never believe anything, but rather that when you see something, you should scrutinize it carefully and confirm its source,” said Oren Etzioni, founder of TrueMedia.
TrueMedia is a nonprofit organization that identifies the use of AI and deepfakes to spread disinformation in political campaigns.
He said a big reason campaigns and others use AI-generated disinformation is to provoke immediate, unthinking reactions.
“The danger is that when you see something, you have a strong emotional reaction in the first few seconds. You get furious, you get surprised, you scroll, you click, you forward it to a lot of friends, and they forward it to their friends,” he said. “We need to understand that these sources, especially sites like TikTok and YouTube where so many people get their news, can potentially be manipulated.”
The use of AI in political campaigns is getting the attention of Ohio lawmakers. Amherst Democrat Joe Miller introduced House Bill 410, which would require deepfakes and AI-generated content to be clearly labeled. Miller said such material could prove dangerous to the country’s elections.
“People could change their minds and vote based on bad information, or worse, say, ‘I don’t trust this system,’ and give up on a system that’s been around for 250 years and has worked well,” Miller said.
Ohio Lt. Governor Jon Husted introduced an AI education toolkit for educators. In an interview with 10TV, he said AI can be used as a valuable tool in the classroom.
“AI can become your 24/7 tutor, helping you work through any problems that come up at home,” he said. “When there’s a question we don’t know the answer to, we can tackle it together. That’s how I think of AI: as a collaborator.”
He added that many people are using AI for more sinister purposes.
“There are people who are using it to do bad things: to defraud the elderly, to spread misinformation and disinformation,” Husted said. “Even if a candidate doesn’t use it, outside forces will, and because the candidate isn’t in charge of what those outside groups do, it’s nearly impossible to ban. I think that’s what happens there.”
Jones said there are still things people can do to avoid being fooled by AI.
“Would this person or character in this AI video actually say something like this? Is it too good to be true? Are there any credible reports of them saying it?” Jones asked. “The biggest thing we want to tell our readers is: if something about content you see online seems suspicious, don’t automatically share it. Do your own research, check the context, trust your instincts, and ask yourself whether it passes the smell test.”
Learn more about AI detection from 10TV’s national VERIFY team here.