WOODBURY, Minnesota — Dr. Manjeet Rege sat at his kitchen table and smiled slightly as he watched an AI-generated video of Donald Trump and Elon Musk dancing to "Stayin' Alive."
"It really shows off the power of AI," Rege said, "and it also shows off some amazing dance moves."
The video, which Trump shared on X on Thursday, had been viewed 43.2 million times as of Friday afternoon. The top comment read, "Is this real?"
“If the 2016 election was a ‘social media’ election, the 2024 election can be called an ‘AI’ election,” Rege said.
Rege is director of the Center for Applied Artificial Intelligence at the University of St. Thomas and teaches AI there.
“It’s now much easier to create content that looks very realistic,” he said.
According to Rege, this is partly due to the emergence of generative AI, which broke into the mainstream with the release of ChatGPT in 2022 and has been strengthened by new programs developed since then.
“Generative AI has really become so prevalent that you don’t need advanced knowledge of AI to create content that looks highly realistic,” Rege said.
AI will only become more prevalent in American politics, Rege said, and while some will use its power for good, such as shaping policies that reflect the shared concerns of voters, others will use it to spread misinformation.
After about 15,000 people attended a Kamala Harris rally near Detroit, Michigan, an X user posted photos of the crowd, falsely claiming it was AI-generated and not real.
“This is a perfect example of disinformation,” Rege said. “I think we need better regulation here.”
Rege said the Federal Election Commission and the Federal Communications Commission have disagreed on how AI should be regulated.
So far, Congress has not passed any legislation to regulate AI. In October 2023, President Joe Biden issued an executive order seeking to curb malicious uses of AI, placing much of the responsibility for self-regulation on social media platforms, but Rege said this could be a flawed system.
"Social media companies want to spread more content," he said. "They want people to log on. They want more people to engage on their platforms. If you leave it up to them to set limits, and it's like an honor system, it might not be as effective in the end."
Rege said media literacy is crucial in the run-up to the election to avoid misinformation, but as AI advances, it could become harder for even a trained eye to distinguish real from fake.
"People may not have time to really analyze whether something is AI-generated or not. In the last two or three days (before the election), people who are undecided or who have already settled on a candidate may see something generated by AI and immediately change their mind and vote for another candidate," Rege said.