Supporters of Donald Trump have created and shared AI-generated fake images depicting the former US president alongside Black people, hoping the photos will be read by voters as a sign of growing support from a key voting bloc and strengthen his bid to reclaim the White House.
Anyone can create such fake images using commercially available generative AI tools. One such image, showing Trump surrounded by young Black men, was shared by a parody account and has been viewed more than 1.3 million times after being incorrectly presented as genuine, according to BBC Panorama.
Another image from the same account shows Trump fist-pumping at what appearsears to be a protest, with the caption: “No one has done more for the black community than Donald Trump.”
These fake images look real, and their spread is being read as an attempt to portray Trump as gaining popularity among African-American voters, an important demographic in the US election.
Mark Kaye, a popular conservative radio talk show host with more than one million followers, used AI to create an image of Trump with Black voters and shared it on Facebook. He admitted the photo was fake. “I’m not taking pictures of what’s actually happening. I’m a storyteller,” he insisted.
He said he did not believe he was doing anything wrong by creating and spreading a false image. “I’m not claiming that it’s accurate. I’m not saying, ‘Hey, look, Donald Trump was at this party with all these African-American voters. Look how much he loves them.’ I’m not saying that.”
“If someone votes one way or another because of one photo they saw on a Facebook page, that’s a problem with that person, not with the post itself,” he concluded.
The U.S. government and social media companies are ramping up efforts to monitor and combat political deepfakes.
“There are no specific or credible threats to election activity today,” a senior official at the US Cybersecurity and Infrastructure Security Agency said at a briefing on Tuesday.
“We have put a lot of effort into focusing on the increasing risks posed by generative AI capabilities in this area. These are the threat vectors we have focused on, to understand the threat and the steps that can be taken to mitigate it.”
Meanwhile, companies like OpenAI, Meta, and Google have agreed to label AI-generated images produced by their models.
However, since such labels can be stripped, there is still no reliable way to detect synthetic content. ®


