Hulu is running an anti-Hamas ad that appears to have been created using artificial intelligence to show an idealized version of Gaza, suggesting that without Hamas this paradise destination might exist.
The 30-second spot opens like a tourism ad, showing palm trees, a coastline, five-star hotels, and children at play. People dance, eat, and laugh, and a narrator encourages visitors to “experience a culture rich in tradition.” Then the tone shifts abruptly, and a man’s smile turns into a frown. “This is what Gaza might be like without Hamas,” the narrator says. A new series of images flashes, this time showing fighter jets and weapons, as well as children wandering the streets holding guns.
The ad flattens the decades-long conflict between Israel and the Palestinians, and centuries of war in the region, into a 30-second spot, and uses AI to spread its message. The reality of who is to blame for the suffering of Palestinians in Gaza is far more complex than the short ad portrays. Hamas, considered a terrorist organization by the United States, Canada, Britain, Japan, and the European Union, took control of the Gaza Strip in 2007. Israeli forces and settlers occupied Gaza from the 1967 war until 2005, when Israel withdrew from the territory. The United Nations and several other international organizations still consider Gaza to be effectively occupied, though the United States and Israel dispute that label.
As of last week, more than 25,000 people had been killed in Gaza since October, according to the Gaza Ministry of Health. The United Nations estimates that 1.9 million people, or about 85% of the population, have been displaced within the Gaza Strip. The October 7 Hamas attack that sparked the current crisis killed around 1,200 people in Israel.
The ad appears to contain images created using generative AI, judging by their aesthetics, errors in perspective, and the repetition of similar facial expressions. The ad itself also acknowledges that the scenes in its first half are not real but an imagining of a conflict-free city. WIRED consulted two AI image detection companies, Inholo and Sensity, about the ad, and both said AI was used to create its first portion. Activists on both sides have used generative AI throughout the conflict to rally support.
While this ad isn’t actually a deepfake, it shows how rapid advances in generative AI can be used to create realistic and emotionally charged propaganda. Even when viewers know something isn’t real, its content can still have an impact; people continue to share deepfakes even when they depict incredibly outlandish situations.
The apparently AI-generated reimagining of Gaza is similar to the TikTok trend of using AI to render alternate histories, says Sam Gregory, executive director of Witness, a nonprofit focused on the use of images and video to protect human rights. Here, he says, AI seems to be used as a “cheap production tool” to persuade an audience, reinforce an existing point of view, or “generate news coverage of the use of AI itself.”