The internet believes this Gaza video is AI-generated; here’s our evidence proving otherwise.
A video posted from southern Gaza has sparked intense online debate over its authenticity, with some claiming it was generated using artificial intelligence. The footage, which gained traction on Tuesday, shows an individual in a camouflage face covering and baseball cap making a heart sign and a “shaka” sign in front of a large crowd waiting for food aid at a distribution site in Rafah. An analysis by NBC News and Get Real Security, a cybersecurity firm that specializes in detecting generative AI, determined that the video was not AI-generated.
Their investigation found no signs of artificial manipulation. NBC News also geolocated the video to the Tal as Sultan aid distribution site, which was recently established through a collaboration between Israel’s Coordinator of Government Activities in the Territories and the Gaza Humanitarian Foundation (GHF). The GHF confirmed to NBC News that its team initially circulated the video, though it was unable to identify the individual featured.
The foundation rejected claims that the video was fake or AI-generated, calling such assertions unfounded and irresponsible. As the video spread on social media, users debated its authenticity; some insisted it was AI-generated, voicing broader skepticism about trusting online content at all.
Hany Farid, co-founder of Get Real Security and a professor at UC Berkeley, said in an interview that he saw no indications the video was created by AI. He pointed to details such as the legible “Ray-Ban” logo on the individual’s sunglasses and the consistent shadowing throughout the footage. AI-generated videos are not a new phenomenon, especially amid conflicts: previous instances include fabricated videos related to the Gaza war and footage from other conflicts misattributed to it.
Farid emphasized the dual nature of generative AI, stating it can both produce misleading content and obscure factual details from sensitive situations.