A wave of artificially generated videos and images depicting the conflict between Iran, the United States, and Israel is spreading across social media, accumulating hundreds of millions of views. Digital media experts report that online creators are using newly accessible AI technology to produce this fabricated content, often to capitalize on platform monetization programs that reward high engagement.
This surge in synthetic media includes everything from fake missile strikes on major cities to doctored satellite images of military bases, creating a challenging environment for users seeking reliable information. The ease and low cost of these AI tools have significantly lowered the barrier for producing convincing but false conflict footage.
Key Takeaways
- Creators are using new AI tools to produce fake videos and images of the Iran conflict for financial gain.
- This AI-generated content has received hundreds of millions of views across platforms like X, TikTok, and Facebook.
- Examples include fabricated videos of missile strikes in Tel Aviv and a burning Burj Khalifa in Dubai.
- Experts warn this trend erodes public trust and makes it harder to verify real events.
- Social media platform X has taken initial steps to demonetize unlabeled AI conflict content, but the broader problem persists.
The New Frontline of Disinformation
As military strikes began on February 28, social media feeds quickly filled with dramatic footage. However, a significant portion of this content was not real. One widely circulated AI-generated video, for instance, appeared to show missiles hitting the Israeli city of Tel Aviv, complete with explosion sounds. This single piece of synthetic media was used in over 300 separate posts and shared tens of thousands of times.
Another fake video claiming to show Dubai's Burj Khalifa skyscraper engulfed in flames was viewed tens of millions of times, spreading at a moment of high anxiety for residents and tourists in the region. According to digital media expert Timothy Graham from Queensland University of Technology, the creation of such content has become disturbingly simple.
"What used to require professional video production can now be done in minutes with AI tools. The barrier to creating convincing synthetic conflict footage has essentially collapsed," Graham stated.
This accessibility has created what experts describe as an unprecedented situation. "The scale is truly alarming and this war has made it impossible to ignore now," Graham added.
A Growing Toolkit of Deception
The technology behind these fakes is advancing rapidly. Tools like OpenAI's Sora, Google's Veo, and the Chinese AI app Seedance are capable of generating highly realistic video from simple text prompts. This has led to a flood of content that is difficult for the average user to distinguish from authentic footage.
Monetization Fuels the Misinformation Machine
A primary driver behind this surge of fake content is financial incentive. Platforms like X (formerly Twitter) have creator programs that pay users based on the engagement their posts receive. High-impact, dramatic content—real or fake—tends to go viral, generating significant revenue for the accounts that post it.
An executive at X recently acknowledged the issue, stating that "99%" of the accounts spreading these AI-generated videos were attempting to "game monetization."
The Economics of Fake News
Timothy Graham estimates that X's Creator Revenue Sharing program could pay approximately $8 to $12 per million verified user impressions. To qualify, creators must achieve five million organic impressions within three months and hold a premium subscription. "Once you're in, viral AI-generated content is basically a money printer," Graham explained. "They've built the ultimate misinformation enterprise."
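The payout arithmetic implied by Graham's estimate can be sketched in a few lines. The $8–$12 range and the example impression count below are illustrative only; X does not publish its per-impression rates.

```python
# Rough payout estimate for X's Creator Revenue Sharing program,
# using the $8-$12 per million impressions range Graham cites.
# These rates are an expert's estimate, not published figures.

def estimated_payout(impressions: int,
                     rate_low: float = 8.0,
                     rate_high: float = 12.0) -> tuple[float, float]:
    """Return a (low, high) payout range in USD for an impression count."""
    millions = impressions / 1_000_000
    return (millions * rate_low, millions * rate_high)

# A single viral fake viewed 50 million times:
low, high = estimated_payout(50_000_000)
print(f"${low:,.0f} - ${high:,.0f}")  # $400 - $600
```

Even at these modest rates, an account posting several viral fakes a week clears meaningful revenue at near-zero production cost, which is the incentive the program rules are now trying to curb.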
In response to the problem, X announced it would temporarily suspend creators from its monetization program if they post AI-generated videos of armed conflict without a proper label. However, other major platforms like TikTok and Meta have not publicly announced similar measures.
Beyond Videos: Fabricated Satellite Imagery
The misinformation campaign has evolved beyond video. A new and concerning development is the use of AI to generate fake satellite images. Following real strikes on the U.S. Navy's Fifth Fleet headquarters in Bahrain, a fabricated photo began circulating that claimed to show extensive damage to the base.
The image, which was shared by the state-linked newspaper The Tehran Times, appeared to be an altered version of a publicly available satellite photo of the base taken a year earlier. Analysis suggests the fake was generated or edited with a Google AI tool. The deception was revealed by small details: three vehicles sat in exactly the same positions in both the real and the fake image, even though the photos were supposedly captured a year apart.
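The vehicle-position giveaway is an instance of a simple verification technique: if small regions of an image are pixel-identical to a photo supposedly taken much earlier, the newer image was likely derived from the older one. A minimal sketch of that check, using synthetic arrays rather than real satellite imagery (the patch size and threshold are illustrative assumptions):

```python
# Sketch of a recycled-imagery check: flag patch-sized regions that are
# nearly identical between two images that should differ substantially.
# Synthetic data only; real analysis would first co-register the images.
import numpy as np

def unchanged_regions(img_a, img_b, patch=8, threshold=1.0):
    """Return (row, col) corners of patch-sized regions that are
    nearly identical between two equal-sized grayscale images."""
    hits = []
    h, w = img_a.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            diff = np.abs(img_a[r:r+patch, c:c+patch].astype(float)
                          - img_b[r:r+patch, c:c+patch].astype(float))
            if diff.mean() < threshold:  # patch barely changed
                hits.append((r, c))
    return hits

# Two "images" that differ everywhere except one copied patch.
rng = np.random.default_rng(0)
old = rng.integers(0, 256, (32, 32)).astype(float)
new = rng.integers(0, 256, (32, 32)).astype(float)
new[8:16, 8:16] = old[8:16, 8:16]  # a "vehicle" copied verbatim
print(unchanged_regions(old, new))  # [(8, 8)]
```

Open-source investigators apply the same idea manually, eyeballing known landmarks and parked vehicles across dated imagery; the point is that reuse of old pixels is detectable precisely because real scenes change over time.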
Eroding Trust in Evidence
Experts are concerned about the long-term consequences of this trend. Mahsa Alimardani, a researcher at the Oxford Internet Institute, warns about the damage to public trust.
"Fake videos like these have a detrimental impact on people's trust in the verified information they see online and make it much harder to document real evidence," Alimardani said.
The problem is compounded when AI systems are used to verify information. In many observed cases, X's own AI chatbot, Grok, incorrectly assured users that the fake videos were authentic, further muddying the waters.
An Unresolved Challenge for Platforms
While social media companies claim to be updating their moderation and detection systems, the speed and scale of AI content present a formidable challenge. The core business model of these platforms, which prioritizes engagement, is often at odds with the goal of promoting accurate information.
"The deeper issue is that engagement-driven monetization and accurate information are fundamentally in tension," said Graham. He noted that no platform has fully resolved this conflict, and perhaps none ever will.
As generative AI technology becomes even more sophisticated and widespread, the battle against monetized misinformation is set to become a defining challenge for the digital age, with real-world consequences for global security and public understanding of critical events.