A significant volume of artificially generated images and videos is circulating on social media platforms, creating a confusing and often false narrative of the ongoing war in Iran. These fabricated visuals, depicting everything from non-existent explosions to fake troop movements, have been viewed millions of times, marking a new and challenging phase in information warfare.
In just the first few weeks of the conflict, researchers have identified more than 110 distinct pieces of AI-generated content. This digital flood is spreading rapidly across major networks like X, TikTok, and Facebook, as well as through private messaging applications, making it difficult for the public to distinguish fact from fiction.
Key Takeaways
- Over 110 unique AI-generated videos and images about the Iran conflict were identified in a two-week period.
- The content falsely depicts dramatic scenes of war, including missile strikes, urban destruction, and military actions.
- A majority of the fabricated content appears to promote pro-Iranian narratives, aiming to exaggerate military success and regional devastation.
- Social media platforms are struggling to manage the spread of this misinformation, despite some policy changes.
An Unprecedented Scale of Digital Deception
The conflict in Iran has triggered a surge of synthetic media unlike any seen in previous global events. The sheer volume and rapid dissemination of this content represent a significant escalation from earlier conflicts, including the war in Ukraine. Experts note that the accessibility of new AI tools has enabled this rapid proliferation.
"Even compared to when the Ukraine war broke out, things now are very different," said Marc Owen Jones, an associate professor of media analytics at Northwestern University in Qatar. "We’re probably seeing far more A.I.-related content now than we ever have before."
By the Numbers: A Look at the Fakes
An analysis of the AI-generated content from the first two weeks of the conflict revealed a wide range of fabrications:
- 37 pieces depicting active, but fake, combat scenarios.
- 8 images and videos showing destruction of cities that were never attacked.
- 5 videos of soldiers supposedly crying or protesting the war.
- 43 memes and other overt uses of AI for propaganda purposes.
This content is not limited to one side of the conflict but covers a wide spectrum of narratives. Fabricated scenes include Israelis reacting to explosions in Tel Aviv, Iranians mourning casualties, and American naval ships being hit by missiles—all events that did not happen as depicted.
The Anatomy of a Fake: Creating a False Reality
Modern AI tools allow almost anyone to create highly realistic simulations of war with simple text prompts, often for free. This has led to the creation of an alternate, more dramatic version of the war that is highly optimized for social media engagement.
Real footage from conflict zones is often filmed from a distance, particularly at night, showing missiles as small points of light and explosions as distant plumes of smoke. In stark contrast, AI-generated content frequently resembles a Hollywood action film.
"The use of A.I. images of places in the Gulf — being burnt or damaged — becomes more important in Iran’s playbook because it allows them to give a sense that this war is more destructive and maybe more costly for America’s allies than it might actually be." - Marc Owen Jones
These fakes often feature exaggerated details like massive fireballs, mushroom clouds, and sonic booms rippling through cityscapes. In one widely circulated fake video, a shaky camera view from a Tel Aviv balcony shows the skyline being bombarded by missiles. Analysts identified it as AI-generated, partly due to the prominent Israeli flag in the foreground—a common artifact when AI is prompted to create a scene related to Israel.
A New Front in Information Warfare
The strategic use of this AI-generated content has become a potent tool for shaping public perception. According to a study by social media intelligence company Cyabra, the majority of these fakes promote pro-Iranian views. They are often used to falsely demonstrate military superiority or to create an impression of widespread devastation caused by U.S. and Israeli actions.
Case Study: The U.S.S. Abraham Lincoln
A clear example of this strategy unfolded around the U.S.S. Abraham Lincoln. On March 1, the navy of Iran's Islamic Revolutionary Guard Corps claimed a successful attack on the aircraft carrier. The claim was immediately followed by a wave of AI-generated images and videos showing the ship on fire, which Iranian users celebrated as proof of a successful counteroffensive. The United States later confirmed that the attack had failed and the ship was unharmed, but not before the false narrative had spread widely.
This tactic goes beyond simple misinformation. "This is a natural front for Iran to try and exploit and it feels like this is one of the reasons it is so voluminous," explained Valerie Wirtschafter, a fellow at the Brookings Institution. "It’s actually a tool of war."
The Platform Response and a Lingering Challenge
Social media companies have been slow to adapt to this new wave of synthetic media. While many AI generation tools can embed invisible watermarks to identify content as fake, these are easily removed or obscured by those looking to spread disinformation. Very few of the videos analyzed contained any such watermarks.
In response to the growing problem, some platforms are taking initial steps. X, owned by Elon Musk, announced that accounts posting unlabeled AI-generated content related to armed conflict would be barred from its ad-revenue-sharing program for 90 days. The measure is aimed at removing the financial incentive for spreading such falsehoods.
However, experts suggest that the primary motivation for many state-linked accounts is not profit but influence. The ability to shape the narrative, demoralize an adversary's population, and project strength remains a powerful incentive. As AI technology continues to advance, the line between real and fake footage will likely become even harder to discern, presenting a persistent challenge for journalists, governments, and the public alike.