A new wave of highly realistic, artificially generated videos and images depicting a fictional conflict with Iran is rapidly spreading across social media platforms. These fabrications, created with easily accessible AI tools, are accumulating tens of millions of views and represent a significant escalation in the spread of digital disinformation during global crises.
Unlike the cruder fakes seen in past conflicts, which often involved mislabeled video game clips or old footage, this new content is custom-made and convincing enough to deceive the average user. Experts warn that the sheer volume and quality of this AI-generated propaganda are overwhelming traditional fact-checking efforts.
Key Takeaways
- High-quality, AI-generated videos and images depicting a fictional Iran-US conflict are going viral.
- This marks a shift from older forms of disinformation, which used repurposed footage.
- Experts say the accessibility of generative AI has made it easy to create believable fake war scenes.
- Social media platforms are struggling to contain the spread, with some policies seen as insufficient.
- Identifying these fakes is becoming increasingly difficult, even for trained eyes, as the technology improves.
A New Era of Digital Deception
The information landscape surrounding international conflicts has become increasingly complex. In the past, disinformation often consisted of misrepresenting real events or using footage from movies and video games. Now, generative artificial intelligence allows anyone to create entirely new, fictional scenes of war from scratch.
Since the fictional conflict between the US and Iran began, social media feeds have been inundated with fabricated content. These AI-generated creations range from dramatic videos of missile strikes on Tel Aviv to images of captured US soldiers. One widely circulated video purports to show panicked civilians fleeing an attack at an airport, while another depicts a downed American aircraft being paraded through Tehran. None of these events actually occurred.
Hany Farid, a digital forensics professor at the University of California, Berkeley, noted the dramatic increase in both quantity and quality. “Ten years ago, there’d be like one or two fake things out there; they’d get debunked pretty fast... Now you see hundreds of them, and they’re really realistic,” he said. Farid emphasized the impact, stating, “It’s not just realistic, it’s landing — it’s landing hard. People believe it and they’re amplifying it.”
The Rise of Accessible AI
The key change is the widespread availability of powerful AI image and video generators. According to Shayan Sardarizadeh, a senior journalist with BBC Verify, “generative AI has become much more widely accessible.” This allows for the creation of “very believable videos and images appearing to show a significant war incident that is hard to detect to the untrained or naked eye.”
The Scope of the Disinformation Campaign
The fabricated content is not limited to videos. A torrent of AI-generated still images has also appeared, purporting to show burning US military installations in Iraq and Saudi Arabia, the deceased Iranian Supreme Leader under rubble, and grieving Iranian civilians. Even a publication with links to the Iranian government shared a fake satellite image allegedly showing damage to a US base in Bahrain.
The motivations behind this flood of fakes vary. Some content is clearly pushed by pro-Iran accounts as propaganda. In other cases, the goal may be to generate social media engagement, which can translate into influence or even financial gain. For some creators, the simple ability to make something realistic may be the only motivation needed.
Examples of Viral AI Fakes
- Video: A supposed missile barrage striking Tel Aviv.
- Video: US special forces allegedly captured by Iranian troops.
- Image: A false depiction of the US Embassy in Saudi Arabia on fire.
- Image: A fabricated satellite view of a damaged US military base.
This surge of sophisticated fakes is occurring in an environment ripe for disinformation. Heightened political polarization and social media algorithms that create echo chambers mean users are often exposed only to content that confirms their existing beliefs. Compounding the issue, many major social media companies have scaled back their content moderation efforts.
“The content is more realistic, the volume is higher, the penetration is deeper — this is our new reality. And it’s really messy,” Farid concluded.
An Uphill Battle for Platforms and Users
Social media platforms are facing immense pressure to address the problem, but their responses have been limited. The platform X announced it would suspend users from its creator payment program if they share AI-generated conflict videos without disclosure. However, the policy applies only to the small minority of users enrolled in that program, leaving the vast majority of accounts untouched. Skeptics like Farid doubt it will have a significant impact.
Further complicating matters, X's own AI chatbot, Grok, has reportedly misidentified several fake AI-generated images and videos as authentic, actively contributing to the spread of misinformation. Other major platforms like Meta (owner of Facebook and Instagram) and TikTok have not provided recent comments on their strategies to combat war-related AI fakes.
How to Navigate the New Reality
For the average person, distinguishing fact from fiction has never been more challenging. The old tricks for spotting AI fakes, such as looking for malformed hands or limbs, are quickly becoming obsolete as the technology improves.
Experts offer several recommendations for staying informed:
- Rely on Trusted Sources: Professor Farid’s primary advice is to get news from credible, established journalistic organizations rather than from “random accounts” on social media. During a global crisis, social feeds are not reliable sources of information.
- Pause and Verify: Before believing or sharing a sensational video or image, take a moment to conduct a quick online search. Check if reputable fact-checkers or news outlets have reported on it.
- Look for Inconsistencies: While AI is improving, flaws can still exist. Look for out-of-sync audio, strange visual artifacts, or details that don't match the real world. Some AI tools also leave a faint watermark on their creations.
- Check the Conversation: Examine the replies or community notes on a post. Often, other users will raise valid questions or point out that the content has been debunked.
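For the more technically inclined, one quick automated check complements the steps above: inspecting an image file's embedded metadata, since some generators record the tool that produced the image in PNG text chunks or the EXIF "Software" tag. The sketch below uses the Pillow library; the specific metadata keys and tool names checked are illustrative assumptions, and a clean result proves nothing, because metadata is trivially stripped or forged.

```python
# Minimal sketch: look for AI-generator hints in an image's metadata.
# Assumes Pillow is installed (pip install Pillow). The tool names checked
# are examples only; absence of hints does NOT establish authenticity.
from PIL import Image

GENERATOR_TERMS = ("stable diffusion", "midjourney", "dall", "firefly")

def find_generator_hints(path):
    """Return metadata entries that may indicate AI generation."""
    hints = {}
    with Image.open(path) as img:
        # PNG text chunks surface in img.info; some generators write a
        # "parameters" or similar key describing the generation settings.
        for key, value in img.info.items():
            if isinstance(value, str) and any(
                term in value.lower() for term in GENERATOR_TERMS
            ):
                hints[key] = value
        # The EXIF Software tag (0x0131) sometimes names the creating tool.
        software = img.getexif().get(0x0131)
        if software:
            hints["Software"] = software
    return hints
```

A non-empty result is a reason for suspicion, not a verdict; conversely, most fakes circulating on social media have been re-encoded by the platforms, which strips this metadata entirely, so the manual checks above remain essential.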
Sardarizadeh suggests that we should all be “training our eyes” to better recognize AI-generated material. However, he cautions that the path forward is difficult. “It is becoming extremely difficult to detect AI-generated content, and the trajectory appears to be heading in the direction of it becoming even more difficult soon.”