As artificial intelligence-generated videos become increasingly realistic, experts are pointing to a surprising clue that can help you spot a fake: poor image quality. Grainy, blurry, or low-resolution footage is often more deceptive because it effectively hides the subtle errors and inconsistencies that advanced AI models still produce.
Viral clips, including one of bunnies on a trampoline and another of a couple falling in love on a subway, have fooled millions online. A common thread among these deceptive videos is their intentionally degraded appearance, which mimics old security camera footage or a poor-quality phone recording, making it harder for the human eye to detect digital manipulation.
Key Takeaways
- Low-quality video is not proof of a fake, but it is a significant warning sign that a clip may be AI-generated.
- Creators of fake content often intentionally degrade video quality to hide AI artifacts like unnatural skin textures or background inconsistencies.
- Experts advise checking a video's length, as most AI-generated clips are very short, often under 10 seconds.
- In the long term, verifying the source and origin (provenance) of a video will be more reliable than trying to spot visual flaws.
The Psychology of Deception
Social media feeds are now saturated with AI-generated content, making it difficult to distinguish between reality and digital fabrication. While the most advanced AI video generators, such as OpenAI's Sora and Google's Veo, can produce high-definition clips, the fakes that are most likely to trick you are often the ones that look the worst.
Hany Farid, a professor of computer science at the University of California, Berkeley, and a leading expert in digital forensics, explains that poor quality is one of the first things he examines when analyzing a video. "If I'm trying to fool people, what do I do? I generate my fake video, then I reduce the resolution... And then I add compression that further obfuscates any possible artefacts," he says. "It's a common technique."
This method works because it obscures the small but telling mistakes AI still makes. These include unnaturally smooth skin, strange patterns in clothing or hair, and background elements that shift or move in physically impossible ways. In a crystal-clear video, these errors might be noticeable. In a blurry one, they blend into the noise.
Viral Fakes and Common Traits
Several high-profile examples demonstrate this principle in action. A video appearing to show wild bunnies on a trampoline, framed as grainy security footage, garnered over 240 million views on TikTok. Another pixelated clip of a supposed chance romantic encounter on the New York subway also went viral before being revealed as an AI creation. In both cases, the low fidelity of the video added a layer of perceived authenticity that disarmed viewers' skepticism.
Resolution vs. Compression
It's important to understand the two factors that contribute to poor video quality. Resolution refers to the number of pixels in an image; fewer pixels mean a less detailed, blockier picture. Compression is a process that reduces a video's file size by discarding some data, which can result in blurring and other visual distortions. Both can be used to hide the tell-tale signs of AI.
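To make the distinction concrete, here is a minimal sketch of how a single video frame could be deliberately degraded in both ways: first by shrinking its resolution, then by re-encoding it with lossy compression. It uses the Pillow imaging library, and the file names are hypothetical placeholders, not part of any real workflow described by the experts quoted here.

```python
# Sketch: degrade one frame by reducing resolution and applying lossy
# compression, the two quality factors described above.
from PIL import Image

frame = Image.open("frame.png").convert("RGB")  # hypothetical input frame

# Resolution: halve the width and height, discarding the fine detail
# (skin texture, hair patterns) where AI errors tend to show up.
low_res = frame.resize((frame.width // 2, frame.height // 2))

# Compression: save as JPEG at a low quality setting, which throws away
# additional data and introduces blur and blocky artifacts.
low_res.save("frame_degraded.jpg", quality=20)
```

Either step alone softens detail; together they can bury the subtle inconsistencies a sharp original would reveal.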
According to Farid, there are three key elements to scrutinize when you suspect a video might be fake: resolution, quality, and length.
The Telltale Sign of Length
The length of a video is often the easiest giveaway. Generating AI video is computationally expensive, so most publicly available tools limit clips to just a few seconds.
"The vast majority of videos I get asked to verify are six, eight or 10 seconds long," Farid notes. "For the most part, AI videos are very short, even shorter than the typical videos we see on TikTok or Instagram."
While multiple short clips can be stitched together to create a longer video, this often results in noticeable cuts or transitions every few seconds, another potential red flag for the discerning viewer.
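One rough way to check for such stitching is to look for abrupt jumps between consecutive frames. The sketch below, which assumes OpenCV is installed and uses an arbitrary difference threshold and a hypothetical file name, flags timestamps where neighboring frames change sharply; it is an illustration of the idea, not a forensic tool.

```python
# Crude sketch: flag possible hard cuts by measuring how much each
# frame differs from the previous one. A spike in the mean absolute
# difference often corresponds to an abrupt scene change.
import cv2

def find_hard_cuts(path: str, threshold: float = 40.0) -> list[float]:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is missing
    cuts, prev, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and cv2.absdiff(gray, prev).mean() > threshold:
            cuts.append(index / fps)  # timestamp in seconds
        prev, index = gray, index + 1
    cap.release()
    return cuts

print(find_hard_cuts("suspect_clip.mp4"))  # hypothetical file
```

A clip that shows a "cut" every six to ten seconds, matching the short lengths Farid describes, deserves a closer look.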
Preparing for a Post-Truth Future
Experts warn that relying on visual cues is a temporary strategy. The technology is improving at an exponential rate, and the flaws we can spot today may be gone within months or a couple of years.
"I would anticipate that these visual cues are going to be gone from video within two years, at least the obvious ones, because they've pretty much evaporated from AI-generated images already," says Matthew Stamm, a professor at Drexel University who heads the Multimedia and Information Security Lab. "You just can't trust your eyes."
The long-term solution, according to digital literacy experts, requires a fundamental shift in how we consume online content. Mike Caulfield, a research scientist, argues that we must learn to treat video the same way we treat text: with inherent skepticism until the source is verified.
Researchers are developing advanced forensic techniques to detect "statistical traces" left behind by AI generation. These digital fingerprints, invisible to the naked eye, can reveal differences in pixel distribution or other patterns that are unique to machine-created content.
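As a toy illustration of what a statistical trace might look like, the sketch below compares how much energy two images carry in their high spatial frequencies, one crude property of pixel distribution. Real forensic detectors are far more sophisticated; the file names are hypothetical and the measure is only meant to show that such patterns can be computed at all.

```python
# Toy illustration of a "statistical trace": the share of an image's
# spectral energy that sits in high spatial frequencies (fine detail).
import numpy as np
from PIL import Image

def high_frequency_ratio(path: str) -> float:
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Mask out the low-frequency centre; what remains is fine detail.
    keep = np.ones_like(spectrum, dtype=bool)
    keep[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8] = False
    return float(spectrum[keep].sum() / spectrum.sum())

# Hypothetical comparison between a real photo and a suspected AI frame.
print(high_frequency_ratio("real_photo.png"))
print(high_frequency_ratio("suspect_frame.png"))
```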
The focus must move from analyzing a video's pixels to investigating its provenance: where it came from, who posted it, and what context surrounds it. Technology companies are also working on standards that could embed verifiable information into a file at the moment of its creation, essentially creating a digital birth certificate for images and videos.
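The sketch below illustrates the "birth certificate" idea in its simplest possible form: fingerprint a file when it is created and sign that fingerprint, so any later alteration can be detected. The signing key and file name are hypothetical, and real provenance standards embed signed metadata inside the file itself using public-key certificates rather than a shared secret like this.

```python
# Minimal sketch of a provenance record: hash a file at creation time
# and sign the hash, so later edits are detectable.
import hashlib
import hmac

SIGNING_KEY = b"device-secret"  # hypothetical key held by the capture device

def birth_certificate(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    # Sign the digest so the record itself cannot be forged without the key.
    return hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()

def still_authentic(path: str, certificate: str) -> bool:
    return hmac.compare_digest(birth_certificate(path), certificate)

# Hypothetical usage: certify a clip at capture, verify it before sharing.
cert = birth_certificate("clip.mp4")
print(still_authentic("clip.mp4", cert))
```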
Ultimately, a combination of technological tools, public education, and intelligent policy will be needed to navigate an information landscape where seeing is no longer believing. As Stamm puts it, "I think this is the greatest information security challenge of the 21st Century... I'm not prepared to give up hope."