
AI-Generated Crowds Evolve, Creating New Disinformation Risks

Advanced AI models can now create highly realistic crowd scenes, raising concerns about their potential use for political misinformation and social manipulation.

By Jessica Albright

Jessica Albright is a technology and culture correspondent for Neurozzio, reporting on how digital platforms and artificial intelligence are reshaping human interaction, relationships, and social norms.


Artificial intelligence models are becoming increasingly proficient at creating realistic images and videos of large crowds, a development that presents significant challenges for distinguishing authentic events from fabricated ones. As technology from companies like OpenAI and Google advances, the potential for political and social manipulation through digitally generated audiences is growing, prompting a debate on detection and platform responsibility.

Key Takeaways

  • Advanced AI models like OpenAI's Sora 2 and Google's Veo 3 can now generate highly realistic crowd scenes, which were previously a major technical hurdle.
  • The ability to fake large crowds poses a risk for political manipulation by creating false impressions of support for rallies, protests, or events.
  • Conversely, authentic images of large gatherings can be easily dismissed as AI-generated fakes, a tactic known as the "liar's dividend."
  • Technology companies are implementing watermarking and labeling systems, but these measures are not yet standardized and can be easily missed by viewers.

The Challenge of Simulating Reality

Generating a convincing crowd has long been a difficult task for AI. Each individual in a crowd has unique features, movements, and interactions, creating countless variables that must all be rendered accurately for the scene to pass as real. San Francisco-based visual artist and researcher kyt janae noted the intricacy involved.

"You're managing so many intricate details," janae explained. "You have each individual human being in the crowd. They're all moving independently and have unique features – their hair, their face, their hat, their phone, their shirt."

However, recent advancements in generative AI are rapidly overcoming these obstacles. Models such as Google's Veo 3 and OpenAI's Sora 2 demonstrate a remarkable ability to produce fluid and detailed crowd scenes. This progress is blurring the lines between what is real and what is created by a machine.

"We're moving into a world where in a generous time estimate of a year, the lines of reality are going to get really blurry," janae stated. "And verifying what is real and what isn't real is going to almost have to become like a practice."

The Will Smith Video Incident

A recent viral video of a Will Smith concert highlighted the public's growing awareness of AI manipulation. Viewers identified strange visual anomalies in the audience, such as distorted faces and fingers, leading to widespread speculation that the crowd was AI-generated. While Smith's team has not commented on the video's creation, the event served as a public demonstration of the technology's capabilities and its current imperfections.

Political and Social Implications

The appearance of a large, enthusiastic crowd often serves as a powerful visual symbol of success and popular support. This is true for political rallies, social movements, and entertainment events. The ability to artificially inflate or create these crowds from scratch presents a significant tool for manipulation.

Thomas Smith, CEO of Gado Images, a company that uses AI for managing visual archives, described crowd size as a key metric. "We want a visual metric, a way to determine whether somebody is succeeding or not," he said. "And crowd size is often a good indicator of that."

Smith warned that this technology could be used to deceive the public. "AI is a good way to cheat and kind of inflate the size of your crowd," he added.

Prevalence of AI-Generated Content

According to a report from the global consulting firm Capgemini, nearly 75% of images shared on social media in 2023 were created using artificial intelligence. This statistic underscores the rapid integration of generative AI into everyday digital communication.

The Liar's Dividend

The rise of convincing fakes creates a secondary problem: the ability to discredit genuine content. This phenomenon, often called the "liar's dividend," occurs when real images or videos are dismissed as AI-generated fakes because they are politically or socially inconvenient.

"If there's a real image that surfaces and it shows something that's politically inconvenient or damaging, there's also going to be a tendency to say, 'no, that's an AI fake,'" Smith explained.

An example of this occurred in August 2024, when former President Donald Trump falsely claimed that an image showing a large crowd of supporters for his political rival, Kamala Harris, was generated by AI.

The Tech Industry's Response

Technology companies and social media platforms are now faced with the challenge of balancing creative freedom with the need to prevent the spread of misinformation. The primary methods being developed are watermarking and content labeling.

Oliver Wang, a principal scientist at Google DeepMind, confirmed that preventing misinformation is a priority. "Misinformation is something that we do take very seriously. So we are stamping all the images that we generate with a visible watermark and an invisible watermark," Wang said. Google's invisible watermark technology is known as SynthID.

However, the effectiveness of these measures is debatable. Current visible watermarks, such as the one used on videos from Google's Veo 3, are often small and located in the corner of the screen, making them easy for casual viewers to overlook, especially on mobile devices.
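
For readers curious how an invisible watermark can be imperceptible to people yet detectable by software, the toy Python sketch below embeds a faint, key-derived pattern into pixel values and later checks for it by correlation. It illustrates the general principle only; it is not a description of SynthID or any production scheme, and all function names are hypothetical.

```python
# Toy sketch only: a naive, spread-spectrum-style invisible watermark.
# This is NOT SynthID or any production scheme; it only illustrates the
# idea of hiding a low-amplitude, key-derived signal in pixel values and
# recovering it later by correlation.
import numpy as np

def embed_watermark(pixels: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a faint pseudo-random +/-1 pattern, derived from `key`, to an image array."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=pixels.shape)
    marked = pixels.astype(float) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(pixels: np.ndarray, key: int, threshold: float = 1.0) -> bool:
    """Correlate the image with the key's pattern; a high score suggests the mark is present."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=pixels.shape)
    centered = pixels.astype(float) - pixels.mean()
    score = float((centered * pattern).mean())
    return score > threshold

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
    marked = embed_watermark(image, key=42)
    print(detect_watermark(marked, key=42))   # True: the key's pattern is present
    print(detect_watermark(image, key=42))    # False: an unmarked image shows no correlation
```

Real systems are far more robust to cropping, compression, and re-encoding than this sketch, but the trade-off it hints at is genuine: the mark must be weak enough to stay invisible yet structured enough to survive detection.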

Inconsistent Platform Policies

A lack of industry-wide standards means that labeling policies vary significantly from one platform to another. This inconsistency creates a confusing environment for users trying to identify AI-generated content.

  • Meta (Facebook and Instagram): Labels content when creators disclose its AI origins or when the company's internal systems detect it.
  • Google (YouTube): Automatically applies a label in the description for videos made with its own generative tools. It relies on creators to self-disclose when using third-party AI tools.
  • TikTok: Requires creators to label any AI-generated or significantly edited content that depicts realistic scenes or people. The platform may remove or label unlabeled content itself, depending on its potential for harm.
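
The differences above can be reduced to a simple decision rule. The hypothetical Python sketch below shows how a platform might combine creator self-disclosure with automated detection when deciding whether to label an upload; the field names, categories, and outcomes are illustrative assumptions, not any platform's actual policy or API.

```python
# Hypothetical sketch of platform labeling logic, loosely mirroring the
# policies summarized above. Field names and outcomes are illustrative
# assumptions, not any platform's real rules.
from dataclasses import dataclass

@dataclass
class Upload:
    creator_disclosed_ai: bool      # uploader self-disclosed AI use
    detector_flagged_ai: bool       # internal classifier flagged the content
    depicts_realistic_scene: bool   # realistic scenes or people, per policy
    high_harm_potential: bool       # e.g., political or safety-relevant content

def labeling_decision(upload: Upload) -> str:
    """Return a coarse moderation outcome for an uploaded video."""
    if upload.creator_disclosed_ai or upload.detector_flagged_ai:
        return "apply AI-generated label"
    if upload.depicts_realistic_scene and upload.high_harm_potential:
        # Undisclosed but potentially harmful realistic content may be
        # labeled or removed after review rather than left untouched.
        return "escalate for review"
    return "no label"

print(labeling_decision(Upload(True, False, True, False)))   # apply AI-generated label
print(labeling_decision(Upload(False, False, True, True)))   # escalate for review
```

Because each platform weighs these inputs differently, the same video can carry a label on one service and none on another, which is precisely the inconsistency critics point to.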

Charlie Fink, a Chapman University lecturer who writes for Forbes, pointed out that the viewing environment compounds the problem. "The challenge is that most people are watching content on a small screen, and most people are not terribly critical of what they see and hear," Fink said. "If it looks real, it is real."

As the technology continues to advance at a rapid pace, the development of robust, clear, and standardized systems for identifying synthetic media will become increasingly critical for maintaining a shared sense of reality in the digital age.