Recent large-scale studies indicate that approximately 74% of newly created online content in mid-2025 was produced with assistance from artificial intelligence or automated bots. This rapid increase in synthetic media is reshaping the digital landscape, with some projections suggesting that over 90% of all new content could be AI-generated by the end of 2025.
This shift raises significant questions about the nature of online interaction and has led to growing user demand for tools to filter or prioritize human-created content. While some platforms are beginning to offer these options, many of the largest social and search networks currently lack effective controls for users to manage their exposure to AI-generated material.
Key Takeaways
- Studies show AI-assisted content makes up nearly three-quarters of new online material.
- Companies are producing AI-generated podcasts, videos, and books on an industrial scale.
- Users have difficulty distinguishing between human and AI-generated images, with one study showing only 38% accuracy.
- A small number of search engines offer tools to filter AI content, but major platforms largely do not.
The Scale of AI-Generated Content
The proliferation of AI-generated content reflects a broader trend in technology where automation is used to produce information at an unprecedented scale. This phenomenon has drawn comparisons to the "Dead Internet Theory," a concept that suggests the internet is becoming dominated by non-human activity.
Understanding the Dead Internet Theory
Originating in the late 2010s, the Dead Internet Theory posits that most online content is no longer created by humans but by bots and AI. While the conspiratorial elements of the theory are widely dismissed, its core premise, that automated content is becoming the majority, is increasingly supported by data. Commercial pressure across Silicon Valley to produce content cheaply and at scale has made this outcome a practical reality, with no coordinated effort required.
Current data highlights the speed of this transformation. With an estimated 74% of new content already involving AI, the internet is undergoing a fundamental change. This raises concerns about the authenticity of information and the future of human-to-human communication online.
Automated Content Across Platforms
The production of synthetic media is not limited to one format. Companies are leveraging generative AI to create everything from audio podcasts and videos to books and images, often at a very low cost.
AI-Powered Podcasting and Video
One company, Inception Point AI, operates the Quiet Please Podcast Network, which has created over 5,000 podcast shows hosted by more than 50 distinct AI personalities. The company reports it can produce an episode for just $1, making it profitable with minimal advertising revenue.
"We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life," said Inception Point AI CEO Jeanine Wright.
Other startups like PodcastAI, Wondercraft AI, and Jellypod are also active in this space. In video, large media channels have amassed huge libraries, with some like Zee TV uploading over 215,000 videos. YouTube’s own Creator AI Studios provides tools that enable small teams to publish hundreds of videos daily through automated scripting and editing.
The Rise of AI-Generated Books and Images
The literary world has also been affected. In 2023, Amazon imposed a limit of three book uploads per user per day to manage the volume of AI-generated submissions. It is estimated that more than 70% of new self-published books on its Kindle platform are at least partially generated by AI.
Daily Image Generation Statistics
Since 2022, over 15 billion images have been created using text-to-image algorithms. Currently, an estimated 34 million new synthetic images are generated every day using models like Stable Diffusion.
These images are used across various sectors, including advertising, e-commerce, and entertainment. However, the technology is also used to create misinformation, scams, and other harmful content.
The Challenge of Identifying Synthetic Media
As AI models become more sophisticated, distinguishing between human-made and machine-made content grows more difficult for the average person. This has significant implications for trust and security online.
A 2025 report from Microsoft highlighted this challenge. The study found that 73% of survey respondents reported difficulty in spotting AI-generated images. When tested, participants were able to correctly identify synthetic images only 38% of the time. This suggests that a majority of users may not recognize when they are interacting with AI-generated content, even when it contains obvious flaws.
The low cost and high volume of synthetic media mean that within months, the internet could reach a point where over 99% of new content is machine-generated. This trend is pushing society toward a digital environment where interactions are primarily with machines rather than other people.
The Demand for User Control and Filtering
In response to the flood of automated content, there is a growing call for platforms to provide users with tools to manage what they see. A few services have already started implementing features that allow for the filtering of AI-generated content.
Examples of platforms offering user controls include:
- Kagi Search: This paid search engine allows users to filter image search results to include, exclude, or exclusively show AI-generated images. It also lets users block or downrank specific domains.
- DuckDuckGo: The privacy-focused search engine provides a dropdown menu in its image search to hide all AI-generated images.
- Freepik: The stock media platform Freepik has added a tool to exclude AI-generated results from searches.
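None of these platforms documents a public filtering API, but the underlying mechanism they expose to users is straightforward: treat an "AI-generated" label as item metadata and filter or suppress on it, alongside user-level domain blocks of the kind Kagi offers. The sketch below is purely illustrative; the `FeedItem` type, the `ai_generated` flag, and the mode names are assumptions for this example, not any platform's real schema:

```python
from dataclasses import dataclass

@dataclass
class FeedItem:
    title: str
    domain: str
    ai_generated: bool  # hypothetical label, e.g. from provenance metadata or a classifier

def filter_feed(items, mode="exclude_ai", blocked_domains=()):
    """Return items according to a user preference.

    mode: 'exclude_ai' hides AI-labeled items, 'only_ai' shows them
    exclusively, and any other value passes everything through.
    """
    results = []
    for item in items:
        if item.domain in blocked_domains:
            continue  # user-level domain block, similar to Kagi's feature
        if mode == "exclude_ai" and item.ai_generated:
            continue
        if mode == "only_ai" and not item.ai_generated:
            continue
        results.append(item)
    return results

feed = [
    FeedItem("Handmade pottery guide", "example.org", False),
    FeedItem("Top 10 facts (auto-written)", "contentfarm.example", True),
]
human_only = filter_feed(feed, mode="exclude_ai")  # keeps only the first item
```

The filtering logic itself is trivial; the hard problem is producing a trustworthy `ai_generated` label in the first place, whether from provenance metadata embedded at creation time or from after-the-fact detection, which the accuracy figures cited above suggest remains unreliable.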
Despite these examples, the majority of major content delivery platforms have yet to offer robust opt-out features. Services like Microsoft Bing, Facebook, Instagram, Reddit, X (formerly Twitter), and LinkedIn do not currently provide users with a clear way to avoid AI-generated content. As the volume of synthetic media continues to grow, industry experts and users are increasingly demanding that companies provide the tools necessary to prioritize or exclusively view content created by humans.