Recent academic studies are raising questions about the cognitive impact of relying on artificial intelligence for creative and research-based tasks. An experiment conducted at the Wharton School of the University of Pennsylvania found that individuals using AI-generated summaries produced significantly less nuanced and original work compared to those using traditional web search methods.
This finding contributes to a growing body of evidence suggesting that while AI tools are marketed as productivity enhancers, their overuse may inadvertently hinder the very cognitive skills they are meant to support, such as critical thinking and creativity.
Key Takeaways
- A study found that people using AI for a writing task produced generic and less helpful advice compared to those using traditional Google search.
- Traditional search users developed more nuanced and comprehensive ideas, touching on physical, mental, and emotional health.
- Experts suggest this points to a risk of cognitive outsourcing, where critical thinking skills may atrophy from over-reliance on AI summaries.
- The phenomenon is being discussed in the context of 'brain rot,' a term describing a perceived decline in cognitive sharpness due to digital media.
A Tale of Two Research Methods
In a revealing experiment, Shiri Melumad, a professor at the Wharton School, assigned a simple task to 250 participants: write advice for a friend on living a healthier lifestyle. The group was split in two: one half could use only AI-generated summaries from Google's AI tools, while the other half gathered information using traditional Google web search.
The results were stark. The group relying on AI produced advice that was described as generic and obvious. Their suggestions included basic tips like “eat healthy foods,” “stay hydrated,” and “get lots of sleep.” While factually correct, the advice lacked depth and originality.
In contrast, the participants who conducted their own research using a standard search engine delivered more sophisticated and thoughtful guidance. Their advice explored the interconnected pillars of wellness, including not just physical health but also the critical roles of mental and emotional well-being. This group demonstrated a deeper level of synthesis and understanding.
Study at a Glance
Task: Write advice on leading a healthier lifestyle.
Group 1 (AI Users): Produced generic advice (e.g., stay hydrated).
Group 2 (Traditional Search): Produced nuanced advice (e.g., focus on physical, mental, and emotional wellness).
The Risk of Cognitive Outsourcing
The findings from Dr. Melumad's study highlight a concern that resonates across multiple academic fields: the potential for cognitive outsourcing. When we rely on AI to summarize information and generate ideas, we are essentially offloading the mental processes of research, analysis, and synthesis.
While this can save time, some researchers worry it may prevent us from engaging in the deep thinking required to form complex thoughts and original insights. The process of sifting through search results, evaluating different sources, and connecting disparate pieces of information is a cognitive workout. AI summaries, by design, remove this effort.
This isn't a new phenomenon. We've seen similar effects with other technologies, like the way GPS navigation has eroded many people's spatial awareness and ability to remember routes. The concern is that AI, which takes over far more complex cognitive tasks than navigation, could have a correspondingly broader impact on our mental abilities.
"The tech industry tells us that chatbots and new A.I. search tools will supercharge the way we learn and thrive... But... people who rely heavily on chatbots and A.I. search tools for tasks like writing essays and research are generally performing worse than people who don’t use them."
'Brain Rot' in the Digital Age
The term 'brain rot' has gained traction online to describe a perceived decline in cognitive function and attention spans, often attributed to overconsumption of low-quality social media content. Now, some are applying the concept to the uncritical use of generative AI.
The core issue is the passive consumption of information. AI chatbots and search summaries present information as a finished product, often stripped of context, sourcing, and conflicting viewpoints. This can discourage the user from questioning, verifying, or digging deeper—all essential components of active learning and critical thought.
How to Mitigate the Risks
Experts are not advocating for abandoning AI altogether. Instead, they suggest a more mindful and strategic approach to using these powerful tools. The goal is to use AI as a collaborator, not a replacement for thinking.
Strategies for Healthy AI Use
- Use AI as a Starting Point: Treat AI-generated content as a first draft or a brainstorming partner, not the final product.
- Question and Verify: Always fact-check AI-provided information. Ask the AI for its sources and then review them yourself.
- Engage in Deep Work: Deliberately set aside time for tasks that require focused, deep thinking without the aid of AI.
- Retain the Research Process: Continue to use traditional search methods to compare sources and develop your own understanding before turning to AI for summarization.
The Future of Learning and Work
As AI becomes more integrated into our daily workflows and educational systems, understanding its cognitive effects is crucial. The promise of AI is to augment human intelligence, freeing us from tedious tasks to focus on higher-level creativity and strategy.
However, early research indicates a potential pitfall. If we aren't careful, we risk creating a dependency that weakens the very skills we need to innovate. The Wharton study serves as an important reminder that the process of discovery is often as valuable as the final answer.
The challenge moving forward will be to find a balance—leveraging AI for its incredible efficiency while actively preserving and strengthening our innate human capacity for deep, creative, and critical thought. The conversation is no longer just about what AI can do for us, but also about what it might be doing to us.