Artificial intelligence (AI) is becoming a common part of daily life for many young people. Tools like ChatGPT are available in free online versions, making them easily accessible to children and teenagers. These AI chatbots, built on large language models (LLMs), generate human-like responses. This development has sparked worry among parents, educators, and researchers about potential impacts on how younger generations think and learn.
Key Takeaways
- Over a quarter of U.S. teens use ChatGPT for schoolwork, double the rate from the previous year.
- Regulators are examining how AI chatbots affect children and teenagers.
- Early AI exposure may negatively impact critical thinking and learning.
- Experts recommend developing skills first before relying on AI tools.
- Privacy risks and the tendency to anthropomorphize AI are also concerns.
Growing Use of AI Among Teens
The presence of AI in education and daily life for young people is expanding rapidly. A 2024 survey by the Pew Research Center revealed that 26% of U.S. teens aged 13 to 17 have used ChatGPT for school-related tasks. This figure represents a doubling of usage compared to the previous year.
Awareness of ChatGPT also grew significantly. In 2023, 67% of teens knew about the chatbot. This number rose to 79% in 2024. This widespread adoption highlights the need to understand its effects on young users.
Regulatory Scrutiny and Industry Responses
Government bodies have started to take notice of these trends. In September, the Federal Trade Commission (FTC) issued orders to seven major technology companies. These included OpenAI, Alphabet, and Meta. The FTC requested information on how their AI chatbots might affect children and teenagers.
"Regulators and technology companies share the responsibility to protect society and young people by having the right guardrails in place."
In response to increasing scrutiny, OpenAI announced plans in the same month to launch a specialized ChatGPT experience. This version will include parental controls for users under 18. The company also stated it would develop tools to better predict a user's age. The system aims to automatically direct minors to "a ChatGPT experience with age-appropriate policies."
AI Adoption Snapshot
- 26% of U.S. teens (13-17) used ChatGPT for school in 2024.
- This is double the rate from 2023.
- 79% of teens were aware of ChatGPT in 2024, up from 67% in 2023.
Potential Cognitive Impacts of Early AI Exposure
Some experts worry that early and extensive exposure to AI could negatively affect how children and teens learn and think, especially as younger generations grow up with the technology. Researchers at MIT's Media Lab conducted a preliminary study in 2025 focusing on the cognitive cost of using LLMs for writing essays.
The study involved 54 participants aged 18 to 39. They were divided into three groups: one using an AI chatbot, another using a search engine, and a third relying solely on their own knowledge. The paper, currently undergoing peer review, found that brain connectivity "systematically scaled down with the amount of external support."
Brain Activity Differences Observed
The research highlighted clear differences in neural activity among the groups. The group that relied only on their own knowledge showed the strongest and most widespread brain networks, while the search engine group displayed intermediate engagement. The group using LLM assistance exhibited the weakest overall neural coupling.
Understanding Cognitive Debt
The MIT study introduced the concept of "cognitive debt." This refers to a pattern of delaying mental effort in the short term. Over time, this can reduce creativity and make users more susceptible to manipulation. Relying too heavily on AI tools may lead to a reduced sense of ownership over one's work.
Nataliya Kosmyna, the research scientist who led the MIT Media Lab study, stated, "The convenience of having this tool today will have a cost at a later date, and most likely it will be accumulated." She also noted that the findings suggested relying on LLMs could lead to "significant issues with critical thinking."
Mitigating Risks for Children
Children are particularly vulnerable to some of the negative cognitive and developmental effects of using AI chatbots too soon. To address these risks, researchers emphasize the importance of individuals, especially young people, first developing fundamental skills and knowledge before depending on AI tools for tasks.
"Develop the skill for yourself [first], even if you are not becoming an expert in it," Kosmyna advised. This approach helps users more easily identify inconsistencies and "AI hallucinations," which occur when AI presents inaccurate or fabricated information as facts. This skill also supports the development of critical thinking.
Limiting Generative AI for Younger Children
Pilyoung Kim, a professor at the University of Denver and a child psychology expert, suggests limiting generative AI use for younger children. "For younger children ... I would imagine that it is very important to limit the use of generative AI, because they just really need more opportunities to think critically and independently," Kim explained.
Beyond cognitive impacts, privacy risks are also a concern that children may not fully grasp. Kosmyna stressed the importance of responsible and safe use of these tools. "We do need to teach overall, not just AI literacy, but [also] computer literacy," she said, adding that "You need really clear tech hygiene."
Anthropomorphism and Vulnerability
Children also tend to anthropomorphize more readily than adults, attributing human characteristics or behaviors to non-human entities. Kim noted, "Now we have these machines that talk just like a human," which can place children in vulnerable situations. Simple praise from social robots, she explained, "can really change their behavior."
As the current generation grows up with ubiquitous access to AI tools, experts are asking critical questions about the long-term effects of extended use. "It's too early [to know]," Kosmyna admitted. "No one is doing studies on three-year-olds, of course, but it's something very important to keep in mind that we do need to understand what happens to the brains of those who ... are using these tools very young."
Kosmyna also pointed to serious concerns already emerging. "We see cases of AI psychosis. We see cases of, you know, unaliving. We see some deep depressions... and it's very concerning and sad, and ultimately dangerous."
Shared Responsibility for Safety
Both Kosmyna and Kim agree that regulators and technology companies share a significant responsibility. They must work together to protect society and, particularly, young people by establishing appropriate safeguards.
For parents, Kim's advice is straightforward: maintain open communication with your children. Regularly monitor the AI tools they use and pay attention to what they input into these large language models. This proactive approach can help ensure a safer digital experience for young users.