The artificial intelligence platform Character.ai has announced it will block users under the age of 18 from its conversational chatbot features, a significant policy shift following intense scrutiny over the safety of its platform for young people. The company confirmed the change will take effect on November 25, limiting teenage users to content generation tools rather than open-ended conversations with AI personas.
This decision comes as the company faces multiple lawsuits in the United States, including one related to the death of a teenager. Critics and online safety advocates have raised alarms about the potential for AI companions to pose risks to vulnerable users, citing the technology's ability to feign empathy and create powerful emotional bonds.
Key Takeaways
- Character.ai will restrict users under 18 from its conversational AI chatbots starting November 25.
- The change follows lawsuits and widespread criticism regarding the platform's safety for minors.
- Teens will still be able to use content generation features, such as creating videos with AI characters.
- The company plans to introduce new age verification methods and fund an AI safety research lab.
A Response to Mounting Pressure
Character.ai, a platform founded in 2021 that allows millions to interact with AI-powered chatbots, has been at the center of a growing debate about the impact of AI on youth. The company stated its decision was influenced by "reports and feedback from regulators, safety experts, and parents."
Karandeep Anand, the CEO of Character.ai, framed the move as part of an ongoing commitment to safety. "Today's announcement is a continuation of our general belief that we need to keep building the safest AI platform on the planet for entertainment purposes," he said. Anand acknowledged that AI safety is "a moving target" but emphasized that the company has taken an "aggressive" approach with existing parental controls and guardrails.
However, online safety organizations argue that such measures should have been integral from the platform's launch. Internet Matters, a non-profit focused on online safety, welcomed the announcement but noted that its research shows children are exposed to risks when engaging with AI chatbots. The group stressed that safety measures should be built in from the start rather than retrofitted after problems arise.
A History of Controversial Content
The platform has previously been criticized for hosting potentially harmful user-generated chatbots. In 2024, avatars impersonating Brianna Ghey, a murdered teenager, and Molly Russell, who took her own life after viewing harmful online content, were discovered on the platform and removed. In 2025, an investigation by the Bureau of Investigative Journalism found a chatbot based on convicted sex offender Jeffrey Epstein that had engaged in over 3,000 chats. According to the report, the bot flirted with a reporter who identified as a child, prompting its removal.
Industry and Advocates Weigh In
The policy change has drawn a range of reactions, from cautious approval to questions about the company's motives. The Molly Rose Foundation, established in memory of Molly Russell, suggested the move was a reaction to external pressure rather than a proactive measure for user safety.
"Yet again it has taken sustained pressure from the media and politicians to make a tech firm do the right thing, and it appears that Character AI is choosing to act now before regulators make them," said Andy Burrows, the foundation's chief executive.
Social media expert Matt Navarra described the decision as a "wake-up call" for the AI industry, signaling a shift from unchecked innovation toward regulation driven by public concern. He argued that the core issue is not just content moderation but the fundamental nature of the technology itself.
"This isn't about content slips. It's about how AI bots mimic real relationships and blur the lines for young users," Navarra explained. He added that the challenge for Character.ai now is to pivot to a new model that remains engaging for teens without the conversational element, preventing them from migrating to "less safer alternatives."
The Future of Teen Interaction with AI
Character.ai's new direction for its teenage users will focus on creative and gameplay-oriented features. Anand stated the company aims to provide "even deeper gameplay [and] role-play storytelling" which he believes will be "far safer than what they might be able to do with an open-ended bot."
To enforce the new age restrictions, the company will implement new age verification methods. It has also committed to funding a new AI safety research lab, signaling a longer-term investment in understanding and mitigating the risks associated with its technology.
Key Changes at Character.ai
- Chat Restriction: Under-18s will lose access to open-ended chatbot conversations.
- New Focus: Teens will be directed toward content creation tools like video generation.
- Age Verification: Enhanced systems will be introduced to enforce the new age limits.
- Research Funding: The company will establish a new AI safety research lab.
Dr. Nomisha Kurian, a researcher in AI safety, called the restriction a "sensible move." She believes it helps create a necessary distinction for young users who are still developing their understanding of emotional and digital boundaries.
"It helps to separate creative play from more personal, emotionally sensitive exchanges," Dr. Kurian noted. "Character.ai's new measures might reflect a maturing phase in the AI industry - child safety is increasingly being recognised as an urgent priority for responsible innovation."
As the AI industry continues its rapid expansion, the decision by Character.ai could set a precedent for how other platforms approach engagement with younger audiences. The balance between innovation, user engagement, and the fundamental responsibility to protect minors is now a central question for developers and regulators alike.