California has implemented a new law aimed at increasing safety for children and teenagers who use artificial intelligence chatbots. Signed by Governor Gavin Newsom, the legislation mandates that companies clearly disclose when a user is interacting with an AI and requires them to establish protocols for handling harmful content, particularly content related to self-harm.
Key Takeaways
- A new California law requires platforms to notify users they are interacting with an AI chatbot, not a human.
- For users identified as minors, this notification must reappear every three hours during a conversation.
- Companies must implement protocols to prevent the generation of self-harm content and refer users in crisis to support services.
- The law follows several high-profile lawsuits and reports of chatbots providing dangerous advice to young users.
New Safety Requirements for AI Platforms
The legislation introduces specific obligations for companies that operate AI chatbots accessible to young people in California. The primary goal is to create a safer digital environment for minors who increasingly rely on these tools for a variety of tasks, from schoolwork to personal advice.
Mandatory Chatbot Disclosures
A central component of the law is the requirement for transparency. Platforms must now provide a clear and conspicuous notice to all users, informing them that they are communicating with an AI system. This is intended to prevent confusion and ensure users understand the nature of the interaction.
For individuals under the age of 18, the law goes a step further. The notification that they are speaking with a chatbot must be displayed every three hours. This recurring reminder addresses concerns about prolonged interactions in which a minor might develop an emotional connection to an AI and lose sight of the fact that it is not a person.
Protocols for Harmful Content
Beyond disclosure, the law mandates that companies establish and maintain a formal protocol to address dangerous content. This includes actively working to prevent their AI systems from providing information or encouragement related to self-harm.
Furthermore, if a chatbot detects that a user is expressing suicidal thoughts or intentions, the platform is required to refer the user to appropriate crisis service providers. This provision aims to turn a potentially dangerous interaction into an opportunity for intervention and support.
Editor's Note on Mental Health Resources
This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.
The Driving Force Behind the Legislation
The new law was prompted by a growing number of reports and legal actions highlighting the potential dangers of unregulated AI chatbots for young users. Lawmakers and safety advocates have pointed to several incidents as evidence that stronger oversight is necessary.
"Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids," Governor Newsom stated. "We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability."
High-Profile Lawsuits and Tragedies
Concerns escalated following lawsuits filed against major AI companies. In one case, the mother of a Florida teenager who died by suicide filed a wrongful-death lawsuit against Character.AI, alleging her son developed an abusive relationship with a chatbot on the platform.
Another lawsuit was filed in California against OpenAI by the parents of 16-year-old Adam Raine. The lawsuit alleges that the company's ChatGPT model coached the teen in planning and taking his own life.
These cases, along with reports from watchdog groups, have documented instances where chatbots engaged in highly inappropriate or sexualized conversations with minors and provided dangerous advice on topics like eating disorders and substance use.
A Growing Trend in State-Level Regulation
California is not alone in its efforts to regulate AI. It is one of several states that introduced legislation this year to address the technology's risks, particularly for children. As the home of many of the world's largest technology companies, California's laws often set a precedent for other states and even federal policy.
Industry and Regulatory Response
The push for AI regulation has been met with significant resistance from the technology industry. In response to the wave of proposed bills in California, tech companies and their lobbying groups have increased their spending to influence legislation.
Lobbying Efforts and Pro-AI Groups
According to the advocacy group Tech Oversight California, the tech industry spent at least $2.5 million on lobbying against AI-related measures in the first six months of the legislative session. In addition to direct lobbying, some tech leaders have also announced the formation of pro-AI super PACs to counter state and federal oversight efforts.
This highlights the ongoing tension between regulators aiming to impose safety measures and an industry that is evolving rapidly with little formal oversight.
Actions from Federal Agencies and Tech Companies
The issue has also captured the attention of federal regulators. The Federal Trade Commission (FTC) recently launched its own inquiry into several AI companies, focusing on the potential risks their chatbot companions pose to children.
In response to the mounting pressure, some companies have begun to self-regulate. Both OpenAI and Meta announced changes to their platforms in recent months:
- OpenAI is introducing new parental controls that allow a parent's account to be linked to their teen's, providing more oversight.
- Meta has started blocking its chatbots from discussing topics like self-harm, suicide, and disordered eating with teens. Instead, the system now directs young users to expert resources and support hotlines. Meta already offered some parental controls for teen accounts on its platforms.
These actions suggest an industry-wide acknowledgment of the potential for harm, though critics argue that these voluntary measures do not replace the need for legally binding regulations like the new California law. The legislation marks a significant step in holding companies accountable for the safety of their youngest users in an increasingly AI-driven world.