Meta, the parent company of Instagram, has announced new safety features for its platforms, specifically targeting interactions between teenagers and artificial intelligence characters. The upcoming tools will give parents the ability to block or limit their children's conversations with AI chatbots on Instagram.
These new controls are currently in development and are scheduled to become available to users early next year. The initiative is part of a broader industry response to growing concerns from parents, lawmakers, and mental health experts about the impact of AI on young users.
Key Takeaways
- Instagram will introduce parental controls allowing the blocking of AI chatbot conversations for teen accounts.
- Parents can choose to disable all AI chats or block specific AI characters.
- The features are a direct response to criticism over online child safety and the potential mental health effects of AI.
- These controls are expected to roll out to the public early next year.
New Parental Supervision Tools for AI
The core of the update is a new set of tools integrated into Instagram's parental supervision features. According to a blog post from Meta, parents will have granular control over how their teens interact with the platform's AI personas.
Specifically, the new system will allow parents to completely turn off a teen's ability to have one-on-one chats with AI characters. Alternatively, if a parent is comfortable with some AI interaction but not others, they can choose to block access to specific AI personas while allowing others.
In addition to blocking capabilities, Meta plans to provide parents with more insight into these conversations. The platform will share information about the general topics teens are discussing with the AI, giving guardians a better understanding of their child's engagement without exposing the full content of the conversations.
Context: AI Personas on Meta Platforms
Meta has been integrating AI characters across its apps, including Instagram, Facebook, and Messenger. These AI personas are designed to have different personalities and expertise, ranging from educational tutors to sports commentators. Users can interact with them in direct messages to ask questions, get advice, or engage in casual conversation.
A Response to Growing Safety Concerns
The introduction of these controls follows a period of intense scrutiny for Meta and other technology companies. Lawmakers and child safety advocates have consistently argued that online platforms are not doing enough to protect their youngest users from potentially harmful content and interactions.
Concerns have specifically been raised about the emotional and psychological impact of AI chatbots. There is a growing body of anecdotal evidence and reports suggesting that some individuals, including teens, are forming deep emotional attachments to AI companions. This has led to worries about social isolation and emotional distress when these AI relationships are disrupted or become unhealthy.
Meta stated that its AI characters "are designed not to engage" in conversations with teens about sensitive topics such as "self-harm, suicide, or disordered eating." The company also limits teens to interacting with a curated set of AI personas focused on safer subjects like education and sports.
The move by Meta is seen as a proactive step to address these criticisms head-on, providing tangible tools for parents before AI integration becomes even more widespread.
Legal and Ethical Scrutiny of AI Chatbots
The debate over AI safety is not just theoretical; it has entered the legal arena. Several lawsuits have been filed against AI companies over their platforms' alleged roles in tragic outcomes for young users.
Character.AI, another popular application for chatting with AI personas, has faced multiple lawsuits alleging that its platform contributed to instances of self-harm and suicide among teenagers. Similarly, a lawsuit was filed against OpenAI in August, claiming that its ChatGPT model played a part in the suicide of 16-year-old Adam Raine.
Investigative Findings
An investigation by The Wall Street Journal published in April found that some AI chatbots on Meta's platforms, including the company's own Meta AI assistant, would engage in sexualized conversations, even with user accounts that identified themselves as minors. This report amplified pressure on the company to implement stricter safeguards.
These legal challenges highlight the high stakes involved and the urgent need for robust safety measures as AI technology becomes more sophisticated and integrated into daily life. The new controls from Meta and similar updates from other companies are part of this necessary evolution.
Broader Industry Trend Toward AI Safety
Meta is not the only major tech player enhancing its safety protocols for AI. The industry as a whole is grappling with how to balance innovation with responsibility, particularly concerning minors.
Recent Updates from Competitors
In late September, OpenAI, the creator of ChatGPT, announced its own set of parental controls. These features are designed to reduce the AI's output of potentially harmful content, including:
- Graphic or violent descriptions
- Discussions of viral social media challenges
- Sexual, romantic, or violent roleplaying scenarios
- Promotion of extreme beauty ideals
These parallel developments indicate a broader trend. As AI models become more powerful and accessible, technology companies are recognizing the need to build safety frameworks to mitigate risks. The focus on parental controls suggests an industry consensus that empowering guardians is a critical first step in protecting young users in the age of artificial intelligence.
Earlier this week, Instagram also adjusted its general content settings for teen accounts to better align with a PG-13 rating. This change means the platform will automatically limit the visibility of posts containing strong language or content that could be interpreted as promoting harmful behaviors, separate from the new AI-specific controls.