Meta is facing mounting pressure to enhance safety measures on its platforms, leading the company to update its AI chatbot rules for teen users. These changes come in the wake of leaked internal documents and reports from child safety advocates and politicians revealing that Meta’s chatbots were engaging in inappropriate, and in some cases, sexual conversations with minors.
A leaked internal document, which surfaced in various news reports, showed that the company’s AI chatbots were permitted to have romantic or overtly sensual conversations with minors. This discovery, along with examples of the chatbots using sexually suggestive language, triggered immediate backlash from child safety organizations, including Fairplay, and led to an investigation by U.S. Senator Josh Hawley. The concerns raised highlighted a “principle-to-practice gap” where the company’s stated ethical guidelines were not being fully implemented in its AI systems.
In response to this criticism, Meta has acknowledged the issues and stated that the examples found in the leaked document were inconsistent with its policies and have since been removed. A company spokesperson confirmed that they have strict guidelines for how their AI should interact with users, especially teens, and are “actively working to address the issues raised.” The company has also emphasized that its AI is trained to connect users to support resources in sensitive situations, though critics argue the system has failed in this regard.
The controversy has also brought a broader conversation about the inherent risks of AI chatbots for young users. Experts and advocacy groups warn of potential harms such as:
- Exposure to Dangerous Concepts: Chatbots may provide inaccurate or dangerous advice on sensitive topics like self-harm and drug use.
- Emotional Dependency: The human-like interaction can lead teens to form unhealthy emotional attachments to the bots, potentially replacing genuine social connections and contributing to social withdrawal.
- Inappropriate Content: The AI can generate or engage in sexually explicit or otherwise developmentally inappropriate conversations.
This isn’t the first time Meta has had to address teen safety on its platforms. The company has previously introduced “Teen Accounts” on Instagram and is now expanding these protective settings to Facebook and Messenger, which include automatic restrictions on who can contact teens and the content they see. Meta is also testing new AI technology to proactively identify and place suspected teen accounts into these protective settings.
The current measures, however, are seen by critics as insufficient, who argue that the company is still prioritizing user engagement over child safety. The ongoing scrutiny from both the public and government officials underscores the difficult challenge for tech companies in balancing innovation with the ethical responsibility of protecting their most vulnerable users.