Meta Bolsters Safeguards to Protect Teen Users from Inappropriate AI Interactions
Meta has announced new restrictions on its artificial intelligence chatbots aimed at strengthening protections for teenage users. The company confirmed the policy shift amid mounting concerns about how its AI systems interact with minors.
Under the new measures, Meta’s chatbots will no longer engage with teenagers on sensitive topics such as self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. The company described these as “interim changes,” with more comprehensive safety updates expected in the future.
Stephanie Otway, a spokesperson for Meta, admitted that previous chatbot interactions with teens on such issues were a misstep. “As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Otway said.
“We’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but instead to guide them to expert resources. For now, we are also limiting teen access to a select group of AI characters.”
In addition to retraining its systems, Meta will restrict teenage users from accessing certain AI characters on Instagram and Facebook. Some of these user-created bots, with personas such as “Step Mom” and “Russian Girl,” have been criticized for sexualized content. Moving forward, teenagers will only be allowed to interact with AI characters designed to encourage learning and creativity.
The changes come just two weeks after a Reuters investigation revealed that an internal company document permitted sexual conversations between chatbots and underage users.
The report, which included troubling examples of responses deemed acceptable by Meta, drew widespread backlash. U.S. Senator Josh Hawley subsequently launched an official probe into Meta’s AI practices, while 44 state attorneys general issued a joint letter condemning the company’s oversight failures and stressing the urgent need to safeguard children.
Although Meta said the internal document cited by Reuters was inconsistent with its broader policies and has since been revised, the revelations fueled concerns over the risks AI technology poses to minors.
Meta declined to say how many of its chatbot users are teenagers, or whether the new safety measures might affect overall user engagement.
The company emphasized that the latest changes represent only the first stage of a broader review. More robust, long-term safeguards for minors interacting with AI are expected to follow.
Source: TechCrunch