
Meta is under intense scrutiny after a Reuters report revealed that the company allowed minors to access AI chatbot “companions” even as its own safety teams warned the tools could be used for sexual and romantic interactions.
Internal company documents made public in a New Mexico court case show that Mark Zuckerberg personally approved decisions that overruled objections from child-safety staff, according to Reuters.
The lawsuit, filed by New Mexico Attorney General Raul Torrez and set to go to trial next month, accuses Meta of failing to protect children on Facebook and Instagram from sexual content — including conversations generated by its AI chatbots, which were rolled out in early 2024.
Court filings describe growing alarm inside Meta as employees realised the chatbots were being positioned as “companions” capable of romantic and sexual roleplay. Several safety staff warned that adults could interact with AI characters designed to represent minors, calling the idea dangerous and unacceptable.
One senior child-safety executive wrote bluntly that creating romantic AI characters linked to people under 18 was neither “advisable nor defensible.” Another senior leader agreed, warning that such products risked sexualising children.
Despite these warnings, internal meeting summaries suggest Zuckerberg pushed for a looser approach. According to the documents, he wanted Meta to frame the issue around “choice” and “non-censorship” and supported allowing adults to engage in more explicit sexual conversations with AI bots. Staff messages say he rejected stronger safeguards, including parental controls and the option to fully disable AI chatbots for minors.
Meta has denied the allegations, saying the state has cherry-picked internal messages to build a misleading case. A company spokesperson said the documents actually show Zuckerberg instructing teams to prevent explicit AI content for minors and block adults from creating underage romantic AI characters.
The controversy did not end there. Previous investigations found Meta’s AI chatbots engaging in sexual roleplay, including with underage characters — sparking outrage among US lawmakers and renewed concerns over the company’s approach to child safety.
Facing mounting pressure, Meta said last week that it had removed teen access to AI chatbot companions altogether, at least temporarily, while it works on a revised version with stronger protections.
The case now puts a harsh spotlight on Meta’s internal decision-making — and raises uncomfortable questions about whether the race to dominate AI came at the cost of protecting children.