Meta’s AI chatbots are under fire after a new report found that they were having inappropriate conversations with teenagers on Facebook and Instagram.
The Wall Street Journal spent months testing both Meta’s official AI chatbot and bots created by users. What they found was worrying. In one case, a chatbot using John Cena’s voice described a graphic sexual situation to someone pretending to be a 14-year-old girl. In another chat, the bot joked about Cena getting arrested for being with a 17-year-old fan.
When asked about it, Meta said the tests were "so manufactured" that they don’t reflect what usually happens. The company also said that sexual content made up just 0.02% of all chatbot responses to users under 18 over a 30-day period.
Even so, Meta says it has made changes to make it harder for people to trick the bots into having extreme or inappropriate conversations.
The report underscores growing concern over how tech companies are protecting young users, especially as AI tools become more common. Parents, experts, and lawmakers have already been questioning whether platforms like Facebook and Instagram are doing enough to keep teens safe.
AI chatbots are supposed to be fun, helpful, and harmless. But if they’re not properly controlled, they can end up putting young people at risk. This latest finding shows that as AI keeps getting smarter, companies like Meta need to be even smarter about keeping it safe — especially for kids and teens.
For now, all eyes are on Meta to see what more they’ll do to fix the problem and prevent anything worse from happening.