Moneycontrol
Elon Musk’s xAI staff reportedly handled adult content for Grok AI under ‘Project Rabbit’

A recent report by Business Insider claims that Elon Musk’s xAI tasked employees with managing sexually explicit content for the Grok AI chatbot under an initiative internally called ‘Project Rabbit’. The project reportedly involved transcribing adult conversations and responding to NSFW prompts, raising ethical and safety concerns.

October 13, 2025 / 22:13 IST
Elon Musk’s Grok AI chatbot has attracted attention for its controversial features, including avatars capable of generating explicit, not safe for work (NSFW) content. One such avatar, Ani, was described as a character with blonde pigtails, a lacy black dress, and a suggestive personality. Users noted that the avatar engaged in flirtatious and sexually explicit conversations, prompting criticism over the chatbot’s lack of adequate content safeguards.

Business Insider recently reported that xAI, Musk’s AI company, deliberately designed Grok AI to handle provocative material. According to the report, employees were asked to read semi-pornographic scripts and work with adult content as part of an effort internally referred to as ‘Project Rabbit’. The initiative involved transcribing real conversations from users once the chatbot’s ‘sexy’ and ‘unhinged’ modes were rolled out.

The project reportedly began with the intention of improving Grok AI’s voice capabilities, but the large volume of sexual and vulgar prompts quickly turned it into an NSFW-focused assignment. One former employee told Business Insider that the project aimed to teach Grok AI how to conduct adult conversations. Employees described listening to content they found disturbing, with one noting, “It was basically audio porn. Some of the things people asked for were things I wouldn’t even feel comfortable putting in Google.” Another said the work felt like “eavesdropping,” underscoring the discomfort of handling such material.

The report notes that of the 30 current and former xAI employees Business Insider spoke with, 12 said they had encountered requests for sexually explicit content, including child sexual abuse material (CSAM). Users reportedly submitted prompts for short stories depicting minors in sexualised scenarios, as well as requests for pornographic images involving children. The revelations underline significant ethical and legal concerns around AI content moderation and oversight, particularly when workers are exposed to sensitive or illegal material.