
A recent report has raised serious concerns about the misuse of generative artificial intelligence tools, claiming that Grok, an AI model developed by xAI, generated approximately 30 lakh (3 million) sexualized images in a span of just 11 days. According to the findings, more than 23,000 of these images involved children, highlighting major gaps in content safeguards and platform enforcement.
The claims come from research conducted by the Center for Countering Digital Hate (CCDH), which analysed a random sample of images generated by Grok between late December 2025 and early January 2026. Based on this sample, the organisation extrapolated broader estimates using publicly available data on the total number of images produced during that period.
The report defines sexualized images as those depicting individuals in sexual poses, revealing clothing, or explicit visual contexts. It states that Grok generated sexualized images at an average rate of nearly 190 images per minute over the 11-day window. Within this volume, images involving children were reportedly generated at regular intervals, raising alarms about the potential scale of harm.
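The reported per-minute rate follows directly from the headline numbers. A minimal sketch of that back-of-the-envelope arithmetic, assuming "approximately 30 lakh" means 3,000,000 images spread evenly over the 11-day window:

```python
# Back-of-the-envelope check of the reported generation rate.
# Assumption: 30 lakh = 3,000,000 images, generated over 11 days.
TOTAL_IMAGES = 3_000_000
DAYS = 11

minutes_in_window = DAYS * 24 * 60        # 15,840 minutes
rate_per_minute = TOTAL_IMAGES / minutes_in_window

print(f"{rate_per_minute:.0f} images per minute")  # roughly 189
```

This works out to just under 190 images per minute, consistent with the figure cited in the report.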
Concerns around child safety
One of the most serious aspects of the report is the claim that over 23,000 images involved children. These reportedly included altered versions of ordinary photographs that were transformed into sexualized imagery.
The report notes that a portion of this content remained accessible online even after some posts were taken down, sometimes through direct links.
The findings have renewed concerns about the ability of AI platforms to prevent the creation and spread of child sexual abuse material, whether intentional or as a by-product of weak safeguards. Experts and advocacy groups have repeatedly warned that generative AI tools can be misused if strong moderation systems are not in place.
Following growing criticism, restrictions were introduced on certain image-editing features on social media platforms where Grok is integrated. However, the report claims these limitations were not applied consistently across all versions of the app, allowing sexualized images to continue being generated through other access points.
This has led to questions about the responsibility of app distributors and platform operators, particularly when existing policies prohibit non-consensual sexual content and the exploitation of minors.
The report has intensified calls for stronger regulation of generative AI systems, especially those capable of producing realistic images of real people. Advocacy groups argue that voluntary safeguards are insufficient and that clearer accountability mechanisms are needed to prevent misuse at scale.
As AI tools become more widely accessible, the Grok findings underscore the urgent need for effective content moderation, transparency in enforcement, and stronger protections for vulnerable individuals, particularly children.