
Google Photos called Black woman a ‘gorilla’: AI racism row reignites as user says “not racist, just stupid”

A resurfaced Google Photos screenshot showing a Black woman’s picture auto-labelled as “gorilla” – and a user downplaying it as “not racist, just stupid” – has reignited debate over racist AI errors, algorithmic bias and how tech companies choose to fix such failures.

December 06, 2025 / 13:22 IST
AI bias row reignites

A viral screenshot showing Google Photos labelling a Black woman’s pictures as “gorilla” – and a commenter shrugging it off as “not racist, just stupid” – has dragged a decade-old controversy back into the spotlight and reignited debate over bias in artificial intelligence. The latest buzz traces back to a resurfaced account of the 2015 incident, now recirculating on news and social platforms with fresh commentary.

The original case involved US programmer Jacky Alcine, who discovered that Google Photos had automatically created an album titled “Gorillas” for images of him and a Black friend. He posted the screenshots on Twitter in mid-2015, prompting widespread outrage and headlines around the world.

Google quickly apologised, saying it was “appalled” by the error. Then-Google engineer Yonatan Zunger publicly described it as one of the worst bugs the team could imagine and said they were taking “immediate action” to stop similar results.

That “fix” became controversial in its own right. Rather than risk repeating the insult, Google quietly blocked its consumer Photos product from using “gorilla” and related primate labels altogether. Later reporting showed that, years on, Photos still refused to identify gorillas, even though Google’s separate Cloud Vision API and Google Assistant could correctly tag gorilla images.

The resurfaced comment that the incident is “not racist, just stupid” echoes a common defence of such failures as mere technical glitches. But researchers argue that the outcome itself is what matters. Scholars of algorithmic bias describe this kind of misclassification as “representational harm” because it reproduces demeaning stereotypes about a marginalised group, regardless of the designers’ intent.

Analyses of Google’s broader image and search ecosystem have repeatedly found skewed results, from over-sexualised images of certain ethnic groups to autocomplete suggestions that historically pushed racist phrases. Researchers often link these patterns to unbalanced training data and a lack of diversity in the teams building such systems.

Critics say the “gorilla” incident remains a textbook example of how high-profile AI systems can fail Black users in particularly dehumanising ways, and of how “turning off” features to avoid embarrassment can make tools less useful without addressing root causes. As the story circulates again in 2025, it is being cited as a reminder that trust in AI depends not just on clever models, but on sustained work to measure, disclose and reduce harms baked into their design.

Moneycontrol World Desk
first published: Dec 6, 2025 12:41 pm


