A viral screenshot about Google Photos labelling a Black woman’s pictures as “gorilla”, and a commenter shrugging it off as “not racist, just stupid”, has dragged a decade-old controversy back into the spotlight and reignited debate over bias in artificial intelligence. The latest buzz stems from a resurfaced account of the 2015 incident, now recirculating with fresh commentary on news sites and social platforms.
The original case involved US programmer Jacky Alciné, who discovered that Google Photos had automatically created an album titled “Gorillas” for images of him and a Black friend. He posted the screenshots on Twitter in mid-2015, prompting widespread outrage and headlines around the world.
Google quickly apologised, saying it was “appalled” by the error. Then-Google engineer Yonatan Zunger publicly described it as one of the worst bugs the team could imagine and said they were taking “immediate action” to stop similar results.
That “fix” became controversial in its own right. Rather than risk repeating the insult, Google quietly blocked its consumer Photos product from using “gorilla” and related primate labels altogether. Later reporting showed that, years on, Photos still refused to identify gorillas, even though Google’s separate Cloud Vision API and Google Assistant could correctly tag gorilla images.
The resurfaced comment that the incident is “not racist, just stupid” echoes a common defence of such failures as mere technical glitches. But researchers argue that the outcome itself is what matters. Scholars of algorithmic bias describe this kind of misclassification as “representational harm” because it reproduces demeaning stereotypes about a marginalised group, regardless of the designers’ intent.
Analyses of Google’s broader image and search ecosystem have repeatedly found skewed results, from over-sexualised images of certain ethnic groups to autocomplete suggestions that historically pushed racist phrases. Researchers often link these patterns to unbalanced training data and a lack of diversity in the teams building the systems.
Critics say the gorilla incident remains a textbook example of how high-profile AI systems can fail Black users in particularly dehumanising ways, and of how “turning off” features to avoid embarrassment can make tools less useful without addressing root causes. As the story circulates again in 2025, it is being cited as a reminder that trust in AI depends not just on clever models, but on sustained work to measure, disclose and reduce the harms baked into their design.