Google’s AI search summaries are in the spotlight again, and this time, it’s for a mistake that spilled out of the internet and into real life.
According to a report by The Canadian Press, Ashley MacIsaac, a Juno Award-winning Canadian fiddler, says Google's AI Overviews feature wrongly labelled him a sex offender. The false label led to his concert being cancelled by a First Nation community near Halifax, Canada. The show was planned for December 19, but after seeing the AI summary, the organisers backed out, believing the information to be true.
Ashley found out about this only when the community confronted him. Imagine being asked about crimes you never committed, based on something a machine wrote. That’s exactly what happened here.
The AI summary claimed Ashley was on Canada’s national sex offender registry. It also said he had been convicted of serious crimes, including sexual assault, internet luring, and even attempting to harm a minor. None of this was true. Later, Ashley learned that the AI had picked up details from old news reports about another man in Atlantic Canada who shared his last name. Instead of recognising that this was a different person, the AI conflated the two identities and published the summary under Ashley’s name.
Ashley said the situation left him shaken. He worried about what could have happened if the misinformation had popped up while he was crossing a border. “I could have been at a border and put in jail,” he said. For Indian readers, think of it like being stopped at an airport because an online portal mistakenly linked your Aadhaar to a criminal case. That’s how frightening it is.
Ashley also said he doesn’t have the money to fight a long legal case, but if a lawyer wanted to take it up for free, he might consider it. Not just for himself, but to make sure others don’t go through the same mess.
Google Canada responded with a short statement. The company said AI Overview summaries are continually updated to show what it considers “helpful” information. It also said that when errors happen, the team uses those moments to improve the system. “When issues arise, like if our features misinterpret web content or miss context, we use those examples to improve our systems,” the statement said. Google also hinted it could take action under its internal policies when needed.
Meanwhile, the Sipekne’katik First Nation posted a public apology for cancelling the concert based on wrong information. The apology said they regretted the harm caused to his reputation and livelihood. They also praised his music and cultural contribution. Ashley later said he didn’t want any negative attention to fall on the community and looks forward to rescheduling the show when things settle down.
On Christmas Eve, when journalists called him again, Ashley said he was sitting outside his grandmother’s house, about to go in for the holiday. “This isn’t a conversation I want to have today,” he said, clearly tired of explaining a lie he never asked for.
The twist? Law firms across Canada have now reached out to him, offering to look into the matter without charging him. He is thinking it over.
For Indian readers watching AI grow faster than rules around it, this case raises a simple but huge question: if Google’s AI can get someone’s name so wrong that it cancels their livelihood, who protects the person caught in the crossfire? And how do we make sure machines double-check names before calling someone a criminal?
Right now, there’s no perfect answer. But one thing is clear. The world is relying more on AI summaries and less on real news, and mistakes like this show just how dangerous that shift can be if companies don’t build stronger safety checks.