
It’s risky to be this deferential to artificial intelligence

Police, schools and HR departments are too trusting of fallible algorithms. A special brand of human overseer is needed

June 09, 2022 / 14:30 IST

Back in 2018, Pete Fussey, a sociology professor at the University of Essex, was studying how police in London used facial-recognition systems to look for suspects on the street. Over the next two years, he accompanied Metropolitan Police officers in their vans as they surveilled different pockets of the city using mounted cameras and facial-recognition software.

Fussey made two important discoveries on those trips, which he laid out in a 2019 study. First, the facial-recognition system was woefully inaccurate. Of the 42 computer-generated matches that came through on the six deployments he joined, just eight, or 19 percent, turned out to be correct.


Second, and more disturbing, most of the time the police officers assumed the facial-recognition system was probably correct. "I remember people saying, 'If we're not sure, we should just assume it's a match,'" he says. Fussey called the phenomenon "deference to the algorithm".

This deference is a problem, and it’s not unique to police.