It’s hard not to cringe at UK Prime Minister Rishi Sunak’s pronouncement that Britain should “lead the way” on AI regulation. After all, AI will impact the globe in different ways, and Brexit hasn’t exactly inspired confidence in British policymaking.
Yet Brexit is partly why the UK could fill this watchdog role successfully. AI is moving fast, threatening to create more bias and inequality in society, and governments need ideas for guidelines yesterday. Britain not only has the expertise and infrastructure to draw up rules around AI, it can also move quickly, thanks in part to having thrown off the shackles of EU frameworks and obligations. By contrast, it’ll be another two years before the European Union’s AI Act comes into force, even though its Parliament voted almost unanimously in favor of it Wednesday. And while US senators are keen to set up an independent AI regulator, aggressive Silicon Valley lobbying makes that unlikely. Congress, after all, has never passed a federal law to regulate Big Tech.
Britain offers a temperate middle ground between the onerous approach of Europe and the more laissez-faire US, where most AI innovation is coming from. The UK has moved fast on tech governance before — its data protection rules for children were among the first in the world and copied by California; its Online Safety law will roll out before a similar set of rules from the EU; and its antitrust watchdog pursued a flurry of high-profile cases against Alphabet Inc’s Google, Apple Inc and Meta Platforms Inc after Brexit.
The Brits have meanwhile been upgrading their government machinery, allowing them to tackle AI more effectively. For instance, the UK recently fused its top regulators into a single body for coordinating decisions, which will be vital for a technology that impacts almost every industry. In addition to a formidable commercial law sector and a centuries-long reputation for the rule of law, the UK also hosts one of the world’s biggest AI companies, Google DeepMind, and a culture where technologists liaise with the government as advisers. It doesn’t hurt that everyone speaks English too.
“There’s only one positive to Brexit and it’s that the UK is now in a position to be a global leader in AI governance,” says Saul Klein, co-founder of London venture capital firm LocalGlobe.
In February 2023, the UK combined its government departments covering business and digital sectors into one, called the Department for Science, Innovation and Technology. In the eyes of the government, that elevated tech to a similar level of importance as the economy and national security, says Klein, who is a non-executive director of the new department.
The UK also brought together its top regulators that handle issues like antitrust, data privacy and online harms under one umbrella, called the Digital Regulatory Cooperation Forum. That will make it much easier to set and enforce rules on AI with one voice. Why spend years building a new regulator when existing watchdogs have already organised themselves to address the technology’s disruptions together? That is the argument of Rachel Coldicutt, who specialises in tech regulation and is an executive director of Promising Trouble, a social enterprise that builds and supports alternatives to Big Tech.
How might Britain try to regulate AI? Its upcoming rules on online harm borrow principles from health and safety law, giving companies a legal “duty of care” to maintain a safe environment. Britain will probably go down a similar route with AI, according to regulatory experts, adopting a risk-based approach to regulation.
Demis Hassabis, the co-founder of DeepMind and a government adviser, has similar views: He recently advocated for the “precautionary principle” when regulating AI. That essentially means taking precautions when you are uncertain of the risks — like telling a child not to play soccer inside the house to avoid breaking anything. The alternative would be to let them play, and then only when they break a lamp, create new rules about soccer in the house.
There is one glaring problem with the UK’s efforts to oversee AI, though, and it speaks to the country’s chummy and insular tendencies. Sunak recently announced a global AI safety summit taking place this fall to kick off the country’s regulatory ambitions, and neglected to include people who work in civil society.
The summit will bring together nations, researchers and “leading tech companies” such as Google DeepMind and Microsoft Corp., according to the 1,400-word press release announcing the event. Yet nowhere alongside the big names did it mention people researching flaws in present-day AI systems. A UK government spokesman said that the summit was aimed at creating “international guardrails for the safe and responsible development of AI.”
Current AI problems are serious and widespread. They disproportionately affect women, people of color and other minority groups, from algorithms that check asylum-seeker applications to those that filter job ads. British organisations like the Ada Lovelace Institute, Demos, Connected by Data and Promising Trouble, which investigate such flaws, should be invited to discuss AI safety alongside the big corporate names too.
“We know from the last 20 years that asking tech companies to regulate themselves doesn’t work, and the idea they are experts in safety is hilarious,” says Coldicutt. “What we really want in an AI safety summit is people talking about the world we want.”
Britain already has the infrastructure and reputation in place to move decisively on AI governance. But Sunak must resist being starstruck by big names like OpenAI, and make sure his policies take into account the problems happening on the ground. He should listen to the experts on those issues too.
Parmy Olson is a Bloomberg Opinion columnist covering technology. Views are personal and do not represent the stance of this publication.
Credit: Bloomberg
