
Is AI an existential threat? 'Extinction' warning may do more harm than good

A one-sentence statement signed by industry experts is impressive in its humility, but it is more likely to generate backlash than assent. One common-sense objection might go like this: If you’re so concerned about extinction, why don’t you just stop working on AI?

June 02, 2023 / 17:31 IST

Existential risk from AI will be packaged and commodified, like so many other ideas under capitalism, such as existential risk from climate change.

Sometimes publicity stunts backfire. A case in point may be the one-sentence warning issued this week by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The list of signatories is impressive, including many executives and software engineers at leading AI companies. Yet for all the apparent sincerity on display, this statement is more likely to generate backlash than assent.


The first problem is the word “extinction.” Whether or not you think the current trajectory of AI systems poses an extinction risk — and I do not — the more you use that term, the more likely it is that the matter will fall under the purview of the national security establishment. And its priority is to defeat foreign adversaries. The bureaucrats who staff the more mundane regulatory agencies will be shoved aside.

US national security experts are properly skeptical about the idea of an international agreement to limit AI systems, as they doubt anyone would be monitoring and sanctioning China, Russia or other states (even the UAE has a potentially powerful system on the way). So the more people say that AI systems can be super-powerful, the more national-security advisers will insist that US technology must always be superior. I happen to agree about the need for US dominance — but realise that this is an argument for accelerating AI research, not slowing it down.