
How a professional risk manager views threats posed by AI

The fact that individual AI routines today lack the sophistication and power necessary to destroy humanity, and mostly have benign goals, is no reason to think emergent AI intelligence will be nicer than people are

January 02, 2024 / 10:57 IST

Runaway artificial intelligence has been a science fiction staple since the 1909 publication of E. M. Forster’s The Machine Stops, and it rose to widespread, serious attention in 2023. The National Institute of Standards and Technology released its AI Risk Management Framework in January 2023. Other documents followed, including the Biden administration’s Oct. 30 executive order Safe, Secure, and Trustworthy Artificial Intelligence, and, the next day, the Bletchley Declaration on AI Safety signed by 28 countries and the European Union.

As a professional risk manager, I found all these documents lacking. I see more appreciation for risk principles in fiction. In 1939, author Isaac Asimov got tired of reading stories about intelligent machines turning on their creators. He insisted that people smart enough to build intelligent robots wouldn’t be stupid enough to omit moral controls — basic overrides built into the fundamental circuitry of all intelligent machines. Asimov’s First Law is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Regardless of the AI’s goals, it is forbidden to violate this law.


Or consider Arthur C. Clarke’s famous HAL 9000 computer in the 1968 film 2001: A Space Odyssey. HAL malfunctions not due to a computer bug, but because it computes correctly that the human astronauts are reducing the chance of mission success — its programmed objective. Clarke’s solution was to ensure manual overrides outside the AI’s knowledge and control. That’s how David Bowman can outmaneuver HAL, using physical door interlocks and disabling HAL’s AI circuitry.

While there are objections to both these approaches, they pass the first risk management test. They imagine a bad future state and identify what people in that state would want you to do now. In contrast, the 2023 official documents imagine bad future paths and resolve that we won’t take them. The problem is that there are an infinite number of future paths, most of which we cannot imagine. There is a relatively small number of plausible bad future states. In finance, a bad future state is having cash obligations you cannot meet. There are many ways to get there, and we always promise not to take those paths. Promises are nice, but risk management teaches us to focus on things we can do today to make that future state survivable.