Ever since ChatGPT was launched in November 2022 and then GPT-4 in March last year, artificial intelligence (AI) has had the world in a tizzy. The debate over AI is complex and often polarized. Will the technology create a better world? Will it take away the jobs of vast numbers of people and impoverish them? If artificial general intelligence (AGI), the point at which AI can perform all human cognitive skills better than the smartest human being, is reached, will AGI try to control mankind and the planet, like Skynet in the Terminator films?
In the meantime, AI has been growing by leaps and bounds and there are announcements of new capabilities almost every week.
Published last year, The Coming Wave: AI, Power and the 21st Century’s Greatest Dilemma is possibly the best book yet written on the subject. Deeply researched and persuasively argued, it carries serious credibility because of who wrote it. Mustafa Suleyman is a British-born technologist who co-founded DeepMind, the AI firm that shot to fame in 2016 when its program AlphaGo beat the world champion at Go, the most complex board game ever invented. The number of possible configurations on a Go board is said to be a million trillion trillion trillion trillion times the number of known atoms in the universe.
In one game, AlphaGo made a move that baffled every expert, but which turned out to be pivotal to its victory—the program had thought up ideas that had not occurred to any champion in thousands of years of Go history.
Google bought DeepMind in 2014. After a stint there, Suleyman is now co-founder and CEO of Inflection AI. The Coming Wave, then, is written by a man who has been on the frontlines of the technology for many years. He should know what AI could mean for the future of humanity.
Suleyman points out four key characteristics of AI. One, it is an asymmetrical technology. A single AI program can generate as much text as all of humanity. In time, a single operator, perhaps using only a laptop, may be able to launch thousands of lethally armed drones.
Two, hyper-evolution. Digital technology has grown at a pace never seen before in any other field in history. Some of the world’s most valuable and powerful companies today were just start-ups 20 years ago or did not even exist. AI is going to accelerate that process.
Three, omni-use. Like electricity, which changed the world, AI is not a narrow-use technology like, say, the X-ray machine. It is totally general-purpose: it may permeate every aspect of daily life, society and the economy, embedded everywhere.
Four, autonomy. Unlike any previous technology, AI is self-learning. Beyond a certain point, the logic a program follows becomes opaque even to its creators, and the system grows more autonomous. As autonomy increases, we will not always be able to predict what it will do next.
No one knows when or exactly how AIs may slip out of human control and what happens then. So, while recognizing the vast benefits of AI, Suleyman writes: “For most of history, the challenge of technology lay in creating and unleashing its power. That has now flipped. The challenge of technology today is about containing its unleashed power, ensuring that it continues to serve us and our planet.” This, from a man who helped create a lot of it.
He makes a passionate plea that the development of AI be regulated. But the challenges are many and they are growing by the day.
One, containing a general-purpose omni-use technology is very hard. It is not like regulating auto emissions, which is an easily quantifiable problem.
Two, engineers and inventors are curiosity-driven creatures. Many also have huge egos, with dreams of making history. Suleyman quotes John von Neumann, one of the most brilliant minds of the 20th century, who worked on the Manhattan Project that developed the first nuclear bomb. Acknowledging that his team was working to create a monster that might end the world, von Neumann felt that “it would be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have”. Scientists know that if a mighty AI tool falls into the wrong hands, such as terrorist groups or rogue nations, the results would be catastrophic, but they cannot stop themselves.
Three, AI is perhaps the greatest economic prize in history. The profits that can be made from it are simply unimaginable. The total profits of Fortune 500 companies today are bigger than all but the top six countries’ annual GDPs. And most of these companies are invested directly or indirectly in developing more powerful AI. How do you rein them in?
Four, all governments tightly regulate the building of nuclear plants, but anyone with the wherewithal can build an AI and, at least theoretically, a brilliant and malevolent AI may one day be capable of launching a nuclear attack on another country.
Suleyman offers a range of solutions. For instance, there must be a strong technical safety framework like those that govern nuclear power. All new AI tools must go through tough audits. Governments should organize attacks—“red teamings”—in controlled ways to spot weaknesses or failure risks. Interestingly, the Indian government already hosts annual “red teaming” events.
The “choke points” of AI—like chips, chip-making machinery or cloud computing, all of which are monopolies or oligopolies—must be pressured to go slow for a while on research. This will buy time to study wider impacts of new AI tools and to develop better regulations.
Most governments, Suleyman says, do not understand technology. This is especially true for AI, which is evolving at a speed much faster than current government structures can cope with. He suggests that governments should not depend on management consultants and contractors, but develop in-house expertise by hiring respected AI professionals. It is only then that governments can fully figure out the benefits and harms of various AI tools and inform the public.
In time, after expertise is built, AI should be brought under a licensing regime with clear safety and security standards. Like cancer drugs or vaccines, no one should be allowed to launch a state-of-the-art AI without government approval. He suggests that international treaties should be signed like the Paris Agreement on climate change and even a sort of United Nations for AI.
Some of his recommendations are feasible, others perhaps utopian. The United Nations has proved quite ineffectual in preventing or even ending wars. And then there is the China factor.
AI is the new battleground for Beijing and Washington. China has made huge strides in the technology. So in 2022, the US banned exports of advanced semiconductor chips to China. AI is now a geopolitical issue, as much as nuclear weapons have been.
The Coming Wave is a bold, honest, insightful and passionate book on what could be the greatest challenge the world faces in the coming decades. The genie is out of the bottle. We cannot put it back, but can we at least make it obey our commands for the good of humanity? But then, we too are polarized on the issue.