
AI needs the equivalent of Asimov's 3 rules of robotics. Here's why

As self-learning systems grow more complex, even their creators can't fully grasp why the platforms are behaving as they do. This is what people like Elon Musk and Steve Wozniak are worried about.

April 09, 2023 / 11:02 IST
The genie is out of the bottle. There is already a burgeoning industry in developing apps based on ChatGPT and advising businesses on how best to use the tool. (Image: Miriam Espacio via Pexels)

The artificial intelligence (AI) chatbot ChatGPT, launched last November, and the advanced tool GPT-4, which landed in March, have been global sensations. ChatGPT became the fastest-growing app in history, with 100 million downloads within two months of launch. Yet the technology has generated fear and delight in equal measure.

It could immensely enhance productivity and corporate profits and accelerate human progress, but it could also wipe out millions of jobs across dozens of professions and destabilise the very notion of what is true and what is not.


Within a fortnight of the launch of GPT-4, an array of business leaders, AI experts and academics, including Tesla CEO Elon Musk, legendary Apple co-founder Steve Wozniak and historian Yuval Noah Harari, signed an open letter calling for a six-month freeze on work on AI systems that can compete with human-level intelligence.

“Should we let machines flood our information channels with propaganda and untruth?” the letter asked. “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”