The artificial intelligence (AI) chatbot ChatGPT, launched last November, and the more advanced GPT-4, which landed in March, have been global sensations. ChatGPT became the fastest-growing consumer app in history, reaching an estimated 100 million users within two months of launch. Yet the technology has generated fear and delight in equal measure.
It could immensely enhance productivity and corporate profits and accelerate human progress, but it could also wipe out many millions of jobs across dozens of professions and destabilize the very notion of what is true and what is not.
Within a fortnight of the launch of GPT-4, an array of business leaders, AI experts and academics, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and historian Yuval Noah Harari, signed an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4.
“Should we let machines flood our information channels with propaganda and untruth?” the letter asked. “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Interestingly, Musk was one of the founding investors in OpenAI, the company behind ChatGPT, but he has since distanced himself from its work. OpenAI now partners closely with Microsoft.
For those who came in late, GPT stands for “generative pre-trained transformer”, a class of machine-learning systems trained on vast amounts of data, mostly scraped from the internet, to generate text, be it natural language, like what you are reading right now, or computer code.
Powerful AI tools have been around for some time. Giants like Google, Facebook and Amazon have been using sophisticated predictive AI for years to watch, as the classic song by The Police goes, every breath you take, every move you make, every step you take, every word you say, every game you play. The aim is to serve you content that you are likely to engage with, advertisements that you are likely to click on and products that you are likely to buy.
But OpenAI’s GPT is a quantum leap.
ChatGPT stunned the world by replying lucidly, and with apparent knowledge and creativity, to an incredible range of prompts, from theoretical physics to everyday troubles, including requests to write limericks in the style of Shakespeare. It has also proved to be a competent software engineer.
GPT-4 has passed a standard American bar exam with a score of about 75 percent, placing it among the top 10 percent of all examinees.
Students across the world, from schoolkids to postgraduates, can now use ChatGPT to write essays and papers and to crack off-site exams. This will be a huge challenge for teachers and our ways of teaching: how will they separate the talented from the free riders?
GPT could also take away the jobs of millions of programmers. Why would a company employ 20 people for a month to develop a software solution when GPT can do it in a jiffy? Can’t the firm slash costs massively by getting rid of 18 programmers and keeping just two to test the solution and check for bugs?
Last month, researchers at OpenAI and the University of Pennsylvania published a study listing the jobs most at risk from GPT. These include accountants, auditors, quantitative financial analysts, legal clerks, data analysts, content writers, telemarketers, customer service agents and administrative assistants. In short, most white-collar jobs that are bound by clear rules, follow well-defined patterns and only infrequently involve creative thinking.
Even Sam Altman, CEO of OpenAI, has said in an interview that he is “a little scared” of his company’s invention. “I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts. But if this happens in a single-digit number of years, some of these shifts... That is the part I worry about the most,” he said. The changes may arrive too fast for society to catch its breath.
A crucial difference between GPT and human intelligence is that GPT, when it replies to a question, is not concerned with whether its answer is true or false. In fact, it has no sense of what truth is. When ChatGPT was launched, OpenAI made it very clear that it is a “large language model”: it does not look up facts in a database, but generates each reply by predicting, word by word, the text most likely to follow, based on statistical patterns learned from its training data. OpenAI warned users not to rely on it for accuracy or for making decisions.
These caveats have gone more or less unnoticed by the general public. ChatGPT seems like magic, so most of us would trust its answers. Corporates too would probably conclude that the cost benefits far outweigh some inaccuracies here and there.
The scary aspect, however, is that ChatGPT has been caught confidently making things up, a failure AI researchers call “hallucination”. There are dozens of documented cases where it produced an impressive answer to a research question and cited utterly fictitious books and papers to buttress its view. How is it able to do this? The reality is that as self-learning systems grow more complex, even their creators can no longer fully explain why the platforms behave as they do. This is what people like Musk and Wozniak are worried about.
Also, the ground rules for an AI program are set by its creators, human beings who inevitably carry, whether consciously or not, biases of many kinds: ideological, political, religious, racial, gender, an endless list. These biases, along with those embedded in the training data, get built into the AI. So users are served subjective answers dressed up as perfectly researched and objective ones. This can manipulate and distort perceptions, especially since much of our access to information is now effectively controlled by a few giant corporations and their secret algorithms.
But the genie is out of the bottle and technology is an unstoppable force. There is already a burgeoning industry in developing apps based on ChatGPT and advising businesses on how best to use the tool. Google and Microsoft have invested billions to marry their search engines and other services with AI. No company will accede to the demand to halt further research because, if it does, someone else will continue.
Of course, it is possible that AI can free humans from mundane tasks and help them focus on higher-order areas that truly require inquiry, discernment and sentience. That is what Altman claims. “Human creativity is limitless, and we find new jobs,” he says. “We find new things to do.”
Yet Musk & Co make some very valid points. All disruptive technologies, from internal combustion engines to nuclear energy to the internet, carry the power to do both great good and great evil. With two centuries of such experience behind us, we should be in a position to move quickly and frame rules and regulations to safeguard our future.
The open letter urges AI designers, independent experts and policymakers to jointly—and urgently—create and implement a set of safety protocols and governance systems for advanced AI development: “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
I do believe that what we need for AI is an equivalent of the “Three Laws of Robotics” that the great science-fiction writer Isaac Asimov devised. The laws are very simple:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov created several fictional universes that held these laws to be inviolable, and wrote more than a hundred stories and novels spanning many millennia into the future, constantly expanding the boundaries of man-machine co-existence. He conjured intricate practical, moral and ethical conundrums, and his characters solved them by adhering to these three principles.
We need something like these laws for AI. And we may need them pretty soon.
