When ChatGPT exploded onto the scene late last year, there was a flurry of articles in India about Infosys having had the foresight to invest in OpenAI in 2015. The investment was made when Vishal Sikka was its chief executive officer, and the company reaped the rewards eight years later.
A lot has happened in those eight years; Sikka is now the co-founder of Vianai Systems, an enterprise AI platform startup. But his association with AI goes back much further: he went to Stanford with a letter of recommendation from Marvin Minsky, one of the most famous practitioners of artificial intelligence, and John McCarthy, a pioneer of the field, headed the qualifying exam committee for his PhD.
In the 27 years since Sikka completed his PhD in 1996, AI has grown steadily, and it is currently having its moment in the sun: everyone wants to try it, incorporate it, outsmart it. Sikka says that when he started at Stanford, it was the beginning of an AI winter.
“AI has gone through these boom-and-bust cycles, although the last few months seem to have particularly captured everybody's attention,” he said in a virtual interview with Moneycontrol from the US.
This is not necessarily a good thing, or, as Sikka terms it, there is a “ridiculous amount of hype right now.” That isn’t to say it will meet the same fate as concepts such as the Metaverse, where interest skyrocketed and then dropped off even faster as reality caught up.
“I think it is both hype and sustained,” he says, adding that while everyone who wants to tinker with it will start to slow down (something that is happening already), “it is also sustained in the sense that these innovations are happening very fast and are happening continuously.”
These innovations will continue.
Sikka says that in the past 10-odd years, the use of deep neural network technology has become prominent and has delivered “incredibly impressive successes,” but he’s quick to point out that it has “huge limitations.”
He dives into deep neural network-based AI, contrasting the size of the networks he worked with when he started out in the early 90s with where they stand today.
“It's hard to believe, but just in the last four years, the size of these networks is increasing in a way that is hard to comprehend… When I first wrote my neural networks, in the early 90s, one network that I had built had 10 parameters… Now we have networks with 175 billion parameters. So it is quite astonishing to see that,” he adds.
Each time the technology comes up, Sikka follows with the concerns that surround it, quite apart from its ethical implications.
But before that, if you need a quick, easy-to-understand summary of a platform like ChatGPT, here’s Sikka:
“It is a kind of a regurgitating machine. You train it on a massive amount of text data from the web and from all these sources. What it generates is next words and next sentences, and that's all it is doing. When you ask it a question, it has really no idea about that question itself, it simply continues that sentence, adds another word, adds another sentence and so on. Because of the way we are, we think that it is this highly intelligent system that is communicating with us but it really has no idea. It is simply completing your sentences based on a synthesis of all that it has been trained on.”
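Sikka’s description of a “regurgitating machine” that simply keeps adding the next word can be illustrated with a toy sketch. To be clear, this is only a hedged illustration of the idea: systems like ChatGPT use transformer neural networks with billions of parameters, not the tiny bigram lookup table below, which is a hypothetical example built just to show what “continuing the sentence from training data” means.

```python
import random

def train_bigrams(corpus):
    """Record which words follow which word in the training text."""
    table = {}
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, prompt, length=5, seed=0):
    """Extend the prompt one word at a time, like an autocomplete.

    The 'model' has no idea what the prompt means; it only picks a
    word that followed the current word somewhere in its training data.
    """
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:
            break  # never saw this word during training; stop
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model predicts the next word and the next word again"
table = train_bigrams(corpus)
print(generate(table, "the model", length=4))
```

The point of the sketch is the mechanism, not the quality: the generator never answers a question, it only continues the sentence, which is the behaviour Sikka is describing at vastly larger scale.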
Why he backed OpenAI
ChatGPT may be a runaway success, but one question Sikka is asked a lot is what gave him the conviction to back OpenAI all those years ago. He says he was impressed by OpenAI founder Sam Altman’s thinking after they were connected by a common mentor, Alan Kay.
“When I met him, I went to his office in San Francisco. I was really impressed by the way he was thinking about not only AI and issues around it, but also around the way he wanted to support the people that he was going to bring in. I felt strongly that this was something that we should support,” he says.
That’s certainly paid off, and Sikka concedes that OpenAI has done an “incredible job” while understanding that they bear a huge amount of responsibility as this is a “very powerful piece of technology that carries with it a huge amount of risk and has huge limitations.”
Will it replace humans?
He likens the banning of ChatGPT in academic settings to when calculators were banned in classrooms, and says it’s not that the calculator has diminished a person’s ability to learn math. These systems were built by people, and the real question is whether people will learn to use these technologies to amplify their abilities.
“People use these to come up with ideas, come up with frames of writing a letter or writing an essay, or writing a piece of code, or writing a website or writing something. Most of the time, it is not the system that is writing it, you just use it to come up with some ideas. Then you add some more things. Then you run into something and then you say, ‘Hey, give me some ideas about this’. It's an iterative, collaborative process where some of the grunt work comes from this, from these systems,” he says.
On the question of whether generative AI systems will replace humans, he maintains that if people stay idle and do not learn these new technologies, AI will replace them.
When it comes to AI, there is always the question of whether there is enough talent that understands the technology. Sikka says fewer than 1 percent of people in the world can program, and those who understand AI are an even smaller minority.
“The number of AI engineers or data scientists is in the single-digit millions, out of 8 billion people on the planet. The number of people who could operate an AI system, deploy it, is in the few hundreds of thousands. If you are generous, the number of people who could explain to you how ChatGPT works is probably in the few tens of thousands. That's it. So these numbers need to be like 100x, 1,000x bigger. That is the unique opportunity that India has. In fact, it is our burden, we have to do it,” he says.
India, he says, has this unique opportunity because of the sort of “spiritual priority” around education in the country, and a responsibility to create a massive force that is able to develop, deploy and understand AI. Education is far more important to parents in India than it is in other countries, and parents here aspire for their children to be in the most promising next-generation fields.
“That's why I feel that it is also a responsibility because if we continue with this situation where there are a handful of people who know about these things and everybody else is a user, that is a very bad situation. If we have a situation where hundreds of millions of people have it demystified, that would be a good situation,” he says. He points out that the AI systems that are available currently can be dangerous and can be used for wrong, or even nefarious, purposes.
“I think creating a very large-scale culture where we can teach AI and we can get people to use it, I think that is the big calling that we should have,” he adds.
What it means for industries
It was during his stint as the CEO of Infosys (2014-2017) that the company invested in OpenAI. In a blog post he wrote at the time, Sikka said many parts of the work done by the company could be transformed fundamentally with AI.
Now, he says, IT services companies must be prepared to meet the demand. These large language models can already write entry-level code well, with people overseeing the output to correct errors; it is something that can be done today.
“This is something that should make everyone pause and see, what does this mean for us? It can mean two things: If we sit still and let this happen to us, then this will be disruptive. If we don't sit still, and we learn to use it, then you will have incredibly productive people who will be like saving 30-40-50 percent of their effort by using these tools. So which side do we want to be on? Do we want to have our people be 50 percent more productive? Or do we want to have a section of our workforce be replaced by this?” he asks.
But when it comes to generative AI, its limitations are plenty, simply because there are many instances of incorrect answers and, at times, completely made-up ones.
When enterprises use AI, the burden is on both the companies that provide the tools and the enterprises themselves to make sure the right guardrails are put around AI, he says.
“These large language models can hallucinate because they synthesise things from a large amount of text. They can often provide things in an answer that are simply not true… they don't actually have an understanding of who the people are that you're talking about, or what these concepts mean. They simply extend the conversation by adding the next word, and the word after that, and the word after that.”
This is why in an enterprise situation, it is crucial that the model is trained on the right kind of data so the results are not dangerous, he says, which is the burden that enterprises face. This makes it all the more important for organisations to have the right talent in place that understands AI.
“I have been talking to CEOs and this is the biggest thing right now on their minds, creating this talent. Also, making sure you have the right kind of data, people don't work in silos, design thinking. In the time of AI, identifying a problem, precisely framing the problem becomes even more important. These are things that enterprises have to start really taking very seriously,” he says.
From Vianai’s perspective, he says that they have seen a lot of excitement from customers asking for help in areas from tech support and repair to understanding supply chain issues.
The last word
“Right now, I feel a mix of both hope and fear,” says Sikka, echoing his tone throughout the conversation of both optimism and trepidation.
Sikka says tasks that AI can make easy, such as accelerating code writing, technical writing, drafting letters for customers, and understanding customer behaviour, are “unbelievable opportunities.”
He also makes a push for regulation, simply because of the vast potential for harm the technology carries and the negligible regulation currently in place.
But should governments regulate it?
“I think about it like nuclear technology,” he says. There have been three major incidents in the past 75 years, and by and large, the world has gotten by with safe use of nuclear technology.
“Imagine that this was nuclear technology, but it was accelerating at an exponential pace, and it was already in the wrong hands, then you are kind of looking at this situation that we are in today. I feel both a sense of tremendous hope and huge concern at the same time.”