
Davos 2026: AI is continuously improving despite perception that excitement has faded, says Andrew Ng

In an interview to Moneycontrol at Davos 2026, Andrew Ng speaks about why open source should be key to India's AI strategy, why fears of mass AI-driven job losses are overstated, Google's comeback and the 'white-hot' AI race

January 20, 2026 / 12:20 IST
AI pioneer Andrew Ng with Moneycontrol's Chandra R Srikanth at the World Economic Forum in Davos on January 19
Snapshot
  • Andrew Ng says AI is improving steadily, but public excitement has waned
  • He believes fears of mass AI-driven job losses are overstated
  • Ng backs open source as key to India's AI strategy and urges upskilling

For a casual observer, it is easy to get the sense that artificial intelligence (AI) has stopped moving forward. In reality, the technology continues to improve in steady, incremental ways, AI pioneer Andrew Ng told Moneycontrol in an interview at the World Economic Forum in Davos.

He believes that public perception tends to move in "step changes". With so much attention on AI today, many of those incremental advances no longer feel emotionally exciting, especially compared with the impact of seeing the technology for the first time.

Ng, the founder of DeepLearning.AI, managing general partner at AI Fund and co-founder of Coursera, warned that the United States becoming less immigrant-friendly is a "huge unforced error".

He also spoke about why open source should be key to India's AI strategy, why fears of mass AI-driven job losses are largely overhyped, Google's comeback and the 'white-hot' AI race, and whether we are in an AI bubble. Edited excerpts of the interview:

Google has made a big AI comeback and they are doing very well compared to others. What does it say about the centrality of founders for a company in Silicon Valley even after decades? And, how do you view the landscape when it comes to OpenAI versus Google versus Anthropic?

Google's certainly having a moment. Gemini 3 is a very strong model but I think the entire AI landscape is white hot. This creates a lot of opportunities for the Anthropics and the OpenAIs and the Googles and many others of the world to play a big role. Clearly, Larry and Sergey have continued to be tremendous forces at Google.

Having built and led the Google Brain team in the early days, I know Google has an amazing team as one of many companies with amazing teams. I'm glad to see them doing so well.

Also read: Davos 2026: Google’s having a moment but AI landscape is ‘white hot,’ says Andrew Ng

Anthropic has also made a significant push on the enterprise front. Is there space for everyone here? How is it going to pan out?

The market landscape as I see it: in the consumer space, ChatGPT is an incredibly strong consumer brand. So that's relatively defensible and difficult for new entrants to attack. Google, with its distribution channels, is also strongly in the mix, which accounts for why Gemini has such strong momentum.

So for general information discovery in the consumer space, we see ChatGPT and Google Gemini as the two market leaders. New entrants are trying, but it's not clear what their best moves are to gain adoption.

However, in the enterprise space, Claude has been an amazing model for coding. Many of my friends and I would love to never have to code without AI systems again. But in addition to Claude Code, Google and OpenAI have offerings such as Gemini CLI and OpenAI Codex that are also strongly in the mix.

Once upon a time, Google Search won the horizontal information discovery and then there was lots of room for different verticals like travel and retail to compete in their sectors.

Right now, ChatGPT and Gemini are doing very well for the horizontal information discovery but there are a lot of verticals, including coding and the future of many other business verticals to be built out.

Watch the full interview: Andrew Ng at Davos: Why Google Is Back, AI Isn’t in a Bubble & Open Source Is Key for India

In terms of overall valuation, there is always this concern: are AI valuations in Silicon Valley in bubble territory? There are also concerns about these circular big tech deals happening, where A invests in B, B invests in C, and then they give each other business. How do you assess this? Is it sustainable?

There are some parts where I don't think there is a bubble and some parts where one could ask the question.

If I look at the actual business use cases, applications and the application layer, as opposed to the AI foundation model layer, I don't think there is a bubble. Yes, the hype is rising rapidly, but the actual business value is rising rapidly too. It is rising from a low base in terms of the number of business applications that large companies can build and deploy, and we're seeing this playbook in Silicon Valley. Maybe it's a small percentage now, but it's compounding rapidly, and projecting a few years into the future, we have a sense of it being even more valuable. If anything, the application layer feels underinvested.

Then there is AI model inference, which is when someone takes an AI model and uses it to serve queries and generate results. We clearly need more capacity here because demand for AI inference is just through the roof, be it for open-weight models, open-source models or proprietary models. I wish we could use more of these, but for various reasons, be it cost or capacity, many teams are throttled in how much they can use. So I am excited about the build-out of AI inference infrastructure.

Now, it doesn't mean no one will lose money. It is possible to still have an oversupply, but I think a lot of that capacity will get used.

The one place where there is a question of overinvestment and the risk is higher is AI model training. The cost of training AI models has become so large that if any of the companies paying the large bills run into trouble or pull back a bit, the future of some of their investments could be challenging.

That said, I am still cautiously optimistic and generally bullish about the sector and most of the leading players in the field. I am excited to see the world continue to invest heavily in AI, especially the application layer, but also definitely the inference layer.

Also read: Davos 2026: Are we in an AI bubble? Andrew Ng says it depends on where you look

While benchmark evals of top LLM companies seem to be getting better, why is it that improvement in AI models seems to have stagnated for general users? Do you think incremental improvements in AI user experience will be more important? Will it be more about inference rather than training?

Human perception is a funny thing. It is true that AI hasn't gotten much better for me in the last 24 hours. But if I look back even a couple of months to when Gemini 3 and Claude Opus 4.5 were released, as a user those were significant step-ups relative to the last model.

Technology improves smoothly and incrementally, but public perception tends to move in step changes. Early in 2022, I wrote that GPT-3 was a huge step forward because I saw it changing text processing. The so-called ChatGPT moment was a great step, but it wasn't as big a leap as it felt to someone who hadn't seen the technology at all.

The public is paying so much attention to AI now that lots of incremental changes aren't as emotionally exciting as the step change that happens when you haven't seen things before. But consumer usage of AI is growing and the tools are getting much better.

In terms of fears of job losses, are we already seeing that happen to some extent? Are companies hiring fewer people? All of them are talking about productivity gains from using tools like Cursor, and now Claude Cowork is expected to play a disruptive role.

Job loss related to AI to date has been overhyped, and we're in the very early phases of it. There were tech layoffs over the past year and a half, but the true cause for a lot of them was overhiring. Sometimes people will ascribe them to AI, even though that isn't actually the full story.

Having said that, there are a few job roles that are struggling and are in trouble because of AI. Contact centers are clearly affected. I also worry about the jobs of translators and voice actors.

But the reality is, if you look at a task-based analysis of jobs: take a job, break it into tasks, and see what AI can do. For many jobs, AI can only do 30-40% of the work now and for the foreseeable future. We still need people to do 60-70% of the work. This means that a person who uses AI will be much more productive and will replace someone who does not use AI. But AI itself cannot do 100% of the job, and this will be true for the vast majority of job roles.

Also read: Davos 2026: Andrew Ng says AI-driven job losses have been overstated

Is physical AI going to be a near-term reality? How far are we from robot assistants becoming commonplace?

Physical AI is very exciting. The progress in robotics is faster than anything I've seen in the last decade or more. It will still take a little bit of time, as moving atoms instead of bits takes longer than most of us wish, but the progress is very fast. I am optimistic about that.

I also wanted to touch upon the geopolitics of it. US President Donald Trump is in focus this week amid the differences that we are seeing between the US, Europe and other countries. What does this mean for AI, now that it has become a strategic priority? People are also talking about how China seems to have outpaced other countries in many areas. How is this going to evolve?

The US and China are two AI powerhouses. The US becoming less immigrant-friendly is a huge unforced error. The United States is fortunate to have many skilled immigrants from India and other nations who want to come here and work with us.

As an immigrant to the US myself, I feel like it's been such a blessing. Making it harder for these wonderful immigrants to join us in the United States, not just currently high-skilled immigrants but also the 17- or 18-year-old college student who will be high-skilled in the future, I think that's a huge unforced error.

Read: Davos 2026: US becoming less immigrant-friendly is a 'huge unforced error', says Andrew Ng

At the same time, on the human element of it, I have friends in the US who have lived here for 10 years, have had kids born in the US, and bought a house, but are still in line for a green card. This turmoil and disruption, I've seen how it shakes up families, and I think that has been awful at a human level.

Now speaking of India, one of the things that has been widely talked about is the concept of sovereign AI. It makes sense for India to not want a different nation, friendly or hostile, or a single business to control India's access to AI technology. I was thrilled to see the increased investments and discussions about having Indian startups go global.

At the same time, there is one thing I want to put up for consideration: India uses many programming languages and operating systems, like Linux, that it doesn't own or control, but that are open source. That means no other country or company can mess with India's ability to access them.

One relatively efficient, cheap way to ensure India's continued access to AI is not necessarily to build everything internally, although that would be fine with enough money and resources, but rather to contribute to open source. No one can mess with it since it is open.

Also read: Davos 2026: Andrew Ng backs open source as key pillar of India’s AI strategy

In terms of geopolitics, Chinese open-source models have really taken off. Many Chinese open-source models are ahead of the US open-source ones. This gives China tremendous geopolitical influence. If a developer incorporates a Chinese model into their software because it's open, then when someone asks that software a question, it will more likely answer in a way that reflects the values of the nation of origin.

Whether or not India wants a world where many are using models from China, reflecting those values, is a question that I think India and many other nations are asking themselves, especially when deciding how much to invest in open research and open models. Open models mean you don't control them, but no adversary can do so either.

Your views on India come just ahead of the global AI impact summit that's happening next month. What do you think India should focus on apart from the open-source angle that you mentioned?

This is a good time to invest heavily in AI. I don't just mean pouring dollars into data centre infrastructure; some of that is good. Even more important is upskilling. Although the disruption has not yet come, the outsourced professional services and IT services industry is at risk because people who know AI will replace people who don't.

If India's industry goes through this transformation and upskills its people, then this highly talented and skilled workforce will keep providing even higher-level services.

But if individuals don't upskill quickly enough, then the outsourced industry will face a lot of challenges. At Coursera and DeepLearning.AI, we try to do a lot of work to serve India specifically, and I feel a lot of urgency to make sure that India has the resources to upskill its population in AI.


Chandra R Srikanth
Chandra R Srikanth is Editor- Tech, Startups, and New Economy
Vikas SN
Vikas SN covers Big Tech, streaming, social media and gaming industry
first published: Jan 20, 2026 12:20 pm


