
What’s behind Nvidia’s new $1 trillion AI forecast? Jensen Huang lays out the next wave

Nvidia CEO Jensen Huang sees $1 trillion in orders for Blackwell and Rubin chips by 2027; unveils new AI chips, rack systems and an automotive push.
March 17, 2026 / 06:56 IST
From Groq chips to autonomous fleets, Nvidia expands AI push as demand from startups and Big Tech surges
Snapshot
  • Nvidia predicts $1 trillion in chip orders by 2027, doubling its earlier forecast
  • Vera Rubin system launches, enhancing performance per watt
  • Uber to launch Nvidia-powered self-driving fleets in 28 cities by 2028

Nvidia CEO Jensen Huang said the company expects purchase orders for its next-generation Blackwell and Vera Rubin chip platforms to reach $1 trillion by 2027, sharply raising its earlier estimate of a $500 billion opportunity.

Speaking at Nvidia’s annual GTC developer conference in San Jose on March 17, Huang said demand for AI infrastructure is accelerating across startups and large enterprises alike, driven by the rapid expansion of AI applications.

“If they could just get more capacity, they could generate more tokens, their revenues would go up,” Huang said, underlining how compute constraints remain a bottleneck for the industry.

AI demand surges as tokens explode

Nvidia’s bullish outlook comes as AI usage shifts from chatbots to more advanced “agentic” applications that can autonomously execute tasks, leading to a sharp increase in the generation of tokens, the basic unit of AI computation.

This surge is intensifying demand for faster and more efficient inference, an area where Nvidia’s GPUs continue to dominate.

The company, now the world’s most valuable publicly listed firm with a market capitalisation of about $4.5 trillion, has been riding this wave. It expects revenue to jump about 77 percent year-on-year to roughly $78 billion this quarter, extending a streak of more than 55 percent growth over 11 consecutive quarters.

Vera Rubin and energy efficiency take centre stage

Nvidia is set to launch its Vera Rubin system later this year, positioning it as a major leap in performance and efficiency.

The system, comprising about 1.3 million components, is expected to deliver 10 times more performance per watt than its predecessor, Grace Blackwell, according to the company.

That improvement is critical as energy consumption emerges as a key constraint in scaling AI infrastructure globally.

Groq 3 chip and new rack systems expand Nvidia’s stack

At the event, Huang also unveiled the Groq 3 Language Processing Unit (LPU), the first major product to emerge from Groq, the startup Nvidia largely acquired in a $20 billion deal in December.

The Groq 3 chip is designed to complement Nvidia’s GPUs, with an architecture optimised for low-latency processing alongside the GPUs’ high-throughput workloads.

Nvidia introduced a new Groq 3 LPX rack system capable of housing 256 LPUs, which will work alongside Rubin-based systems. Huang said the configuration can boost tokens-per-watt performance of Rubin GPUs by up to 35 times.

“We unified two processors of extreme differences, one for high throughput, one for low latency,” Huang said, adding that expanding memory capacity remains critical for scaling AI workloads.

Kyber architecture signals next leap in AI infrastructure

Looking beyond Rubin, Nvidia showcased a prototype of its next-generation rack architecture, Kyber.

The system will integrate 144 GPUs arranged vertically to improve density and reduce latency. Kyber will be part of the Vera Rubin Ultra platform, expected to ship in 2027.

The design reflects Nvidia’s push to optimise both performance and physical infrastructure as AI workloads become more complex and resource-intensive.

Developer tools target next wave of AI apps

Huang also highlighted the rise of “OpenClaw”, an open-source AI framework gaining traction for enabling autonomous agents that can perform tasks without continuous human input.

To support this ecosystem, Nvidia introduced a developer stack called NemoClaw, aimed at making such applications enterprise-ready on its hardware.

“It finds OpenClaw, it downloads it. It builds you an AI agent,” Huang said, positioning Nvidia as not just a chipmaker but a full-stack AI platform provider.

Autonomous driving partnerships widen scope

Beyond data centres, Nvidia is expanding its footprint in automotive AI.

Huang said Uber plans to deploy fleets powered by Nvidia’s Drive AV software across 28 cities globally by 2028, starting with Los Angeles and San Francisco next year.

Automakers including Nissan, BYD, Geely, Isuzu and Hyundai are also building Level 4 autonomous vehicles using Nvidia’s Drive Hyperion platform. Isuzu and China’s Tier IV are developing autonomous buses powered by Nvidia’s AGX Thor chip.

Moneycontrol World Desk


