
Nvidia used its CES 2026 stage to make a bold claim about the future of autonomous driving. The company unveiled Alpamayo, a new family of open source AI models, simulation tools and datasets aimed at teaching robots and self-driving vehicles how to reason, not just respond.
Calling it “the ChatGPT moment for physical AI,” Nvidia CEO Jensen Huang said Alpamayo marks a shift in how machines interact with the real world. “Machines are beginning to understand, reason, and act in the real world,” he said, adding that Alpamayo allows autonomous vehicles to think through rare and complex scenarios, drive more safely, and even explain why they made a particular decision.
At the centre of the announcement is Alpamayo 1, a 10-billion-parameter vision-language-action model built around chain-of-thought reasoning. Unlike traditional driving systems that rely heavily on prior examples, Alpamayo 1 is designed to break down unfamiliar situations into steps, evaluate multiple outcomes, and choose the safest course of action. Nvidia says this allows vehicles to handle edge cases such as traffic light failures at busy intersections or unusual pedestrian behaviour, even if they have never encountered those situations before.
Ali Kani, Nvidia’s vice president of automotive, explained that the model reasons through every possible option before acting. As Huang put it during his keynote, Alpamayo does not simply convert sensor input into steering or braking commands. It reasons about the action it is about to take, explains the logic behind that decision, and then executes the chosen trajectory.
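Nvidia has not published code for this reason-then-act loop, but the idea can be sketched in a few lines of Python. Everything below is invented for illustration, including the candidate manoeuvres and the toy safety score; it only shows the shape of the process Huang described, in which the system enumerates options, evaluates them, and keeps its rationale alongside the chosen action.

```python
from dataclasses import dataclass

# Purely illustrative stand-in for a reasoning driving policy. The candidate
# actions, scoring rule, and explanation strings are invented for this sketch.
@dataclass
class Decision:
    action: str
    reasoning: list      # step-by-step rationale, mirroring chain-of-thought output
    explanation: str     # human-readable justification for the chosen action

def decide(scene: dict) -> Decision:
    # 1. Enumerate candidate manoeuvres for the scene instead of mapping
    #    sensor input straight to a steering or braking command.
    candidates = ["stop", "creep_forward", "proceed"]

    # 2. Reason about each option: a toy safety score stands in for the
    #    model's learned evaluation of possible outcomes.
    def safety(action: str) -> float:
        if scene.get("traffic_light") == "failed" and action == "proceed":
            return 0.1    # risky: uncontrolled intersection
        if action == "creep_forward":
            return 0.8    # cautious progress
        return 0.6

    steps = [f"{a}: safety={safety(a):.1f}" for a in candidates]

    # 3. Commit to the safest option and keep the rationale for later audit.
    best = max(candidates, key=safety)
    return Decision(best, steps, f"Chose '{best}' because it scored highest on safety.")

print(decide({"traffic_light": "failed"}))
```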
Nvidia is making Alpamayo’s core model openly available on Hugging Face, allowing developers to fine-tune and distil it into smaller, faster versions for production vehicles. The company says the model can also be used beyond direct driving control, including for auto-labelling video data or evaluating whether an autonomous system made a sensible decision in a given scenario.
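The announcement does not include loading code or a confirmed repository name, but a typical workflow with the Hugging Face transformers library would look roughly like the sketch below. The model ID and the decision to freeze the backbone are assumptions made for illustration.

```python
# Sketch only: the repository name below is an assumption, not a confirmed
# model ID. Check Nvidia's Hugging Face page for the published checkpoint.
from transformers import AutoModel, AutoProcessor

model_id = "nvidia/alpamayo-1"  # hypothetical repo ID for illustration

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

# A common route to a smaller production variant: freeze the large backbone
# and fine-tune only a lightweight head on fleet-specific driving data, or
# distil the full model into a compact student network.
for param in model.parameters():
    param.requires_grad = False
```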
Alongside the model, Nvidia is releasing a substantial open driving dataset containing more than 1,700 hours of footage collected across diverse geographies and conditions. The dataset focuses on rare and complex scenarios that are difficult to capture at scale but critical for improving safety.
To complement real-world data, Nvidia is also leaning on its Cosmos generative world models, which can create synthetic driving environments for training and testing. Developers can combine synthetic and real data to stress-test Alpamayo-based systems before deploying them on public roads.
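In practice, that mixing step is straightforward to picture: rare synthetic scenarios are interleaved with real recordings so that edge cases appear far more often in testing than they do on the road. The snippet below is a toy illustration with placeholder scenario names, not Nvidia's pipeline.

```python
import random

# Toy stand-ins: in a real pipeline these would be fleet recordings and
# world-model-generated scenes; here they are just labelled placeholders.
real_clips = [{"source": "real", "scenario": s}
              for s in ("highway_merge", "rainy_roundabout")]
synthetic_clips = [{"source": "synthetic", "scenario": s}
                   for s in ("stalled_truck", "traffic_light_failure", "jaywalking_pedestrian")]

# Interleave real and synthetic scenarios so rare edge cases are
# over-represented relative to their frequency on public roads.
evaluation_set = real_clips + synthetic_clips
random.shuffle(evaluation_set)

for clip in evaluation_set:
    # A full pipeline would replay each clip through the driving stack and
    # score the resulting decision; here we only list what would be tested.
    print(f"[{clip['source']}] stress-testing scenario: {clip['scenario']}")
```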
Rounding out the launch is AlpaSim, an open source simulation framework available on GitHub. AlpaSim is designed to recreate full driving environments, including sensors, traffic and road conditions, allowing developers to validate autonomous systems safely and at scale.