
Tesla chief executive Elon Musk has weighed in on Nvidia’s newly announced Rubin AI chips, offering a measured assessment that cuts through some of the excitement generated at CES 2026. While Musk acknowledged the ambition and technical sophistication behind the platform, he suggested that the chips are unlikely to become fully operational at scale in the near term. According to Musk, it could take another nine months before the technology and its supporting software ecosystem are ready for large-scale deployment.
Musk’s comments came in response to a video shared on X by influencer Sawyer Merritt, which highlighted the architecture and performance claims surrounding Nvidia’s next-generation AI hardware. Rather than criticising the design itself, Musk focused on the realities of scaling advanced compute platforms, particularly when both hardware and software stacks are evolving simultaneously.
At CES 2026, Nvidia chief executive Jensen Huang introduced the Rubin platform as the successor to the company’s Blackwell architecture. Huang claimed that Rubin would deliver up to five times the performance of its predecessor, positioning it as Nvidia’s most powerful AI system to date. Unlike previous generations, Rubin is not just a single chip but a tightly integrated six-chip platform designed using what Nvidia calls an “extreme codesign” approach.
The Rubin platform combines Rubin GPUs with Vera CPUs, alongside NVLink 6 interconnects, Spectrum-X Ethernet Photonics, ConnectX-9 networking cards, and BlueField-4 data processing units. Nvidia says designing these components together helps reduce data movement bottlenecks and improves efficiency when training and running large AI models. The Rubin GPU itself is rated to deliver up to 50 petaflops of inference performance using NVFP4 precision, while the Vera CPU is focused on orchestrating data flow and handling AI agent workloads.
Musk urges realism on Rubin’s scaling timeline
Despite these headline figures, Musk’s remarks highlight a familiar challenge in the AI industry. Hardware capabilities often outpace the readiness of software, tooling, and deployment infrastructure. In his post, Musk suggested that while the hardware leap is real, it would take significant time before the software stack matures enough to run smoothly across large data centres. His view implies that early adopters may face optimisation hurdles before Rubin can be used reliably at scale.
The discussion around Rubin also intersected with Nvidia’s ambitions in autonomous driving. During the same CES keynote, Huang unveiled Alpamayo, Nvidia’s new self-driving technology built on a vision-language-action model. Huang described it as a “ChatGPT moment” for autonomous vehicles, signalling a major shift in how driving systems interpret and respond to real-world environments.
Big performance claims meet practical deployment hurdles
Musk responded more cautiously to this claim. He noted that distribution and real-world deployment of such systems would be extremely difficult, regardless of how capable the underlying models appear in demonstrations. Ashok Elluswamy, Tesla’s head of AI, echoed similar concerns, reinforcing the idea that autonomy is as much an operational challenge as a technical one. Even so, Musk stopped short of dismissing Nvidia’s efforts and publicly wished the company success in advancing self-driving technology.
While Nvidia’s Rubin platform may represent a major step forward in AI hardware design, the path from CES showcase to real-world impact is likely to be slower and more complex than headline performance numbers suggest.