Huawei has started shipping its most advanced artificial intelligence chip cluster, the CloudMatrix 384, to domestic Chinese clients, a key breakthrough in Beijing's drive towards AI hardware self-reliance amid US export controls that restrict Nvidia's chip sales to China, the Financial Times reported.
The Shenzhen-based tech giant has already sold more than 10 CloudMatrix 384 systems, sources with direct knowledge of the situation said. Among its initial buyers are large Chinese data centres that host tech companies cut off from Nvidia's AI chips by new US trade barriers imposed under President Donald Trump.
CloudMatrix 384: A response to US export bans
CloudMatrix 384 links 384 of Huawei’s Ascend 910C chips into a tightly integrated high-performance computing system, using a proprietary “super node” optical interconnect to boost overall throughput. While each Ascend chip falls short of Nvidia’s GB200 on its own, the system’s scale and architecture allow it to rival Nvidia’s widely used NVL72 cluster.
Huawei claims CloudMatrix outperforms Nvidia’s NVL72 by 67% in compute power and has over three times the aggregate memory capacity, according to internal presentations reviewed by the Financial Times. Analysts credit the performance gains to Huawei’s expertise in telecommunications infrastructure, particularly in optimizing data transmission within large chip networks.
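As a rough back-of-envelope check on those claims, the implied per-chip gap can be worked out from the chip counts alone. The sketch below is illustrative only: it assumes the reported 67% advantage refers to aggregate compute and that an NVL72 rack aggregates 72 Nvidia GPUs.

```python
# Back-of-envelope: per-chip performance implied by the reported cluster figures.
# Assumes the 67% advantage is aggregate compute and NVL72 aggregates 72 GPUs
# (illustrative assumptions, not vendor specifications).

cloudmatrix_chips = 384       # Ascend 910C chips in one CloudMatrix 384
nvl72_chips = 72              # GPUs in one Nvidia NVL72 rack
cluster_compute_ratio = 1.67  # CloudMatrix vs NVL72 aggregate compute (Huawei's claim)

# Implied ratio of one Ascend 910C to one NVL72 GPU
per_chip_ratio = cluster_compute_ratio * nvl72_chips / cloudmatrix_chips
print(f"Implied per-chip performance: {per_chip_ratio:.2f}x of an NVL72 GPU")
# -> roughly 0.31x: each Ascend chip delivers about a third of the per-GPU
#    performance, offset by packing more than five times as many chips.
```

On those assumptions, the cluster-level parity comes almost entirely from chip count and interconnect, which is consistent with the analysts' framing below.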
Huawei bridges performance gap with smart design
"This indicates China now has an AI system that can beat Nvidia's," said Dylan Patel, founder of chip consultancy SemiAnalysis. "They're making up for lower chip performance with leading-edge networking."
Timing is everything. Last month, Nvidia revealed it anticipates a $5.5 billion revenue impact after the US government imposed tighter controls on exports of its H20 chip to China, a part that was itself designed to comply with earlier export limits. That has created an opening in China's AI hardware market that Huawei has moved quickly to seize.
More power, more engineers, more cost
Still, CloudMatrix 384 has notable drawbacks. Because the system relies on far more chips, it consumes significantly more power and costs more to run. Huawei’s software ecosystem is also less mature than Nvidia’s CUDA platform, requiring more hands-on maintenance by highly trained engineers; industry sources estimate manpower costs for CloudMatrix could run three to five times higher than for a comparable Nvidia deployment.
Despite these trade-offs, Huawei’s offering is gaining traction. In a market where access to Nvidia’s cutting-edge GPUs has become severely limited, CloudMatrix is seen as a viable—if more expensive—alternative. Each unit is priced around RMB 60 million ($8.2 million), more than double the estimated $3 million cost of Nvidia’s NVL72, though final prices vary by contract.
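The headline price gap narrows somewhat once it is normalised by the performance claim. The sketch below is purely illustrative, using the prices cited in the article and assuming Huawei's 67% aggregate-compute figure holds.

```python
# Illustrative cost-per-compute comparison using the reported figures.
# Assumes the 67% aggregate-compute advantage is accurate (vendor claim, not verified).

cloudmatrix_price_usd = 8.2e6   # reported per-unit price (~RMB 60 million)
nvl72_price_usd = 3.0e6         # estimated NVL72 cost cited in the article
compute_ratio = 1.67            # CloudMatrix compute relative to NVL72

# Normalise the CloudMatrix price by the compute it delivers relative to NVL72
cloudmatrix_per_compute = cloudmatrix_price_usd / compute_ratio
premium = cloudmatrix_per_compute / nvl72_price_usd
print(f"Price per NVL72-equivalent unit of compute: ${cloudmatrix_per_compute/1e6:.1f}M "
      f"({premium:.1f}x Nvidia's estimated price)")
# -> about $4.9M vs $3.0M, i.e. roughly a 60% premium per unit of compute,
#    before the power and engineering overheads noted above.
```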
A strategic move in a high-stakes chip war
Huawei would not comment on the rollout, but its foray into the AI chip cluster market is an ambitious bid to challenge Nvidia's dominance, particularly in China. With robust domestic demand, deep engineering talent, and government support, Huawei is well placed to establish itself as a serious player in China's AI infrastructure market.