Moneycontrol

Microsoft says its Maia AI chip will not replace Nvidia or AMD as cloud demand keeps rising

Microsoft has started deploying its first in-house AI chip, Maia 200, but CEO Satya Nadella says the company will continue buying AI hardware from Nvidia and AMD. Despite strong performance claims, Microsoft sees custom silicon as a complement rather than a replacement as AI demand and supply constraints persist.

January 31, 2026 / 16:58 IST
Snapshot
  • Microsoft begins deploying its Maia 200 AI chip in data centers
  • Maia 200 offers faster AI inference than Amazon's and Google's latest chips
  • Microsoft will keep buying Nvidia and AMD chips alongside its own processors

Microsoft has officially begun deploying its first generation of homegrown AI chips, but the company is making it clear that this does not signal a retreat from third-party silicon suppliers. Speaking this week, Microsoft CEO Satya Nadella said the company will continue purchasing AI chips from Nvidia and AMD, even as it ramps up use of its own Maia processors.

The new chip, called Maia 200, has been deployed inside one of Microsoft’s data centres, with broader rollout planned over the coming months. Microsoft describes Maia 200 as an AI inference-focused processor, optimised for the heavy computational workloads involved in running large AI models in production rather than training them from scratch.


Microsoft has shared early performance figures suggesting Maia 200 delivers higher processing speeds than Amazon’s latest Trainium chips and Google’s most recent Tensor Processing Units. While the company has not disclosed full benchmarking details, the message is clear: Maia 200 is intended to compete at the top end of custom AI silicon designed by cloud providers.

Like other hyperscalers, Microsoft is pursuing in-house chip development partly because of the cost and scarcity of advanced AI hardware. The ongoing supply crunch for Nvidia’s most powerful accelerators has made it difficult and expensive for cloud providers to secure sufficient capacity, even for their own internal teams.