
Microsoft has officially begun deploying its first generation of homegrown AI chips, but the company is making it clear that this does not signal a retreat from third-party silicon suppliers. Speaking this week, Microsoft CEO Satya Nadella said the company will continue purchasing AI chips from Nvidia and AMD, even as it ramps up use of its own Maia processors.
The new chip, called Maia 200, has been deployed inside one of Microsoft’s data centres, with broader rollout planned over the coming months. Microsoft describes Maia 200 as an AI inference-focused processor, optimised for the heavy computational workloads involved in running large AI models in production rather than training them from scratch.
Microsoft has shared early performance figures suggesting Maia 200 delivers higher processing speeds than Amazon’s latest Trainium chips and Google’s most recent Tensor Processing Units. While the company has not disclosed full benchmarking details, the message is clear: Maia 200 is intended to compete at the top end of custom AI silicon designed by cloud providers.
Like other hyperscalers, Microsoft has moved into in-house chip development partly because of the cost and scarcity of advanced AI hardware. The ongoing supply crunch for Nvidia’s most powerful accelerators has made it difficult and expensive for cloud providers to secure sufficient capacity, even for their own internal teams.
Despite this, Nadella pushed back against the idea that Microsoft’s custom silicon would reduce its reliance on external vendors. He described Microsoft’s relationships with Nvidia and AMD as strong partnerships built on parallel innovation rather than competition. According to Nadella, vertical integration is a strategic option, not a mandate, and building in-house systems does not mean abandoning best-in-class hardware from other suppliers.
The Maia 200 chip will play a key role internally at Microsoft. According to Mustafa Suleyman, who leads the company’s Superintelligence team, Maia 200 will be used by Microsoft’s own AI researchers working on frontier models. Suleyman, a co-founder of DeepMind (later acquired by Google), suggested the chip gives his team early access to high-performance hardware at a time when demand far outstrips supply.
Microsoft has increasingly invested in building its own AI models, a move widely seen as a way to reduce long-term dependence on partners such as OpenAI and Anthropic. At the same time, Microsoft confirmed that Maia 200 will also support OpenAI’s models running on the Microsoft Azure cloud, reinforcing the chip’s role across both internal and customer-facing workloads.
Even with Maia 200 entering service, access to cutting-edge AI hardware remains constrained. Both external customers and internal Microsoft teams continue to compete for limited capacity, highlighting why Microsoft is unwilling to rely on a single source of silicon. Custom chips, third-party accelerators, and long-term supplier relationships are all part of the same strategy.
Suleyman underscored the importance of Maia 200 in a post on X, calling its launch a milestone for his team and noting that the Superintelligence group would be the first to use it. For Microsoft, the message is consistent: custom AI chips are essential, but they are only one piece of a much larger and increasingly complex AI infrastructure puzzle.