
The AI arms race is no longer just about algorithms. It is about hardware. More specifically, memory.
As artificial intelligence models grow larger and more complex, they require vast quantities of high-performance chips to train and deploy. But the chip industry is heavily supply-constrained: prices have climbed, lead times have stretched, and some consumer electronics companies are feeling the knock-on effects through higher component costs.
Speaking to CNBC, Demis Hassabis, chief executive of Google DeepMind, said physical infrastructure limits are now acting as a brake on AI progress.
“There is so much more demand” for Gemini and other models than Google can currently serve, he said, adding that hardware constraints are “constraining a lot of deployment”.
The bottleneck does not stop at product rollouts. It also affects research. Training experimental models at scale requires enormous computing clusters packed with advanced memory. Without sufficient chips, researchers cannot test ideas at the scale needed to determine whether they work.
At companies such as Google, Meta and OpenAI, access to compute has become a prized asset. As Mark Zuckerberg has noted, top AI researchers typically ask for two things beyond pay: minimal bureaucracy and maximum chips.
The memory choke point
While GPUs often grab headlines, memory is just as critical. In particular, AI companies are racing to secure high-bandwidth memory, or HBM, which is essential for training and running large language models efficiently.
Production of advanced memory chips is dominated by three players: Samsung Electronics, Micron Technology and SK Hynix. All are struggling to meet soaring demand from AI hyperscalers while continuing to supply long-standing PC and electronics customers.
Hassabis acknowledged that even Google’s custom silicon strategy offers only partial insulation. The company designs its own TPUs, or Tensor Processing Units, for internal use and for customers of its cloud platform. That has given it some independence from external chip suppliers and put it in closer competition with Nvidia.
Yet even with proprietary processors, Google still relies on a limited number of suppliers for key components, particularly memory. “In the end, it comes down to a few suppliers of a few key components,” Hassabis said.
Spending shows no sign of slowing
Despite the constraints, investment is accelerating. On its fourth-quarter earnings call, Google projected capital expenditure of $175 billion to $185 billion for 2026, underscoring how central AI infrastructure has become to its future.
The message is clear. AI ambition is no longer limited by ideas. It is limited by atoms. Until memory supply catches up with demand, the world’s largest technology companies will remain locked in a high-stakes scramble for silicon.