A working group of artificial intelligence (AI) experts has recommended that the Indian government create computing infrastructure of 24,500 graphics processing units (GPUs) across 17 centres to enable startups and academia to innovate in the emerging technology area.
Under the proposal, about 14,500 GPUs have been recommended for training AI models and high-performance storage, while the remaining 10,000 are for AI inferencing.
A GPU centre is a cluster of computers with GPUs on each node, used to train neural networks for tasks such as image and video processing.
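As a rough illustration of the kind of workload such a centre's nodes run, the sketch below trains a small image classifier and spreads each batch across whatever GPUs are available on a node. It assumes PyTorch is installed; the model, data, and sizes are toy placeholders, not anything prescribed in the report.

```python
# Minimal sketch of a GPU-node workload: training a small image classifier.
# The network, data, and hyperparameters are toy placeholders.
import torch
import torch.nn as nn

# A tiny convolutional network for 32x32 RGB images (placeholder architecture).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Split each batch across all GPUs on the node.
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a real image dataset.
images = torch.randn(64, 3, 32, 32, device=device)
labels = torch.randint(0, 10, (64,), device=device)

for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```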
Earlier today, Union Minister Rajeev Chandrasekhar said that, based on the panel's suggestion, the government was considering building AI chip and GPU infrastructure in the country through a public-private partnership model.
The overarching infrastructure comprises three layers: high-end compute infrastructure, an inference arm, and edge compute, all strategically distributed to meet users' computational requirements efficiently.
This distributed architecture ensures that users can seamlessly transition between these resources while harnessing their distinct capabilities.
To facilitate smooth data exchange and collaboration across these distributed components, the secure distributed data grid plays a pivotal role. The data grid acts as a robust and secure framework, enabling users to upload, download, and exchange large datasets and AI models seamlessly.
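The report does not specify the data grid's interface. As a hedged sketch of how a client might exchange large datasets with such a grid, the snippet below assumes a hypothetical HTTPS endpoint (https://datagrid.example/upload) and access token, and shows one common integrity check: streaming a SHA-256 checksum of the file and sending it alongside the upload.

```python
# Illustrative only: the endpoint, header name, and token are hypothetical
# stand-ins for whatever interface the proposed data grid ultimately exposes.
import hashlib
import requests

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MB chunks so large datasets never sit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload_dataset(path: str, endpoint: str, token: str) -> None:
    checksum = sha256_of(path)
    with open(path, "rb") as f:
        # The checksum travels with the upload so the receiving node can
        # confirm the dataset arrived intact.
        response = requests.put(
            endpoint,
            data=f,
            headers={
                "Authorization": f"Bearer {token}",
                "X-Content-SHA256": checksum,
            },
            timeout=600,
        )
    response.raise_for_status()

# upload_dataset("train_images.tar", "https://datagrid.example/upload", "TOKEN")
```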
According to the report, the United States is at the forefront of AI with numerous supercomputers and high-performance computing (HPC) facilities, including those operated by government institutions, academic research centers, and private entities.
China has also made significant strides in AI compute infrastructure, boasting several supercomputers and state-of-the-art facilities. According to the November 2022 Top500 list, there are 34 economies that possess a "top supercomputer" based on the Top500 methodology.
In terms of market share in compute infrastructure and AI, companies such as NVIDIA, Intel, AMD, IBM, and Google lead this domain. These industry giants invest heavily in research and development to create high-performance processors, GPUs, application-specific integrated circuit (ASIC) AI chips, and software frameworks.
In addition to on-premises installations, the hardware is available through cloud service providers (CSPs) such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. These CSPs enable users to access scalable AI compute resources on demand, reducing the need for extensive infrastructure investments and facilitating widespread adoption of AI technologies. These advancements are forecast to bring a more than fivefold increase in the global AI infrastructure market.
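For context on what on-demand access looks like in practice, the sketch below requests a single GPU instance from one such CSP (AWS, via its boto3 SDK). The AMI ID is a placeholder, and the instance type and region are examples only, not figures from the report.

```python
# Illustrative sketch: launching one on-demand GPU instance from a CSP (AWS).
# The AMI ID is a placeholder; instance type and region are examples.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder deep-learning machine image
    InstanceType="g5.xlarge",    # example single-GPU instance type
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched GPU instance {instance_id}")
```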