
OpenAI, Broadcom Working to Develop AI Inference Chip

OpenAI has been planning a custom chip and exploring such uses for the technology for around a year, the people said, but the discussions are still at an early stage.

October 30, 2024 / 08:18 IST

OpenAI is working with Broadcom Inc. to develop a new artificial intelligence chip specifically focused on running AI models after they’ve been trained, according to two people familiar with the matter.

The AI startup and chipmaker are also consulting with Taiwan Semiconductor Manufacturing Co., the world’s largest chip contract manufacturer, said the people, who asked not to be identified because the discussions are private. OpenAI has been planning a custom chip and exploring such uses for the technology for around a year, the people said, but the discussions are still at an early stage.


OpenAI declined to comment. A representative for Broadcom didn’t respond to a request for comment, and a TSMC spokesperson said the company doesn’t comment on rumors and speculation. Reuters reported on OpenAI’s ongoing talks with Broadcom and TSMC on Tuesday. The Information reported in June that Broadcom had discussed making an AI chip for OpenAI.

The process for taking a chip from design to production is long and expensive. OpenAI is less focused on graphics processing units, the chips used to train and build generative AI models, a market Nvidia Corp. has cornered. Instead, it’s looking for a specialized chip that will run the software and respond to user requests, a process called inference. Investors and analysts expect the need for chips to support inference will only grow as more tech companies use AI models to field more complex tasks.