OpenAI is reportedly working on developing its own AI chips
If successful, OpenAI’s chip strategy will place it among a growing list of tech giants building semiconductors to support their businesses.
Generative AI—the type that powers ChatGPT—requires tremendous computing power to learn from data (training) and make predictions in real time (inference).
To do this at scale, OpenAI – which helped commercialize generative AI – has become one of Nvidia's largest customers, purchasing vast quantities of Nvidia’s high-performance graphics processing units (GPUs).
However, with demand soaring and supply constraints tightening, it’s a costly arrangement. This has spurred OpenAI to consider various paths to reduce heavy dependency on Nvidia, from designing custom AI chips to diversifying suppliers.
Initially, OpenAI weighed the idea of building its own semiconductor factories, known as “foundries,” to fully control the chip production process. However, the enormous costs and time requirements of setting up foundries have led OpenAI to shelve this plan, at least for now. Instead, the company is turning its focus to in-house chip design, akin to what Apple does with its custom silicon for iPhones.
A report by Reuters citing unnamed sources reveals that OpenAI has tapped Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) as key players in this effort.
Broadcom, which has extensive experience designing custom chips for giants like Google, will help OpenAI accelerate its chip development without the need for in-house manufacturing. TSMC, another key player in global chip production, is set to manufacture these custom chips, providing OpenAI with production capacity and expertise.
According to the sources, the chip development effort will be spearheaded by a small team of about 20 engineers, including ex-Google engineers who worked on its custom Tensor Processing Units (TPUs). With a timeline stretching toward 2026, the team will develop custom chips tailored to inference tasks—the AI processes in which the system makes predictions or decisions in real time.
While today’s greatest demand may lie in training chips, analysts anticipate a major shift as AI adoption widens. They expect the demand for these inference chips will skyrocket as companies increasingly embed AI into customer-facing products and services. By planning ahead, OpenAI is aiming to secure a robust supply of these chips in anticipation of the shifting market needs.
Alongside these new partnerships, OpenAI has also diversified its suppliers by integrating AMD chips into its operations, complementing its ongoing reliance on Nvidia.
If successful, OpenAI’s chip strategy will place it among a list of tech giants, including Google and Amazon, that have ventured into custom chip design to better control the technology underlying their businesses. It could also spell bad news for its major GPU supplier, Nvidia, which holds over 80% of the AI chip market.
Fresh off a $6.6 billion raise that lifted its valuation to $157 billion, OpenAI is showing how, despite its startup roots, it is exploring first-party technology to drive its AI advancements and compete with the likes of Amazon, Meta, Google, and Microsoft in the high-stakes AI race.