OpenAI partners with Broadcom on 10 gigawatts of custom chips, challenging Nvidia.

With a 10-gigawatt custom chip pact, OpenAI confronts AI's energy crisis and redefines the hardware battleground.

October 13, 2025

In a strategic move to secure its computational future, OpenAI is partnering with semiconductor giant Broadcom to co-develop and deploy a staggering 10 gigawatts of custom artificial intelligence accelerators.[1][2][3] This multi-year agreement, which will see OpenAI-designed chips manufactured and rolled out by Broadcom, represents a massive bet on bespoke hardware as the key to unlocking the next generation of artificial intelligence.[2][4] The collaboration underscores the immense and growing demand for specialized computing power, a demand that is reshaping the technology landscape and putting a strain on global energy resources. For OpenAI, the creator of ChatGPT, this partnership is a critical step in its ambitious quest to build artificial general intelligence, a pursuit that requires an almost unimaginable scale of computational power.
The partnership between OpenAI and Broadcom is a multi-billion dollar endeavor, with deployments of the new AI accelerator and network systems slated to begin in the second half of 2026 and conclude by the end of 2029.[1][2][5] Under the terms of the agreement, OpenAI will take the lead on designing the accelerators and the overall systems, allowing the company to embed its deep understanding of AI models directly into the hardware.[2][3] Broadcom, with its extensive experience in silicon design and manufacturing, will then develop and deploy these custom systems.[1][2] This collaboration is the culmination of over 18 months of joint work between the two companies.[1][6] The resulting infrastructure will be scaled using Broadcom's Ethernet and other connectivity solutions, highlighting a move towards tightly integrated and optimized AI data centers.[2][7] The deal significantly expands OpenAI's committed hardware capacity, which now totals an estimated 26 gigawatts when including existing partnerships with Nvidia and a recent 6-gigawatt deal with AMD.[1]
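The article's 26-gigawatt estimate follows directly from adding the commitments it names. A minimal sketch of that arithmetic, using only the figures reported here (the per-deal labels are illustrative groupings, not official deal names):

```python
# Committed AI compute capacity as reported in this article, in gigawatts.
# Labels are illustrative; figures come from the article's own numbers.
committed_gw = {
    "Broadcom (custom accelerators)": 10,
    "Nvidia (existing partnership)": 10,
    "AMD (recent 6 GW deal)": 6,
}

total_gw = sum(committed_gw.values())
print(f"Total committed capacity: {total_gw} GW")  # prints 26 GW
```

The sum matches the article's "estimated 26 gigawatts" of total committed hardware capacity.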
This move toward custom silicon is a strategic imperative for OpenAI as it seeks to gain a competitive edge and reduce its reliance on third-party chip suppliers, particularly the dominant market leader, Nvidia.[8][9] By designing its own chips, OpenAI can create hardware specifically tailored to the unique workloads of its large language models, potentially leading to significant improvements in performance, energy efficiency, and cost-effectiveness.[10][8] OpenAI's CEO, Sam Altman, has emphasized that developing their own accelerators is a critical step in building the necessary infrastructure to realize the full potential of AI.[2][3] This vertical integration of hardware and software is a strategy also being pursued by other tech giants like Google, Amazon, and Microsoft, all of whom are developing their own custom AI chips to optimize their AI services and control their technological roadmaps.[11][12]
The OpenAI-Broadcom partnership sends a clear signal to the broader AI hardware industry that the era of near-total reliance on general-purpose GPUs from a single vendor may be coming to an end. While Nvidia still holds a commanding market share of over 80% in the AI chip market, the increasing trend of in-house chip design by major AI players presents a long-term challenge to its dominance.[13][14][15] Broadcom, already a major player in custom silicon for hyperscale customers like Google, stands to benefit significantly from this trend.[1] The deal with OpenAI solidifies Broadcom's position as a key enabler of the AI revolution and a viable alternative for companies seeking bespoke hardware solutions. For the broader market, this increased competition could lead to greater innovation, more diverse hardware options, and potentially lower costs for AI compute in the long run.
The sheer scale of the 10-gigawatt figure highlights the astronomical energy demands of modern artificial intelligence. A single gigawatt can power a large city, and the computational infrastructure required to train and run frontier AI models consumes vast amounts of electricity.[12] The International Energy Agency projects that electricity demand from data centers worldwide is set to more than double by 2030, with AI being the most significant driver of this increase.[16] In the United States alone, data centers are projected to account for nearly half of the growth in electricity demand over the next five years.[16] OpenAI's massive hardware acquisitions, including the 10 gigawatts from Broadcom and another 10 gigawatts from Nvidia, underscore the reality that access to and the ability to power massive data centers is becoming a primary bottleneck in the advancement of AI.[17] This insatiable appetite for power raises significant questions about the sustainability of the AI industry and is forcing a reckoning with the need for more energy-efficient hardware and greener energy sources to power the data centers of the future.[18]
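To put the 10-gigawatt figure in rough perspective, a back-of-the-envelope conversion to household equivalents can be sketched as follows. The 1.2 kW figure is an assumption, a commonly cited approximation of the continuous average electricity draw of a US household (about 10,500 kWh per year), not a number from this article:

```python
# Rough scale check (assumption: an average US household draws about
# 1.2 kW continuously, i.e. roughly 10,500 kWh per year).
WATTS_PER_GW = 1_000_000_000
avg_household_watts = 1_200  # assumed continuous draw per household

deployment_watts = 10 * WATTS_PER_GW  # the 10 GW Broadcom deployment
households = deployment_watts / avg_household_watts
print(f"~{households / 1e6:.1f} million households")  # ~8.3 million
```

Under that assumption, 10 gigawatts of sustained draw is on the order of eight million homes, which is why a single gigawatt is often described as enough to power a large city.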
In conclusion, OpenAI's 10-gigawatt pact with Broadcom is more than just a large-scale hardware deal; it is a declaration of intent and a reflection of the fundamental shifts occurring in the AI industry. It showcases OpenAI's aggressive strategy to secure the computational resources necessary to stay at the forefront of AI research and development. The move towards custom silicon, mirrored by other tech giants, signals a diversification of the AI hardware market and a growing challenge to Nvidia's long-held dominance. Perhaps most profoundly, the immense scale of this partnership casts a stark light on the escalating energy requirements of artificial intelligence, presenting a critical challenge for the technology industry and society as a whole to address as we move further into an AI-powered future. The race to out-compute everyone is on, and it is being measured in gigawatts.
