OpenAI is pushing ahead with its plan to reduce its dependence on Nvidia for chip supply by developing its first generation of in-house artificial intelligence silicon.
OpenAI’s AI Chip to Be Manufactured by TSMC
Sources told Reuters that the ChatGPT maker is finalizing the design for its first in-house chip and plans to send it to Taiwan Semiconductor Manufacturing Co (2330.TW) for fabrication in the coming months. Sending a finished design to a chip factory is a process known as “taping out.”
The update indicates that OpenAI is on track to meet its ambitious goal of mass production at TSMC in 2026. A typical tape-out costs tens of millions of dollars, and it takes roughly six months to produce a finished chip, unless OpenAI pays a substantial premium for expedited manufacturing.
There is no guarantee the silicon will work on the first tape-out; a failure would require the company to diagnose the problem and repeat the tape-out process.
According to the sources, the training-focused chip is viewed within OpenAI as a strategic tool that strengthens the company’s bargaining power with other chip suppliers. After the initial device, OpenAI’s engineers plan to build progressively more capable processors with each new iteration.
If the initial tape-out goes smoothly, the ChatGPT maker could test its first alternative to Nvidia’s chips later this year and move toward mass-producing its first in-house AI chip.
OpenAI’s plan to send its design to TSMC this year shows the startup’s rapid progress on its first design, a process that can take other chip designers years to complete.
Despite years of effort, major technology companies such as Microsoft and Meta have struggled to produce chips that meet their standards. And the recent market rout triggered by Chinese AI startup DeepSeek has raised questions about whether fewer chips may be needed to develop powerful models in the future.
OpenAI’s in-house team, led by Richard Ho, has doubled in size to 40 people in the past few months and is developing the chip in collaboration with Broadcom (AVGO.O).
Ho joined OpenAI more than a year ago from Alphabet’s Google, where he served as director of the search giant’s custom AI chip program. Reuters first reported OpenAI’s plans to work with Broadcom last year.
Nvidia’s Dominance in the AI Chip Market and Rising Alternatives
Ho’s team is much smaller than the large-scale efforts at tech giants such as Amazon or Google. Industry sources familiar with chip design budgets said a single iteration of a new chip design for an ambitious, large-scale program could cost $500 million, and building the necessary software and peripherals around it could double that figure.
Generative AI model makers, including OpenAI, Google, and Meta, have shown that their models become more capable as growing numbers of chips are strung together in data centers, leaving them with an insatiable demand for the chips.
Microsoft has declared that it will allocate $80 billion in 2025, while Meta has committed to investing $60 billion in AI infrastructure within the next year.
Nvidia’s processors are currently the most widely used, accounting for roughly 80% of the market. OpenAI itself is participating in the $500 billion Stargate infrastructure program, which U.S. President Donald Trump announced last month.
However, the increasing costs and reliance on a single supplier have prompted major customers, including Microsoft, Meta, and now OpenAI, to investigate in-house or external alternatives to Nvidia’s processors.
The sources said that while OpenAI’s in-house AI chip is capable of both training and running AI models, it will initially be deployed on a limited scale, primarily for running models, and will play a restricted role in the company’s infrastructure.
OpenAI would need to hire hundreds of engineers to build out a program on the scale of Google’s or Amazon’s AI chip efforts.
TSMC will manufacture OpenAI’s AI chip using its cutting-edge 3-nanometer process technology. According to the sources, the chip uses a systolic array architecture with high-bandwidth memory (HBM), which Nvidia also uses in its processors, and has extensive networking capabilities.
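For readers unfamiliar with the term, a systolic array is a grid of simple processing elements that pass data to their neighbors in lockstep, so a matrix multiplication (the core workload of AI training and inference) completes with minimal memory traffic. The sketch below is purely illustrative and has no connection to OpenAI's actual design: it simulates an output-stationary systolic array in plain Python, where each grid cell accumulates products as skewed input streams flow past it.

```python
# Illustrative only (not OpenAI's or Nvidia's design): a software
# simulation of an output-stationary systolic array computing C = A @ B.
# PE (i, j) holds accumulator C[i][j]; A values flow rightward and
# B values flow downward, skewed so matching operands meet on time.

def systolic_matmul(A, B):
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]  # one accumulator per processing element
    # At clock step t, PE (i, j) sees A[i][t-i-j] from the left and
    # B[t-i-j][j] from above; the skew (t - i - j) models wavefront timing.
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                kk = t - i - j
                if 0 <= kk < k:
                    C[i][j] += A[i][kk] * B[kk][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

In hardware, the appeal of this layout is that each operand is read from memory once and then reused as it marches across the grid, which is why designs like Google's TPU pair systolic arrays with high-bandwidth memory.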
Salman Ahmad is known for his significant contributions to esteemed publications like the Times of India and the Express Tribune. Salman has carved a niche as a freelance journalist, combining thorough research with engaging reporting.