OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambitions

OpenAI is working with Broadcom and TSMC to build its first in-house chip designed to power its artificial intelligence systems, while adding AMD chips alongside Nvidia chips to meet rising infrastructure demands, sources told Reuters.

OpenAI, the fast-growing company behind ChatGPT, has been exploring a range of options to diversify its chip supply and reduce costs. It considered building everything in-house and raising capital for an expensive plan to build a network of chip factories known as “foundries”.

The company has abandoned those ambitious foundry plans for now because of the cost and time required to build such a network, and will instead focus on in-house chip design efforts, said the sources, who requested anonymity because they were not authorized to discuss private business matters.

The company’s strategy, detailed here for the first time, highlights how the Silicon Valley startup is using industry partnerships and a mix of internal and external approaches to secure chip supply and control costs, much like larger rivals Amazon, Meta, Google and Microsoft. Because OpenAI is one of the largest buyers of chips, its decision to source from a wide range of chip makers while developing its custom chip could have broader implications for the technology sector.

Shares of Broadcom rose after the report, ending Tuesday’s trading more than 4.5 percent higher. AMD shares also extended their gains from the morning session, ending the day up 3.7 percent.

OpenAI, AMD and TSMC declined to comment. Broadcom did not immediately respond to a request for comment.

OpenAI, which has helped commercialize generative AI that produces human-like answers to questions, relies on significant computing power to train and run its systems. As one of the largest buyers of Nvidia’s graphics processing units (GPUs), OpenAI uses AI chips both to train models, where the AI learns from data, and for inference, where the AI is applied to make predictions or decisions based on new information.

Reuters previously reported on OpenAI’s chip design efforts. The Information reported on discussions with Broadcom and others.

According to sources, OpenAI has been working with Broadcom for months to build its first AI chip focused on inference. Demand for training chips is currently higher, but analysts predict that the need for inference chips could surpass it as more AI applications are deployed.

Broadcom helps companies including Google, a unit of Alphabet, refine chip designs for production and also provides parts of the design that allow information to be moved quickly to and from the chips. This is important in AI systems, where tens of thousands of chips are strung together to work in tandem.

OpenAI is still deciding whether to develop or acquire other elements for its chip design, and may bring in additional partners, two sources said.

The company has assembled a chip team of about 20 people, led by top engineers who previously built Tensor Processing Units (TPUs) at Google, including Thomas Norrie and Richard Ho.

Sources said OpenAI has secured production capacity from Taiwan Semiconductor Manufacturing Company through Broadcom to make its first custom-designed chip in 2026. They said the timeline could change.

Nvidia GPUs currently hold more than 80 percent of the market. But shortages and rising costs have led major customers like Microsoft, Meta and now OpenAI to explore in-house or external alternatives.

OpenAI’s planned use of AMD chips via Microsoft’s Azure, first reported here, shows how AMD is seeking to capture a slice of the Nvidia-dominated market with its new MI300X chips. AMD expects $4.5 billion in AI chip sales in 2024, following the chip’s launch in the fourth quarter of 2023.

Training AI models and operating services like ChatGPT is expensive. According to sources, OpenAI expects a loss of $5 billion this year on revenue of $3.7 billion. Computing costs, or the expense of the hardware, electricity and cloud services needed to process large data sets and develop models, are the company’s largest outlay, prompting efforts to optimize usage and diversify suppliers.

OpenAI has been cautious about poaching talent from Nvidia as it wants to maintain a good relationship with the chipmaker it continues to work with, especially for access to next-generation Blackwell chips, sources said.

Nvidia declined to comment.

© Thomson Reuters 2024
