- OpenAI adds Google TPUs to reduce its dependence on Nvidia GPUs
- TPU adoption highlights OpenAI's push to diversify its compute options
- Google Cloud wins OpenAI as a customer despite competitive dynamics
OpenAI is reportedly going to use Google's Tensor Processing Units (TPUs) to power ChatGPT and other products.
A Reuters report, citing a source familiar with the move, notes that this is OpenAI's first major shift away from Nvidia hardware, which has formed the backbone of its compute stack so far.
Google leases TPUs through its cloud platform, and OpenAI joins a growing list of external customers that includes Apple, Anthropic, and Safe Superintelligence.
Nvidia not abandoned
Although the chips being rented are not Google's most advanced TPU models, the agreement reflects OpenAI's efforts to reduce inference costs and diversify beyond both Nvidia and Microsoft Azure.
The decision comes as inference demand grows alongside ChatGPT usage, which now exceeds 100 million daily active users.
That demand represents a significant share of OpenAI's estimated annual compute budget.
Google's v6e "Trillium" TPUs are built for steady-state inference and offer high throughput with lower operational costs compared to top-end GPUs.
Although Google declined to comment and OpenAI did not immediately respond to Reuters, the arrangement suggests a broadening of infrastructure options.
OpenAI continues to rely on Microsoft's Azure for most of its deployment (Microsoft is, after all, the company's largest investor), but supply problems and pricing pressure around GPUs have exposed the risks of depending on a single supplier.
By bringing Google into the mix, OpenAI not only improves its ability to scale compute, it also aligns with a wider industry trend toward mixing hardware sources for flexibility and pricing leverage.
There is no suggestion that OpenAI is considering abandoning Nvidia, but adding Google's TPUs gives it more control over costs and availability.
How deeply OpenAI can integrate this hardware into its stack remains to be seen, especially given its long-standing reliance on Nvidia's CUDA software ecosystem and tooling.