You will not believe how car battery technology drives the future of Google's AI supercomputers
- Liquid cooling is no longer optional; it is the only way to survive AI's thermal onslaught
- The leap to 400VDC borrows heavily from electric vehicle supply chains and design logic
- Google's TPU supercomputers now run at gigawatt scale with 99.999% uptime
As demand for artificial intelligence workloads intensifies, the physical infrastructure of data centers is undergoing a rapid and radical transformation.
The likes of Google, Microsoft and Meta are now turning to technologies originally developed for electric vehicles (EVs), in particular 400VDC systems, to tackle the twin challenges of high-density power delivery and thermal management.
The emerging vision is of data center racks capable of delivering up to 1 megawatt of power, paired with liquid cooling systems designed to manage the resulting heat.
Borrowing EV technology to evolve the data center
The shift to 400VDC power distribution marks a decisive break with legacy systems. Google previously championed the industry's move from 12VDC to 48VDC, but the current transition to +/-400VDC is enabled by EV supply chains and driven by necessity.
The Mt. Diablo initiative, supported by Meta, Microsoft and the Open Compute Project (OCP), is intended to standardize interfaces at this voltage level.
Google says this architecture is a pragmatic move that frees up valuable rack space for compute resources by decoupling power delivery from the IT racks into AC-to-DC sidecar units. It also improves end-to-end efficiency by around 3%.
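To see why the jump in distribution voltage matters, a back-of-the-envelope sketch helps: for the same power, a higher voltage means proportionally less current, and resistive losses fall with the square of that current. The numbers below (rack power taken from the article's 1 MW vision, an assumed bus resistance, and +/-400VDC treated as 800 V pole-to-pole) are illustrative assumptions, not figures from Google's design.

```python
# Back-of-the-envelope: why higher distribution voltage matters.
# All numbers are illustrative assumptions, not Google design figures.

RACK_POWER_W = 1_000_000      # the article's 1 MW end-state rack
BUS_RESISTANCE_OHM = 0.0001   # assumed effective resistance of the distribution path

def distribution_loss_w(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive (I^2 * R) loss in the distribution path at a given voltage."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

for label, volts in [("48 VDC", 48.0), ("+/-400 VDC (800 V pole-to-pole)", 800.0)]:
    current_a = RACK_POWER_W / volts
    loss_w = distribution_loss_w(RACK_POWER_W, volts, BUS_RESISTANCE_OHM)
    print(f"{label}: {current_a:,.0f} A, I^2R loss ~{loss_w / 1000:.1f} kW "
          f"({loss_w / RACK_POWER_W:.2%} of rack power)")
```

The exact numbers depend entirely on conductor sizing and cable runs, but the square-law relationship is what makes a higher distribution voltage attractive once rack power heads toward the megawatt range.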
However, cooling has become an urgent problem. With next-generation chips each consuming more than 1,000 watts, traditional air cooling is quickly becoming obsolete.
Liquid cooling has emerged as the only scalable solution for managing heat in high-density compute.
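A rough comparison of coolant properties shows why air runs out of headroom. The sketch below estimates the coolant volume needed to carry away 1,000 W (one next-generation chip, per the article) at an assumed 10 K coolant temperature rise; the material properties are textbook values and the temperature rise is purely an illustrative assumption.

```python
# Rough coolant comparison: volume needed to remove 1,000 W (one next-generation
# chip, per the article) at an assumed 10 K coolant temperature rise.
# Material properties are textbook values; the temperature rise is an assumption.

HEAT_LOAD_W = 1_000
DELTA_T_K = 10.0

COOLANTS = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "air":   (1.2,   1005.0),
    "water": (997.0, 4182.0),
}

def volumetric_flow_l_per_s(heat_w: float, density: float, cp: float, dt: float) -> float:
    """Volume flow in litres/second from Q = m_dot * c_p * delta_T."""
    mass_flow_kg_s = heat_w / (cp * dt)
    return mass_flow_kg_s / density * 1000.0  # m^3/s -> L/s

flows = {name: volumetric_flow_l_per_s(HEAT_LOAD_W, rho, cp, DELTA_T_K)
         for name, (rho, cp) in COOLANTS.items()}

for name, flow in flows.items():
    print(f"{name}: {flow:.3f} L/s")
print(f"air needs roughly {flows['air'] / flows['water']:,.0f}x the volume of water")
```

Under these assumptions air needs thousands of times the volume of water to move the same heat, which is why cold plates rather than fans become the practical option as per-chip power climbs past 1 kW.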
Google has embraced this approach with full-scale deployments; its liquid-cooled TPU pods now run at gigawatt scale and have delivered 99.999% uptime over the past seven years.
These systems replace bulky heat sinks with compact cold plates, effectively halving the physical footprint of server hardware and quadrupling compute density compared with previous generations.
Despite these technical achievements, however, some skepticism is justified. The push towards 1 MW racks rests on the assumption of continuously rising demand, a trend that may not materialize as expected.
While Google's roadmap emphasizes the growing power needs of AI – projecting more than 500 kW per rack by 2030 – it remains uncertain whether these projections will hold across the wider market.
It is also worth noting that integrating EV-derived technologies into data centers brings not only efficiency gains but also new complexities, particularly around safety and serviceability at high voltages.
Nevertheless, the collaboration between hyperscalers and the open hardware community signals a shared recognition that existing paradigms are no longer sufficient.