AMD brings the AI networking battle to Nvidia with a new DPU launch
AMD has unveiled an upgraded data processing unit (DPU) as it looks to stake its claim to power the next generation of AI.
The new Pensando Salina DPU is the company’s third generation and promises twice the performance, bandwidth, and scale of the previous generation.
AMD says it supports 400G throughput, offering the faster data transfer speeds companies worldwide are seeking as they build more efficient infrastructure to meet AI demands.
Pensando Salina DPU
As with previous generations, AMD’s latest DPU targets two parts of the AI network: the front end, which delivers data and information to an AI cluster, and the back end, which manages data transfer between accelerators and clusters.
In addition to the Pensando Salina DPU (which handles the front end), the company also announced the AMD Pensando Pollara 400 to handle the back end.
The industry’s first Ultra Ethernet Consortium (UEC)-ready AI NIC, the Pensando Pollara 400 supports next-generation RDMA software and is backed by an open networking ecosystem, giving customers the flexibility they need for the AI era.
The AMD Pensando Salina DPU and AMD Pensando Pollara 400 are now in customer testing, with a public release planned for the first half of 2025.