AMD releases details of 288GB MI355X accelerator: 80% faster than MI325X, 8TB/s memory bandwidth
We already knew a lot about AMD’s next-generation accelerator, the Instinct MI325X, from a previous event in June 2024 – but the company has now revealed more at its AMD Advancing AI event.
First off, we already knew that the Instinct MI325X is a minor upgrade to the MI300X, built on the same CDNA 3 architecture but with enough extra performance to make it a viable alternative to the H200, Nvidia’s AI powerhouse.
Astute readers will also notice that AMD has reduced the onboard HBM3e capacity from 288GB to 256GB, meaning the accelerator now offers only around 80% more memory than Nvidia’s flagship, rather than the enviable 2x advantage.
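For the curious, the arithmetic behind that comparison is easy to reproduce; here is a minimal sketch, taking the H200’s publicly listed 141GB of HBM3e as the baseline:

```python
# Quick comparison against Nvidia's H200, which ships with 141GB of HBM3e.
h200_gb = 141
mi325x_gb = 256          # capacity confirmed at Advancing AI
original_claim_gb = 288  # capacity quoted back in June 2024

print(f"256GB vs H200: {(mi325x_gb / h200_gb - 1) * 100:.0f}% more memory")  # ~82% more
print(f"288GB vs H200: {original_claim_gb / h200_gb:.2f}x the capacity")     # ~2.04x
```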
Preparing the ground for the MI355X
To muddy the waters a little further, AMD also mentioned another SKU, the MI325X OAM, which, wait for it, will have 288GB of memory – we’ve asked for clarification and will update this article in due course.
AMD has provided some handpicked performance comparisons with Nvidia’s H200:
- 1.3x the inference performance on Mistral 7B at FP16
- 1.2x the inference performance on Llama 3.1 70B at FP8
- 1.4x the inference performance on Mixtral 8x7B at FP16
The company also revealed that the accelerator has 153 billion transistors, which is the same as the MI300X. The H200 has only 80 billion transistors, while Blackwell GPUs will be at the top of the scale with over 200 billion transistors.
The star of the show, however, had to be the MI355X accelerator, which was also announced at the event with a launch date in the second half of 2025. Manufactured on TSMC’s 3nm node and featuring AMD’s new CDNA 4 architecture, it introduces FP6 and FP4 formats and is expected to deliver an 80% improvement in FP16 and FP8 performance over the current MI325X.
Elsewhere, the Instinct MI355X will offer 288GB of HBM3e and 8TB/s of memory bandwidth, improvements of 12.5% and 33.3% respectively over its immediate predecessor. An 8-unit OAM platform, also launching in the second half of 2025, will offer as much as 18.5 petaflops in FP16, 37PF in FP8 and 74PF in FP6 and FP4 (or 9.3PF per OAM).
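Those percentages and the per-module figure can be sanity-checked with some quick arithmetic. A minimal sketch follows, using the 256GB / 6TB/s MI325X baseline implied above and assuming roughly 2.6 petaflops of dense FP8 throughput for the MI325X (our figure, not one stated in the announcement):

```python
# Back-of-the-envelope checks on AMD's quoted figures.
# Assumed baseline (not in the announcement itself): MI325X with 256GB of HBM3e,
# 6TB/s of memory bandwidth and ~2.6 petaflops of dense FP8 throughput.
mi325x_gb, mi325x_tbs, mi325x_fp8_pf = 256, 6, 2.6
mi355x_gb, mi355x_tbs = 288, 8

print(f"Capacity uplift:  {(mi355x_gb / mi325x_gb - 1) * 100:.1f}%")    # 12.5%
print(f"Bandwidth uplift: {(mi355x_tbs / mi325x_tbs - 1) * 100:.1f}%")  # 33.3%

# Platform figures quoted above: 8 OAMs, 37PF FP8, 74PF FP6/FP4.
modules, platform_fp8_pf, platform_fp4_pf = 8, 37, 74
print(f"FP6/FP4 per OAM:    {platform_fp4_pf / modules:.2f} PF")                  # ~9.25, rounded to 9.3
print(f"FP8 uplift per GPU: {(platform_fp8_pf / modules) / mi325x_fp8_pf:.2f}x")  # ~1.78x, i.e. roughly 80%
```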
The MI355X will compete with Nvidia’s Blackwell B100 and B200 when it launches in 2025, and will play a key role in Lisa Su’s ambition to see AMD overtake its rival.
Nvidia remains firmly in the driver’s seat with over 90% of the global AI accelerator market; at the time of writing it is the most valuable company in the world, with a share price at an all-time high and a market cap of $3.3 trillion.
AMD also unveiled its new family of EPYC 9005 series CPUs with a 192-core model costing almost $15,000.