Nvidia will be happy: Samsung’s archrival announces it has started production of HBM3E that will be used in Blackwell Ultra GPUs
South Korean memory giant SK Hynix has announced that it has begun mass production of the world’s first 12-layer HBM3E, with a total memory capacity of 36 GB, a significant increase over the 24 GB of the previous 8-layer configuration.
This new design was made possible by reducing the thickness of each DRAM chip by 40%, allowing more layers to be stacked while maintaining the same overall size. The company plans to begin volume shipments in late 2024.
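The capacity figures imply that the per-die density is unchanged between the two parts and that the extra capacity comes entirely from the additional layers. A quick sanity check of that arithmetic (the 24 Gb-per-die figure is an inference from the stated totals, not from the source):

```python
# Capacity per DRAM die is the same in both stacks; only the
# layer count differs. 3 GB per die corresponds to a 24 Gb die.
old_per_die_gb = 24 / 8    # 8-layer HBM3E stack
new_per_die_gb = 36 / 12   # 12-layer HBM3E stack

print(old_per_die_gb, new_per_die_gb)  # → 3.0 3.0
```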
The HBM3E memory runs at 9.6 GT/s per pin, which, across the 1,024-bit interface of an HBM stack, translates to an effective bandwidth of about 1.22 TB/s per stack. That combination makes it well suited to LLM and other AI workloads that demand both speed and high capacity: the ability to move more data faster allows AI models to work more efficiently.
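The quoted bandwidth follows directly from the per-pin rate and the bus width. A back-of-the-envelope check, assuming the JEDEC-standard 1,024-bit interface per HBM stack:

```python
# Sanity-check the ~1.22 TB/s figure quoted above.
transfers_per_sec = 9.6e9          # 9.6 GT/s per pin
bus_width_bits = 1024              # standard HBM stack interface width
bytes_per_transfer = bus_width_bits / 8

bandwidth_bytes_per_sec = transfers_per_sec * bytes_per_transfer
print(f"{bandwidth_bytes_per_sec / 1e12:.4f} TB/s")  # → 1.2288 TB/s
```

The exact product is 1.2288 TB/s per stack, which the article rounds to 1.22 TB/s.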
Nvidia and AMD hardware
For advanced memory stacking, SK Hynix uses innovative packaging technologies, including Through Silicon Via (TSV) and the Mass Reflow Molded Underfill (MR-MUF) process. These methods are essential for maintaining the structural integrity and heat dissipation necessary for stable, high-performance operation in the new HBM3E. The improvements in heat dissipation performance are especially important for maintaining reliability during intensive AI processing tasks.
In addition to the increased speed and capacity, the HBM3E is designed to provide improved stability, with SK Hynix’s proprietary packaging processes ensuring minimal warpage during stacking. The company’s MR-MUF technology provides better internal pressure management, reducing the chance of mechanical failure and ensuring long-term durability.
Initial sampling for this 12-layer HBM3E product began in March 2024, with Nvidia’s Blackwell Ultra GPUs and AMD’s Instinct MI325X accelerators expected to be among the first to use the improved memory, taking advantage of up to 288 GB of HBM3E to support complex AI calculations. SK Hynix recently turned down a $374 million advance from an unknown company to ensure it could supply Nvidia with enough HBM for its in-demand AI hardware.
“SK Hynix has once again broken the technology boundaries, demonstrating our industry leadership in AI memory,” said Justin Kim, president (Head of AI Infra) at SK Hynix. “We will continue our position as the No. 1 global provider of AI memory as we steadily prepare next-generation memory products to overcome the challenges of the AI era.”