South Korean memory giant SK Hynix has announced that it has begun mass production of the world's first 12-layer HBM3E, featuring a total memory capacity of 36GB, a sizable increase from the previous 24GB capacity of the 8-layer configuration.
This new design was made possible by reducing the thickness of each DRAM chip by 40%, allowing more layers to be stacked while maintaining the same overall package size. The company plans to begin volume shipments by the end of 2024.
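A quick back-of-the-envelope sketch shows how those numbers fit together. The per-die capacity below follows from the stated 24GB/8-layer figure; the absolute thickness unit is purely illustrative, not a published measurement:

```python
# Stacking arithmetic implied by the article (illustrative units).
GB_PER_DIE = 24 / 8          # 24GB across 8 layers -> 3GB (24Gb) per die

old_stack_gb = 8 * GB_PER_DIE    # 24GB (8-layer HBM3E)
new_stack_gb = 12 * GB_PER_DIE   # 36GB (12-layer HBM3E)

OLD_DIE_THICKNESS = 1.0                              # arbitrary unit
NEW_DIE_THICKNESS = OLD_DIE_THICKNESS * (1 - 0.40)   # dies 40% thinner

old_stack_height = 8 * OLD_DIE_THICKNESS     # 8.0 units
new_stack_height = 12 * NEW_DIE_THICKNESS    # 7.2 units: 12 layers fit
                                             # in roughly the same envelope

print(f"capacity: {old_stack_gb:.0f}GB -> {new_stack_gb:.0f}GB")
print(f"relative stack height: {old_stack_height:.1f} -> {new_stack_height:.1f}")
```

In other words, twelve dies at 60% of the old thickness stack slightly shorter than eight dies at full thickness, which is what lets the new part keep the same footprint.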
The HBM3E memory supports a transfer rate of 9600 MT/s, translating to an effective bandwidth of about 1.22 TB/s per stack. This makes it well suited to large language models (LLMs) and AI workloads that demand both speed and high capacity, since the ability to move more data at faster rates allows AI models to run more efficiently.
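The quoted bandwidth is easy to verify. The sketch below assumes the standard 1024-bit HBM interface per stack, which is a property of the HBM standard rather than something stated in the article:

```python
# Per-stack bandwidth implied by the quoted transfer rate.
TRANSFER_RATE_MTS = 9600      # mega-transfers per second, per pin
BUS_WIDTH_BITS = 1024         # standard HBM interface width per stack

bytes_per_transfer = BUS_WIDTH_BITS / 8               # 128 bytes
bandwidth_mb_s = TRANSFER_RATE_MTS * bytes_per_transfer
bandwidth_tb_s = bandwidth_mb_s / 1_000_000

print(f"{bandwidth_tb_s:.4f} TB/s per stack")   # 1.2288 -> the quoted 1.22 TB/s
```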
Nvidia and AMD hardware
For advanced memory stacking, SK Hynix employs innovative packaging technologies, including Through-Silicon Via (TSV) and the Mass Reflow Molded Underfill (MR-MUF) process. These techniques are essential for maintaining the structural integrity and heat dissipation required for stable, high-performance operation in the new HBM3E. The improvements in heat dissipation are particularly valuable for preserving reliability during intensive AI processing tasks.
In addition to its increased speed and capacity, the HBM3E is designed to offer enhanced stability, with SK Hynix's proprietary packaging processes ensuring minimal warpage during stacking. The company's MR-MUF technology allows for better management of internal stress, reducing the likelihood of mechanical failure and ensuring long-term durability.
Early sampling of the 12-layer HBM3E product began in March 2024, with Nvidia's Blackwell Ultra GPUs and AMD's Instinct MI325X accelerators expected to be among the first to use the enhanced memory, taking advantage of up to 288GB of HBM3E to support complex AI computations. SK Hynix recently rejected a $374 million advance payment from an unknown company in order to ensure it could supply Nvidia with enough HBM for its in-demand AI hardware.
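For context on the 288GB figure, here is the platform-level arithmetic, assuming eight HBM3E sites per accelerator (typical for this class of flagship AI GPU, but an assumption rather than something the article states). The aggregate bandwidth line is a theoretical ceiling at the quoted 9.6 GT/s; shipping products may clock lower:

```python
# How the 288GB figure decomposes, assuming eight HBM3E sites per
# accelerator (an assumption, not stated in the article).
STACKS_PER_GPU = 8
GB_PER_STACK = 36                  # 12-layer stack from the article
TB_S_PER_STACK = 1.2288            # per-stack bandwidth computed above

total_capacity_gb = STACKS_PER_GPU * GB_PER_STACK      # 288GB
peak_bandwidth_tb_s = STACKS_PER_GPU * TB_S_PER_STACK  # ~9.8 TB/s ceiling

print(f"{total_capacity_gb}GB, up to {peak_bandwidth_tb_s:.1f} TB/s aggregate")
```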
“SK Hynix has once again broken through technological limits, demonstrating our industry leadership in AI memory,” said Justin Kim, President (Head of AI Infra) at SK Hynix. “We will continue our position as the No. 1 global AI memory provider as we steadily prepare next-generation memory products to overcome the challenges of the AI era.”