The doorway to deep learning.

HIGH BANDWIDTH MEMORY


What is HBM? High bandwidth memory (HBM) is a high-speed memory technology that uses micro-pillar grid array (MPGA) structures to combine the storage capacity and bandwidth of multiple interconnected dynamic random-access memory (DRAM) chips.

Why is HBM needed? Many computing applications today are constrained by insufficient memory capacity and bandwidth – especially in high-performance computing areas such as artificial intelligence and data analytics. HBM technology is helping to reduce those memory constraints, letting organizations squeeze more productivity from their information technology investments.

High bandwidth history The Joint Electron Device Engineering Council (JEDEC) adopted the original HBM standard in 2013, and accepted the second-generation HBM2 version of the technology as an industry standard in January 2016.

Stacked memory The HBM2 standard allows up to eight interconnected 8-gigabit (Gb) DRAM chips (also called dies) to be stacked on top of each other, like layers in a cake. Each stack can store up to eight gigabytes (GB) of data, and multiple stacks can be combined on the same multi-chip package.
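As a quick sanity check on that capacity arithmetic, here is a minimal sketch in Python (the variable names and the four-stack package example are illustrative, not part of any HBM specification):

```python
# Capacity of one HBM2 stack, per the figures quoted above.
DIE_CAPACITY_GBIT = 8    # each DRAM die stores 8 gigabits
DIES_PER_STACK = 8       # HBM2 allows up to eight dies per stack

stack_capacity_gbit = DIE_CAPACITY_GBIT * DIES_PER_STACK  # 64 Gb
stack_capacity_gbyte = stack_capacity_gbit / 8            # 8 GB per stack

# Multiple stacks can share one multi-chip package; e.g. a
# hypothetical package with four stacks would hold 4 * 8 = 32 GB.
package_capacity_gbyte = 4 * stack_capacity_gbyte

print(stack_capacity_gbyte, package_capacity_gbyte)       # 8.0 32.0
```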

40,000 tiny holes Each 8Gb HBM2 die contains more than 5,000 "through-silicon vias" (TSVs) – tiny copper-filled holes that act as wires connecting one die to the next. A full stack of eight connected 8Gb HBM2 chips uses more than 40,000 TSVs. Most HBM2 devices also include spare TSVs that can be switched in if data transmissions are delayed on other TSV data paths.
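The stack-level TSV total follows directly from the per-die count. The sketch below checks that arithmetic and adds a purely hypothetical picture of spare-TSV substitution; the remapping function is an illustration of the idea, not logic from any HBM datasheet:

```python
TSVS_PER_DIE = 5_000   # each 8Gb HBM2 die has more than 5,000 TSVs
DIES_PER_STACK = 8

total_tsvs = TSVS_PER_DIE * DIES_PER_STACK   # matches "more than 40,000"

def route_lane(lane: int, spare_map: dict[int, int]) -> int:
    """Return the physical lane to use, rerouting to a spare if one is mapped."""
    return spare_map.get(lane, lane)

print(total_tsvs)                  # 40000
# e.g. lane 17 found slow at test time, remapped to hypothetical spare 5000:
print(route_lane(17, {17: 5000}))  # 5000
```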

Performance benefits HBM2's performance benefits include...

• Capacity. Each 8-chip memory stack holds up to 8 GB of data.
• Wide interface. A 1024-bit input/output interface provides a wide path for moving data between memory and processor.
• Speed. Each 8-chip memory stack can move up to 256 GB of data per second – more than 7x the maximum bandwidth of conventional GDDR5 DRAM chips (see the sketch after this list).
• Energy savings. HBM memory achieves more than 3x the bandwidth per watt of GDDR5 DRAM. Increased data throughput at a lower frequency means less energy is required to do the same work.
• Space savings. Stacking multiple memory chips, and bringing the resulting stacks closer together, significantly reduces the amount of circuit-board space needed.
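The interface width and the peak bandwidth quoted above together imply a per-pin data rate; a minimal back-of-the-envelope sketch deriving it (pure arithmetic on the quoted numbers – the 32-bit narrow-bus comparison is an illustrative GDDR5-style figure, not a quoted one):

```python
INTERFACE_WIDTH_BITS = 1024   # I/O pins per HBM2 stack
PEAK_BANDWIDTH_GB_S = 256     # quoted peak per stack, in GB/s

# 256 GB/s * 8 bits/byte / 1024 pins = 2 Gb/s per pin.
per_pin_gbit_s = PEAK_BANDWIDTH_GB_S * 8 / INTERFACE_WIDTH_BITS
print(per_pin_gbit_s)   # 2.0

# The wide, slower interface is the energy story: a narrow bus would
# need a far higher (and hungrier) per-pin rate for the same throughput.
narrow_bus_bits = 32    # e.g. a single GDDR5-style 32-bit chip interface
print(PEAK_BANDWIDTH_GB_S * 8 / narrow_bus_bits)   # 64.0 Gb/s per pin
```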

Potential markets HBM2 DRAM is well suited for artificial intelligence (AI) applications and a variety of other high-performance computing tasks, including...

• AI machine learning and deep learning
• Autonomous vehicle development
• Gaming
• Computer vision
• Advanced graphics rendering
• Data analytics
• Server networks

Initial customers HBM technology was originally developed for graphics companies such as AMD and Nvidia, but it is gaining favor in many applications where fast data movement is necessary. Initial HBM2 customers and their products include…

• AMD Vega-based accelerators and graphics cards
• Nvidia P100- and V100-based Tesla accelerators and Titan V graphics cards
• Intel Kaby Lake G Core i7 processors for notebooks
• Intel Stratix 10 MX FPGAs for accelerators