AMD Set to Unleash High Bandwidth Memory

The super-fast stacked-chip technology for GPUs may put Nvidia on the defensive.

May 19, 2015

Advanced Micro Devices on Tuesday revealed some details about its High Bandwidth Memory (HBM) interface for GPUs, beginning with the company's next-gen Radeon 300-series products and eventually extending to APUs that combine central and graphics processing.

AMD chief technology officer Joe Macri said the technology was seven years in the making, according to PCWorld. The big bonus for the chip maker: rival Nvidia is "at least a year behind" on HBM, the site quoted Macri as saying.

The technology involves "stacking" RAM chips to provide up to three times the performance of the conventionally arranged GDDR5 memory that GPUs use today. AMD can also squeeze far more HBM into the space taken up by GDDR5: 1GB of HBM fits onto a 5mm-by-7mm cell, whereas 1GB of GDDR5 needs a 24mm-by-28mm footprint.
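
A quick sanity check on those footprint figures: the quoted cells work out to 35 square millimeters for HBM versus 672 for GDDR5, or roughly a 19x area savings per gigabyte. A minimal Python sketch of the arithmetic (the variable names are ours, not AMD's):

```python
# Back-of-the-envelope check of the footprint figures quoted above:
# 5mm x 7mm for 1GB of HBM vs. 24mm x 28mm for 1GB of GDDR5.

hbm_area_mm2 = 5 * 7      # 35 mm^2 per GB of HBM
gddr5_area_mm2 = 24 * 28  # 672 mm^2 per GB of GDDR5

print(f"HBM footprint:   {hbm_area_mm2} mm^2 per GB")
print(f"GDDR5 footprint: {gddr5_area_mm2} mm^2 per GB")
print(f"Area savings:    ~{gddr5_area_mm2 / hbm_area_mm2:.0f}x")  # ~19x
```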

Patrick Moorhead, principal analyst for Moor Insights & Strategy, said AMD's new memory technology should prove to be an almost immediate advantage for the company in the consumer market.

"AMD has worked for years to make HBM a reality, and I believe they will have a time-to-market advantage on end products with the technology," Moorhead said. "HBM helps solve what will become a big issue in graphics—scaling memory performance and power.

"I see applicability in graphics cards, PC APUs, and even some enterprise server workloads. The software is here today for the consumer workloads, but AMD or its partners will need to work hard to make a dent in the HPC market."

So how does HBM work? The new memory interface required a new specification and the development of a "new type of memory chip with low power consumption and an ultra-wide bus width," which AMD accomplished in collaboration with Hynix and other partners, Hot Hardware reported.

"HBM DRAM chips are stacked vertically, and 'through-silicon vias' (TSVs) and 'μbumps' are used to connect one DRAM chip to the next, and then to a logic die, and ultimately the interposer. TSVs and μbumps are also used to connect the SoC/GPU to the interposer and the entire assembly is connected onto the same package substrate. The end result is a single package on which the GPU/SoC and High Bandwidth Memory both reside," the site explained.

About Damon Poeter

Reporter

Damon Poeter got his start in journalism working for the English-language daily newspaper The Nation in Bangkok, Thailand. He covered everything from local news to sports and entertainment before settling on technology in the mid-2000s. Prior to joining PCMag, Damon worked at CRN and the Gilroy Dispatch. He has also written for the San Francisco Chronicle and Japan Times, among other newspapers and periodicals.
