Jaohni t1_irdfkqr wrote
Reply to comment by oscardssmith in Samsung announces 36 Gbps GDDR7 memory standard, aims to release V-NAND storage solutions with 1000 layers by 2030 by Avieshek
So, imagine you have one lane to transfer data from memory to a processor. You're going to clock that lane as fast as you possibly can, right? That also gives it the lowest latency possible. But if you add a second lane, you might not be able to fully double the bandwidth, because you may not be able to clock both lanes as high as you could clock just one. Maybe you get 1.8 or 1.9x the bandwidth of a single lane, at the cost of slightly higher latency, say 1.1x.
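If it helps, here's a minimal sketch of that scaling math, assuming a made-up clock derate per extra lane (the derate and the resulting numbers are illustrative, not measured figures):

```python
# Minimal sketch of the lane/clock tradeoff described above.
# The 5% derate per extra lane is an illustrative assumption.

def scale_lanes(lanes, base_clock=1.0, derate=0.95):
    """Each extra lane forces a slightly lower clock on every lane,
    so bandwidth scales sub-linearly and latency creeps up."""
    clock = base_clock * (derate ** (lanes - 1))   # per-lane clock after derating
    bandwidth = lanes * clock                      # total transfers per unit time
    latency = base_clock / clock                   # relative to the single-lane case
    return bandwidth, latency

print(scale_lanes(1))  # (1.0, 1.0)   -> baseline
print(scale_lanes(2))  # (1.9, ~1.05) -> ~1.9x bandwidth, slightly higher latency
```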
The same basic idea applies to HBM versus GDDR. GDDR essentially has overclocked interconnects to hit certain bandwidth targets, and as a consequence it has lower latency. With HBM it's difficult to clock that many interconnects at the same high frequency, so you get higher total bandwidth but higher latency overall. And because pushing lanes to those very high clocks is power-inefficient, HBM usually ends up being the less power-hungry option.
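To put rough numbers on the "narrow-and-fast vs wide-and-slow" split, the usual bus-width times per-pin-rate arithmetic looks like this (the device widths and per-pin rates below are ballpark figures for illustration, not exact spec quotes):

```python
# Rough peak-bandwidth arithmetic: bus width (pins) x per-pin rate (Gb/s) / 8.
# Widths and per-pin rates are illustrative ballpark figures.

def peak_bandwidth_GBps(pins, gbps_per_pin):
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return pins * gbps_per_pin / 8

# A single GDDR7 device: narrow 32-bit bus, very high per-pin rate.
gddr7 = peak_bandwidth_GBps(pins=32, gbps_per_pin=36)      # ~144 GB/s per chip

# A single HBM stack: very wide 1024-bit bus, much lower per-pin rate.
hbm = peak_bandwidth_GBps(pins=1024, gbps_per_pin=6.4)     # ~819 GB/s per stack

print(gddr7, hbm)
```

Same formula, opposite strategies: GDDR wins per pin by clocking hard, HBM wins per stack by going wide at a gentler clock.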