Comments
AnimalNo5205 t1_ir9xnpe wrote
To head off the “but we barely have DDR5” comments, the G is important here. This is memory intended for use by graphics cards, and GDDR6 has been a thing for years now. AMD tried to move the industry towards a new standard called High Bandwidth Memory with their RX Vega products, but that effort never got anywhere.
Pavetsu t1_ir9zh28 wrote
Consoles also only use GDDR; there's no DDR in them.
Avieshek OP t1_ir9zr4m wrote
Samsung hasn't forgotten about HBM in their press release.
Techn028 t1_ira4rwk wrote
HBM was so cool on the Fury cards; my only wish is that there was more of it and that the chip could push harder.
Powerman293 t1_ira5ehs wrote
Question: what stuff aside from GPUs and game consoles uses GDDR? As a consumer it seems like these technologies only exist for those things, but clearly more stuff must use it, right?
RAZR31 t1_iragpla wrote
There is if you put the game disc in.
Jaohni t1_irajujt wrote
I wouldn't say that HBM never went anywhere; it was a high-bandwidth, higher-latency alternative to GDDR's (relatively) lower bandwidth and lower latency, which GDDR achieves by essentially overclocking its interconnects, so HBM ends up being much more power efficient. And then AMD overclocked their Vega series to the moon, but anyway...
...HBM is still alive and well, but ATM it's more commonly used in server and workstation applications, where bandwidth is worth as much as the compute in the right workload. We might actually see it in some high-end gaming GPUs in a year and a half to two and a half years, as certain incoming trends in game rendering (raytracing, machine learning, and so on) can benefit from the increased bandwidth. At least on the AMD side, though, I think they'd prefer to do 3D stacked cache: beyond having a higher effective bandwidth, it also basically improves the perceived latency, and it improves power efficiency more than HBM does.
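If it helps make that "perceived latency" point concrete, here's a minimal back-of-the-envelope sketch; the latencies and hit rate are made-up numbers for illustration, not specs of any real part:

```python
# Minimal sketch of why a big stacked cache improves *perceived* latency.
# Latency values and hit rate are assumptions for illustration only.

cache_latency_ns = 20.0   # assumed latency of a hit in the stacked cache
vram_latency_ns = 250.0   # assumed latency of going all the way out to GDDR
hit_rate = 0.5            # assumed fraction of requests served by the cache

# Average memory access time: hits are fast, misses pay the full VRAM trip.
avg_latency_ns = hit_rate * cache_latency_ns + (1 - hit_rate) * vram_latency_ns

print(f"Average access latency ~= {avg_latency_ns:.0f} ns "
      f"(vs {vram_latency_ns:.0f} ns with no cache at all)")
```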
ElXGaspeth t1_iraq1ku wrote
It's not 1000 etch cycles like you're thinking of. It will be 1000 layers of (if I remember correctly) word lines/bit lines stacked up. The cells are vertical columns that run through the whole staircase. They etch the columns for the NAND cells and the staircase for the landing contacts separately. It'll be a lot of etching, but not one etch per layer like you're picturing. More likely it'll be multiple decks of etching.
I'm a little rusty at this, though. I was mainly a DRAM guy.
owari69 t1_irasa8v wrote
I didn’t see a release date for GDDR7 in the article, so I’m guessing we’ll see the first version of it being used for the next wave of GPUs after this generation, so likely 2024. I also doubt it launches at those advertised speeds. I’d guess more like 28-30Gbps for the first version, but I’d love to be wrong.
Still, it’s good to see GDDR7 get announced. Memory bandwidth has definitely been in short supply for GPUs the last couple years, given the increasing reliance on cache to bolster effective bandwidth in the absence of big speed increases in memory itself.
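To put a rough number on the cache point, here's a quick back-of-the-envelope sketch. The bus width, speeds, and hit rate below are illustrative assumptions, not any real GPU's specs:

```python
# Back-of-the-envelope "effective bandwidth" with a big on-die cache.
# All numbers are illustrative assumptions, not real GPU specs.

vram_bw_gbps = 576.0    # e.g. a 256-bit bus of 18 Gbps GDDR6: 256 / 8 * 18
cache_bw_gbps = 2000.0  # assumed bandwidth of the on-die cache
hit_rate = 0.55         # assumed fraction of accesses served by the cache

# Requests that hit the cache are served at cache speed; the rest go to VRAM.
effective_bw = hit_rate * cache_bw_gbps + (1 - hit_rate) * vram_bw_gbps

print(f"Effective bandwidth ~= {effective_bw:.0f} GB/s "
      f"vs {vram_bw_gbps:.0f} GB/s from VRAM alone")
```

Real GPUs are obviously more complicated than a weighted average, but that's the gist of why big caches have been carrying the load while raw memory speeds stall.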
AfraidBreadfruit4 t1_irb0eqq wrote
>there's no DDR in them.

It's GDR in English /s
User9705 t1_irb8re4 wrote
Just get a 20TB HD and write NVME on the top! Problem solved 🤣 /s
6SixTy t1_irbkko9 wrote
I wouldn't call HBM as a concept an abject failure, as the tech has found a niche in Nvidia's and AMD's top-dog, price-is-no-object accelerators.
The problem was that AMD tried to sell consumer cards with the tech, and consumers don't really benefit from the high-bandwidth part of it, so all it really did at the end of the day was bump up the cost and limit VRAM amounts.
Also, TBD on new info from RDNA3, as it's supposed to include MCM.
oscardssmith t1_irdexbw wrote
As I understood it, HBM isn't higher latency. It's just more expensive. Is that incorrect?
Jaohni t1_irdfkqr wrote
So, imagine you have one lane to transfer data from memory to a processor. You're probably going to clock that lane as fast as you possibly can, right? Well, that means it'll have the lowest latency possible, too. But if you added a second lane, you might not be able to fully double the bandwidth, because you might not be able to clock both lanes as high as just the one; maybe you get 1.8 or 1.9x the bandwidth of the single lane... at the cost of slightly higher latency, say 1.1x in this case.
The same idea is basically true of HBM versus GDDR. GDDR essentially has overclocked interconnects to hit certain bandwidth targets, and as a consequence has lower latency, but with HBM it's difficult to clock all those interconnects at the same frequency, so you get higher bandwidth and higher latency overall. Because overclocking those lanes is less power efficient, though, HBM ends up being less power hungry (usually).
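Just to make that concrete, here's a toy calculation using the same made-up scaling factors from the example above (not measurements of any real GDDR or HBM parts):

```python
# Toy model of the "more lanes, lower clock" tradeoff described above.
# The 1.9x bandwidth and 1.1x latency factors are the made-up numbers
# from my example, not measurements of real GDDR/HBM parts.

one_lane_bw = 100.0   # arbitrary bandwidth units for a single fast lane
one_lane_lat = 1.0    # arbitrary latency units for a single fast lane

# Add a second lane, but back the clocks off a bit to keep it stable:
wide_bw = one_lane_bw * 1.9    # not quite 2x, since the clocks came down
wide_lat = one_lane_lat * 1.1  # each individual transfer is a bit slower

print(f"Narrow & fast: {one_lane_bw:.0f} bandwidth at {one_lane_lat:.1f} latency")
print(f"Wide & slower: {wide_bw:.0f} bandwidth at {wide_lat:.1f} latency")
```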
ChrisFromIT t1_irdfpaa wrote
The issue with HBM is that the costs are way too high, hence why you typically only find it in enterprise-grade GPUs.
Grass---Tastes_Bad t1_irdib6l wrote
Question: why does it need to be used in more than just GPUs, in your opinion? Is that somehow not enough or something?
Powerman293 t1_irdjmrn wrote
No, I am just curious. The enterprise side of computing seems very interesting to learn about. I find it can be much more interesting than the current desktop space, where only 2-3 companies play in the CPU/GPU market.
I am interested in learning where this stuff goes outside of the normal consumer purview.
System32Missing t1_ire1hyv wrote
Machine learning.