While HBM still seems set to be the next big thing when it comes to VRAM on high-end graphics cards, we may have another contender in 2016, as reports claim that Micron is currently developing GDDR6 memory. This isn't the first time we have heard about a new version of GDDR memory, as some of next year's Pascal GPUs from Nvidia are rumoured to use GDDR5X.
This information comes from an exclusive Fudzilla report, with the site claiming to be in touch with a source at Micron. The source said that the new GDDR6 memory standard will offer bandwidths as high as 10 to 14Gbps per 4GB module; by comparison, GDDR5 offers 7Gbps per 4GB module.
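To put those figures in rough context, peak memory bandwidth is simply the per-pin data rate multiplied by the bus width. The sketch below assumes a 256-bit bus purely for illustration; that figure is not from the report.

```python
# Back-of-the-envelope sketch: per-pin data rate (Gbps) x bus width (bits) / 8 = GB/s.
# The 256-bit bus width is an assumed example, not a confirmed specification.

def memory_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return per_pin_gbps * bus_width_bits / 8

print(memory_bandwidth_gbs(7, 256))   # GDDR5 at 7Gbps on a 256-bit bus -> 224.0 GB/s
print(memory_bandwidth_gbs(14, 256))  # a 14Gbps GDDR6-class part on the same bus -> 448.0 GB/s
```

On those assumptions, doubling the per-pin rate from 7Gbps to 14Gbps would double peak bandwidth without touching the bus width.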
One of the benefits of GDDR6 memory will be its similarity to GDDR5, which should make it easy to manufacture and implement on new graphics cards. That said, the new memory may not be able to match the efficiency gains of HBM. On top of that, HBM2 is also due out next year and is set to push performance and efficiency even further.
Previous rumours have said that the new high-end GPUs from Nvidia and AMD will feature HBM2, while lower-tier cards will instead make use of GDDR5X/GDDR6 memory. If this report is to be believed, then that seems like a likely outcome for next year's GPU launches.
KitGuru Says: This isn't the first time we have heard about a new version of GDDR memory, but it is still interesting, particularly since it could mean that GPUs across the board get memory upgrades next year.
Will be interesting… to see who gets what, and what will actually be there in the future. 😉
Until HBM’s production gets cheap enough, this may be the next best alternative for cheaper cards.
I’d take 14Gbps GDDR5 over HBM2 for the current gen if it weren’t for the lower power usage/heat of HBM2. GDDR5 has much lower latency compared to HBM2. Bandwidth is huge on HBM2 but really unnecessary right now. But even current 7Gbps GDDR5 gets incredibly hot, which limits overclocking headroom on cards.
WCCF is claiming that Micron contacted them and told them that this is incorrect, and they’re only working on GDDR5X.
It’s from WCCF, so take it with a mountain of salt, but still.
http://wccftech.com/gddr6-memory-coming-2016-gpus/
Well… I will stay with my 980 till they release some good waterblocks for the new HBM GPUs by the end of next year. I really need good 4K performance.
Do you have a source for how GDDR5 has lower latency than HBM2? Firstly, we don’t have any GPUs with HBM2 or GDDR5X yet, so there’s no real data to make a comparison. Let’s default to HBM1 vs. GDDR5. By moving the RAM much closer to the GPU die, latency should actually be better with HBM1 than with GDDR5:
http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/6
I guarantee AMD/NV will use HBM2 for their flagship big-die chips in 2016, or in 2017 at the latest. The main reasons to go with GDDR5X will be that it’s cheaper, easier to manufacture in large quantities and more flexible, supporting various bus widths from 128-bit to 384-bit and a range of capacities.
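For illustration of that flexibility (assuming a 10Gbps GDDR5X-class per-pin rate, which is just a guess and not a confirmed spec), here is roughly how those bus widths would scale peak bandwidth:

```python
# Illustrative only: scaling one assumed per-pin rate across the bus widths mentioned above.
PER_PIN_GBPS = 10  # assumed GDDR5X-class data rate, not a confirmed figure

for bus_width in (128, 192, 256, 384):
    print(f"{bus_width}-bit bus -> {PER_PIN_GBPS * bus_width / 8:.0f} GB/s peak")
# 128-bit -> 160 GB/s, 192-bit -> 240 GB/s, 256-bit -> 320 GB/s, 384-bit -> 480 GB/s
```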
You may be right. I thought I remembered reading that GDDR5 had faster access times than HBM in the Fury X, based on memory performance comparisons between the 980 Ti and the Fury X, but I can’t remember where and a quick Google search didn’t turn anything up. So, given the lack of evidence on my part, I’ll side with you and defer to Anandtech on the matter. Cheers.