
AMD has developed a GDDR6 memory controller for next-gen GPUs

It looks like AMD's next-generation graphics cards will be getting a memory boost, as the company has begun work on a GDDR6 memory controller. We already know that memory makers like Micron, Samsung and SK Hynix have begun work on GDDR6, and now we know that AMD is preparing to put it to use.

Over on LinkedIn (profile removed, screenshot here), a member of AMD's technical staff, Daehyun Jun, listed development of a GDDR6 memory controller as one of their recent accomplishments. This isn't entirely unexpected; after all, Micron, SK Hynix and Samsung are expected to begin phasing out GDDR5 in favour of newer GDDR6 modules starting in 2018. However, this serves as good confirmation that AMD is actively working on it.

While SK Hynix previously said that its GDDR6 modules would feature on “forthcoming high-end graphics cards” in early 2018, it seems more likely that AMD will stick to HBM2 for its high-end graphics cards. Meanwhile, GDDR6 is more likely to pop up on AMD's mainstream offerings, like the hypothetical RX 600 series.

In terms of specifications, GDDR6 will increase bandwidth per pin to 16 Gb/s, a significant boost over GDDR5X's 10 Gb/s.
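To put the per-pin figure in perspective, a card's peak memory bandwidth scales with its bus width. Here is a minimal sketch in Python; only the 16 Gb/s and 10 Gb/s per-pin rates come from the article, while the 256-bit bus width is a hypothetical example:

```python
def peak_bandwidth_gbps(per_pin_gbit: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin rate (Gb/s) x bus width (bits) / 8 bits per byte."""
    return per_pin_gbit * bus_width_bits / 8

# Hypothetical 256-bit card:
print(peak_bandwidth_gbps(16, 256))  # GDDR6  -> 512.0 GB/s
print(peak_bandwidth_gbps(10, 256))  # GDDR5X -> 320.0 GB/s
```

The same formula shows why bus width matters as much as per-pin speed: halving the bus to 128-bit halves the total bandwidth at the same data rate.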

KitGuru Says: It is looking likely that 2018 will be the year we see GDDR6 start to take over from the ageing GDDR5 standard. Are any of you planning on upgrading your GPU next year?



6 comments

  1. I haven’t seen any significant real-world PC gaming testing of memory bandwidth saturation. Or is that game engine-dependent, meaning different performance benefits on different game engines?

  2. Nikolas Karampelas

    Most of the time memory isn’t given separate credit in benchmarks (I don’t even know if that is possible), so the only place the difference can show up is in the overall score. If you get two otherwise identical GPUs with different memory (let’s say GDDR3 and GDDR5), then you can get a clear picture of the difference.
    The problem is that GPU makers find ways to make their products better even with lower memory bandwidth. For example, Nvidia stuck for a long time with a 192-bit bus, and yet performance was on par with the same-class AMD GPU on a 256-bit bus.
    I also remember the Radeon 285, which on paper looked worse memory-wise than the older 280X but came out ahead in benchmarks, because an on-board texture compression algorithm helped the card move more texture data at lower speeds.

    So it is not that simple to judge a GPU by its memory’s speed, bus width or size.

  3. Alongside this, power draw for the memory tends to be lower with each iteration, correct? That in itself seems to be an advantage, although it goes out the window if you start overclocking the memory as well.

  4. Nikolas Karampelas

    Yeah, they always try to get more speed and bandwidth for less energy.
    As far as I understand, the less energy they use and the more efficiently they use it, the less heat is produced, so they can push the speed higher, and more speed means more data gets through.
    But bandwidth is important too: speed is how fast the data passes through per clock cycle, while bandwidth is how much data can pass together in the same clock cycle.
    Of course, all of that is useless without a good GPU to use it. For example, you can see some low-end cards on the market with 2GB of GDDR5, but with the GPU’s low speeds they barely manage to use 1GB at best, and even if they could, they are limited by their 64-bit bus.
    So balance is usually the key.

    I mean, I could use one more GB in my Radeon 7850 1GB now; the card is capable of using it, but I got the 1GB version because of cost. Whereas if I had got a low-end Radeon 240 with 2GB of VRAM, the card couldn’t use all of that, so it would just be a waste and a marketing gimmick.

  5. Indeed. I recall some low-end GPUs packing a lot of memory. Too bad memory costs today are high, messing up the price-performance ratios of otherwise decent products.

    I had the 2GB variant of the Radeon 7850; I burned it out playing ME: Andromeda on one of those lengthy maps with continuous fighting (grey screen of death).

  6. My hope would be for AMD to drop HBM memory entirely and just use the fastest GDDR6 on their top-end cards. I say this mainly because of the mess with the Vega cards. First they were delayed because of HBM2 shortages in the market. Then we had the mess with the chips being assembled at different locations and the HBM2 being mounted in different ways, which caused problems for card makers.

    Back in the Fury days AMD was stuck at 4GB of memory because that was the only option. I know it most likely won’t happen, but their best bet is to drop HBM tech and use GDDR6 across their next low-, mid- and high-end cards, getting rid of all the issues and the bad publicity from delays and other problems.