
Visiontek Radeon R9 Fury X 4GB Review

Rating: 7.5.

Today we take a somewhat belated look at the latest AMD flagship, the R9 Fury X. This graphics card has been designed to tackle Nvidia at the high end, specifically the similarly priced GTX 980 Ti. The Fury X is the first consumer GPU equipped with stacked High Bandwidth Memory (HBM) – but the big question needs answering: how does it fare against the latest Nvidia solutions?
First, let us look at how the Fiji-based Fury X slots into the current AMD line-up.

| GPU | R9 390X | R9 290X | R9 390 | R9 290 | R9 380 | R9 285 | Fury X |
|---|---|---|---|---|---|---|---|
| Launch | June 2015 | Oct 2013 | June 2015 | Nov 2013 | June 2015 | Sep 2014 | June 2015 |
| DX Support | 12 | 12 | 12 | 12 | 12 | 12 | 12 |
| Process (nm) | 28 | 28 | 28 | 28 | 28 | 28 | 28 |
| Stream Processors | 2816 | 2816 | 2560 | 2560 | 1792 | 1792 | 4096 |
| Texture Units | 176 | 176 | 160 | 160 | 112 | 112 | 256 |
| ROPs | 64 | 64 | 64 | 64 | 32 | 32 | 64 |
| Boost GPU Clock (MHz) | 1050 | 1000 | 1000 | 947 | 970 | 918 | 1050 |
| Peak GFLOPS (SP) | 5914 | 5632 | 5120 | 4849 | 3476 | 3290 | 8602 |
| Memory Clock (MHz) | 6000 | 5000 | 6000 | 5000 | 5700 | 5500 | 500 |
| Memory Bus (bits) | 512 | 512 | 512 | 512 | 256 | 256 | 4096 |
| Max Bandwidth (GB/s) | 384 | 320 | 384 | 320 | 182.4 | 176 | 512 |
| Memory Size (MB) | 8192 | 4096 | 8192 | 4096 | 4096 | 2048 | 4096 |
| Transistors (millions) | 6200 | 6200 | 6200 | 6200 | 5000 | 5000 | 8900 |
| TDP (watts) | 275 | 290 | 275 | 275 | 190 | 190 | 275 |

Note: the GDDR5 memory clocks are quoted as effective data rates; the Fury X's HBM runs at 500MHz physical (1,000MT/s effective).

The Fury X is built on the full Fiji die, on the 28nm process. The core architecture is based on GCN 1.2, which is also found in the Tonga/Antigua-based R9 285 and R9 380. The Fury X is a scaled-up version of this architecture with more texture units and shading cores. Compute is a primary focus for AMD with the Fiji architecture – while a Hawaii-based card such as the R9 390X has 11 Compute Units per Shader Engine, the Fury X has a whopping 16.
[Fiji block diagram]
Fiji has four Shader Engine clusters, like the older Hawaii GPU, but each is now built around 16 GCN Compute Units – an increase from the 11 per cluster in Hawaii. In total, Fiji has 4,096 stream processors, 256 TMUs and 64 ROPs. Double precision performance is lower than the 390X's, running at a mere 1/16th of the single precision rate.
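
For readers who want to verify the table's throughput figures: each GCN stream processor retires one fused multiply-add (two FLOPs) per clock, so peak single-precision throughput is simply 2 × shaders × clock. A minimal sketch of that arithmetic:

```python
# Peak single-precision throughput for a GCN GPU: each stream processor
# performs one fused multiply-add (2 FLOPs) per clock cycle.
def peak_sp_gflops(stream_processors: int, boost_clock_mhz: int) -> float:
    return stream_processors * boost_clock_mhz * 2 / 1000

print(peak_sp_gflops(4096, 1050))  # Fury X:  8601.6 -> ~8602 in the table
print(peak_sp_gflops(2816, 1050))  # R9 390X: 5913.6 -> ~5914 in the table
```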

Analysing the hardware proves interesting: the design is clearly focused on shading pixels, and in raw shader throughput it is more powerful than the current Nvidia architecture. Nvidia have advantages in other areas, however, such as final output blending (ROP throughput) and situations that call for heavy geometry loads.

AMD have been keen to address power consumption. According to engineers related to the project, the Fiji design utilises improved SVI2 power gating. Incorporating a liquid cooler on the Fury X also means that operating temperatures are reduced, which has the knock-on effect of cutting power consumption: a cooler GPU leaks less current, lowering the overall power draw.

HBM
The adoption of low-latency HBM will have a positive impact, as it has a reduced power demand compared to GDDR5. The memory achieves very high bandwidth by pairing an extremely wide bus with a slow clock, connected to the GPU via a silicon interposer. It also reduces the physical footprint required for a PCB design, meaning smaller graphics cards now and in the future. HBM pushes data through a 4096-bit bus while the memory is clocked at a very modest 500MHz. This configuration allows for incredible memory bandwidth, rated at 512GB per second.
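
The 512GB/s figure falls straight out of the bus width and data rate: peak bandwidth is (bus width ÷ 8) bytes per transfer multiplied by the effective transfer rate. A minimal sketch, assuming the double data rate HBM figures quoted above (500MHz physical, 1,000MT/s effective):

```python
# Peak memory bandwidth = bytes moved per transfer * transfers per second.
def bandwidth_gb_s(bus_width_bits: int, effective_mt_s: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_mt_s / 1000  # MT/s -> GB/s

print(bandwidth_gb_s(4096, 1000))  # Fury X HBM:    512.0 GB/s
print(bandwidth_gb_s(512, 6000))   # R9 390X GDDR5: 384.0 GB/s
```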

As we have detailed in previous articles, first generation HBM is limited to four 1GB stacks for a total of 4GB. This could immediately be seen as a possible concern, especially as Nvidia's GTX 980 Ti and Titan X are equipped with 6GB and 12GB of GDDR5 respectively. Not only that, but the lower cost AMD 390 (review HERE) and 390X (review HERE) are equipped with 8GB – double the count of the more expensive, flagship Fury X.

We know that many of the latest game engines, such as Grand Theft Auto 5's, will reap rewards from larger framebuffers at Ultra HD 4K, especially when the image quality settings are cranked up. Generally, more memory is a good thing to see, although our tests have shown that a card such as the R9 390 won't really benefit from 8GB of GDDR5 at 4K – the core itself runs out of horsepower before the full 8GB can be utilised.
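
To put the 4GB budget in context, the render targets themselves are cheap at 4K; it is texture assets and intermediate buffers at 'Ultra' settings that exhaust the card. A rough sketch of the arithmetic (the buffer count is an illustrative assumption, not a measurement from our test games):

```python
# Back-of-envelope VRAM cost of raw render targets at Ultra HD 4K.
# The target count below is an illustrative assumption, not a measured value.
WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4  # 32-bit RGBA

def render_targets_mb(count: int) -> float:
    return WIDTH * HEIGHT * BYTES_PER_PIXEL * count / 1024**2

print(render_targets_mb(1))   # one frame buffer:            ~31.6 MB
print(render_targets_mb(10))  # deferred G-buffer + post FX: ~316.4 MB
# Even a heavyweight renderer's targets fit easily inside 4GB; it is
# high-resolution texture assets that push usage past the limit.
```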

While the somewhat modest allocation of 4GB of HBM on the Fury X could be seen as a major concern, AMD have stated that the buffer on this card is used as efficiently as possible, and AMD's Robert Hallock has claimed that 4GB is more than enough for games now and in the future. That said, the second generation of HBM will likely double capacity on upcoming flagship graphics cards.
So what do I have in store for you today? For the last nine days or so I have been running a series of tests at both 1440p and Ultra HD 4K – with the latest AMD and Nvidia drivers – to keep everything on a completely even footing. It is time-consuming, but worth it. All AMD video cards are tested with the Catalyst 15.6 Beta driver, and all Nvidia cards with the Forceware 353.30 driver.



34 comments

  1. There is still a part of me that wonders: if they had just used regular memory instead of HBM, would it have been cheaper?

  2. AMD have recently said that they are working on the coil whine, as it has been noted it is a bit of an issue – if I find the article I read I'll post the link.

  3. Thank you for a well-written article with great information.

  4. What I learned: 980 Ti > Fury X > 980.

    Can't wait for DX12!

  5. That way you would have ended up with another 290X. And we already have plenty of them, don't we?
    The fact is the Fury X is a very exciting piece of engineering. I have personally had two of them for a week now, and I am yet to even begin to fiddle with everything they can offer 😉

  6. I think if its price was cut to $600 it would be a great option… I believe AMD didn't do that in the first place due to pride only.

  7. As always, KitGuru, thanks for another great review.

    Fury seems to be hitting the 4GB limit and stuttering to unplayable levels. The 4GB Fury just doesn't have enough memory for AAA gaming now, and it sure isn't enough to be called a future-proof 4K gaming flagship.

    No 28nm GPU needed HBM1's bandwidth and its measly 4GB of VRAM; the HBM bandwidth is wasted and the 4GB of VRAM isn't worthy of 4K gaming.

    Gamers are much better off with a GTX 980 Ti, especially a heavily overclocked, custom-cooled one from one of Nvidia's many partners.

    https://www.youtube.com/watch?t=226&v=8hnuj1OZAJs

  8. That was a very good read. I will always have a soft spot for AMD, but the R9 Fury X, try as it might, just doesn't quite live up to the hype or expectations. With Nvidia pretty much showing their full hand in March, it's nothing short of disappointing to see the performance crown remain with Nvidia three months down the line despite AMD's best offering. If you're still gaming at 1080p and 1440p there really is only one winner here.

  9. Still going to stick with AMD; not going to give my money to a company that has to pay off game devs to make their cards look better.

  10. Fury is essentially a double-sized Tonga chip with a tweak (hence the HDMI 1.4a and DP 1.2a). I think with regular GDDR5 it wouldn't have been fast enough to compete.

  11. LOL, AMD paid millions to developers to use Mantle and make their cards look better.

    You're an AMDuped hypocrite who doesn't know what the hell he's talking about.

  12. At 4K, AMD Fury X CF won the prize! It left Nvidia Titan X SLI in the dust – read the new reviews from DigitalStorm.com and TweakTown.com.

  13. How many games use Mantle compared to Nvidia GameWorks again?

  14. racistmalaysian

    Mantle is open. GameWorks is closed.

  15. Stupid question, AMD hypocrite. Go buy a watt-sucking AMD Rebrandeon and figure out why your question is stupid.

  16. Not a VRAM problem when other cards with 4GB are not stuttering at 4K.

    http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-R9-Fury-X-4GB-Review-Fiji-Finally-Tested/Grand-Theft-Auto-V

    And apparently Nvidia are running lower IQ settings, at least in BF4.

    http://forums.overclockers.co.uk/showthread.php?t=18679713

  17. Who are you trying to fool, AMD tool?

    Mantle was a bug-infested beta AMD API, and it was never open.

    GameWorks is a set of development tools that Nvidia invested in and owns; why should they share it with AMD? Nobody is stopping lazy AMD from making their own.

  18. racistmalaysian

    DX12 = Mantle. AMD is not lazy. The intention is for Mantle to be open-sourced.

  19. Great review, well worth the wait – too many FUD reviews went up for this card on launch day. Always trust KitGuru to be honest. Thanks – JS.

  20. A great read this morning, thanks Allan. I love the methodology, and all the tests are on the same graphics driver. A few of the reviews on launch day had poor test methods, with GPUs using 3 or 4 different drivers! And a few, like TTL's, seemed just like a pat on the back for AMD, with many contradictions throughout rather than a proper review – quite easy to spot. Always read KitGuru for honesty, and this just shows you cut through the crap to get to the facts. Well done.

  21. Francisco Andrés

    They already have a v2 watercooled version out there in the wild. You can tell them apart by the sticker: the v1 cooler is the one used here, and the v2 has a silver/chromatic logo.

    Cheers!

  22. Francisco Andrés

    Have a link with that statement? Because anyone could say that nVidia has paid even more since the times of TWIMTBP titles vs GameEvolved from AMD.

    No matter how you slice it, nVidia has burned more dough on “helping” developers than AMD.

    Also, we love blanket statements, don’t we? 🙂

    Cheers!

  23. Can't compare it with other 4GB cards because of HBM1; apparently Fury isn't able to use all its 4GB of VRAM and becomes a stuttering mess. Watch the videos – Fury is a FUBAR flagship that needed more VRAM.

  24. Maybe DX12 will utilise HBM a lot better. I struggle to believe AMD have released a useless card, lol. Maybe they have, who knows. I am waiting for the Strix 980 Ti SLI. I think AMD are no longer the best option for low prices; it seems Nvidia have taken that crown along with the best performers. It seems AMD is a pointless option for both CPUs and GPUs.

  25. That's what AMD fanatics say; that's NOT what MS says. The intention of AMD's GCN-only Mantle was to gain an advantage for their watt-sucking ReBrandeons before DX12, which will float all GPU boats. After wasting millions on over-pumped Mantle, debt-laden AMD dumped their Mantle for NOTHING, ZERO, ZILCH, NIX, NIL.

  26. maybe

  27. So you really have no argument, I see. Glad to know.

  28. “racistmalaysian 18 hours ago
    Dx12 = mantle. AMD is not lazy. The intention is Mantle to be open sourced.”

    Your assumptions are out of date and wrong, unfortunately; however, you can write patches and submit them for upstream Vulkan inclusion…

    Care of Ryan Smith on July 2, 2015:

    http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/12

    “….I wanted to quickly touch upon the state of Mantle now that AMD has given us a bit more insight into what’s going on.

    With the Vulkan project having inherited and extended Mantle, Mantle’s external development is at an end for AMD. AMD has already told us in the past that they are essentially taking it back inside, and will be using it as a platform for testing future API developments.

    Externally then AMD has now thrown all of their weight behind Vulkan and DirectX 12, telling developers that future games should use those APIs and not Mantle.

    In the meantime there is the question of what happens to existing Mantle games. So far there are about half a dozen games that support the API, and for these games Mantle is the only low-level API available to them. Should Mantle disappear, then these games would no longer be able to render at such a low-level.

    The situation then is that in discussing the performance results of the R9 Fury X with Mantle, AMD has confirmed that while they are not outright dropping Mantle support, they have ceased all further Mantle optimization.

    Of particular note, the Mantle driver has not been optimized at all for GCN 1.2, which includes not just R9 Fury X, but R9 285, R9 380, and the Carrizo APU as well.

    Mantle titles will probably still work on these products – and for the record we can’t get Civilization: Beyond Earth to play nicely with the R9 285 via Mantle – but performance is another matter.

    Mantle is essentially deprecated at this point, and while AMD isn’t going out of their way to break backwards compatibility they aren’t going to put resources into helping it either. The experiment that is Mantle has come to an end…..”

  29. KitGURU please make ********** WIN 10 benchmarks!!! Worst review ever. So useless.

  30. Sure.
    “Great review” – are you blind?!
    They did benchmarks on Win 7! Who cares about Windows 7… so useless. This is the only review where I see the Fury X behind the GTX 980 Ti in Witcher 3 and Metro.

    Do you even use Win 7?!

  31. Win7 tells you everything… who uses Win7 for gaming?
    Why so bad at 1440p – how can DX11 on Win7 feed 4096 GCN cores? It can't… just please redo your bad review on Win10.

  32. DX12 won't do anything extra. All it's meant to do is give deeper access levels and remove a lot of “going through the CPU” for programs. DX12 is not some kind of magical “fix all” thing. All it is going to do is reduce CPU load and INCREASE GPU LOAD (temps, TDP, etc. as well) for games that are coded in it. THAT'S IT. Maybe there'll be some extra tech, but aside from that… nothing.

  33. ✣☯✤✥☯❋ . if you, thought Gloria `s posting on kitguru …♛♛♛♛ ———Continue Reading

  33. Now I see why AMD didn't give you a sample. Garbage review. Only 5 games benched? Not a single win for Fury X? Yeah, whatever.


  34. 12-year-old kids on the internet. Ugh.