Maintaining our performance graphs has been a full-time job over the last couple of months – Nvidia and AMD have been hard at work updating their drivers, and in this review we test with AMD’s Crimson 16.1 driver and the Nvidia 361.43 WHQL driver. Due to public demand we have also added a range of tests at 1080p to supplement the results at 1440p and Ultra HD 4K resolutions.
Even though it is not close to being finished, we wanted to include some findings from an early access build of the (Windows 10) DirectX 12 capable Ashes of the Singularity by Stardock (website HERE) – this uses the Nitrous game engine. Performance in this early build is not necessarily an indication of how the game will run when it reaches retail, but it is still interesting to showcase today. You can buy it from Steam, over HERE.
If you want to read more about our test system, or are interested in buying the same KitGuru Test Rig, check out our article with links on this page. We are using an Asus PB287Q 4K monitor and an Apple 30-inch Cinema HD display for this review today.
If you have other suggestions please email me directly at zardon(at)kitguru.net.
Sapphire Nitro R9 Fury OC 4GB (1,050MHz core / 500MHz memory) & (1,128MHz core / 500MHz memory)
Comparison Cards on test:
Sapphire R9 390 Nitro 8GB (Rev 2 w/ backplate) (1,040MHz core / 1,500MHz memory)
Powercolor PCS+ Radeon R9 380X Myst Edition 4GB (1,020MHz core / 1,475MHz memory)
Sapphire Radeon R9 Nitro 380X (1,040MHz core / 1,500MHz memory)
ASUS Strix R9 380X DirectCU II OC (1,030MHz core / 1,425MHz memory)
PowerColor Radeon R9 390 PCS+ (1,010MHz core / 1,500MHz memory) & (1,150MHz core / 1,693MHz memory)
Sapphire R9 295X2 (1,018MHz core / 1,250MHz memory)
AMD Fury X (1,050MHz core / 500MHz memory)
Nvidia GTX Titan Z (706MHz core / 1,753MHz memory)
Nvidia GTX Titan X (1,000MHz core / 1,753MHz memory)
Asus GTX 980 Strix (1,178MHz core / 1,753MHz memory)
Nvidia GTX 980 Ti (1,000MHz core / 1,753MHz memory)
Palit GTX 970 (1,051MHz core / 1,753MHz memory)
Sapphire R9 390X Tri-X 8GB (1,055MHz core / 1,500MHz memory)
Sapphire R9 390 Nitro 8GB (1,010MHz core / 1,500MHz memory)
Sapphire R9 380 Nitro 4GB (985MHz core / 1,450MHz memory)
Palit GTX 960 Super JetStream (1,279MHz core / 1,800MHz memory)
Software:
Windows 10 64 bit
Unigine Heaven Benchmark
3DMark 11
3DMark
Fraps Professional
Steam Client
FurMark
Games:
Ashes of the Singularity DX12 mode (early access build)
GRID Autosport
Tomb Raider
Grand Theft Auto 5
Metro 2033 Redux
We test under real-world conditions, meaning KitGuru runs each game across five closely matched passes and then averages the results to arrive at a representative figure. If we use scripted benchmarks, they are mentioned on the relevant page.
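For clarity on the averaging step, here is a minimal sketch in C++; the per-run figures below are placeholder values, not results from this review:

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Average FPS recorded for five closely matched runs of the same test.
    std::vector<double> runFps = {62.1, 61.8, 62.4, 61.9, 62.2};
    // The published figure is the mean of the five runs.
    double mean = std::accumulate(runFps.begin(), runFps.end(), 0.0) / runFps.size();
    std::cout << "Representative FPS across " << runFps.size()
              << " runs: " << mean << '\n';
}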
Game descriptions edited courtesy of Wikipedia.
The sheer power of the AIB 980 Ti cards still stuns me to this day! Nvidia did a stunning job on their final 28nm flagship, imo…
With a price drop this card will be a damn good deal. Not as powerful as a 980 Ti, but cheaper.
I will agree, but I like what AMD are doing too. I think both companies are releasing good products and, although a lot of problems exist with both, they are finally making major investments.
There are 8GB versions in the pipeline, so CrossFire those and you have a sub-£1k monster able to hit 4K at 60fps.
Can you cite your source for that?
I’m not trying to be argumentative, I’d just like to read where you heard that there are 8GB Fury cards coming, as I’m under the impression that the Fiji memory controllers and the physical structure of HBM1 together impose a 4GB hard limit.
Heyyo, tbh it sounds more likely that he is talking about the Fury X2, which is 8GB total… but that’s split between the GPU cores, so it’s 4GB per core. The only time it will act as an 8GB solution is in games that take advantage of explicit multi-adapter… and judging by how hard that seems to be to implement, according to Oxide Games and their use of it in Ashes of the Singularity, odds are it will be a rare occurrence… so I wouldn’t bank on that 8GB of total VRAM.
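For anyone curious what “explicit multi-adapter” means in practice: under DX12 the application enumerates and manages each GPU, and each adapter’s memory pool, itself, which is why a dual-Fiji card presents as 2 x 4GB rather than one 8GB pool. A minimal sketch of that enumeration step, assuming a Windows 10 machine with the DXGI headers and dxgi.lib available:

#include <cstdio>
#include <dxgi1_4.h>
#include <wrl/client.h>

int main() {
    using Microsoft::WRL::ComPtr;

    // Create a DXGI factory and walk every adapter in the system.
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // Each adapter reports its own dedicated VRAM pool -- on a dual-GPU
        // board you would see two 4GB entries here, not one 8GB entry.
        wprintf(L"Adapter %u: %ls, %zu MB dedicated VRAM\n",
                i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
}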
Heyyo, WHOA! Dat Ashes of the Singularity though! Super impressive that it ties the Fury X and the GTX 980 Ti reference card… Now I just wish there was a Fable Legends benchmark to run as well… Hmm, I wonder why Microsoft never released that benchmark to the public??? Sad face I am making…
Good enough that I bought one. EVGA does a super job with their factory-overclocked cards, which are only $10 more for a 15% o/c. Cool, quiet, fast, reliable, with excellent drivers. What more could I ask for?
AMD gave a great effort with the Fury X/Fury line, but it really didn’t help their business all that much. AMD took a major loss when the GTX 980 Ti was released at a price $150 less than what AMD was going to charge for the Fury X. AMD had to match that price, and the AIB partners were furious and did not make many cards available. Between that, the hassle of a water cooler, and the coil and pump whine, for about the same benchmark results as what Nvidia was offering, folks shunned the Fury and embraced the 980 Ti. AMD had a real chance here, too.
The GTX cards suck with async compute. If you test AoS with AA (which leans on async compute) at 4K, you will understand why Nvidia asked reviewers not to use AA in benchmarks. Check out the Titan X and the R9 390 scores at 4K, Crazy settings and 4x MSAA. Same settings, except the Titan X uses a newer version of AoS and the R9 an older version.
https://www.youtube.com/watch?v=JS0rhMYmehE
https://www.youtube.com/watch?v=PqPj4N9H77A
Hi there,
I have the Nitro card but somehow I cannot access the OC+ BIOS, help!
HBM2 Fury
Heyyo, that information isn’t correct, dude. It’s not that NVIDIA “sucks” at async compute… it’s apparently just not implemented in their drivers yet, despite what the drivers made it look like. http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2130#post_24379702
The situation hasn’t changed yet, either. NVIDIA still doesn’t have an async compute driver out, and so far it’s unclear whether it will deliver native async compute performance or something similar to the dynamic recompiling we saw with DirectX 9 and 24-bit to 16-bit shader code (NVIDIA’s FX 5000 series at the time didn’t support 24-bit natively, only 16-bit, and thus lost rendering time to dynamic recompiling and suffered lower framerates), which became evident when Half-Life 2 released. You can find an article about it if you Google “Maximum PC shader code NVIDIA”. The following generation of cards? Yes, the NVIDIA GeForce 6 series did natively support 24-bit shader code.
With that said? I wouldn’t buy a Maxwell GPU, as nothing has come forward yet showing async compute working on Maxwell, so it’s not worth the risk. NVIDIA Pascal might have native async compute support… but same thing, I wouldn’t buy one until there was definitive proof from multiple review sources. Same idea as the beginning of the DX9 era… I wouldn’t risk it unless I knew for certain it was working.
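For readers wondering what “async compute” refers to at the API level: DX12 exposes a dedicated compute queue type that an engine can feed alongside the graphics queue. A minimal sketch of creating one, assuming an ID3D12Device already exists and d3d12.lib is linked; note this only shows the queue setup, and whether the hardware actually overlaps the two workloads is exactly the Maxwell question above:

#include <d3d12.h>
#include <wrl/client.h>

// Creates a compute-only command queue next to the usual graphics (DIRECT)
// queue. Whether work on this queue truly runs concurrently with graphics
// is up to the hardware and driver, which is the heart of the debate.
Microsoft::WRL::ComPtr<ID3D12CommandQueue>
CreateAsyncComputeQueue(ID3D12Device* device) {
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute, not DIRECT
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    Microsoft::WRL::ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}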
http://www.3dmark.com/fs/7490341 – i7-6700K @ 4.6GHz, Kraken X61, 16GB DDR4 @ 4,400MHz, XFX Fury X @ 1,080MHz, Sabertooth Mark 1 Z170
http://www.3dmark.com/sd/3810052
AMD’s expensive Fury is not a good deal when compared with the many factory-OCed versions of the performance-per-watt champion Maxwell GTX 980 Ti. HBM1’s bandwidth is wasted on 28nm GPUs, and with only 4GB it’s gimped for 4K gaming.
VERDICT: No thank you, AMD. I recommend a factory-OCed GTX 980 Ti, which is by far the biggest bang for your GPU buck.
It is mentioned in the reviews, but no one says which switch position is which:
1. the default BIOS, with a 260W TDP and a 75°C temperature target,
2. the second BIOS, with a 300W TDP and an 80°C temperature target!
Namely, which is light on (pressed) and which is light off (unpressed)?
In my testing, in both positions the temperature under stress stopped at exactly 79°C, which left me puzzled in the dark…
Those aren’t launching until sometime in 2017. The landscape will most likely look completely different by then.
That score is a bit low; it easily deserves a 9, imho. Fantastic cooler and performance; at the right price it would’ve been a 9.5-10.