
Asus STRIX Gaming GTX 950 2GB DC2 OC Review

Rating: 9.0.

Nvidia's Maxwell GPU architecture has been well received by enthusiasts and gamers alike. One of its most promising traits has been its ability to scale from low-end to high-end hardware, delivering competitive performance and impressive power efficiency along the way. But the gap between a £100 GTX 750 Ti and a £150 GTX 960 is a sizeable one, and it is an area where AMD currently roams freely with the R7 370. Nvidia's counter-weapon: the GTX 950.

Built around the same GM206 GPU found in Nvidia's higher-end GTX 960, albeit with a number of features disabled, the GTX 950 targets gamers who want Full HD performance at 60 FPS. While that is not exactly difficult to achieve, the high-to-ultra image quality settings that the GTX 950 is designed to handle may be the point that piques gamers' interest.


The Asus STRIX Gaming GTX 950 DC2 OC graphics card uses a dual-slot cooler with two 75mm fans. This is similar to the solution many GTX 950 board partners will use on their models, because no vendor is likely to ship Nvidia's reference-style design.


Nvidia says that the GTX 950 is designed to offer the best performance in its class. With an MSRP of £129 and a TDP of 90W, the GTX 950's goal is to beat AMD's similarly-priced R7 370 while using less power to do so. In terms of both TDP and price, the GTX 950 sits directly between its GTX 750 Ti and GTX 960 siblings, both of which will remain in Nvidia's current product stack.

| GPU | GeForce GTX 750 Ti (Maxwell) | GeForce GTX 950 (Maxwell) | GeForce GTX 960 (Maxwell) | GeForce GTX 970 (Maxwell) | GeForce GTX 980 (Maxwell) |
|---|---|---|---|---|---|
| GPU Codename | GM107 | GM206 | GM206 | GM204 | GM204 |
| Streaming Multiprocessors | 5 | 6 | 8 | 13 | 16 |
| CUDA Cores | 640 | 768 | 1024 | 1664 | 2048 |
| Base Clock | 1020 MHz | 1024 MHz | 1126 MHz | 1050 MHz | 1126 MHz |
| GPU Boost Clock | 1085 MHz | 1188 MHz | 1178 MHz | 1178 MHz | 1216 MHz |
| Total Video Memory | 2GB | 2GB | 2GB | 4GB | 4GB |
| Texture Units | 40 | 48 | 64 | 104 | 128 |
| Texture Fill-rate | 40.8 Gigatexels/sec | 49.2 Gigatexels/sec | 72.1 Gigatexels/sec | 109.2 Gigatexels/sec | 144.1 Gigatexels/sec |
| Memory Clock (effective) | 5400 MHz | 6600 MHz | 7010 MHz | 7000 MHz | 7000 MHz |
| Memory Bandwidth | 86.4 GB/sec | 105.6 GB/sec | 112.16 GB/sec | 224 GB/sec | 224 GB/sec |
| Bus Width | 128-bit | 128-bit | 128-bit | 256-bit | 256-bit |
| ROPs | 16 | 32 | 32 | 56 (following correction) | 64 |
| Manufacturing Process | 28nm | 28nm | 28nm | 28nm | 28nm |
| TDP | 60 Watts | 90 Watts | 120 Watts | 145 Watts | 165 Watts |

On a technical level, the cut-down iteration of the GM206 GPU is, in many areas, effectively 75% of the core used in the GTX 960. The GTX 950's version of GM206 ships with 768 CUDA cores and 48 texture units. Those numbers sit closer to the GTX 750 Ti's first-generation Maxwell GM107 core, but focusing on the number of ROPs puts clear daylight between the GTX 950 and its lower-end sibling: 32 against GM107's 16.
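
As a quick sanity check on the table above, peak texture fill-rate is simply texture units multiplied by clock speed, and the 75% relationship falls straight out of the core counts. A minimal Python sketch of that arithmetic (my own back-of-the-envelope working, not Nvidia's published methodology):

```python
# Peak texture fill-rate = texture units x clock (the table's figures use the base clock).
def texture_fillrate_gtexels(texture_units: int, base_clock_mhz: int) -> float:
    return texture_units * base_clock_mhz / 1000.0  # MHz -> Gigatexels/sec

print(texture_fillrate_gtexels(48, 1024))  # GTX 950 -> ~49.2, matching the table
print(texture_fillrate_gtexels(64, 1126))  # GTX 960 -> ~72.1
print(768 / 1024, 48 / 64)                 # both ratios -> 0.75, i.e. 75% of a GTX 960
```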

The same 128-bit memory interface found on the GTX 960 is present, though it may be less of a potential choking point given the reduced raw horsepower of the GTX 950's cut-down GPU. As with the GTX 960, Nvidia makes the same argument that GM206 utilises its 128-bit memory interface more efficiently than Kepler did.

Clock speeds for the GTX 950 are sliced compared to GTX 960 frequencies. The reference core clock is rated at 1024MHz, with a maximum boost speed of 1188MHz. The 2GB of GDDR5 memory is rated to run at 1650MHz (6.6Gbps effective), producing 105.6GB/sec of bandwidth. That said, most board partners will be tapping the GM206 core's overclocking headroom and shipping their cards with higher, factory-overclocked frequencies.
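
For anyone wondering where the 105.6GB/sec figure comes from: GDDR5 transfers data four times per memory clock, and bandwidth is the effective rate multiplied by the bus width in bytes. A minimal sketch:

```python
# GDDR5 bandwidth = (memory clock x 4 transfers/clock) x (bus width / 8 bits per byte).
def gddr5_bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int) -> float:
    effective_gbps = mem_clock_mhz * 4 / 1000.0  # 1650MHz -> 6.6Gbps effective
    return effective_gbps * bus_width_bits / 8   # bits -> bytes

print(gddr5_bandwidth_gb_s(1650, 128))  # GTX 950 reference -> 105.6
print(gddr5_bandwidth_gb_s(1653, 128))  # Asus STRIX factory OC -> ~105.8
```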

Asus, for example, ships the STRIX Gaming GTX 950 DC2 OC with a core clock speed of 1165MHz and a rated boost frequency of 1355MHz, while the memory runs at 1653MHz (6610MHz effective).


Extending to the GTX 950's features, the card supports the DirectX 12 API at feature level 12.1. An H.265 (HEVC) encode/decode engine built into the GPU, along with HDMI 2.0, makes a loud case for using the GTX 950 inside a gaming HTPC. With the 90W TDP being low enough to fit comfortably inside SFF cases, the ability to output 60Hz video to a 4K TV (most of which do not have DisplayPort connections) is an important feature, and HDMI 2.0 is one that team red's competing card cannot offer.
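
To illustrate the HTPC angle, the GPU's HEVC engine can be exercised through ffmpeg's NVENC encoder. A minimal sketch, assuming an ffmpeg build compiled with NVENC support is on the PATH; the file names are placeholders:

```python
# Hardware HEVC (H.265) encode on the GPU via ffmpeg's hevc_nvenc encoder.
# Assumes an NVENC-enabled ffmpeg build; input/output names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "input.mp4",     # source clip (placeholder)
    "-c:v", "hevc_nvenc",  # Nvidia hardware HEVC encoder
    "-b:v", "8M",          # target bitrate
    "output_hevc.mkv",
], check=True)
```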

One of the more notable changes between the GTX 75x cards and the GTX 950 is the TDP differential. While the GTX 750 Ti had a 60W TDP, the GTX 950 ups that number to 90W. Taking TDP as a rough indicator of power consumption, the 90W rating narrowly tips the GTX 950 past the 75W that a PCIe x16 slot can supply on its own, into territory where it requires a 6-pin PCIe power connector. This emphasises that Nvidia is focused on gaming performance with its new card, while the GTX 750 Ti remains to cater for those who want a graphics card that runs on a PSU without a 6-pin PCIe cable (think Dell, HP, or some SFF units).
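
The connector arithmetic is straightforward: the slot covers the first 75W, and each 6-pin cable adds up to 75W more. A minimal sketch of that reasoning:

```python
# A PCIe x16 slot supplies up to 75W; each 6-pin PCIe cable adds up to 75W more.
SLOT_W, SIX_PIN_W = 75, 75

def six_pin_connectors_needed(tdp_watts: int) -> int:
    extra = max(0, tdp_watts - SLOT_W)
    return -(-extra // SIX_PIN_W)  # ceiling division

print(six_pin_connectors_needed(60))  # GTX 750 Ti -> 0 (slot power alone)
print(six_pin_connectors_needed(90))  # GTX 950    -> 1
```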

You can read more about the GM206 GPU's architecture and feature support in my colleague Allan's GTX 960 review HERE.

4 comments

  1. OK people, what do you think about this explanation of why AMD should be better than Nvidia under DirectX 12, thanks to better support for asynchronous shaders? Check it out; this is not my argument, but it seems well argued.

    First, the source: http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/400#post_24321843

    Well, I figured I'd create an account in order to explain away what you're all seeing in the Ashes of the Singularity DX12 benchmarks. I won't divulge too much of my background information, but suffice to say that I'm an old veteran who used to go by the handle ElMoIsEviL.

    First off, nVIDIA is posting their true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and that's an area where, although Maxwell 2 does better than previous nVIDIA architectures, it is still inferior compared to the likes of AMD's GCN 1.1/1.2 architectures. Here's why…

    Maxwell's Asynchronous Thread Warp can queue up 31 compute tasks and 1 graphics task. Now compare this with AMD GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 compute tasks for a total of 64, coupled with 1 graphics task from the Graphics Command Processor. See below:

    http://cdn.overclock.net/4/48/900x900px-LL-489247b8_Async_Aces_575px.png

    Each ACE can also apply certain post-processing effects without incurring much of a performance penalty. This feature is heavily used for lighting in Ashes of the Singularity. Think of all the simultaneous light sources firing off as each unit in the game fires a shot, or the various explosions which ensue.

    http://cdn.overclock.net/8/89/900x900px-LL-89354727_asynchronous-performance-liquid-vr.jpeg

    This means that AMD's GCN 1.1/1.2 is better adapted to handling the increase in draw calls now being made by multi-core CPUs under DirectX 12.

    Therefore, in game titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD GCN 1.1/1.2 should do very well, provided they do not hit a geometry or rasterizer bottleneck before nVIDIA hits its draw call/parallelism bottleneck. The picture below highlights the draw call/parallelism superiority of GCN 1.1/1.2 over Maxwell 2:

    http://cdn.overclock.net/7/7d/900x900px-LL-7d8a8295_drawcalls.jpeg

    A more efficient queueing of workloads, through better thread parallelism, also enables the R9 290X to come closer to its theoretical compute figures, which just happen to be ever so shy of those of the GTX 980 Ti (5.8 TFlops vs 6.1 TFlops respectively), as seen below:

    http://cdn.overclock.net/9/92/900x900px-LL-92367ca0_Compute_01b.jpeg

    What you will notice is that Ashes of the Singularity is also quite hard on the rasterizer operators, highlighting a rather peculiar behavior: an R9 290X, with its 64 ROPs, ends up performing near the same as a Fury-X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):

    http://cdn.overclock.net/b/bd/900x900px-LL-bd73e764_Compute_02b.jpeg

    As for the folks claiming a conspiracy theory: not in the least. The reason AMD's DX11 performance is so poor under Ashes of the Singularity is that AMD did literally zero optimizations for that path. AMD is clearly looking at selling Asynchronous Shading as a feature to developers, because their architecture is well suited for the task. It doesn't hurt that it also costs less in terms of research and development of drivers. Asynchronous Shading allows GCN to hit near-full efficiency without requiring any driver work whatsoever.

    nVIDIA, on the other hand, does much better at serial scheduling of workloads (anything prior to Maxwell 2 is limited to serial rather than parallel scheduling). DirectX 11 is suited to serial scheduling, so nVIDIA naturally has an advantage under DirectX 11. In this graph, provided by AnandTech, you have the correct figures for nVIDIA's architectures (from Kepler to Maxwell 2), though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):

    http://www.overclock.net/content/type/61/id/2558710/width/350/height/700/flags/LL

    People are wondering why Nvidia is doing a bit better in DX11 than DX12. That's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents, as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12.

    Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders that are not fine-tuned to their architecture, on a per-game basis. nVIDIA's driver is also multi-threaded, making use of idling CPU cores to recompile/replace shaders. The work nVIDIA does in software under DX11 is the work AMD does in hardware under DX12, with their Asynchronous Compute Engines.

    But what about poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is geared towards parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for Fury-X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics etc.). Replacing or re-compiling shaders is not a solution for GCN 1.1/1.2, because AMD's Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-work workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel… in come Mantle, Vulkan and DirectX 12.

    People wondering why Fury-X did so poorly in 1080p under DirectX 11 titles? That’s your answer.

    A video which talks about Ashes of the Singularity in depth:
    https://www.youtube.com/watch?v=t9UACXikdR0

    PS. Don't count on better DirectX 12 drivers from nVIDIA. DirectX 12 is closer to the metal, and it's all on the developer to make efficient use of both nVIDIA's and AMD's architectures.

  2. Nice and fine, NEM! The only problem with your wall of text is the context. Your claim that GCN is multithreaded by definition is wrong overall. What AMD suffers from is huge overhead in DX11, which for them luckily falls away in DX12. Nvidia optimized earlier for DX11 to compensate, and in comparison sees smaller gains from DX12.

    All the talk about AMD's wonder drivers is ridiculous. Right now almost 100% of games are DX11 or older. Even if you start a DX12 game and somehow get good performance, it is in vain: as soon as you start an "old" DX11 game, the AMD drivers screw up with bad performance again. In short, you hurt yourself by suggesting AMD if the buyer is not a pure DX12 player. Brabble about this as much as you want, but get away from the borderline game that is Ashes of the Singularity. We will see many DirectX 12 games in the next few years with performance all over the place. This counts for both AMD and Nvidia.

    Cards like the GTX 750 Ti, GTX 960 and now the new GTX 950 are for budget gamers, dedicated to MOBAs and RPGs at 1080p. With halfway balanced settings a player can get much more than in the past. Driver talk is useless because all vendors have to set up their hardware for Windows 10 first, and the graphics card vendors are just one of many. To make predictions at this early stage is overreaching.

  3. ❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦❦my neighbor’s ex makes 60/hour on the web……last monday I got another McLaren F1 from getting 4948 this most recent 4 weeks and-in abundance of, ten/k last-munth . with no defenselessness it’s the most satisfying work Ive ever done . I began this 10-months back and expediently start..ad bringin in more than 76 for consistently . take a gander at this site….

    ===LOOK AT THIS=== > tinyurl.com/Net22Money95Search ➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽➽ ➽➽➽➽➽➽ tAke a look and find more info clicking any link

  3. If you think AMD will screw up with DX12, you're wrong. With DX12 you don't need driver interventions, because DX12 talks almost directly to the ACEs. In other words, AMD GCN does not need driver optimizations for DX12, while on DX11 AMD needs a heavily optimized driver.

  4. Well, DX12 is not wonder technology. Many of the improvements depend on the will and goals of the individual developers. I expect the performance for each vendor to be all over the place; it will come down to the game engine used and the developer team, and of course partnerships with AMD and Nvidia will play a bigger role.

    The GTX 950 is the new contender in the ring now. As a pure gaming card it is not as tempting at over $150. As soon as the price drops to somewhere between $130 and $149, it could be the new reference for cheap but halfway-decent gaming-HTPC builds. IIRC it is the only card in this price range, apart from the more expensive GTX 960, to feature an HDMI 2.0 connection. The GTX 750 Ti is 2014 tech and still has HDMI 1.4. AMD does not have a single card in their line-up to compete in this regard. APIs or FPS alone cannot replace missing connection types and industry standards like HDCP 2.2. ASUS is one of the brands that gets its cards whisper-quiet in operation and even turns the fan off at idle.