Palit ships its GTX 950 StormX Dual in typical packaging for the company. A driver CD and a manual are supplied.
The card measures 220mm in length due to a slight overhang from Palit's cooler. The cooler is a dual-slot design and should fit comfortably inside many smaller mATX or gaming mITX cases. The light blue colour of the cooler shroud should enhance a blue-themed build, although it may clash with other colour themes.
Palit's cooler switches its fans off completely when the GPU core runs below a certain temperature, so when you are browsing the web or even doing some light gaming, the fans enter their 0dB mode. That is a positive for users trying to build a quiet system that still cools well while gaming.
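As a rough illustration of how such semi-passive control tends to work, here is a minimal sketch in Python; the hysteresis logic is the general technique, and the temperature thresholds are hypothetical rather than Palit's actual firmware values.

# Minimal sketch of semi-passive ("0dB") fan control with hysteresis.
# The 55/60C thresholds are illustrative guesses, not Palit's firmware values.

FAN_ON_TEMP_C = 60   # spin the fans up at or above this core temperature
FAN_OFF_TEMP_C = 55  # drop back to 0dB only once the core cools to this

def update_fans(core_temp_c: float, fans_running: bool) -> bool:
    """Return whether the fans should be running after this polling tick."""
    if not fans_running and core_temp_c >= FAN_ON_TEMP_C:
        return True           # gaming load pushed the GPU past the trip point
    if fans_running and core_temp_c <= FAN_OFF_TEMP_C:
        return False          # light desktop work: back into silent mode
    return fans_running       # the gap between thresholds prevents rapid toggling

# Example: browsing the web at 42C keeps the card silent.
print(update_fans(42.0, fans_running=False))  # False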
Unfortunately for budget-conscious enthusiasts who still value aesthetics, Palit uses a brown PCB rather than a black one. The colour shows up in a well-lit environment, and with white LED strips becoming increasingly popular in cases, it may well be noticeable inside a build.
I do not like seeing stickers on GPU retention bracket screws. This approach, designed to discourage users from removing the cooler, frustrates me. Why should cleaning dust out of the card's heatsink potentially void my warranty if I know what I am doing?
There is no backplate, though that is to be expected in this price range.
A single 6-pin power connector feeds the card, whose GTX 950 GPU is rated for a 90W TDP. Palit uses the power overhead to ramp up the factory-shipped clock speeds.
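The arithmetic behind that overhead is simple: the PCIe specification allows 75W from the x16 slot and another 75W from a 6-pin connector, leaving roughly 60W above the GPU's rated TDP. A quick sanity check:

# Power budget for a single 6-pin GTX 950 (PCIe spec limits, not measured values).
PCIE_SLOT_W = 75    # maximum a PCIe x16 slot must supply
SIX_PIN_W = 75      # maximum from one 6-pin PCIe connector
GTX_950_TDP_W = 90  # Nvidia's rated TDP for the GTX 950

available = PCIE_SLOT_W + SIX_PIN_W
headroom = available - GTX_950_TDP_W
print(f"Available: {available}W, TDP: {GTX_950_TDP_W}W, headroom: {headroom}W")
# Available: 150W, TDP: 90W, headroom: 60W -- room for Palit's factory overclock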
A single SLI finger permits 2-way SLI. This could be an important factor for gamers who buy the GTX 950 now and may want a cheap-and-cheerful upgrade in a few months' or years' time.
Outputs are provided in the form of dual-link DVI-D, DVI-I, HDMI 2.0, and DisplayPort 1.2. I think this is the ideal configuration for a card at the GTX 950's performance level and price; gamers with an older secondary monitor can use VGA via the DVI-I port, and those interested in 4K have a choice of HDMI 2.0 or DisplayPort 1.2.
HDMI 2.0 support, providing 4K output at 60Hz, is a big deal for media enthusiasts and those who may want to drive a pair of 4K monitors (primarily for work or light gaming). AMD's competing R7 370 does not offer HDMI 2.0, a point that has not gone unnoticed by potential customers.
The GTX 950 GPU supports up to four simultaneous display outputs.
Palit extends the small PCB slightly past the 6-pin power connector in order to provide additional rigid mounting area for the dual-fan cooler.
Four Samsung GDDR5 chips form the 2GB of VRAM, while two empty memory chip spaces point towards Palit recycling an older or upcoming PCB design. Four power delivery phases are used and seem to be arranged in a 3+1 configuration.
A pair of 80mm fans (75mm blade-area diameter) force air through the heatsink. Palit says that the design of these fans is borrowed from turbojet engines in the aerospace industry (turbofan would probably be the more accurate reference). The point relates to the blade twist, which helps enhance airflow capacity.
A solid aluminium block and fin array remove heat from the GPU and allow it to be dissipated. The design is clearly optimised towards cost-effectiveness: Palit relies upon the conductivity of aluminium and the fin array design to spread heat across the cooler, rather than using heatpipes to transfer thermal energy.
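To put rough numbers on that trade-off, Fourier's law (Q = kAΔT/L) shows why heatpipes, whose effective conductivities sit orders of magnitude above solid aluminium, move heat so much faster; the geometry below is purely illustrative, not measured from this card.

# Rough comparison of heat spreading via Fourier's law: Q = k * A * dT / L.
# Geometry and the heatpipe conductivity value are illustrative assumptions.
def watts_conducted(k_w_mk: float, area_m2: float, delta_t_k: float, length_m: float) -> float:
    return k_w_mk * area_m2 * delta_t_k / length_m

area, d_t, length = 20e-6, 10.0, 0.05   # 20mm^2 path, 10K gradient, 50mm run
print(f"Aluminium: {watts_conducted(205, area, d_t, length):.2f} W")     # ~0.82 W
print(f"Heatpipe:  {watts_conducted(20_000, area, d_t, length):.2f} W")  # ~80 W
# A heatpipe's effective conductivity (commonly cited at 10,000+ W/mK) dwarfs
# solid aluminium (~205 W/mK), which is why budget coolers lean on fin area instead.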
Out-of-the-box operating frequencies for Palit's card are slightly higher than Nvidia's reference values. The core runs at 1064MHz (40MHz greater), boost is rated at 1241MHz (53MHz greater), and memory sits at 1653MHz (6612MHz effective, a whisker above the 6600MHz reference). We recorded a maximum core boost frequency of 1291MHz during gaming in our well-cooled chassis.
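For reference, GDDR5 transfers four bits per pin per command-clock cycle, which is where the "effective" figure comes from; combined with the GTX 950's 128-bit bus, the resulting bandwidth works out as follows.

# GDDR5 moves data at 4x the memory command clock (double data rate on a
# double-pumped I/O clock), so 1653MHz works out to 6612MHz effective.
MEM_CLOCK_MHZ = 1653
BUS_WIDTH_BITS = 128  # the GTX 950's memory bus width

effective_mhz = MEM_CLOCK_MHZ * 4
bandwidth_gbs = effective_mhz * 1e6 * (BUS_WIDTH_BITS / 8) / 1e9
print(f"{effective_mhz}MHz effective, {bandwidth_gbs:.1f} GB/s")
# 6612MHz effective, 105.8 GB/s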
OK people, what do you think about this great explanation of why AMD should do better than Nvidia under DirectX 12 thanks to superior support for asynchronous shaders? Note that this is not my argument, but it seems well argued.
First, the source: http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/400#post_24321843
Well, I figured I'd create an account in order to explain what you're all seeing in the Ashes of the Singularity DX12 benchmarks. I won't divulge too much of my background information, but suffice to say that I'm an old veteran who used to go by the handle ElMoIsEviL.
First off, nVIDIA is posting its true DirectX 12 performance figures in these tests. Ashes of the Singularity is all about parallelism, and that's an area where Maxwell 2, although it does better than previous nVIDIA architectures, is still inferior when compared to the likes of AMD's GCN 1.1/1.2 architectures. Here's why…
Maxwell's Asynchronous Thread Warp can queue up 31 compute tasks and 1 graphics task. Now compare this with AMD's GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 compute tasks, for a total of 64, coupled with 1 graphics task from the Graphics Command Processor. See below:
http://cdn.overclock.net/4/48/900x900px-LL-489247b8_Async_Aces_575px.png
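A back-of-envelope way to see the queue-depth gap the diagram shows (using the counts as described above):

# Command-queue capacity as described in the post (Maxwell 2 vs GCN 1.1/1.2).
gcn_aces = 8
gcn_queues_per_ace = 8

maxwell2_compute_queues = 31                          # plus 1 graphics queue
gcn_compute_queues = gcn_aces * gcn_queues_per_ace    # 8 ACEs x 8 queues = 64

print(f"Maxwell 2:   {maxwell2_compute_queues} compute + 1 graphics")
print(f"GCN 1.1/1.2: {gcn_compute_queues} compute + 1 graphics")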
Each ACE can also apply certain post-processing effects without incurring much of a performance penalty. This feature is heavily used for lighting in Ashes of the Singularity. Think of all of the simultaneous light sources firing off as each unit in the game fires a shot, or the various explosions which ensue, as examples.
http://cdn.overclock.net/8/89/900x900px-LL-89354727_asynchronous-performance-liquid-vr.jpeg
This means that AMD's GCN 1.1/1.2 is better adapted to handling the increase in draw calls now being made by the multi-core CPU under DirectX 12.
Therefore, in game titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD's GCN 1.1/1.2 should do very well, provided they do not hit a geometry or rasterizer (ROP) bottleneck before nVIDIA hits its draw call/parallelism bottleneck. The picture below highlights the draw call/parallelism superiority of GCN 1.1/1.2 over Maxwell 2:
http://cdn.overclock.net/7/7d/900x900px-LL-7d8a8295_drawcalls.jpeg
A more efficient queueing of workloads, through better thread parallelism, also enables the R9 290X to come closer to its theoretical compute figures, which just happen to be ever so slightly shy of those of the GTX 980 Ti (5.8 TFLOPS vs 6.1 TFLOPS respectively), as seen below:
http://cdn.overclock.net/9/92/900x900px-LL-92367ca0_Compute_01b.jpeg
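Those theoretical figures fall out of the standard peak-FLOPS formula (shaders x 2 FMA ops x clock); the clock values below are assumptions picked to roughly reproduce the quoted 5.8 and 6.1 TFLOPS, not official specs:

# Peak single-precision throughput: shaders x 2 FLOPs (fused multiply-add) x clock.
# Clock values here are assumptions chosen to match the post's ~5.8 vs ~6.1 TFLOPS.
def peak_tflops(shaders: int, clock_mhz: int) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(f"R9 290X:    {peak_tflops(2816, 1030):.2f} TFLOPS")  # ~5.80
print(f"GTX 980 Ti: {peak_tflops(2816, 1075):.2f} TFLOPS")  # ~6.05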
What you will notice is that Ashes of the Singularity is also quite hard on the rasterizer operators, highlighting a rather peculiar behavior: an R9 290X, with its 64 ROPs, ends up performing near the same as a Fury X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):
http://cdn.overclock.net/b/bd/900x900px-LL-bd73e764_Compute_02b.jpeg
As for the folks claiming a conspiracy theory: not in the least. The reason AMD's DX11 performance is so poor under Ashes of the Singularity is that AMD did literally zero optimizations for that path. AMD is clearly looking to sell asynchronous shading as a feature to developers because its architecture is well suited to the task. It doesn't hurt that it also costs less in terms of driver research and development. Asynchronous shading allows GCN to hit near full efficiency without requiring any driver work whatsoever.
nVIDIA, on the other hand, does much better at serial scheduling of workloads (anything prior to Maxwell 2 is limited to serial scheduling rather than parallel scheduling). DirectX 11 is suited to serial scheduling, so naturally nVIDIA has an advantage under DirectX 11. In this graph, provided by AnandTech, you have the correct figures for nVIDIA's architectures (from Kepler to Maxwell 2), though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):
http://www.overclock.net/content/type/61/id/2558710/width/350/height/700/flags/LL
People are wondering why nVIDIA is doing a bit better in DX11 than DX12. That's because nVIDIA optimized its DX11 path in its drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations because the game engine speaks almost directly to the graphics hardware, so none were made. nVIDIA is at the mercy of the programmers' talents as well as its own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing with the nVIDIA numbers is the nVIDIA draw call bottleneck showing up under DX12.

nVIDIA works around this in DX11 with its own optimizations, by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which recompiles and replaces, on a per-game basis, shaders which are not fine-tuned to its architecture. nVIDIA's driver is also multi-threaded, making use of idling CPU cores to recompile/replace shaders. The work nVIDIA does in software, under DX11, is the work AMD does in hardware, under DX12, with its Asynchronous Compute Engines.
But what about poor AMD DX11 performance? Simple. AMD's GCN 1.1/1.2 architecture is suited towards parallelism, and it requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for the Fury X), as DX11 is limited to 1-2 cores for the graphics pipeline (cores which also need to take care of AI, physics etc). Replacing or recompiling shaders is not a solution for GCN 1.1/1.2 because AMD's Asynchronous Compute Engines are built to break complex workloads down into smaller, easier-to-process workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel… in come Mantle, Vulkan and DirectX 12.
People wondering why the Fury X did so poorly at 1080p in DirectX 11 titles? That's your answer.
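If you want to picture the serial-vs-parallel feeding difference, here's a toy sketch using plain Python threads as stand-ins for CPU cores; this is illustrative only, not real D3D11/D3D12 API code:

# Toy model of the submission difference, not actual graphics API calls.
from concurrent.futures import ThreadPoolExecutor

DRAW_CALLS = list(range(10_000))

def submit(call_id: int) -> int:
    # Stand-in for encoding one draw call into a command buffer.
    return call_id

# DX11-style: a single "render thread" feeds the GPU, so one CPU core
# becomes the bottleneck no matter how wide the GPU's front end is.
serial_results = [submit(c) for c in DRAW_CALLS]

# DX12/Vulkan/Mantle-style: several threads record and submit work in
# parallel, which is what keeps GCN's many compute queues fed.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(submit, DRAW_CALLS))

assert sorted(parallel_results) == serial_results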
A video which talks about Ashes of the Singularity in depth:
https://www.youtube.com/watch?v=t9UACXikdR0
PS. Don't count on better DirectX 12 drivers from nVIDIA. DirectX 12 is closer to the metal, and it's all on the developer to make efficient use of both nVIDIA's and AMD's architectures…