
Nvidia GeForce Titan X 12GB: What you need to know and expect

Nvidia Corp. unexpectedly unveiled its highest-performing single-chip gaming graphics card at the Game Developers Conference on Wednesday. The new GeForce GTX Titan X will arrive on store shelves in the coming weeks as the pinnacle of the “Maxwell” architecture. Let’s recap what we already know about the GeForce GTX Titan X and what we should expect from it.

The Nvidia GeForce GTX Titan X graphics card with 12GB of onboard GDDR5 memory will be powered by the code-named GM200 graphics processor, which contains over eight billion transistors. The GPU, also known as “Big Maxwell”, is the most complex graphics processor ever made. The 12GB of memory indirectly confirms that the graphics processing unit sports a 384-bit memory bus, which, given the “Maxwell” architecture (and its ROP-to-memory-controller ratio), points to 96 raster operations units (ROPs). It was earlier reported that the GM200 would feature 3072 stream processors and 192 texture units, which appears to be accurate.
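The inference from memory capacity to bus width and ROP count is simple arithmetic. A minimal sketch, assuming 1GB per 32-bit GDDR5 channel (e.g. two 4Gb devices per channel) and the Maxwell ratio of 16 ROPs per 64-bit memory controller (as on the GM204, where a 256-bit bus pairs with 64 ROPs):

```python
# Back-of-the-envelope check of the article's GM200 figures (a sketch;
# the per-channel capacity and ROP ratio are the assumptions stated above).
BUS_WIDTH_BITS = 384
GB_PER_32BIT_CHANNEL = 1          # e.g. two 4Gb GDDR5 devices per channel
ROPS_PER_64BIT_CONTROLLER = 16    # Maxwell ratio (GM204: 256-bit -> 64 ROPs)

channels = BUS_WIDTH_BITS // 32               # 12 memory channels
memory_gb = channels * GB_PER_32BIT_CHANNEL   # 12 GB total
rops = (BUS_WIDTH_BITS // 64) * ROPS_PER_64BIT_CONTROLLER  # 96 ROPs

print(memory_gb, rops)  # 12 96
```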


The “Big Maxwell” is projected to offer around 33 – 50 per cent higher performance than the GeForce GTX 980, given its configuration, depending on the application. However, keep in mind that Nvidia’s “Maxwell” was not designed to handle professional computing tasks and therefore does not support native double-precision (FP64) compute capabilities. As a result, even the GM200 will not be able to beat its predecessor, the GK110, in high-performance computing tasks (e.g., simulations) that require FP64.


If Nvidia managed to maintain the transistor density of the code-named GM204 graphics processor with the GM200, then the chip itself should be pretty large, about 615mm², which will result in extremely high cost. If the GM200’s die size is indeed 615mm², then a 300mm wafer can hold only around 95 such chips. TSMC’s revenue per wafer processed using its 28nm fabrication technology was $5850 last year. Therefore, even if the GM200’s yield rate were 100 per cent (all dies good and able to run in full configuration), one chip would cost Nvidia over $61 before testing and packaging. At 85 per cent yield, one GM200 chip costs the GPU designer around $73 before testing and packaging.
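The per-die cost figures above follow directly from the wafer revenue and die count. A rough sketch using the article's own figures (this is a simple averaging model, not an official costing methodology):

```python
# Rough cost-per-die arithmetic from the article's figures (a sketch,
# not an official costing model).
WAFER_REVENUE_USD = 5850   # TSMC 28nm revenue per 300mm wafer (article figure)
DIES_PER_WAFER = 95        # ~615 mm^2 dies per 300mm wafer (article figure)

def cost_per_good_die(yield_rate: float) -> float:
    """Spread the wafer cost over the good dies only."""
    good_dies = DIES_PER_WAFER * yield_rate
    return WAFER_REVENUE_USD / good_dies

print(round(cost_per_good_die(1.00), 2))  # ~61.58 at perfect yield
print(round(cost_per_good_die(0.85), 2))  # ~72.45 at 85% yield
```

Note that lowering yield only scales the denominator, which is why the jump from 100 to 85 per cent yield adds roughly $11 per chip.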

The GeForce GTX Titan X graphics board will carry 12GB of GDDR5 memory running at 7GHz. 12GB of memory is not only excessive for today’s video games, but also very expensive. For some reason, Nvidia decided not to use SK Hynix’s and Micron’s premium GDDR5 components capable of running at 8GHz, but even the current memory sub-system provides 336GB/s of bandwidth, which is a lot. Keeping in mind that Nvidia’s “Maxwell” GPUs use memory bandwidth very efficiently, the new GeForce GTX Titan X should demonstrate remarkable performance at ultra-high-definition resolutions, such as 3840×2160.
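The 336GB/s figure can be verified directly: effective data rate per pin times bus width, divided by eight bits per byte. A quick sanity check using the article's numbers:

```python
# GDDR5 bandwidth sanity check (a sketch; figures taken from the article).
EFFECTIVE_RATE_GBPS = 7   # 7 GHz effective data rate per pin
BUS_WIDTH_BITS = 384      # 384-bit memory interface

bandwidth_gb_s = EFFECTIVE_RATE_GBPS * BUS_WIDTH_BITS / 8
print(bandwidth_gb_s)  # 336.0 GB/s, matching the article
```

The same arithmetic shows what 8GHz chips would have bought: 8 × 384 / 8 = 384GB/s, about 14 per cent more bandwidth.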

The ultra-expensive GPU and the large amount of memory will naturally make the new Titan X a very expensive graphics card. It was previously reported that the board will cost $1349 at retail when it becomes available.

Keep in mind that many of the details about the GeForce GTX Titan X are yet to be confirmed by Nvidia at the GPU Technology Conference later this month.


KitGuru Says: The real surprise about the GeForce GTX Titan X is that Nvidia decided to install 12GB of GDDR5 memory on a consumer-class graphics card. This will ensure that the board’s performance will not be limited by memory capacity any time soon, but it naturally increases the price. What remains to be seen is how high the new Titan X’s performance will be in real-life applications. While the board will clearly leave existing graphics solutions behind, it is not completely clear whether it is worth $1349.



17 comments

  1. I say it will cost $1500 with the reference cooler and won’t allow aftermarket cooling solutions. The question is, is there really 12GB of VRAM?

  2. For the price tag of this (knowing Nvidia) I could probably build one or two complete high-end gaming rigs…

  3. Grzesio Ziąbkowski

    12GB = 11.5GB, of course?

  4. …so, about 80% faster than a single 970, which makes it roughly as fast as my dual 970 setup.

    $1300…no thanks.

    I’m sure that 12GB of VRAM will come in useful one day, but by then it will probably be too slow to run the games being released.

  5. Wrong, performance doesn’t stack like that. You might get a 40% performance increase with two 970s over a single 970. That may change with DX12, but don’t hold your breath. And unless Nvidia is being economical with the truth again, it’ll have a full, uncompromised 12GB of GDDR5, unlike your 970’s gimped 4GB.

  6. Console ports will find a way with uncompressed assets.

  7. If it’s the full GM200 then probably yes. If Nvidia releases something like a GTX 780 to this Titan, a lower-binned GM200 chip, chances are it will be 6GB of VRAM, but then it definitely won’t be a full 6GB. Probably 5.5 + 0.5, and then they’ll state how it would have been a 5GB card if not for VRAM-stretching tech.

    Please AMD, just release 6GB commercial, cheap, affordable, high-performance GPUs already.

  8. Did you not read it??
    “However, keep in mind that Nvidia’s “Maxwell” was not designed to handle professional computing tasks, therefore, it does not support native double precision FP64 compute capabilities. ”
    This is just a MASSIVE gaming card

  9. Μιχάλης Κιουλέπογλου

    It might be expensive, but it is Nvidia. You also pay for the brand. And if you don’t like it, don’t buy it, or build your own Titan X. It’s for enthusiasts, those who buy crazy stuff like custom cables and LEDs anyway.

  10. Sorry but it’s split into 6 platters: 3 x 3.5GB & 3 x 0.5GB.
    The bandwidth is also split, so 1.5GB of the total is almost unusable.

  11. Deleted

  12. WOW, only 40%? With the new drivers, AMD cards are getting 92 – 99% scaling from adding a second GPU.

  13. Does it really matter? There are no games out that even come close to using 12GB.

  14. Depends on the game. With my two PowerColor PCS+ OC R9 290s, sometimes I get a 60-80% boost using both GPUs; sometimes it’s barely 20%, or the difference is negligible. In DAI there is no performance boost from using two R9 290s due to the game’s poor Mantle implementation.

    Right now, though, whether you have AMD or Nvidia, performance and memory do not stack directly, as I said. MS claims DX12 will change that; whether or not it actually happens, we’ll have to wait and see.

  15. You know, there are some people out there who don’t just game…? For a video workstation this card is ideal. Most applications can’t make use of SLI and don’t need double precision, but can make use of OpenCL commands when there are two chips on the same board.

  16. Sapphire has an 8GB 290X Toxic. Yes, it is not reference, but it is an AMD GPU.