
Armari AMD Ryzen Threadripper 1950X versus Intel Core i9 7980XE – Shootout!

LuxMark 3.1

OpenCL is a platform for harnessing GPU power for activities other than real-time 3D rendering to screen, a practice known as general-purpose GPU computing (GPGPU). Unlike NVIDIA's proprietary CUDA platform, OpenCL is an open standard and can be implemented on anything with processing power. So drivers are available for CPUs as well, from both Intel and AMD.
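As a concrete illustration, the snippet below (a minimal sketch, assuming the third-party `pyopencl` package is installed) enumerates every OpenCL platform and device the runtime can see, which is how you would verify that a CPU driver is present alongside a GPU one:

```python
# Minimal sketch: list OpenCL platforms and devices via pyopencl (assumed
# installed). A CPU entry in the output means a CPU OpenCL driver is present.
def describe(platform_name, device_name, device_type):
    """Format one platform/device pair for display."""
    return "{} -> {} [{}]".format(platform_name, device_name, device_type)

try:
    import pyopencl as cl
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print(describe(platform.name, device.name,
                           cl.device_type.to_string(device.type)))
except ImportError:
    print("pyopencl is not installed")
```

On a system like the Intel one reviewed here, a missing CPU entry in this listing is exactly the symptom described below: the OpenCL runtime simply doesn't expose the processor as a compute device.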

A popular tool for testing OpenCL performance is LuxMark. We haven't run this on many workstations before, so we only have one comparison amongst our past reviews. We ran the Sala scene on CPU only, GPU only, and then both.

Unfortunately, Intel's Core i9 does not currently appear to be supported by its OpenCL drivers, so we were not able to run the LuxMark 3.1 render on the Intel system's CPU, or on the CPU and GPU combined, using OpenCL. However, we did run the barebones C++ version of the LuxMark benchmark on both systems.

You get 82 per cent more performance from the Ryzen Threadripper 1950X's 16 cores compared to a Ryzen 7 1800X's eight cores. In C++ mode, the Intel Core i9 7980XE gains 16 per cent from its two extra cores. The GPU scores are very interesting. The dual-GPU AMD Radeon Pro Duo in the V25R gives it a 12 per cent OpenCL advantage over the AMD Radeon Vega Frontier Edition, although the NVIDIA GeForce GTX Titan Xp is 14 per cent faster still.
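For reference, all the uplift figures quoted in this review are computed as (new − old) / old. The sketch below shows the arithmetic with hypothetical scores (the numbers are illustrative, not the actual LuxMark results):

```python
def percent_uplift(baseline, result):
    """Percentage improvement of `result` over `baseline`."""
    return (result - baseline) / baseline * 100.0

# Hypothetical LuxMark-style scores, purely to illustrate the arithmetic:
# a jump from 1,000 to 1,820 samples/sec is an 82 per cent uplift.
print(round(percent_uplift(1000, 1820)))  # prints 82
```

Keeping the baseline straight matters when reading the numbers: "82 per cent more" means the faster part scores 1.82x the slower one, not that the slower one is 82 per cent behind.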

Interestingly, it's swings and roundabouts where overall OpenCL performance is concerned between the V25R and S16T systems. What the latter gains in CPU performance it loses in GPU, giving an almost identical overall score. But the Radeon Vega is likely to be a better choice than the Radeon Pro Duo for a general workstation, because its modelling performance is significantly better.

LuxMark is a synthetic benchmark – it doesn't correspond directly to an application actually used in a production environment. It also uses OpenCL, the GPGPU API that is openly available to all hardware with drivers to run it.

But NVIDIA has its own proprietary GPGPU API called CUDA, which stands for Compute Unified Device Architecture. This name makes it sound as generally available as OpenCL, but in fact only NVIDIA graphics cards support it. There are a number of CUDA-enhanced 3D renderers out there, such as Octane Render, Redshift, and V-Ray.

The full list can be found on NVIDIA's website. But support for both CUDA and OpenCL is available in the latest version of the open source Blender, so let's turn to that application next.

Blender 2.79: Gooseberry Production Benchmark

Blender is a free and open source 3D creation suite. It supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing and motion tracking, even video editing and game creation. The latest version at the time of writing, 2.79, supports rendering on the GPU as well as the CPU. In GPU mode, it will render using OpenCL with AMD graphics cards, and CUDA with NVIDIA graphics cards. For this test, we used the Gooseberry Production Benchmark. Project Gooseberry is the code name for the Blender Institute's 6th open movie, Cosmos Laundromat — a 10-minute short, the pilot for the planned first-ever free/open source animated feature film. The benchmark renders a single frame from this film in intermediate quality.

The render results are a little surprising. The Intel CPU is 62 per cent faster than the AMD one, while the AMD CPU is itself 11 per cent faster than the same render run via OpenCL on the Radeon Vega Frontier Edition. The surprising bit came when we tried to run the render on the Titan Xp. This was using CUDA rather than OpenCL, and is clearly not very well optimised at the default settings (32 x 32-pixel tiles) because it took over ten times longer than the CPU.

However, a bit more research showed us that GPU rendering on Blender can be very sensitive to tile size, so we tried a much larger 256 x 256-pixel tile size instead. This improved the AMD Radeon Vega Frontier Edition's performance by 27 per cent, so that now it was quicker than the Threadripper CPU, but allowed the NVIDIA GeForce GTX Titan Xp to be nearly ten times faster. Although both GPUs are sensitive to tile size, the Titan Xp's performance is much more significantly improved by tweaking this setting, showing just how important optimising your settings can be. As before, the Intel-NVIDIA combination wins on raw performance, but you need to configure things correctly.
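The tile-size tweak described above can also be applied as a small configuration script in Blender's own Python console or a startup file. This is a sketch against Blender 2.79's `bpy` API (the `tile_x`/`tile_y` properties were removed in later Blender releases), and it only runs inside Blender, not a standalone interpreter:

```python
# Configuration snippet for Blender 2.79's embedded Python: switch Cycles
# to GPU rendering and set 256 x 256-pixel tiles, the size that suited the
# GPUs in this test. Runs inside Blender only.
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'   # render on the GPU rather than the CPU
scene.render.tile_x = 256     # larger tiles generally favour GPU rendering
scene.render.tile_y = 256     # smaller tiles (e.g. 32) tend to favour CPUs
```

The general rule of thumb this reflects: GPUs prefer fewer, larger tiles so each kernel launch has plenty of work, while CPUs prefer many small tiles so all cores stay busy until the end of the frame.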

Corona 1.3 Benchmark

Corona Renderer is a new high-performance (un)biased photorealistic renderer, available for Autodesk 3ds Max and as a standalone CLI application, and in development for Maxon Cinema 4D.

The development of Corona Renderer started back in 2009 as a solo student project of Ondřej Karlík at Czech Technical University in Prague. Corona has since evolved to a full-time commercial project, after Ondřej established a company together with the former CG artist Adam Hotový, and Jaroslav Křivánek, associate professor and researcher at Charles University in Prague.

Despite its young age, Corona Renderer has become a production-ready renderer capable of creating high-quality results. The Corona Benchmark outputs a single ray-traced frame from a sample production.

As with most of our other CPU tests, the Intel Core i9 is ahead of AMD's Ryzen Threadripper, in this case by 32 per cent.

Overall, it's no surprise that Intel's 18 cores beat AMD's 16 cores in most rendering tasks, despite the slightly lower clock speed of each one. However, the margin varies from 16 per cent in the LuxMark 3.1 C++ test to a whopping 62 per cent in Gooseberry, so there's no clear picture of how dominant the Core i9 is – it really depends on what you are running.

It's a similar story with AMD's graphics compared to NVIDIA's, and in fact even more pronounced. The GeForce GTX Titan Xp clearly has quite a bit more grunt than the Radeon Vega Frontier Edition, but if the GPGPU renderer isn't well optimised, this doesn't mean very much. AMD has put a lot of work into its OpenCL support, and its ProRender plugin for many professional applications really has a lot of potential to save massive amounts of production time.



37 comments

  1. Scappatella Col Morto

    fake benchmarks. sorry.. but isnt possible

  2. Then please, go ahead and post your own benchmarks of the two systems. Oh wait, you can’t. Keep your tin-hat “fake news” screaming to your self.

  3. Why does no one put a blu-ray player into their high end machines for these tests (or at all, even boutique companies make you do it as an upgrade)? If you have it hooked up to a monitor that also acts as your TV screen with HDMI connection, then why not make it your multimedia center like PS3/PS4/X-Box One was used for?

  4. AMD releases ThreadRipper, Intel Releases I9, AMD will release 12 nm new RyZens, seems like the competition and frog leaping is back.. great for customers..

    Intel was the king, then AMD was the king, then Intel was the king..

    One thing for sure, there is no more a single kind.. and that’s good because prices are going to go down in all categories.. no more monopolistic price hikes..

    Since AMD is back.. Cudo’s AMD…!!!

  5. If you think about it, it isnt a fair benchmarck.
    It should be every component at stock values. No OC.

  6. It seems like the ideal mix might be the Threadripper with the Titan XP.

  7. LOL ya thats what all the reviewers did with Ryzen 1600 vs 8400 no OC comparison then they say we will have OC results later.

  8. There is another variation with AMD that NVIDIA can’t compete with AMD. The Radeon Pro SSG, which contains 16 GB memory + 2 TeraBytes SSD highly integrated into the GPU..

    It can handle 8K, VR, 360 video stitching and other highly demanding graphics workloads like butter while the NVIDIA GPUs either crash with big data or just can’t perform smoothly..

    It is a $8000.00 GPU card, but if you want the best processing power for large data, you only have the high-end large-data AMD Radeon Pro SSG..

    CPU is not key here.. since most of these applications are GPU intensive and the CPU part is very secondary and the difference is negligible for these kinds of tasks when you have good enough CPUs form AMD or Intel.

  9. gimpeverydayvidia

    AMD is coming back to its old glory.

  10. Thanks for this nice review! Building a threadripper RX vega mgpu system myself this is interesting.

    I would like to point out that Armari could have read the QVL on ASRock Taichi and get better working G.Skill Flare-X memory for TR4 platform. It does provide benefit. 64GB 2933@CL14 is not impressive for a finely tuned system. It could be considered sloppy.

    Also selecting the Fatal1ty MB X399 instead would grant you 10GBit LAN.

  11. Hans Henrik Bergan

    AMD hasn’t been the king of CPUs since 2003ish Althon… And I don’t think they were ever the king of GPUs (nor were ATI)

  12. Phew, I’d like to know how many of these monsters they manufactured,

  13. 5850/5870 for a while when Nvidia had to postpone 470/480

  14. Possibly because people might just see optical drives as obsolete.
    I know I have since for about 14 years now and hadn’t really used a DVD or a Blu-ray.
    There are storage limitations to consider with optical disks vs massive SSD’s and HDD’s, plus the fact that its easier to damage your optical drives – whereas, it’s a lot cheaper to just get an external HDD (even portable ones top out today at 4TB) and just put loads of stuff there.

    Also, most movies and shows can easily have Blu-ray quality without requiring tons of space thanks to superior compression algorithms.

  15. AMD is coming back to its old glory.

  16. fake benchmarks. sorry.. but isnt possible

  17. Why? These are two production systems you can buy with three-year warranties. Armari will sell you precisely these systems.

  18. Fake account…

  19. Thanks! The Fatal1ty version of the board is quite a bit more expensive for just 10Gbit LAN, and although I’ve tested this in a previous ASRock board – it’s very quick – most people don’t have 10Gbit LANs so it’s a bit of a waste of money.

    Where the RAM is concerned, there is a bit of a RAM shortage in the UK right now. I’m absolutely certain Armari read the QVL. They have a direct line to ASRock and their suggestions for BIOS improvements are generally implemented immediately. But after testing, this was the best RAM that would work in the board with the BIOS at the time. During building, it appeared that you could either have faster RAM or a processor overclock, but not both at the same time. I’ve managed to run the system solidly at 3,066MHz, but it doesn’t make any difference to benchmark results. At 3,200MHz, it’s not entirely stable.

  20. Also, I just did a bit of research and G.Skill doesn’t have a Flare-X 3,200MHz kit as 4 x 16GB, only 2,933MHz, so I very much doubt it would be better than the Corsair memory used for this review.


  23. Well, G.Skill Flare-X is supposed to be gauranteed Samsung B-Die which means that on TR4 you can push it higher. 3.8GHz and 3200mhz @cl14-13-13-28-42-1T is not unheard of (4x 16GB Samsung B-die DIMM’s).

    http://cdn.overclock.net/d/d5/d50fd86a_HCI2.jpeg

    Corsair can and do change suppliers as they seem fit. A certain version of the RAM corresponds to Samsung B-die but it is never specified beforehand on the order or merchandise. Only when you open the box and check the Samsung B-die lists around the web you know if you got Hynix or Samsung E-die which can’t be pushed well on TR4 platform.


  25. AMD is still the absolute king here. Offering performance that high at such a lower price is the clear winner for me, hands down. Intel wants $2600 for that processor here in Canada, meanwhile I can get the top of the line Threadripper for $1200. More than double the price for marginal differences? No thanks.

  26. This screenshot is not my system but a well working prime/blender cunching system by chew.

    I would think 3.8GHz and 3200 cl 14-13-13 does fair better in performance than +200 on core. The Infinity fabric does depend on entirely on your ramspeed timings and frequency so what we usually find is that increasing RAM mhz with lower timings on TR4 does more than mhz numbers on a simple core clock.

  27. I know Ryzen gets more benefit from faster RAM than Intel appears to, but I’d love to see the benchmarks to prove what you are saying. We have found the opposite of what you are claiming. When I set the RAM on the Threadripper system to 3,066MHz and left the CPU at the same 4GHz, the Blender benchmark was EXACTLY THE SAME. And, I mean, to within a second. I’ll believe what you’re saying if you can show me benchmarks. Until then it’s uncorroborated theory.

  28. That makes no sense as both systems were tested with their off-the-shelf settings.

  29. Because it’s cheaper to buy a standalone bluray player for a TV that includes the correct codecs – Windows 10 doesn’t ship with the ones needed to natively play bluray discs.

  30. Good points Petar, though I am worried about the FCC deregulating internet access where the internet companies who don’t enjoy competition and love profits (like Comcast) will now put in paywalls for video websites (like Comcast already did for Netflix, making the company pay more so Comcast wouldn’t slow down the connection to the website to its customers. If Ajit V. Pai gets his way, we will have to pay internet companies twice (or maybe more than one company), one price for access to the internet and another price (or prices) to be permitted full speed access to the websites! I’d rather have a bluray player on my computer for both purchasing games and movies for the probable situation of Ajit V. Pai putting profits of cable (and other internet access) companies over the consumers wishes for net neutrality, and making me pay more to my internet company for access to websites I already paid them both to access!

  31. They’re roughly the same price mazty for either external or internal (sometimes the internal is cheaper because the HDMI connection and other things an external bluray player contains [like on the Playstation 3/4 for example] is already integrated into the computer), and the software to play the bluray disks comes with the software on the drives. So why aren’t these companies putting a $70 – $80 bluray player (as opposed to $25 – $50 for DVD-RW) inside a $3000 – to possibly as high as a $10000 machine?!

  32. I don’t like competition in general as it generates worst possible qualities.
    Also, it effectively makes companies come out with products that are basically identical to each other except with differences in features.

    Collaboration on the other hand from companies or even people at large to create highly innovative technologies and content by freely sharing ideas and removing planned obsolescence would be better.
    Couple it with recycling to harvest raw materials from older technology so you can make new one (and eliminate need for resource harvesting from the environment) and just put out the BEST that is possible using latest science.
    Current consumer technology is DECADES behind latest scientific knowledge and ‘competition’ is keeping us there artificially.

    Look at all the wonderful patents from several decades ago that we had the ability to turn into usable technology a LONG time ago.
    Instead, you mainly see people ‘protecting’ their intellectual rights as if their lifeline depends on it… and that’s the problem of a socio-economic system we live in, because it generates mistrust, competition and worst qualities in humans coupled with artificially induced scarcity.

    Profit and indefinite growth on a finite planet is the reason we are in this mess.

    Think of how much more we can achieve through free exchange of ideas, resources, etc… not for profit or competition but for preservation and restoration of the environment, advancement of the whole human species combined with increasing the living standard for everyone (not just select few).

    Until we change the current socio-economic system, the outdated rules in place will continue to exist and slow us down at every conceivable turn.

    As for people being forced to pay more to the ISP… that’s already happening as is.
    It’s a systemic issue at large that leads to these conditions, not lack of Blu-Ray or optical drives in general.

    Fact is, it is a lot more effective, faster and just overall better to store content on HDD’s for example because of storage capacity, lack of being able to easily damage the data, etc.
    Well, damaging the data can be relative depending on the situation… you can hit the computer or magnetize it if you wanted to, but most people don’t do that to their optical discs, let alone HDD’s.

  33. Hawaii among many others would like to say high (290/290X). Might wanna study up on GPU history a little more. Beat the crap outta Kepler in 2012 and nowadays Big Kepler (780 Ti) isn’t anywhere even freaking close (I’m talking entire tiers apart). Heck Hawaii nearly managed to go near toe to toe with the majorly revamped and drastically improved Maxwell 2 (GTX 980. Hawaii thrashed the 970) the year later with pretty much no changes whatsoever. I dunno if I can name another GPU arch in history that has aged as wonderfully as Hawaii. It just keeps kicking arse, year after year after year, while other cards fall to the wayside.

  34. And that’s far from the only one. People get serious selective amnesia when thinking about Nvidia vs AMD/ATI.

  35. The thing is, it does not really matter what was at the end of the day. It may sway your preference but what is always the most important is what products are out now or in the near future and who can deliver the best product for YOU regardless of their past failures/successes.

  36. Hans Henrik Bergan

    Marginal? Most games only use 1-2 cores. Some AAA games are optimized to use up to 4 cores, but not more than that. The css renderer in your browser use only 1 core (Firefox 57 is about to release the world’s first multithreaded css renderer, but it’s not out yet as of speaking), and as a programmer, I can assure you that optimizing programs for multiple cores is very difficult and hardly ever worth it. Thus, for the vast majority of programs, it doesn’t matter how many cores yiu

  37. *are manufacturing.
    This is an active production product, with very hefty & growing enterprise demand (for a uniquely niche, $8000 professional GPU that is). You might be thinking of it’s Fiji (Fury/Fury X/Nano) based predecessor which had the same Radeon Pro SSG name as this Vega model which replaced it. As far as that card goes, iirc AMD never released it into open sale so examples in the wild are few & far between. It was more of a “selectively sampled proof of concept” that let those major scientists & companies test and confirm the idea’s potential to drum up demand in advance of it’s much more refined and more widely available successor; which my all accounts appears to have worked exactly how AMD/RTG planned (for many Uber memory intensive workloads like editing/scrubbing 8K video footage, it’s literally the only GPU on the market that is able to handle them).