AMD readies three new GPUs: Greenland, Baffin and Ellesmere

In recent years, Advanced Micro Devices has reduced the number of new graphics processors it releases per annum, which has led to massive erosion of its market share and revenue. While the company hopes that its latest product lineup will help it regain some of the lost share and improve earnings, the firm pins considerably more hope on an all-new family of products due to be released in 2016.

AMD’s code-named “Arctic Islands” family of products will include three brand-new chips – “Greenland”, “Baffin” and “Ellesmere” – a source with knowledge of AMD’s plans said. “Greenland” will be the new flagship offering for performance enthusiasts, whereas “Baffin” and “Ellesmere” will target other market segments, such as high-end and mainstream. It is unclear whether the “Arctic Islands” family will absorb any existing products, but it is possible that AMD may address certain markets with previous-generation products.

[Image: AMD Radeon artwork]

“Greenland” will be AMD’s first graphics processing unit based on an all-new micro-architecture, whose development began a little more than two years ago. While the architecture is currently described as another iteration of GCN, the new ISA [instruction set architecture] will differ so considerably from the existing GCN that it has every right to be called “post-GCN”, the source said. It is likely that “Greenland” will retain the general layout of contemporary AMD Radeon graphics processing units, but there will be significant changes at a deeper level.

The only thing officially known about the new architecture, which Mark Papermaster, chief technology officer of AMD, calls the next iteration of GCN, is that it is projected to be two times more energy efficient than the current GCN. Essentially, this implies major performance enhancements at the ISA level. Since the “Greenland” graphics processing unit will be made using either a 14nm or a 16nm FinFET process technology, expect it to feature a considerably larger number of stream processors than “Fiji”.

[Image: AMD Radeon “Fiji” GPU]

The “Greenland” graphics processor will rely on second-generation high-bandwidth memory (HBM2), so expect ultra-high-end graphics cards and professional solutions with up to 32GB of DRAM onboard and bandwidth of up to 1TB/s. Consumer-class “Greenland”-based products will likely come with 8GB – 16GB of memory. Due to the use of HBM, expect the “Greenland” chip and upcoming graphics cards based on it to resemble the currently available AMD Radeon R9 Fury-series adapters.
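
As a rough sanity check on those figures: the capacity and bandwidth numbers follow from the published HBM2 specification (roughly 2Gbps per pin over a 1,024-bit interface and up to 8GB per stack) combined with a four-stack layout like the one “Fiji” already uses. The per-stack values below come from the HBM2 spec, not from the source, so treat this as a back-of-the-envelope sketch:

    # Back-of-the-envelope HBM2 capacity/bandwidth check (assumed spec values)
    GBPS_PER_PIN = 2.0        # HBM2 signalling rate, Gbit/s per pin (spec maximum)
    PINS_PER_STACK = 1024     # interface width per stack, in bits
    GB_PER_STACK = 8          # an 8-Hi stack of 8Gbit dies
    STACKS = 4                # same stack count as Fiji's HBM1 layout

    bandwidth_per_stack = GBPS_PER_PIN * PINS_PER_STACK / 8    # GB/s
    total_bandwidth = bandwidth_per_stack * STACKS             # GB/s
    total_capacity = GB_PER_STACK * STACKS                     # GB

    print(f"{bandwidth_per_stack:.0f} GB/s per stack, "
          f"~{total_bandwidth / 1000:.0f} TB/s and {total_capacity} GB in total")
    # -> 256 GB/s per stack, ~1 TB/s and 32 GB in total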

The number of transistors inside “Greenland” as well as its die size are unknown. Since 14nm/16nm FinFET manufacturing technologies offer considerably (up to 90 per cent) higher transistor density than TSMC’s contemporary 28nm fabrication process, it is logical to expect the new flagship product to feature 15 – 18 billion transistors if it retains a die size of around 600mm², like “Fiji”.
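
A quick back-of-the-envelope estimate shows where that range comes from. Assuming “Fiji” has roughly 8.9 billion transistors on a ~596mm² die and taking the up-to-90-per-cent density uplift at face value:

    # Rough transistor-count estimate for a Fiji-sized die on 14nm/16nm FinFET
    FIJI_TRANSISTORS = 8.9e9   # Fiji, 28nm
    FIJI_DIE_MM2 = 596         # Fiji die size, mm^2
    DENSITY_UPLIFT = 1.9       # "up to 90 per cent" higher density than 28nm

    density_28nm = FIJI_TRANSISTORS / FIJI_DIE_MM2           # transistors per mm^2
    estimate = density_28nm * DENSITY_UPLIFT * FIJI_DIE_MM2  # same die size assumed

    print(f"~{estimate / 1e9:.1f} billion transistors at ~{FIJI_DIE_MM2} mm^2")
    # -> ~16.9 billion, i.e. within the 15 - 18 billion range quoted above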

It is believed that AMD has already taped out its “Greenland” graphics processing unit and is set to receive the first silicon in the coming weeks.

Not a lot is known about “Baffin” and “Ellesmere”. The source stressed that both GPUs are brand new and will be designed from scratch. Since “Baffin” and “Ellesmere” are named after bigger and smaller islands in Canada, it is likely that the former is a mainstream graphics chip with a moderate die size, whereas the latter is a small entry-level GPU. AMD began work on “Ellesmere” about a year ago.

Expect AMD to begin talking about its next-generation graphics architecture in the coming months.

AMD did not comment on the story.

KitGuru Says: As usual, everything looks very good on paper. If AMD manages to release three brand-new chips within a reasonable timeframe in 2016, and if those chips are competitive against Nvidia’s, then it has every chance to win back market share. However, keep in mind that the “next GCN” or “post-GCN” will not compete against Nvidia’s “Maxwell”, but against Nvidia’s “Pascal”, which promises to be very powerful. As a result, expect 2016 to be an interesting year for GPUs…

87 comments

  1. Nothing but damage control, considering their beyond awful market-share numbers from earlier today. I really love ATi/AMD, but with them being so close to the edge it really makes me wonder whether buying a card from them is a risky proposition with regard to driver support in the near/medium term!!

  2. whereas the “latter”

  3. Or just the opposite: they have improved a lot in the last year, they have finally made cards which can compete head-to-head against Nvidia’s (which forced the green team to come up with the awful gamelocks to cripple AMD cards’ performance), and driver support is becoming a lot better.

    But I see it is easy to bribe people with “free” games (like the Batman AK and MGSV promos) and “features” (gamelocks) while securing a de facto monopoly.

    If AMD goes out of business, I’ll go out of PC gaming. If I wanted to be under a monopoly, I’d rather be a console gamer.

    I’m just shocked that the PC community loves an open platform but gets bribed so easily into blatantly supporting companies which are putting more and more locks on it.

  4. All the machinery is back in action again… making as much smoke as it can, as quickly as possible, hahahaha. There is 82 per cent of the market to recover… when they alone have fallen to 18 per cent, dropping another 5 per cent in a matter of months… This series was a failure, and now it is being redone all over again.

  5. Your last point about putting locks on an otherwise open platform is very true.

  6. Gotta agree. Intel wasn’t innocent in the past either: they actually used money (and lost money) to bribe OEMs not to use AMD processors until AMD was so cash-starved that it couldn’t compete on R&D anymore.

    It was illegal and they were convicted of it in multiple courts in multiple countries, but far too late and with far too little in monetary terms to actually turn the tide.

  7. So in your opinion this release of information is purely coincidental?

  8. Will it be an overclockers’ dream?

  9. Ahh…. well said

  10. Completely agree. Impossible to tell a greater truth

  11. No, it’s easy to see: that’s marketing, a useful and legal tool.

  12. AMD missed the opportunity to fight back with their advantage of first access to HBM memory.
    Hope they don’t miss it again with HBM2.
    I’m more on the red team, but the “Overclockers’ Dream” claim for the Fury is a shame.

  13. Nope, that’s just a stock-frequency 290X beating a 980 Ti:
    https://a.disquscdn.com/uploads/mediaembed/images/2411/4452/original.jpg?w=800&h

    But then how was Maxwell 2.0 magically transformed? It was well known that it only supported feature level 11_3, and after the arrival of the Ti all Maxwell GPUs suddenly came to support DX12.1. That’s just Nvidia bullshit!!!

  14. Here you have the 980 Ti being crushed by a 290X. Think about how the Fury X will perform in new DX12 games. And before you say it’s drivers: Nvidia already launched a driver for Ashes of the Singularity.
    https://a.disquscdn.com/uploads/mediaembed/images/2411/4452/original.jpg?w=800&h

  15. Before you say “Nvidia fanboy”: I run AMD cards and CPUs, but I know from personal experience that AMD are always late to the game with drivers. Except maybe in this case. AMD are in front in terms of DX12 performance, but given the amount of work AMD have been putting into their next-gen hardware I’m not surprised. BTW, 2 – 5 FPS is not crushing the Ti given they are in the same power range. R9 290s draw a shite load of power and the heat is incredible; I stress test mine just to warm my room up in winter. Not saying they are crap cards, but writing off Nvidia over being late for something is just plain stupid.

  16. I really hope they don’t use GCN 1.0 again in next year’s lineup; it is getting very old and dated. On the other hand, seeing how GCN has closed the gap with Maxwell in DX12 games gives me hope that with the refinements they may at least have a chance against Nvidia.

  17. First off, the 980 Ti isn’t being crushed by a 290X. Second, here is an image showing how the Fury X in the same benchmark sits on par with the 980 Ti in DX12 in the Ashes game. So does that mean the 290X is EQUAL to the Fury X? See how that argument fell apart completely; it just shows the Fury X is no better.

    The only thing this proves is that AMD was ultra bad at DX11, and with DX12 they end up on PAR with Nvidia’s DX11 performance. It also shows that Nvidia was really good at per-game DX11 optimizations. But in DX12 they need to change their driver behaviour and step up.

    DX12 1080p and 4K results with the 980 Ti and Fury X in the Ashes game, then at the bottom a DX11 benchmark showing how the 980 Ti “CRUSHES” the Fury X in performance, to paraphrase you:
    http://www.extremetech.com/wp-content/uploads/2015/08/DX12-Batches-1080p.png
    http://www.extremetech.com/wp-content/uploads/2015/08/DX12-Batches-4K.png
    http://www.extremetech.com/wp-content/uploads/2015/08/DX11-Batches-4K.png

  18. The 290X is an old card; it’s far from the 980 Ti’s power range.
    This just embarrasses the 980 Ti, being beaten by a significantly older card.

  19. Considering the 290X is less than HALF the price of the 980Ti, yeah, it’s getting crushed.

  20. The comparison linked is against the Fury X.

  21. “finally they made cards which can compete head-to-head against Nvidia ones”

    Finally? The 7000 series crushed Nvidia on performance/price.

  23. GrimmReaper WithaSpoon

    Hope they do it right this time. If they don’t, they might go way too far into shit, without a hope of coming back.

  24. Power = TDP… not performance-wise… *facepalm* The 290X is fairly higher in that respect.

  25. OK, it’s hardly providing a beating. Yeah, they are neck and neck. How does DX11 performance compare? Notice Nvidia hardly get any performance improvement out of DX12; it’s probably because their drivers actually work. The Maxwell architecture was not about delivering massive performance gains but about improving performance per watt. I for one am glad buying an R9 290 wasn’t a waste of cash, but don’t count Nvidia out just yet. Wait for their actual DX12 cards, not just the ones that “support it”. Yeah, this is a win for AMD, but like most of their wins it may be short-lived.

  26. It’s a CPU-limited RTS game, not a showy GPU-limited game.
    If you are into that type of game, DX12 is a definite win for you.
    Maybe they will find graphical improvements they can add to games using this performance boost, but the eye candy of games will still be GPU-limited, and the 290X will still perform less than 2/3 as well as the 980 Ti in those games.

  27. This article is about arctic islands. Remember VLIW? How about Kepler? Just sayin.

  28. So you say AMD has bad drivers, then use bad drivers as a defence for Nvidia. Would you be willing to start proclaiming Nvidia has crap drivers even when they release new drivers specifically for some games? E.g. the Witcher 3 drivers didn’t help Kepler at release, and Ashes of the Singularity had a driver that apparently didn’t help either.

    So why is AMD perpetually a victim of bad drivers, but Nvidia is not, and when Nvidia has bad drivers that just means they are going to win later?

  29. We’ve been getting far more hype about Pascal, with even KitGuru apparently lovestruck: certain that Pascal will be awesome, while not so sure about what AMD will bring.

  30. Hope all lineups get new architectural changes. I’m keeping my eye on the flagship Greenland GPU; I really hope it performs better than a 295X2/290X Crossfire setup. Otherwise I see no point in paying extra to get similar or weaker performance than my 290Xs. Unless Crossfire support goes down the drain, which I doubt if DX12 and its multi-card feature takes off.

  31. Mmn, it’s very weird that pretty much all high-end cards (new and old) end up at the same level. It wasn’t CPU-bottlenecked, because they disabled some cores and got basically the same result. There’s some other bottleneck somewhere.

  32. Still CPU-bottlenecked, because if you disable cores and the performance stays the same, it means the workload is single-threaded and is not affected by disabling cores which weren’t being used anyway.

  33. So, DX12 is not multi-threaded after all? The lie continues! Guess they can try a lower clock.

  34. Just check the Task Manager CPU usage: if it’s distributed evenly across all threads, disabling cores won’t yield much lower results; if it’s heavily single-threaded, it will tax a single core at all times, same situation. Which means that the former wouldn’t rely on the CPU much, and the latter would be CPU-bound.
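
    A minimal way to check this outside Task Manager, assuming Python with the third-party psutil package installed (a rough heuristic only, not a proper profiler):

        # Print per-core CPU load to see whether a workload spreads across cores
        # or hammers a single one.
        import psutil

        per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # sample for 1 second
        for core, load in enumerate(per_core):
            print(f"core {core}: {load:.0f}%")

        if max(per_core) > 90 and sum(per_core) / len(per_core) < 50:
            print("Looks heavily single-threaded (one core pegged, the rest mostly idle).")
        else:
            print("Load is spread across cores (or the CPU is not the bottleneck).")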

  35. It’s still a very powerful architecture that ages better than Nvidia’s equivalent. Maxwell is a refinement of Kepler, which is a distilled Fermi; they are not radically different from one another, but they are still quite different compared to AMD’s GCN iterations.

  36. paste the link to the review ..

  37. I hope AMD changes everything next year with the Rx 400 series and the Zen processor.
    We need change; AMD needs change. Intel will just release Skylake 1.1 next year.
    And Cannonlake, which is just Skylake 2.0, has been delayed by about 1.5 years, from 1H2016 to 2H2017.

    So this is your chance, AMD. Either use 16nm FinFET+ or a 14nm HP process (do we have any?) and turn this game around.

  38. Samsung and GlobalFoundries are working on a 14nm HP process by adapting their current 14nm LP process. Don’t know what the situation is there, but either way we should see 14nm HP at some point.

  39. Is there any news about that?

  40. So the game is not the bottleneck here? Because I find it curious that the benches for DX12 give very similar results.

  41. Nah, other than a report or two about other companies working with GloFo (Samsung 14nm and IBM 7nm), nothing really concrete. AMD hasn’t been listed among TSMC’s customers using their next nodes with products taped out, so I guess the Samsung/GloFo thing is going well.

  42. AMD missed out on an opportunity with the 370 (aka the 265) by not incorporating FreeSync. Given the affordable nature of FreeSync monitors compared to G-Sync, that feature alone would have given them a stranglehold on the sub-$200 market. Oh well. I’m looking forward to how well they capitalize on HBM2 exclusivity (early access, anyhow).

  43. Hey, I remember Kepler – it got gimped by Nvidia to sell more Maxwell, right? Ah yes, Maxwell – Nvidia wised up and let DX12 do the gimping for it… smart!

  44. very striking points

  45. Sure.
    http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head

  46. I guess the Fury X is getting crushed by the 290X as well then. Since the 980Ti and Fury X sit virtually on par in same tests when head to head.

  47. And with Arctic Islands, a change in architecture similar to that from Kepler to Maxwell, what will probably happen? AMD doesn’t have all of the driver-support resources Nvidia does. And while the new features don’t work as well, my Keplers still give me a smooth 60 fps at 4K in The Witcher 3 – just barely. Back when I had dual 280Xs they were lucky to give me smooth anything at 1080p.
    And this game looks like they combined Battlezone 2 with a bullet-hell game, both of which can run simultaneously on a mid-range laptop, and somehow it needs 20x the graphical resources to run? Gloat over your king-of-the-pile-of-garbage victory all you want; you can have that one.

  48. So you compare a three-year-old GPU with this year’s one. How Nvidia-like.

  49. Well, by that logic I can draw it so that a Fury Nano wins by 1,000 fps over three-way SLI Titan Xs.

  50. He can’t; it’s in his head.

  51. Lahey's Empty Liquor Bottle

    There is no Samsung (or GF copy exact) 14nmFF high power process, regardless of what Ben Mitchell says. There is a *higher* power one, but it’s still considered low power, hence its designation 14nmFF LP+. Both Zen and Arctic Islands will use it. NVIDIA on the other hand will be using a high power process, TSMC’s 16nmFF+ (as opposed to the low power without + designation of the mobile chips they have finally just begun producing). There’s no indication that their 16nmFF+ is anywhere close. Aside from the HBM2 exclusivity window for AMD, this likely means NVIDIA will be much later to the party with consumer Pascal (maybe 6 months plus).

  52. Lahey's Empty Liquor Bottle

    They aren’t. There will be no HP version. They already have the LP+, which is going into low volume production in Q4. Volume, Q1 ’16.

  53. For me, the problem with nVidia cards is more their lack of value. AMD made Radeon cards since GCN 1.0 back in 2011 ready for DX12 with their asynchronous shaders. They were pushing the envelope and putting advanced technology in their products early, which means that a customer who plunked down their hard-earned money got a card that not only did the job in DX11, but now runs FASTER in DX12, years later. That’s what I call looking after your customers. nVidia customers, on the other hand, have been paying a premium and now they’re not going to get any real performance boost in DX12 on top of that.

    AMD customers seem to be getting a better value for their money, that’s all.

  54. Funny story – back when I was running 680s, a 280 or 280X was nothing in comparison. Now even the lowly 280 runs with 770s (the 680’s well-tuned lil brother). That, friend, is proof that AMD is doing something right with drivers. No argument from me on the game being an underwhelming hog…

  55. Your whole comment is : “lol”…

  56. The reason why Nvidia are better with drivers is because they consistently provide great performance in a wide range of games. Very rarely do we see Nvidia have to bring out new drivers to provide better performance in something. AMD, it seems, is always bringing out drivers that need to be redone. For example, I was one of the many hit by a bug when running two screens at slightly different resolutions (1920×1080 and 1920×1200): I was faced with a massive cursor on one screen and a regular-sized one on the other, and that’s if the cursor didn’t get corrupted altogether. Little things like this that constantly affect the end user’s experience are exactly what is driving customers away from AMD. We see AMD lose more and more market share with every generation. Even when AMD hardware is top-notch, like the GCN architectures, the thing that consistently lets them down is driver support. The way the cards are designed plays a big role as well: Nvidia cards seem better suited to sequential processing, whereas AMD cards are great at parallel processing but suffer when sequential processing is required. Kind of like their Bulldozer-series CPUs: great multi-threaded CPUs but terrible single-thread performance. We also see this with GPGPU processing; AMD’s chips are faster when it comes to coin mining and things like that. Anyhow, my rant is over.

  57. Yeah, definitely agree. It was part of the reason why I went with the R9 290: despite it being compared to something like a 960/970 in DX11, I knew that with Mantle around we would see some awesome gains with an API that supported the hardware correctly. I do love that AMD are always pushing the “norm” of the PC world, and they have many massive achievements. One of the biggest I can remember was being able to run 32-bit apps on 64-bit CPUs, which forced Intel to adopt AMD’s 64-bit standard early on; the jump to 64-bit would have been a very harsh transition otherwise. They are consistently doing this with APUs and now GPUs. I have no doubt that AMD helped majorly with the development of DX12, because Mantle being an open, platform-agnostic API (well, it was meant to be) meant that mainstream gaming could then be done on other operating systems. Despite the fact that I complain about AMD’s drivers, I still use their hardware because of the value it provides. But people shouldn’t bitch at Nvidia having a premium price, because they are very consistent with hardware and drivers and are still great performers.

  58. That’s why they are working with GlobalFoundries: this would basically be Samsung’s first HP process, and GlobalFoundries have a lot more experience dealing with that.

  59. OK people, what do you think about this great explanation of why AMD should do better than NVIDIA under DirectX 12 thanks to better support for asynchronous shaders? Check it out; this is not my argument, but it seems well argued.

    First, the source: http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/400#post_24321843

    Well I figured I’d create an account in order to explain away what you’re all seeing in the Ashes of the Singularity DX12 Benchmarks. I won’t divulge too much of my background information but suffice to say
    that I’m an old veteran who used to go by the handle ElMoIsEviL.

    First off nVidia is posting their true DirectX12 performance figures in these tests. Ashes of the Singularity is all about Parallelism and that’s an area, that although Maxwell 2 does better than previous nVIDIA architectures, it is still inferior in this department when compared to the likes of AMDs GCN 1.1/1.2 architectures. Here’s why…

    Maxwell’s Asynchronous Thread Warp can queue up 31 Compute tasks and 1 Graphics task. Now compare this with AMD GCN 1.1/1.2, which is composed of 8 Asynchronous Compute Engines, each able to queue 8 Compute tasks, for a total of 64, coupled with 1 Graphics task handled by the Graphics Command Processor. See below:

    http://cdn.overclock.net/4/48/900x900px-LL-489247b8_Async_Aces_575px.png

    Each ACE can also apply certain Post Processing Effects without incurring much of a performance penalty. This feature is heavily used for Lighting in Ashes of the Singularity. Think of all of the simultaneous light sources firing off as each unit in the game fires a shot or the various explosions which ensue as examples.

    http://cdn.overclock.net/8/89/900x900px-LL-89354727_asynchronous-performance-liquid-vr.jpeg

    This means that AMDs GCN 1.1/1.2 is best adapted at handling the increase in Draw Calls now being made by the Multi-Core CPU under Direct X 12.

    Therefore in game titles which rely heavily on parallelism, likely most DirectX 12 titles, AMD GCN 1.1/1.2 should do very well, provided they do not hit a Geometry or Rasterizer Operator bottleneck before nVIDIA hits their Draw Call/Parallelism bottleneck. The picture below highlights the Draw Call/Parallelism superiority of GCN 1.1/1.2 over Maxwell 2:

    http://cdn.overclock.net/7/7d/900x900px-LL-7d8a8295_drawcalls.jpeg

    A more efficient queueing of workloads, through better thread parallelism, also enables the R9 290X to come closer to its theoretical compute figures, which happen to be ever so shy of those of the GTX 980 Ti (5.8 TFLOPS vs 6.1 TFLOPS respectively), as seen below:

    http://cdn.overclock.net/9/92/900x900px-LL-92367ca0_Compute_01b.jpeg

    What you will notice is that Ashes of the Singularity is also quite hard on the Rasterizer Operators, highlighting a rather peculiar behaviour: an R9 290X, with its 64 ROPs, ends up performing nearly the same as a Fury X, also with 64 ROPs. A great way of picturing this in action is the graph below (courtesy of Beyond3D):

    http://cdn.overclock.net/b/bd/900x900px-LL-bd73e764_Compute_02b.jpeg

    As for the folks claiming a conspiracy theory: not in the least. The reason AMD’s DX11 performance is so poor under Ashes of the Singularity is that AMD literally did zero optimizations for that path. AMD is clearly looking at selling Asynchronous Shading as a feature to developers because their architecture is well suited to the task. It doesn’t hurt that it also costs less in terms of research and development of drivers. Asynchronous Shading allows GCN to hit near full efficiency without requiring any driver work whatsoever.

    nVIDIA, on the other hand, does much better at Serial scheduling of work loads (when you consider that anything prior to Maxwell 2 is limited to Serial Scheduling rather than Parallel Scheduling). DirectX 11 is
    suited for Serial Scheduling therefore naturally nVIDIA has an advantage under DirectX 11. In this graph, provided by Anandtech, you have the correct figures for nVIDIAs architectures (from Kepler to Maxwell 2)
    though the figures for GCN are incorrect (they did not multiply the number of Asynchronous Compute Engines by 8):

    http://www.overclock.net/content/type/61/id/2558710/width/350/height/700/flags/LL

    People wondering why Nvidia is doing a bit better in DX11 than DX12: that’s because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers’ talents as well as their own Maxwell architecture’s thread-parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we’re seeing in the Nvidia numbers is the Nvidia draw-call bottleneck showing up under DX12. Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the nVIDIA driver contains a compiler which re-compiles and replaces shaders that are not fine-tuned for their architecture on a per-game basis. NVidia’s driver is also multi-threaded, making use of idling CPU cores in order to recompile/replace shaders. The work nVIDIA does in software, under DX11, is the work AMD does in hardware, under DX12, with their Asynchronous Compute Engines.

    But what about poor AMD DX11 performance? Simple. AMD’s GCN 1.1/1.2 architecture is geared towards parallelism. It requires the CPU to feed the graphics card work. This creates a CPU bottleneck on AMD hardware under DX11 at low resolutions (say 1080p, and even 1600p for the Fury X), as DX11 is limited to 1-2 cores for the graphics pipeline (which also needs to take care of AI, physics etc). Replacing or re-compiling shaders is not a solution for GCN 1.1/1.2, because AMD’s Asynchronous Compute Engines are built to break down complex workloads into smaller, easier-to-handle workloads. The only way around this issue, if you want to maximize the use of all available compute resources under GCN 1.1/1.2, is to feed the GPU in parallel… in come Mantle, Vulkan and DirectX 12.

    People wondering why Fury-X did so poorly in 1080p under DirectX 11 titles? That’s your answer.

    A video which talks about Ashes of the Singularity in depth:
    https://www.youtube.com/watch?v=t9UACXikdR0

    PS. Don’t count on better Direct X 12 drivers from nVIDIA. DirectX 12 is closer to Metal and it’s all on the developer to make efficient use of both nVIDIA and AMDs architectures.
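
    (For what it’s worth, the queue counts and TFLOPS figures quoted above can be sanity-checked with some quick arithmetic. The shader counts and clock speeds below are approximate assumptions added for illustration, not figures taken from the quoted post.)

        # Queue depth: Maxwell 2 = 31 compute + 1 graphics; GCN 1.1/1.2 = 8 ACEs x 8 + 1 graphics
        maxwell_queues = 31 + 1
        gcn_queues = 8 * 8 + 1
        print(f"Maxwell 2: {maxwell_queues} queues, GCN 1.1/1.2: {gcn_queues} queues")

        # FP32 throughput = shaders x 2 FLOPs per clock (FMA) x clock speed
        def tflops(shaders, clock_ghz):
            return shaders * 2 * clock_ghz / 1000

        print(f"R9 290X:    ~{tflops(2816, 1.03):.1f} TFLOPS")  # 2,816 SPs at ~1.03 GHz (assumed)
        print(f"GTX 980 Ti: ~{tflops(2816, 1.08):.1f} TFLOPS")  # 2,816 CUDA cores at ~1.08 GHz boost (assumed)
        # -> roughly the 5.8 vs 6.1 TFLOPS quoted above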

  61. http://www.extremetech.com/gaming/212314-directx-12-arrives-at-last-with-ashes-of-the-singularity-amd-and-nvidia-go-head-to-head#comment-2210854362

  62. Kelly Todd Michaels

    How can you complain that Nvidia bribes people with free games, knowing good and damn well that AMD has done the same thing in previous years with their Silver and Gold bundles to get people to buy their GPUs?

  63. What?

  64. yeah, pls do so yourself

  65. You’re mixing up companies. Nvidia is the one constantly releasing drivers to prop up performance.

    I’ve never had any driver bugs. For every person crying about AMD drivers there’s one having issues with Nvidia drivers.

    AMD’s market share is low because of views like yours, not actual facts.

  66. Wrong. Nvidia are consistent with driver updates and they provide consistent performance. Very rarely do Nvidia perform badly in something; rather, they are consistent across many different games and other benchmarks. AMD’s performance fluctuates between different types of loads. As an AMD user I have seen this all too often.

  67. As an Nvidia user I have seen the opposite. Worst case, Nvidia drivers burn your GPU.

  68. “but writing off Nvidia over being late for something is just plain stupid”

    I take it you’re talking about Nvidia not yet having much of any asynchronous shader throughput. Can their next “driver fix” handle graphics and compute workloads concurrently, and what is their excuse for what looks like “we’re just getting around to it now”?

    There’s the problem as I see it: Nvidia has known they were missing asynchronous shader ability, even to the point of sacrificing it further to achieve efficiency gains for Maxwell – basically stripping out all the non-essential bits that, in their minds, don’t improve gaming TWIMTBP. Then, while they’ve said they’re in complete compliance with DX12, in reality they have only a rudimentary ability to dispatch graphics and compute workloads concurrently. All the while the “beat down” has been on AMD, even though they retained the ACE units central to the GCN architecture – perhaps part of the reason AMD’s power numbers aren’t as great as Maxwell’s.

    The real worry: how many Nvidia-sponsored DX12 titles over the coming year will have asynchronous workloads “fudged with” so those games still appear decent on Maxwell cards? Then, when the Pascal architecture shows up in gaming cards (Q4 2016 into 2017), Nvidia will just have a batch of “new” GameWreck titles ready in time to sell those “new” cards, prompting Maxwell owners to dump their old cards.

  69. Still appear decent on Maxwell hardware? The games will still run fine on Maxwell hardware regardless of asynchronous shaders; they just won’t see an improvement in performance, which means on the 980/Ti things will run just as fast in DX12 and then faster in DX11. This suits most people, as they like to play DX11 titles and there are still many in the works. You might get companies like Crytek and DICE adding DX12 support to their engines, if they haven’t already, which is fantastic for all AMD owners (myself included – I have the R9 290). I get that people are excited about the increase of performance in DX12; believe me, I am too. But what is ridiculous is that people are saying the Maxwell arch is shit, which it’s not. The R9 290s are about the same speed as the GTX 970s in most things, but the 970 draws very little power compared to the R9 290. Only when DX12 or Mantle comes into play do the R9 series catch back up to the 980s and Tis.
    http://wccftech.com/amd-r9-390x-nvidia-gtx-980ti-titanx-benchmarks/
    Benchies there ^
    The GCN cards are obviously decent enough to warrant a rebranding again. But I can’t help seeing the similarity between AMD’s cards and CPUs. By this I mean they work well provided things are optimised for their hardware, whereas Nvidia and Intel just work well. Before someone goes on an AMD-fanboyishly justified rant: I am grateful for AMD and their willingness to push the standards, and for their contributions to DX12, which was obviously derived in large part from Mantle (even if Mantle is still faster in some areas). I would imagine Microsoft saw their gaming platform threatened when Mantle was set to be for all platforms, given how much it outperformed DX11. Imagine small Mantle (or Vulkan now, I guess) powered Steam boxes: a $100 CPU and a $400 GPU, a $600-700 total system that is still great for games. It will be interesting to see what both companies do with DX12 now that it’s official and soon to be the most widely available DX platform out there.

  70. Kepler didn’t get gimped by NVIDIA to sell more Maxwell; you are yet another sour AMD fanboy with buyer’s remorse.

    Here’s the scientific PROOF that Kepler is just as good now as it was at launch. Also do try to keep in mind that NVIDIA needs to produce drivers for TWO different architectures that people expect to run games well, while AMD only has one, yet AMD’s drivers are still next to non-existent, as I can attest to when both my AMD-powered PCs go months without drivers while my NVIDIA ones get updated monthly.

    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/70125-gtx-780-ti-vs-r9-290x-rematch.html

    So show me what “gimping” of Kepler you are talking about, please.

    Honestly though, I would just give up. Buyer’s remorse can be horrible when you buy a GPU and see that AMD doesn’t spend any money on hiring software engineers for better driver support and enhanced extras like Gameworks, which I can understand may be frustrating. However, you should just accept it and stop trying to sh1t-stir conspiracy theories to make NVIDIA seem worse, when there is a reason why they have 81% of the discrete market share.

  71. Yes, it’s proof that AMD don’t have enough money to hire enough software engineers, so they are always left playing catch-up. Just look at any big game that launches with game-breaking issues: 9 times out of 10 it’s for AMD gamers while NVIDIA breezes on by, which has AMD zealots squealing about Gameworks and NVIDIA gimping. However, they never make as big a fuss about all the other games broken at launch with no connection to NVIDIA. If both NVIDIA-sponsored games and non-NVIDIA-sponsored games are broken on AMD at launch, with the only constant in all of this being AMD, how then is it NVIDIA’s fault?

    Also, have you noticed the number of games that don’t have any NVIDIA software in them, yet that doesn’t stop AMD’s 290 rubbing shoulders with NVIDIA’s 770 in the required settings, despite the fact that the 290 is vastly superior in hardware and has 3GB of higher-bandwidth GDDR5 to the 770’s 2GB? At first I thought it was a typo, then I thought it was some back-handed NVIDIA marketing, until I found out it’s because AMD has very little contact with developers when it comes to drivers, so developers raise the AMD requirements to cover their own backs.

    I have a GigaByte Brix with an AMD APU and a discrete 275X GPU, as well as an AMD PC with an A10-7850K APU, which aren’t my main gaming machines as I use NVIDIA in my main PC. Being able to try both brands, I can see how vastly superior NVIDIA are with drivers in terms of the number of supported games/features and how frequently we get them. AMD really is in trouble if it cannot get drivers sorted out, as that will always decimate their enthusiast market, who don’t like spending large amounts of cash only to be left biting their nails worrying about how well games with high production values will run on their expensive AMD hardware.

    This is something I take no enjoyment from and actually worry about, as a strong hardware market needs fierce competition. I actually hope DX12 helps AMD, but people believe DX12 will put an end to GPUs needing drivers written for them because the developers will do it all instead. No offence, but I don’t see developers optimising for lots of different architectures, so I believe we will always need AMD and NVIDIA looking at games and fixing things that they believe can be more efficient.

  72. No offence, but I have already seen this benchmark after NVIDIA’s driver and it tells a totally different story. In any case, I think most people will wait until they see which GPUs run big-budget games with high production values best under DX12, and not some unfinished RTS with mediocre visuals where the developer has simply used up as many draw calls as they possibly can to attract attention to a game that otherwise wouldn’t get the time of day in comment sections.

    Let’s wait and see how well games like Star Citizen run, or even Gears of War and Fable, seeing as they are both DX12 titles that use AMD’s GPUs in the XB1 to do asynchronous compute, which means AMD really should have the upper hand; but I am willing to bet NVIDIA cards run them just as well if not better.

  73. Well, the reason why people “cry” about AMD drivers is because, 9 times out of 10, if a big-budget game with high production values launches on PC with game-breaking issues or performance 40% lower than expected, it happens on an AMD card. AMD fanatics hit back with “Gameworks did it, NVIDIA’s fault”, yet they totally ignore all the non-NVIDIA-related games with issues. If games with NVIDIA Gameworks have issues on AMD cards, as well as games that have nothing to do with NVIDIA, the only constant in all of this is AMD.

    So games are arguably more prone to issues on AMD at launch, and AMD takes months to release drivers, which means new games go months without any sort of optimisation or dual-GPU support. That will totally destroy their enthusiast market, as no one wants to spend lots of money on a GPU and then bite their nails worrying about whether every upcoming release will work or be broken. Also, dual GPUs that don’t get dual-GPU drivers for months at a time?

    There is a reason why NVIDIA release drivers on a monthly basis or around every big game launch: it’s integral to gaming on PC.

  74. Give examples, please. Besides Crossfire/SLI, which have issues more often than not, which games with high production values have had issues on AMD?

    The reason Nvidia delivers drivers so often is because they need to more than AMD does. AMD used to release drivers that often when their architecture was less advanced. Nvidia has cut parts out of their GPUs and put the function in their drivers (this is probably why their cards don’t age well, e.g. the 960/280X/380 performing as well as a GTX 780).

    Give me some good examples, please.

  75. Well, I disagree with you, as asynchronous compute is only one small feature and can very much be done in software, BUT only IF you have good software engineers, which NVIDIA does have. Just look at how an i3 CPU bottlenecks a 280X, dropping the frame rate from 60fps down to 30fps in CPU-demanding scenes, while a 770 and i3 stick to their 60fps without the drops. This shows you what good software engineers can do, and it also shows how NVIDIA are able to keep their graphics pipeline sufficiently filled with tasks. Under DX12, all available CPU cores can send draw calls to the GPU, which means developers will be able to keep GPU shaders sufficiently filled with tasks like never before. So no, I don’t think asynchronous compute is going to be a magic bullet for AMD; also keep in mind AMD doesn’t have rasterizer ordered views or conservative rasterization.

    Also, despite all the AMD-fanatic conspiracy theories, NVIDIA never gimped Kepler to make Maxwell look better, so I doubt they will need to do that for Pascal.

    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/70125-gtx-780-ti-vs-r9-290x-rematch.html

    Why would you believe you couldn’t use the CPU to keep the GPU’s shaders sufficiently filled with tasks? We will see soon enough when DirectX 12 games with high-end production values launch on PC – games like Fable and Gears of War, as both use AMD hardware’s asynchronous compute on the XB1 – however, I bet NVIDIA will run them just as well if not better.

    Also, NVIDIA doesn’t impair AMD’s performance with Gameworks; it’s just that AMD cried about it to deflect from the fact they can’t offer AMD users similar features, as they don’t even have enough software engineers to keep game drivers churning out, never mind Gameworks.

    AMD loses because their drivers are abysmal, not because NVIDIA “gimps” them. I know this as I own an A10-7850K PC as well as a GigaByte Brix with an APU and a discrete 275X. AMD go months without delivering a driver, while NVIDIA deliver on a monthly basis and even go out of their way to put out a driver for a big game even if it’s between driver cycles.

  76. Well, I am going to have to look, as my memory is a little bad: I forget the game but remember that it happened. I will look, but more often than not, when we read about a game that has issues, those issues are either on AMD only or worse on AMD than on NVIDIA.

    One example would be the fact that Digital Foundry noticed that an i3 heavily bottlenecks a 280X, dropping the frames from 60fps down to 30fps; however, when replacing the 280X with a 770, the games didn’t drop to 30fps and stayed close to the required 60fps target. The reason why this isn’t better documented is that any time a website benchmarks GPUs it uses the fastest possible CPU to remove any CPU bottlenecks, so that the cards perform as well as possible; however, when buying a GPU on a budget you don’t pair it with an i7 CPU.

    Then you have GTA5 on AMD, which has really high frame variance, aka micro-stutter, even on their newest single-GPU flagship, the Fury X; lots of games on AMD have frame-variance issues like this that take longer than usual to be addressed.

    Then any time a Gameworks game has issues, AMD blames NVIDIA; however, when a Gameworks game runs substantially better on AMD than on NVIDIA even with Gameworks features enabled, no one mentions it. Look at Far Cry 4, which runs so much better on AMD with all the Gameworks features enabled.

    The reason why NVIDIA release more drivers is simply that they have more money to spend on software engineers, which ensures games launch with fewer issues than on AMD.

    Remember when AMD guys said NVIDIA gimped Kepler to make Maxwell seem better? Well, that’s not true at all, as I can provide a link.

    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/70125-gtx-780-ti-vs-r9-290x-rematch.html

    Also, NVIDIA hasn’t cut anything out of their GPUs and moved it over to software.

    Both NVIDIA and AMD needed to remove things from their high-end GPUs, as they were already as big as they could get on 28nm fabrication, so both dropped FP64 double-precision compute from the Fury X and Titan X so that they could use the space for features that drive games well.

  77. Of course it’s good again; they only gimped it to sell more Maxwells – get on the GeForce forums, you goon…

  78. You do not need an i7 to avoid bottlenecking those cards. All you need is a regular quad or six-core; an i3 is bottom of the barrel. You could even get an eight-core chip for under $150. I have no idea why they suggested that an i7 was needed. The problem is that DX11 weighs heavily on a single core, and when you only have two, like an i3, it’s a problem. It won’t matter much with DX12 and Vulkan. But OK, DX11 bottlenecks with an i3 on higher-end AMD GPUs.

    I have no issues with GTA 5 on my AMD card. I did find that people using Nvidia cards have the issues you mention, though. https://forums.geforce.com/default/topic/825965/if-you-have-quot-micro-stutters-quot-in-gta-v-or-elsewhere-/

    https://forums.geforce.com/default/topic/825650/gta-5-on-my-gtx-970-60-fps-but-micro-stutter-clipping/

    I don’t really care about Gameworks. It’s not something AMD should be measured by. The effect of Gameworks depends on which features it uses and how; some of them are trivial and will have minimal impact.

    “AMD guys” are not AMD; who cares what they say. BTW, those HWC benches are different from what most sites see. I checked, especially in games like Shadow of Mordor: https://www.reddit.com/r/AdvancedMicroDevices/comments/3if8ui/gtx_980_780_ti_and_r9_290x_rebenchmarked_xpost/cug9r2t

    Nvidia cut out their hardware scheduler when they moved to Kepler. This saved them some power, but it’s a step backwards; they also took out other things. AMD didn’t remove anything until Fiji, and all they did there was drop double precision to a level still ahead of Nvidia.

    Nvidia updates drivers because 1. some people think it’s a good thing to get minor driver updates frequently, even though they could break things, and 2. they almost have to for optimal performance in new games because their architecture is less advanced. I hope your upgrade cycle is short, because once those updates stop coming, the 980, for example, might be at 380 levels.

  79. They never suggested an i7 was needed. What I said was that AMD GPUs like the 280X get CPU-bottlenecked by an i3, dropping from 60fps down to 30fps in CPU-demanding scenes, while equivalent NVIDIA cards don’t have this issue and run at 60fps, as NVIDIA’s drivers make use of the extra cores to help keep the GPU filled with tasks while AMD’s don’t, so getting a six-core CPU from AMD wouldn’t help.

    They never recommended an i7. As I said, more websites would know about AMD’s GPUs having this CPU bottlenecking, but whenever a website reviews new GPUs it always pairs them with the fastest possible CPU to ensure there are no CPU bottlenecks. If you are buying a budget GPU, though, you are very unlikely to buy an i7 or even an i5, which means people buying budget AMD GPUs and pairing them with a budget CPU will see severe CPU bottlenecking, all because of AMD’s DirectX 11 drivers, while NVIDIA uses the other CPU cores/threads to try to ensure the GPU is filled with tasks. You know how under DX11 only one CPU core is able to send draw calls to the GPU? Well, NVIDIA’s drivers have done their best to get around that as much as possible, while AMD’s haven’t.

    So let’s hope that DirectX 12 makes things a little more even in that regard.

  80. Nvidia’s “Pascal” (which also looks very good on paper) “promises” to be very powerful!

    That’s what the GK106 and GK107 were said to bring. Let us not forget…

  81. The R7 370 is just “acceptable” for 1080p, so when you ante up $250+ for a FreeSync monitor (and at that, 1080p and 144Hz), why back it up with a $130 card with hardly adequate FPS?

    Though at least AMD was realistic in *not* taking some lowly card and professing Adaptive-Sync usefulness with it. How many purchase a 750 Ti only to find they need to ante up $350 for a 24″ 1080p 144Hz G-Sync monitor, and then find it feels like a slide show? Even a 960 won’t always provide the FPS to make a 144Hz refresh seem perfect.

    While Adaptive-Sync technology has its merits, I’m advising holding off until you see 27″ 1440p panels that offer it with little upcharge. When I can get that for $300-350, I’ll be an advocate for such an upgrade.

  82. The whole purpose of adaptive sync is to provide a smoother experience at fps/Hz below typical monitor refresh rates. Subpar GPUs such as the 370 are perfect candidates to combine with $150 FreeSync monitors. Personally, I just bought the wifeee a reference Sapphire 290 for $220 off Newegg, which accomplishes everything I need it to with reduced clock speeds/power/temperatures (and a whole lotta headroom).

  83. “$150 freesync monitors” – that’s what’s wrong with your premise!

    So where do you find a FreeSync monitor for just $150? By the time that happens you’ll be two years from now and the 370 will not be a factor. Sure, if there were $150 1080p 60Hz FreeSync monitors then I could agree… but there aren’t.

  84. Wrong – http://www.overclock3d.net/articles/gpu_displays/aoc_launches_99_pound_freesync_monitor/1

  85. Wow, in Europe – OK, you got me! A 21.5″ G2260VWQ6, yes, but who in their right mind would buy a 21″ in 2015 for $150, lol. It must be a European thing to pay to stay in 2005!

    The 24″ G2460VQ6 at £129 equals $194.30… if I see that in the States it’s still not going to move that many.

  86. How does any of this move us closer to abandoning rasterization in favour of full real-time 4K ray tracing? How far are we from Hollywood-level effects in games? Is that still 20 years away?

    Until the day you can’t tell the difference between a game scene and a film at 4K, all of this is very boring.

  87. Christer Nonne Nilsson

    Batman Arkham Knight was an nVidia-sponsored title, and did that go well for the PC market? nVidia claimed it ran well on PC, but it was withdrawn from Steam, and they spent over three months trying to bug-fix it before re-release, yet big problems remained. nVidia worked closely with the developers but they failed to solve the problem; this has to do with nVidia Gameworks. Check out the latest patch for Fallout 4, version 1.3b, where nVidia once again worked closely with the game developers but failed, because Bethesda didn’t want to turn off a lot of functions, so AMD cards run better.

    See for yourself in this video. Look at 11:17, where nVidia runs better than AMD before this patch, and at 11:47, where after this patch AMD crushes nVidia.

    https://www.youtube.com/watch?v=O7fA_JC_R5s