It has been a while since we’ve had any interesting developments regarding AMD’s upcoming Zen processors. However, this week someone managed to get a peek at a few Zen engineering samples, giving us a good idea of the specs AMD is currently working with for its new line of CPUs.
According to a leak that appeared on Guru3d, there are several Zen engineering samples floating around right now, a quad-core, an octa-core, a 24-core and a 32-core. The first two SKUs are going to run on the AM4 socket, while the last two are for server use, tying in to rumours we previously came across regarding AMD’s 32-core Zen plans.
The quad-core Zen CPU runs at a 65W TDP, while the eight-core model runs at a 95W TDP, so we can already start to see efficiency improvements over AMD’s last batch of desktop CPUs. The 24 core and 32 core server SKUs run at 150W and 180W respectively.
Quad-core Zen has eight threads, 2MB of L2 cache and 8MB of L3 cache, while the octa-core Zen has 16 threads and double the cache. Both of these CPUs currently run at 2.8GHz with a boost clock of 3.2GHz. However, while idle, both chips can clock down to just 550MHz and consume just 5 watts of power.
Apparently the source of this information has had a good track record with leaks in the past. However, we are not in touch with them and cannot independently verify any of this information, so as with all leaks, take it with a grain of salt.
KitGuru Says: If this leak is accurate, then it is clear that AMD has made some strides in terms of CPU power efficiency. We still don’t have an exact release date for Zen, but current rumours point towards a possible October release date, so hopefully we hear more in the coming months. Are any of you currently waiting to see what AMD brings to the table with Zen?
AMD will be ready by 2020 I think
Release them already, and hopefully they perform well compared to their last stuff. I want to do a massive upgrade to my main PC, and if the Zens pan out to be very good I will either get one of those or wait for Intel to lower their prices once the competition for performance begins again, like they did in the past when AMD CPUs were on par with or slightly better than Intel for performance. Yes, I know it has been a very long time since that happened, but here is hoping it happens again this time around.
Hurry up and release so all my amd friends who angrily debate me can shut up already and see it’s nothing special, just like buttdozer and exygrater.
If they can manage i5 2500K-level performance from their highest spec model, they may just be competitive again.
If this will compete with Intel price-wise with 8 cores, then I’ll surely move to that from my 4-core HT-enabled Intel. But let’s not hype anything for now; let’s see what these CPUs bring to market. What makes me happy is that AMD might sell a higher core count at the same price as Intel, forcing Intel to drop its prices.
That will be limited volume, full shelves in 2025.
That is about the worst case scenario, actually.
3.2GHz … So this is gonna compete with a Core i3 then :v
So you’re a moron ._.
Frequency doesn’t mean shit. You can overclock a Pentium 4 to over 6GHz, and it would still get pummeled by an Athlon 64 at like 2GHz. Same way the FX-9590 gets rekt by the i7 2600K.
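The "frequency doesn’t mean everything" point comes down to a simple model: single-thread throughput is roughly IPC × clock. Here is a minimal Python sketch with made-up illustrative IPC numbers (not measured figures for these chips):

```python
# Rough model: instructions retired per second ~ IPC * clock.
# The IPC values below are illustrative assumptions, not measurements.
def throughput(ipc, clock_ghz):
    """Approximate giga-instructions per second."""
    return ipc * clock_ghz

pentium4_like = throughput(ipc=0.8, clock_ghz=6.0)  # high clock, low IPC
athlon64_like = throughput(ipc=2.5, clock_ghz=2.0)  # low clock, high IPC

# Despite a 3x clock deficit, the high-IPC design comes out ahead.
print(pentium4_like, athlon64_like)
```

The actual IPC gap between those chips varies by workload, but the shape of the arithmetic is why raw GHz comparisons mislead.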
Hey, AMD isn’t nVidia. They won’t go *that* low.
Hurry up and take your butthurt somewhere else. Sentient beings are trying to talk here.
it all comes down to the ICP of a single core/thread… shit a Intel premium a i3 completely kill amd 8350 when it comes to thread performance…so more cores or more hurts means shit if the ICP is crap-tastic
Really, as in Zen is going to crush it and be easily competitive with i7s? Or worst case, as in they shouldn’t even try to compete with i5/i7 and resign themselves to the cheaper end of the market? Genuinely curious. I want to hope Zen will turn it around, but AMD have been a letdown for quite a while…
As in the i5 2500k is a low enough bar to beat that Excavator+40% cleanly skips right over it.
It has a base of 3.3Ghz and turbo of 3.7Ghz, which is right around the rumored clocks for the 95W 8-core Zen CPU.
It has only about 33% higher average IPC than Excavator, so Zen should beat it in single threaded workloads and multi-threaded workloads – whether they are integer or floating point heavy tasks.
It then just becomes a matter of overclocking, which will likely still favor the i5 2500K. Neither I, nor anyone else who has studied Zen in depth, believes that Zen will see 5GHz on anything less than LN2 sub-zero cooling setups.
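The arithmetic in the comment above can be written out explicitly. The IPC uplifts and the clock figure below are the rumoured numbers quoted in this thread, so treat the result as a sketch, not a prediction:

```python
# Back-of-envelope single-thread comparison at matched ~3.7GHz clocks.
# Both IPC uplifts (vs Excavator = 1.0) are rumoured figures, not measured.
EXCAVATOR = 1.00
ZEN_IPC   = EXCAVATOR * 1.40   # rumoured +40% over Excavator
I5_IPC    = EXCAVATOR * 1.33   # ~33% higher average IPC than Excavator

clock_ghz = 3.7                # i5 2500K turbo / rumoured 8-core Zen clock

zen_st = ZEN_IPC * clock_ghz
i5_st  = I5_IPC * clock_ghz
print(f"Zen vs i5 2500K single-thread: {zen_st / i5_st:.2f}x")  # ~1.05x
```

At matched clocks the rumoured +40% uplift clears the 2500K by a few percent, which is exactly the "low bar" argument being made.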
It’s an engineering sample, you’re not looking at the flagship’s final frequencies.
FYI, when Bulldozer was in ES five years ago, it had the same frequency too:
http://wccftech.com/amd-bulldozer-essample-leaked-benchmarked-tested-asus-sabertooth-990fx-am3/
@anticeon:disqus
https://en.wikipedia.org/wiki/Instructions_per_cycle
https://en.wikipedia.org/wiki/Megahertz_myth
People like you make me scratch my head. You use terms and speak with assertion in a way as to suggest you know what you are talking about. But then you ask a question that has been answered for years now, suggesting you are so disconnected that your input isn’t helpful at all.
Yes, Zen is not sharing FPU. This has been known, not just leaked, since work started on Zen years ago. Whether the chip is going to be as good as the hype is up for debate, obviously, but from what we know with certainty this is going to be a very good chip that is miles better than the last generation.
I liked this post
Until you used the word rekd
I lost all respect then
5960X 3 GHZ
http://www.newegg.com/Product/Product.aspx?Item=N82E16819117404
Guess some people paid 1K for an I3…
Your friends must be getting sex, I can tell because they are probably married with kids and thus recognize the value of saving money.
I don’t think it all comes down to ICP, Look at the AMD 480 graphics card this round, it’s 7% less powerful than nvidia’s offering but cost 100% less. and finaly people are buying amd because of that. I would say you need to have “a reasonable ipc” and it mainly comes down to performance/price. which unless intel cuts the $1300 price of there upcomming 8 core, the amd will compete with, well heck, outright trump it.
it was used sarcastically tho
Running programs across multiple cores just doesn’t always work. I actually hope AMD makes a 5GHz 6-core proc instead of 8-16 threads running at 3.2GHz.
Yeah but can I play The Witcher 3 on Ultra settings with an i3? Uhh no. But I can with my FX8350.
LOL… Use terms and speak with assertion… he used ICP instead of IPC (Instructions Per Cycle)… twice, so it can’t just be butterfingering.
He has no idea what he is talking about. (might not even have an idea what FPU stands for)
It will compete just fine. Did the i7 6950X/5960X compete with an i3 too, just because they run at 3GHz?
The i5 2500K is already dated now, at FX 6350/FX 8300 level at best, or worse.
Actually, whether IPC or core count matters more depends on the type of workload your computer is asked to accomplish. For example, core count matters more for encoding videos, but IPC matters more for DX11 games (though core count will be more important in DX12 and Vulkan games than in DX11 games).
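The workload-dependence point is essentially Amdahl’s law: the serial fraction of a task caps what extra cores can do. A quick sketch with assumed parallel fractions (the 0.95 and 0.30 figures are illustrative, not measurements of any real encoder or game):

```python
# Amdahl's law: speedup on n cores = 1 / ((1 - p) + p / n),
# where p is the parallelisable fraction of the workload (assumed values).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Video encoding is almost fully parallel: core count pays off.
encode = amdahl_speedup(p=0.95, n=8)   # ~5.9x on 8 cores

# A DX11 game dominated by one render thread: IPC/clock matter more.
game = amdahl_speedup(p=0.30, n=8)     # ~1.4x on 8 cores

print(round(encode, 1), round(game, 1))
```

This is why an 8-core chip can dominate in encoding while an i3 with higher per-core performance still wins in poorly-threaded games.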
Yeah, offering a card that is capable of 1080p at over 60 FPS Ultra in even demanding games (and even 1440p at over 60 FPS using Vulkan in Doom) at only US$199 has, unsurprisingly, won over a lot of mind share for AMD – something they have been struggling to gain these last few years. If they do a similar thing with Zen they will likely get the same result.
Actually, it depends on the program. I have a friend who is a graphic artist and takes a lot of professional-level photos. He is about to get a dual-socket Xeon motherboard. For now he will only use a 6-core Xeon, but later on will add a second. Those 12 cores with 24 threads matter more for Photoshop than clock speed. IPC also matters more than clock speed. That is why Xeons sell so well with people like him, rather than cheaper but higher-clocked Intel CPUs. It would not take much effort to beat an i3 IPC-wise. It is coming close to i7s and Xeons IPC-wise that is hard.
You are ignoring the fact that Global Foundries is shipping good volume with 14 nm unlike 28 nm. The difference is they paid for use of Samsung’s 14 nm rather than make their own.
You are the dumb one. Even video editing uses a single core in Windows, except for the rendering process.
Silicon is nearing its limit in terms of clock speeds. Doubtful we will ever see stock 5-6GHz until they shift to a new material.
I sure would love a stock 8 core 5ghz cpu from either Intel or AMD though. 😉
Comments like yours are why the world is full of troubles!
Cut the personal insults, You can ridicule a piece of tech or company all you like or someone’s choice in buying something but don’t insult anyone.
I hope the final boost clocks for octa-core Zen are at least 3.8GHz. IPC seems similar to Broadwell-e so if they can get a faster boost clock than 6900K that makes it better value.
It definitely has to hit a clock of at least 3.7GHz so I feel confident it won’t bottleneck any games in the near future, so it can be both my video encoding and gaming machine.
Sharing? The problem with Bulldozer was its cache, no SMT and lower IPC. The 1 FPU / 2 integer cores per module setup worked fine for multitasking. No, it’s not as good as 8 FPUs, but an i3 doesn’t match 4 FPUs, either. An AMD 8350 gives some i7s a run for their money in multi-core performance, so an i5 or i3 isn’t going to “kill” it in thread performance. Against a current-generation i5 or i7 it can’t do much, of course, because they have SMT and double the thread count: 4 FPU/8 int/8 threads vs 8 FPU/16 threads, higher IPC and SMT. Zen will have 8 cores/16 threads, an SMT-type solution, improved cache, and a DDR4 controller this time. Those things alone change the way performance will go for Zen, and that’s before +40% IPC and lower TDP. You can’t just say that any i5 will beat an 8350. Nobody worth a grain of salt as a PC builder or tech would tell you that. You’d need the best of the last two years in i5.
you have no fucking clue what you’re talking about…
FX was stuck on 32nm, 14nm node requires less power.
The Xeon 1231v3 is my benchmark for single-threaded gaming performance, with a 3.8GHz boost clock exactly enough to avoid a CPU bottleneck for the 22-car hosting I want to be able to do in my game in question. That server load is single-threaded, so any 6-core+ system I buy needs to meet that single-threaded performance.
Zen IPC is ahead of Haswell, so a lower clock of 3.7GHz is also acceptable.
Zen seemingly has less level 3 cache, so that possibly counts against it.
Overclocking performance will be a big one, as if you aim for a performance target over 4.1GHz the 5960X usually outperforms the 6900K.
That’s the summary of the research I have put in.
Although in honesty I will most likely buy a Skylake quad core Xeon as it meets my requirements. Just that I believe Zen 8-core is the best of AMD’s CPU offerings and is worth considering if it offers decent budget value. Even though Skylake-E will undoubtedly outperform it when it launches in mid 2017.
No need to be salty.
Btw I normally like your comments on GPUs. You seem to have good knowledge. Don’t assume everyone here is an idiot haha.
https://youtu.be/nLkaNWo0EV0?t=2m4s
As seen here, the single-core performance of an i3 kills FX 8300 chips, and it all comes down to the IPC.
Sure, multi-core the FX kills an i3, but most games etc. are not coded to take advantage of this.
Used the FX 8320 for a year; great OC, but performance isn’t up to the mark. I am using an i5 6500 now and it gives me anywhere from a 15-25 fps gain in all games.
Games like Crysis 3 and The Witcher 3 are optimized for FX and do fine on it. But Fallout 4 and GTA V have very sluggish performance on FX. I am getting a constant 60 fps in them after switching to the i5 with a GTX 970; I was getting 40-50 fps earlier with the same GPU.
My thoughts exactly. High-performance 8C/16T Intel parts along with their motherboard platforms are just crazy expensive, be it Core i7-6900K, Xeon E5-1660v4 or Xeon E5-2667v4. If an AMD 8C/16T Zen + AM4 motherboard cost noticeably less, I wouldn’t care if IPC was even down somewhere between Ivy Bridge-E/EP and Haswell-E/EP, I’d get it in a heartbeat.
also he uses “hurts” instead of “Hertz” when referring to clock speed.
100% less cost would be free, and depending on the Game or Benchmark used it’s up to 25% slower than a 1060.
$199 is the price of the 4GB Models though, and some AAA Games are even running out of VRAM with 4GB at 1080p maxed out, let alone 1440p.
If you want to make a fair comparison, it’s $239 for 8GB 480s vs. $249 for 6GB GTX 1060s (Newegg lists 5 Cards at EXACTLY that price, and a 6th after mail-in rebate).
Only time will tell if 2GB less VRAM and the absence of Asynchronous Compute will hurt the GTX 1060 down the road, but RIGHT NOW it delivers up to 25% more performance in DX11 with less than 5% of increased cost.
Can you not understand that AMD already did 8 cores at 5GHz, and it did nothing? Frequency does ABSOLUTELY NOTHING. You poor, dumb wanker.
Dude, have you forgotten FX 9590? And how shit it was?
of course old FX proc is piece of crap. im talking about Zen and the future.
Yeah, and FREQUENCY DOESN’T MEAN SHIT. For God’s sake, look up the megahertz myth.
Sorry, I meant 50% less. The aggregated scores show about 7%, which is virtually unnoticeable at the 5 TFLOPS level, and keep in mind AMD’s card actually has about 15% more raw power (TFLOPS). The vast majority of games are tailored for Nvidia cards, but if you use a non-partisan bench the cards are equal. But the AMD will CrossFire and costs $100 less. Plus the ONLY reason Nvidia put out such a powerful card is BECAUSE of AMD.
NO game runs out at 4GB unless you’re doing utterly idiotic things like putting on antialiasing at 1080. And the Nvidia card costs 50% more, not 5%; you’re literally off by a factor of 10. And saying “up to 25%” is idiotic; the AMD card is faster than the 1060 by as much as 15% in some games. The way you compare is by aggregated scores; otherwise you sound like a fanboy.
Just engineering sample
50% less is ONLY if you compare the 4GB RX480 @ $199 with the Founders Edition GTX 1060 @ $299.
The much fairer comparison though would be the 8GB RX480 @ $239 vs. Board Partner Versions of the GTX 1060 @ $249 (check Newegg.com, they have 5-6 models at exactly that price).
And I said UP TO 25%, because in some tests it’s UP TO 25% more performance on the 1060. I am well aware that there are some games with a significantly lower margin, and that there are games that perform better on the RX 480, like Hitman, Quantum Break or Ashes of the Singularity. But aggregate performance differences don’t tell the whole story.
The way you misrepresent the difference between $239 and $249 as being 50% (it is 4.184100418410042% by the way), the fact you disregard tests that show massive advantages for the nVidia card, DEMAND we acknowledge tests that show massive advantages for the AMD card, and then simply declare only aggregate scores are viable, WITHOUT THE SHADOW OF A DOUBT PROVES you in fact ARE a fanboy.
I own workstations with both nVidia Quadro and AMD FirePro cards; each has its strengths and weaknesses, but people who feel the need to “destroy” the other camp to feel better about their own purchase decisions are just pathetic.
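The price-gap arithmetic from the comment above is easy to check; the prices are the US figures quoted in this thread:

```python
# Relative price gap between the cards at the prices quoted above.
rx480_8gb_usd = 239   # 8GB RX 480
gtx1060_usd   = 249   # board-partner GTX 1060

gap_pct = (gtx1060_usd - rx480_8gb_usd) / rx480_8gb_usd * 100
print(f"GTX 1060 costs {gap_pct:.2f}% more")  # ~4.18%, nowhere near 50%
```

The 50% figure only appears if you compare the $199 4GB RX 480 against the $299 Founders Edition, which is the mismatch being called out.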
You would cry if you saw how bottlenecked my GTX 1070 MSI Armor OC is on my FX 8350. I get nowhere near the required 120-144 fps on the maxed-out settings my monitor needs.
For kicks and giggles I put the card in my friend’s i3 6300, and my FX was completely blown away.
It’s sad to watch higher and smoother frames on an i3 compared to an FX.
Rainbow Six Siege with Ultra HD textures and everything on cranks out to about 4.8GB of VRAM; it’s the most I’ve seen my GTX 1070 pull from a single game so far.
On the plus side those Skylake i3s are great value for budget gamers.
A 2.8GHz base/3.2GHz boost clock is a lot lower than I expected for a node shrink and no internal graphics on the chip. If this chip overclocks to 4GHz even on water, I would be very surprised.
Smaller nodes actually seem more challenging to tune to higher clocks. Look at how many years Intel has struggled to get them to where they are now. Compare almost 5 year old 32nm FX chips that clock from 4-5GHz, and almost 10 year old Pentium 4 (65nm+??) that clocked at 4GHz with early release 14nm Intel CPU clocks.
So yes, clock speeds will probably be the weak point of this year’s batches. But it will be made up for by high cores/threads which appeals to many but not all people. After they have more time to tune the new process they’ll do a refresh with nicer clockspeeds, just like from Kaveri to Godavari.
Also engineering samples are typically much lower clocked, so it’s possible they could surprise beyond expectations.
As Juaranga pointed out, they chose a process suited broadly for GPU, mobile, server, and not especially for very high powered desktops. But it doesn’t mean they can’t tweak it. For example, Athlon 845’s are all mobile Carrizo misprints that typically operate at 10-15W (tops 35W) in the mobile world; for the desktop it operates around 50W (tops 65W TDP), with boost at 3.8GHz. Those are pretty much the boost frequencies I’m hoping for with this year’s Zen release. Next year I think they will get them to boost well over 4GHz.
Early engineering sample. Think of this as the lowest binned version we will see in the wild, like an i5 6400. Final clocks for high binned version will be much higher.
Yeah look at the FinFet designs for Haswell 22nm vs Broadwell or Skylake 14nm. Just look at those fins and you’ll see why the older process was better at high heat, high clock designs with heat dispersion.
http://hothardware.com/articleimages/Item2219/small_Intel-14nm-Fin-with-Gate.jpg
A good example is Haswell-E vs Broadwell-E, where the older chip does better if you start overclocking past 4.1GHz.
Whereas the 14nm is clearly better and more power efficient at lower clocks like 4GHz. But for overclockers the 5820K and 5960X outperform the 6800K and 6900K.
You must’ve setup your PC very poorly. My CPU has never bottlenecked in DX 11 or 12. So I have no idea what you experienced. How about you mention an actual game that I could test it out on and see what you do, or not?
Believe me, my PC is not at all set up “poorly”.
and for some of the games I play.. its only a small list can be found here… (check game section)
http://steamcommunity.com/id/maddoggyca/
Things like Dying Light, Project CARS, Rainbow Six Siege, Doom 2016, X-Plane, DoveTail Flight Simulator, Train Simulator, StarCraft 2, Far Cry Primal, Homefront: The Revolution, and so on all run worse on my AMD FX vs my friend’s i3 6300 with the same GTX 1070… I know for a fact my AMD FX is the bottleneck here, as seen here https://www.youtube.com/watch?v=urIhhd-kQXY
Where are you getting an up to 25% performance increase? I have yet to see this. Please post evidence when you claim something; it’s up to the one making a claim to prove it. Basic knowledge. On their website, NVIDIA said at tops it would only beat it in DX11 by 15%, so where does 25% come from? Once the DX12 and Vulkan APIs come into place (mind you, Nvidia-optimized games will always win over AMD, and likewise), the RX 480 is set to pulverize the 1060, as it was coming close to the 1070 in the Doom Vulkan test. The RX 480 is far more future-proof, as AMD doesn’t leave old cards in the dust and will better support DX12/Vulkan/async.
Heard of DX12? I hear it’s great for that.
Yeah, DX12 and Vulkan will both allow a game to efficiently use 8+ cores. You can see this in most new DX12 titles, where an FX 8350 comes pretty close to a 6600K in being able to power a 1070 without any bottlenecking.
Go look up some videos RotTR DX12, Doom Vulkan and Ashes of the Singularity DX12.
If you’re looking for a processor to last you until 2019-2020, the new Zen lineup will be it. And it’s not exactly going to be bad in DX11 titles with current GPUs; you might lose a few frames, 5-10%. But given that many new titles are DX12 titles, and that trend is only going to grow moving into 2017, the Zen processors are going to kick the 6600K/6700K’s asses moving forward.
I very much doubt they can push it much beyond what they are now, due to the stated 95W TDP. Even if they manage the same clock rates as a 5960X, that would be phenomenal, as the 5960X is a 140W TDP part.
It’s an ES (engineering sample); take the specs with a grain of salt, as they are always much lower than the final product. Engineering samples of the FX-8350 went out well before its release with anywhere from 1.8GHz to 3.3GHz, and the final release saw 4.0GHz with a 4.2GHz turbo. A 2.8GHz ES with a 3.2GHz turbo leads me to believe we shouldn’t have much problem seeing 4GHz with the final product.
It’s unfortunate that people like to try and speak with education in their tone when none at all exists. I’m not going to provide you with all of the details of Zen, but I will say that it is going to be a great chip. You should read up about it so that you actually know what you’re talking about and don’t look like someone lacking information.
For the majority of games today as well as any DX12 game now and into the future, an Intel i3 won’t even be able to touch a Zen FX.
The card simply cannot fill 8GB; it is not powerful enough for that. Benchmarks have shown just a few frames per second difference between the 4GB and 8GB cards. And Vulkan and DX12 will result in less need for large memory amounts, due to better in-game memory management long term than OGL and DX11, since developers know what is best for their games better than the generic memory management of drivers. So, in short, in the case of the RX 480 it is simply not worth spending the extra $30-40 for the extra 4GB. You will never notice a few extra frames unless you are running something more demanding than the card was intended for. You would be better off getting a card with a more powerful GPU, or turning down the settings in the game.
As for RX-480 vs GTX 1060, the 1060 is winning in DX11 and OGL games, but being stomped by the RX-480 in DX12 and Vulkan games, so which card is the best to get depends a lot on how many new games one intends to get in the next few years.
But the point was not about a comparison of the two cards. It was simply that when a card is so powerful for the price point it is no surprise that it appeals to many people.
However, benchmarks are showing the 1060 already exceeding 60 FPS in most modern games at 1080p, and over 100 in Doom with Vulkan at 1080p. It even exceeds 60 at 1440p in Doom with Vulkan, using Doom’s Ultra setting. That is plenty good enough for many people. In fact, the 1060 is not enough of an improvement in most benchmarks to push a better resolution than the 480, nor is it good enough to justify matching with a high-refresh-rate monitor. A 1070 is better for both cases, so if I was going to do either I would skip the 1060 for a 1070.
You don’t go by aggregate scores, you go by benchmarks of games that you are interested in.
Exceeding 4GB by such a small margin is not an issue, as most gamers have enough system RAM for the video card to treat spillover system RAM like video RAM. In fact, tests have shown that even in the most memory-intensive games (such as GTA V) it makes little difference to performance to use 4GB vs 6/8GB, as the small amount of slower system memory being used is not enough to make a significant difference. It is just a few frames, which is not enough to be noticed unless you are overtaxing a card (e.g. running 4K on a 480 or 1060).
I’m pretty sure it was a bad setup somewhere. The 8120 is not the 8350. Two different CPUs. The IPC in the 8350 is 15-20% better across the board. Having 2 memory channels also helps and better instruction sets. In my crossfire setup, and before Dying Light ran perfectly smooth at 70-80FPS. With CF it’s 144-150 quite easily and shows 100% GPU usage, not even close to 50% CPU usage. Also, stop derailing the discussion. When proving your point, you need relevant benchmarks. Meaning, benchmarks and proof. 8120 != 8350 and your original statement was “Intel premium a i3 completely kill amd 8350”, which I’ve just stated as untrue, only for you to throw bogus information. We weren’t talking about the 8120, or the 8320 for that matter. All benchmarks show that the 8350 is at least 10% better stock. Back to the point, a poor build. Buying anything in the 8000 series was a flop until the revisions 8*50s and 8*70s came out. Slightly better TDP, improved IPC, more memory controllers, and so on. So, I still haven’t gotten a legitimate answer from you.
I really want to know how many PCIe lanes we’ll see with Summit Ridge on AM4. I’m hoping they’ll have at least 40, like Intel’s -E chips. The lower number of PCIe lanes is the only reason I can’t buy the standard non-E parts. I don’t need massive numbers of cores, just a few fast ones, and lots of PCIe lanes and RAM.
God, I hope Zen is badass fast. I also hope Vega threatens Nvidia at the highest of ends; these prices are crappy as hell. AMD needs to do something other than budget-friendly stuff. I am building a 4K monster next year, but I didn’t want to have to spend 5K on it.
Actually, there are quite a few games that breach the 4GB mark if you have it to spare, but why would you say that putting on anti-aliasing at 1080p is idiotic? Trying to run multiple forms of AA at 4K is a bit much at this point in time, but running multiple forms of AA at 1080p while breaking 60 fps is something people have been doing regularly for over 6 years now (at least 5 generations of GPUs, and probably since 1080p became the standard). You probably meant 4K.
As for Optimusidiot over there, yeah, he has his info backassward: 25% greater cost for a 5-7% average performance increase is a much worse price/performance ratio (value).
Admittedly, frequency by itself may not do much for performance when the overall architecture is held back by its requirement to share resources between the half-cores in a module, as in clustered multithreading CPU designs, e.g. Bulldozer. You reach a certain point where the processor becomes much too inefficient to push beyond in terms of frequency.
However, if you have an architecture that has a performance window that scales well the higher the frequency you go, then you will see good scaling of performance the higher you go. Unfortunately, the Bulldozer Architecture seems to fizzle out around 4GHz, which is why the 9590 at 5GHz requires almost double the power and doesn’t show even close to 25% increase in performance. If I forced myself to get a Piledriver FX, I’d most likely opt for the FX8320E and overclock it to 4GHz (better binned chip) or the 8370 (also better binned, but mainly because it comes with a Wraith cooler now).
Zen’s sweet-spot frequency window seems to cover 2.8 to 3.2GHz at the very least, as engineering sample specs are being leaked here and there. Engineering samples of the 8350 came out anywhere from 1.8 to 3.3GHz, and the final product ended up at 4.0GHz with a 4.2GHz turbo. So… I’m thinking Zen scales well up to 4GHz, but I’m not sure how much after that. Time will tell, but the fact that it has 40%+ IPC over Excavator means it’s going to be one hell of a chip if priced properly.
The issue isn’t the frequency itself; it’s the diminishing returns you get by overclocking past a point, which is roughly 4.4GHz. What I mean is, OCing a CPU from, say, 3GHz to 4GHz will yield more performance than an OC from 4GHz to 5GHz. So really, past 3.5GHz it’s pointless. You do get extra performance, sure, but it’s so marginal that increasing voltages, and therefore cutting the lifespan of the product, isn’t worth it.
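Part of the diminishing-returns effect holds even in the best case where performance scales linearly with clock: equal-size clock bumps shrink as a fraction of the growing baseline. A quick sketch:

```python
# Even if performance scaled perfectly with clock, each equal-size
# 1GHz bump is a smaller relative gain than the one before it.
def relative_gain_pct(old_ghz, new_ghz):
    return (new_ghz - old_ghz) / old_ghz * 100

print(f"3.0 -> 4.0 GHz: +{relative_gain_pct(3.0, 4.0):.0f}%")  # +33%
print(f"4.0 -> 5.0 GHz: +{relative_gain_pct(4.0, 5.0):.0f}%")  # +25%
```

On top of this, real chips stop scaling linearly at the top of their frequency window, so the actual gain from 4GHz to 5GHz is usually well under that 25% ceiling.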
What you’re doing there is using the figures from an architecture that relies on high clock speeds for decent performance to compare against an architecture that relies more on instruction throughput. I wouldn’t expect Zen’s improvement to be as drastic as that. Certainly 400-600 MHz seems likely, but anything above that is probably out of range.
I generally agree, but it actually depends on the architecture. That’s why I stated in my comment “frequency window” or “performance window”. There is a particular range in which frequency scales the best, then has diminishing returns thereafter. Each architecture is different.
I’m using historical data to hypothesize, is what I’m doing. Also, the Bulldozer architecture didn’t rely on high clock speeds for decent performance; it was clocked at 4.0GHz. If you look at just about every mainstream desktop i7 over the past 6 generations, they’re all just about at 4GHz as well. Intel has had much better single-core performance across those generations, but it’s not like AMD could ever make up for that by increasing the frequency. We saw that with the 9590: to increase frequency by 25%, they needed double the power and much greater cooling, and it only provided 10% more performance. A very inefficient design at those higher clock speeds.
While it’s true that AMD chips have broken the frequency record with extreme cooling solutions, performance doesn’t scale. Bragging rights might, but only among extreme overclockers.
In any case, based on data from ESes in the past as well as architecture in general, I would indeed still expect Zen’s improvement to be that drastic. The sample that was leaked was 2.8GHz base with a 3.2GHz boost. 3.2 is low, even for CPUs 6 generations old; even the older Stars cores were coming in at up to 3.7GHz in some configurations. With the amount of improvements made since then, as well as their experience with higher-frequency attempts in the Bulldozer lineage, I honestly feel that 4.0GHz is something to expect out of a stock part, even if it is the boost frequency, and especially considering that it is a 95W part.
“Also, the Bulldozer architecture didn’t rely on high clock speeds for decent performance. It was clocked at 4.0GHz. If you look at just about every mainstream desktop i7 over the past 6 generations, they’re all just about at 4GHz as well.”
4.00 GHz for Bulldozer-derived chips is pretty much the norm, whereas 4.00 GHz on Intel’s side is reserved for high-end Core i7s, or otherwise single-thread turbo frequencies. Completely different implementations.
“Intel has had much better single core performance across those generations, but it’s not like AMD could ever make up for that by increasing the frequency. We saw that with the 9590.”
Precisely. Piledriver at 4.00 GHz is no match for Haswell at 4.00 GHz, and the Bulldozer architecture and its siblings were designed to be a speed demon approach like NetBurst was.
Piledriver at 5.00 GHz for single-thread performance is no match for Haswell at 3.90 GHz. Considering thermal constraints that apply when clock speeds are increased (regardless of transistor size, although smaller transistors will focus more heat in less area), 5.00 GHz is not a normal clock speed for IPC-focused architectures like Skylake (or Zen), and therefore that is why Bulldozer is said to rely on high clock speeds. You have just confirmed what I said.
Samsung’s 14LPP node also limits Zen’s clock speeds. 14HP would have been a better option.
Most likely fake info, because this came from a newly registered poster on the Anandtech forums called ‘AMD Polaris’. Many of the most reliable leaks actually come from the Far East, not from places like the Anandtech forums.
Perhaps I should take your position so that I can be pleasantly surprised when Zen comes out instead of simply getting what I expect. =)
But how is pushing twice the power, by increasing clock speeds, for only a 10% performance boost, “relying” on frequency? Frequency gains them almost nothing. I would say that neither of them relies on frequency. AMD was relying on hopes that its implementation of CMT would be successful, which it obviously wasn’t.
Absolutely. I would love nothing more than for Zen to perform where original predictions put it (between Broadwell and Skylake), but after AMD has changed its stance on Zen performance several times, I’m merely being conservative. Anything on top would only be good.
The double power draw is the result of the high clock frequencies, since power draw doesn’t scale linearly with frequency. The performance increase going from 4.00 to 5.00 GHz with Piledriver is very minimal, and the cache infrastructure is also very poor. Combined, the difference is minuscule compared to Haswell or Skylake. Even Sandy Bridge is ahead.
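To see why power climbs so much faster than clock speed, here is a rough back-of-the-envelope sketch: dynamic power scales roughly as P ∝ C·V²·f, and voltage typically has to rise along with frequency. The voltage figures below are illustrative placeholders, not real Piledriver numbers.

```python
# Rough sketch of superlinear power scaling with clock speed.
# Dynamic power scales roughly as P ~ C * V^2 * f, and hitting a
# higher frequency usually requires a higher voltage as well.
# The voltages here are made-up illustrative values.

def dynamic_power(freq_ghz, volts, cap=1.0):
    """Relative dynamic power: P ~ C * V^2 * f (arbitrary units)."""
    return cap * volts ** 2 * freq_ghz

base = dynamic_power(4.0, 1.30)  # 4.0 GHz at a nominal voltage
oc = dynamic_power(5.0, 1.55)    # 5.0 GHz needs extra voltage

print(f"clock increase: {5.0 / 4.0:.2f}x")  # 1.25x
print(f"power increase: {oc / base:.2f}x")  # ~1.78x
```

So a 25% clock bump can cost nearly 80% more dynamic power once the required voltage increase is factored in, which is why a 5 GHz Piledriver part lands at such an extreme TDP for a modest performance gain.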
2.8GHz is a bit disappointing, to be honest. I wonder how an 8-core Zen at 2.8GHz performs against an 8-core FX-9590 clocked at 4.7GHz (TDP aside). Looking at Polaris, where AMD seems to have been surprised by lower-than-expected maximum clock speeds and higher-than-expected voltage/cooling requirements, I am getting slightly concerned. There’s no way I am going to buy a new CPU from AMD that isn’t significantly faster than their flagship from the previous generation. Polaris is still a great and competitive GPU for the price (once it is actually available and hits MSRP), but I hope it will improve over the coming weeks and months and achieve higher clock speeds at lower voltage.
I think the extreme cases, or the “up to”s, tell even less of the story than the aggregate. Also, if someone is targeting particular games, they are likely to have looked those up already and know which card would suit them better.
From what I see, the scaling in DX12 drops off after about six cores, but there are still gains to be made on the 7th and 8th cores, and if you are running other stuff in the background, those extra two cores are great: video capturing, streaming, watching something on another screen, or browsing the internet alongside your game, etc.
Perhaps I will wait for Zen+ or beyond because I’m in no hurry to upgrade my FX-8350. But who knows, right? Those die shots just might make me take my thin wallet out.
The latest 480 testing against the 1060 using Vulkan produces a much different result. To be fair, both cards were tested using Vulkan, and the 480 was up to 50% faster than the 1060; the game used for testing was Doom. As DX12 matures and becomes more Vulkan-like, this will place Nvidia in a position they are not used to. AMD is very progressive with its CPU and GPU designs. Let’s face it, all previous DX APIs were designed to benefit Nvidia and Intel. Microsoft has seen the light and released DX12, which benefits AMD GPUs far more than Nvidia GPUs. Go figure.
Once again, as more game developers start using DX12 and Vulkan, the true Nvidia will reveal itself. AMD has proven to be more efficient, and there are growing results showing the 480 is up to 50% faster than the 1060. Besides, AMD GPUs have a longer useful life than Nvidia GPUs. The only open question about these latest cards is why AMD runs its cards at 1300MHz and Nvidia at 1700MHz, both at stock settings.
I also looked at Newegg. Prices for the 1060 range from $280 to $385. Quite expensive.
I couldn’t have said it better.
Also, going forward, all games will support DX12 and/or Vulkan and no other API. DX11 and prior are old technology. I would buy the 480 in a heartbeat over the 1060.
I am probably going to buy the 470 myself. It will still be a good 1080p card considering it has very similar specs to the Radeon HD 7970 GHz Edition, but with the delta colour compression that reduces memory bandwidth requirements, plus the various GCN 1.1, GCN 1.2 and Polaris improvements, it will no doubt be the better performer of the two. Which is impressive for a US$150 card. (The regular 7970 was a US$550 card at launch, let alone the GHz Edition.)
As for why, the Radeon RX-480 is $500 here in New Zealand and I just cannot afford that. But the 470 will be a good improvement from my current card (a Radeon HD 7950 with factory overclocks). Sadly we only get the 8 GB 480 here, not the 4 GB one.
First, the 5960X’s official clocks are 3.0GHz base and 3.3GHz boost on all 8 cores. That puts this sample’s speed in line with Haswell-E, except it does it at a 95W TDP rather than a 140W TDP. That is HUGE for a node shrink. It comes with 4MB of L2 cache, compared to Intel’s 2MB L2 (unless the doubled cache only applies to the L3, though it makes no sense to give a quad-core 512KB per core but only 256KB per core to the 8-core), and 16MB of L3, compared to Intel’s 20MB L3 (which, if you do have the 512KB L2, won’t matter!). Broadwell-E was shit and an incremental step. Sure, it has a higher base clock, but it overclocks worse than Haswell did, meaning it’s way more limited! They both have SMT now, so as long as it works well, you are talking Haswell-E performance for between 1/2 and 1/4 of the cost. So where are you getting your numbers and facts from? You sound ignorant!
I’m waiting to see benchmarks and the power levels required. If power doesn’t jump up astronomically on a small OC (as seen on some AMD products in the past), then as long as you have good cooling, this will be right where it needs to be to reclaim a portion of the market. If it has no cold bug or cold-boot bug, it will make a splash with extreme OCers, unlike Broadwell-E. Now, if you are arguing limitations because of instruction sets, that is a different story and we can discuss that. But what you said ignored existing hardware with similar specs!
Haswell-E overclocking is better than Broadwell-E’s. If you are talking an 8-core that lands somewhere around Haswell-E 5930K to 5960X performance at $200-300 a chip, and it overclocks well, it wins!
The highest-end AM4 board will cost the same as the highest-end Intel boards, but you get the chip for much less. PCIe lanes aren’t mentioned here; it needs 40+ to be competitive, plus quad-channel memory. If it has those and goes toe-to-toe with Haswell-E, it will be snapped up quickly, considering Skylake-X won’t arrive until 2H 2017. So saying “even down somewhere between . . . Haswell-E/-EP” makes no sense, as Broadwell-E is nothing great!
LOL, so you want the same performance as a $600 CPU for $200! It makes me laugh when these fanboys make outrageous statements like this. It’s not happening! I expect Zen to be around mid/high i5 performance, priced at under $150.
In all honesty, in a lot of cases clock speed isn’t everything.
We’ll more than likely see 3.0-3.5GHz stock speeds with turbo boost to 4.0-4.5GHz at release.
Some games are optimized for the FX and do well with it; DOOM is probably one such game. But games which aren’t optimized for 8 threads and need strong single-core performance take a hit on the FX. GTA V doesn’t perform well, but if you clock the FX to 4.8GHz the bottleneck in that game is gone. Games like Fallout 4 and Far Cry 4 don’t really perform well with the FX, nor do many online FPSes like PlanetSide 2. Even if you OC to 5.0GHz you can’t get 60fps in them; they still do 45-50fps, which is just OK.
Whereas an unlocked i5 (Haswell and Skylake only) gets you a consistent 60fps in all games, if you have the GPU firepower.
I understand you, mate. Heck, the 8-core FX used to bottleneck the GTX 970 even at 4.6GHz overclocks.
Is this the place where we can bash the Nintendo Switch?
Maybe he’s hoping his CPU has more Insane Clown Posse!