
Intel launches Xeon Scalable CPUs

Just a few weeks ago, we saw AMD bring competition back to the server market with the launch of the EPYC CPU platform. Since then, Intel has been quiet on the Xeon front, until today, with the launch of Xeon SP, based on the new Skylake-SP architecture. ‘SP’ in this instance stands for ‘scalable platform’. Intel also says that its new Skylake-based Xeons offer a 1.65x performance boost compared to Broadwell-based Xeons.

Xeon SP will offer up to 28 cores per socket, support for up to 6 TB of system memory and a performance advantage over the previous generation. In comparison, EPYC offers up to 32 cores per socket, though Intel says that its new 28-core Xeon SP can deliver 28 percent faster performance compared to AMD's 32-core EPYC 7601.

With these new Xeons, Intel has introduced a new ‘mesh architecture’, which is what helped squeeze out more performance. Switching to a mesh allows for lower latency and higher bandwidth between cores, memory and I/O controllers by aligning everything more efficiently. On-chip cache banks, memory controllers, I/O controllers and cores are arranged in rows and columns, with wires and switches connecting them at each intersection. This improves efficiency and performance by creating more direct paths for data to follow. Think of it like a well-optimised highway system.
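To get a feel for why a grid of rows and columns beats the ring interconnect of previous Xeon generations as core counts grow, here is a toy Python sketch. It is purely illustrative (it models hop counts only, not Intel's actual routing, cache slices or clock domains): it compares the average number of hops between any two nodes on a 28-stop bidirectional ring against a hypothetical 4x7 mesh of the same 28 nodes.

```python
from itertools import product

def avg_ring_hops(n):
    """Average shortest-path hops between distinct nodes on a bidirectional ring."""
    total = pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(i - j)
            total += min(d, n - d)  # shortest way around the ring
            pairs += 1
    return total / pairs

def avg_mesh_hops(rows, cols):
    """Average hops on a 2D mesh, routing along a row then a column (Manhattan distance)."""
    nodes = list(product(range(rows), range(cols)))
    total = pairs = 0
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            (r1, c1), (r2, c2) = nodes[a], nodes[b]
            total += abs(r1 - r2) + abs(c1 - c2)
            pairs += 1
    return total / pairs

ring = avg_ring_hops(28)   # one big 28-node ring
mesh = avg_mesh_hops(4, 7) # the same 28 nodes as a 4x7 grid
print(f"ring: {ring:.2f} avg hops, mesh: {mesh:.2f} avg hops")
```

Running this gives roughly 7.26 average hops for the ring versus about 3.67 for the mesh: the grid's shorter worst-case paths are the "more direct highways" the article describes, and the gap widens as core counts climb.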

There are several Xeon Scalable Processors on the way, with the Xeon Platinum sitting at the very top, followed by Xeon Gold, Silver and Bronze. Here is the lineup:

| | Xeon Bronze (3100 Series) | Xeon Silver (4100 Series) | Xeon Gold (5100 Series) | Xeon Gold (6100 Series) | Xeon Platinum (8100 Series) |
|---|---|---|---|---|---|
| Highest Core Count Supported | 8 | 12 | 14 | 22 | 28 |
| Highest Clock Speed Supported | 1.7 GHz (8C/85W) | 2.2 GHz (10C/85W) | 3.6 GHz (4C/105W) | 3.4 GHz (6C/115W) | 3.6 GHz (4C/105W) |
| CPU Sockets Supported | Up to 2 | Up to 2 | Up to 4 | Up to 4 | Up to 8 |
| Max Memory Speed | 2133 MHz | 2400 MHz | 2400 MHz | 2666 MHz | 2666 MHz |
| Highest Memory Capacity per Socket | 768 GB | 768 GB | 768 GB | 768 GB / 1.5 TB | 768 GB / 1.5 TB |

KitGuru Says: It looks like Intel had a Xeon upgrade ready to go to combat AMD's EPYC launch. Competition in the datacenter world is definitely heating up. 


9 comments

  1. Let’s see prices first, because 28% more performance at more than twice the price is still not a good deal.

  2. Most customers that purchase Xeons by the dozens to hundreds are not the end-users; they are more like average folks that wait forever in between upgrades. So the price only has to justify the performance difference over whatever ancient hardware they're replacing.

    The only reason Intel is making comparisons to Broadwell-EP/-EX Xeons is because AMD has been comparing Zen to them. Businesses are still penny pinchers, more so than average mainstream buyers, but will justify expenses due to return on investment.

    But even businesses don’t typically buy direct from AMD or Intel; they go through reseller OEMs like HP or Dell – which are the real customers in this context.

  3. What you must remember as well is efficiency. Zen has been demonstrated to be more thermally efficient than Skylake and Kaby Lake by a surprising margin. If this efficiency of desktop Zen over desktop Skylake continues in the server platform (which isn’t always the case, I admit), Intel’s offerings just won’t appeal to OEMs, as efficiency and features are key. Given that both EPYC and Xeon have similar features (barring niches that will need an exact feature), the greater efficiency will win.

    That thermal efficiency is a hell of a lot more important in servers than people realise. There are new door radiant water pump server cooling systems coming in now, but even then, the only working one in the world right now is in Canada. Most haven’t even begun construction. So old style coolant compression is still king. And you can ask anyone in Arizona USA about AC costs, and they’ll tell ya, it’s far from cheap.

  4. When your software cost hundreds of thousands of dollars, you don’t really care about the price of hardware.

  5. It’s funny they say up to 28% higher performance, while I can count on one hand the number of tests the 8176 (2 socket) wins, and EPYC (7601, also dual socket) is either equal or better in every other test
    http://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade

    Besides the lower price of entry for a dual 7601 box, unless you are doing integer-heavy tasks, the EPYC box will also cost less to power.

  6. Unless your software licenses are billed on a per-core or per-server basis – then hardware cost comes back into play.

  7. I’ve talked to a couple of end-consumers in this particular market, and unless they work in a high-end tech environment (like, say, at Google), the most important thing for them is neither price nor even efficiency – it’s certification and proven performance. Xeon has a HUGE advantage there. Most people responsible for buying and supporting their firm’s hardware don’t care too much about price or support costs, as long as they don’t have to go before the board of their company a few years later to explain why half the hardware is dead in such a short period of time. That’s where certification and a proven track record make all the difference – even if it all died, they can still say they took all precautions, whereas if they go for something much cheaper but uncertified, they’re likely walking out of there and straight out of the building 😛

  8. A lot of people buy server chips for workstations. I used to know a guy with a (“low-end”) Xeon in a workstation PC. He even got a motherboard with two sockets so he could upgrade to a second one at a later date. Those people care about cost.

    But even server companies care about cost, especially when they can get more PCI-E lanes for roughly half the cost. The number of PCI-E lanes makes a huge difference to performance for them as they rely a *lot* on PCI-E based SSDs and multiple GPUs for performance.

  9. That is the thing. It is the Googles, the Amazons, etc. that are the big money makers in terms of Xeon/EPYC, because they buy in massive and rapidly increasing quantities.