GreenReaper - Thursday, July 12, 2018 - link
Surely one reason for the drop in price is that these CPUs are still subject to a performance penalty for Meltdown and Spectre mitigations, and therefore offer a less compelling value? Nobody in their right mind wants to drink poisoned coffee if they can wait a year and grab an iced mocha.
Stiff competition in the form of Ryzen and Threadripper might be another factor.
hansmuff - Thursday, July 12, 2018 - link
Apt comment. My money is going to AMD for now, just because they have fewer of those issues and lose less performance to the fixes.
Samus - Friday, July 13, 2018 - link
If I were building a PC right now I'd have to agree; my money would go to AMD. Partially because they are entirely competitive, but also partially because Intel has been such a bastard about socket/chipset fragmentation between generations. I think it really started with Haswell/Broadwell, when they basically encouraged everyone to upgrade from the 80-series to the 90-series to guarantee future-generation support, which they didn't really deliver (Broadwell on the desktop was basically vaporware), then forced everyone to upgrade again to the 100-series for Skylake, which it turns out was NOT necessary, because the 90-series had already addressed the power-delivery incompatibility with Skylake via FIVR bypass.
So they did it for money. And they got a pass. So they are continuing to do it, now on the professional/enterprise platforms where the real money is. There is no better way to milk corporate customers than to force them to replace entire SERVERS to upgrade a single generation of CPU.
The problem for AMD is, and will continue to be, mobile. The 8th-gen Core U series is just a monster: 4 cores/8 threads in 15 W, with a decent iGPU. Ryzen can't touch that, because at best it can only match the physical core count, and even then the cores are weaker.
Death666Angel - Friday, July 13, 2018 - link
The few reviews of Ryzen U-series APUs I've seen had them trading blows with the respective Intel parts in general compute scenarios and destroying them in anything gaming related. The problem is still battery life, where Intel laptops still outshine AMD ones (not sure if it's an APU/chipset thing or just manufacturer laziness in not optimizing enough). And pricing is not as competitive as I'd hoped: oftentimes AMD APU laptops cost about as much as the Intel equivalent with an MX150 GPU, which is just weird.
Yuriman - Friday, July 13, 2018 - link
Businesses don't reuse motherboards; they replace entire PCs. The consumer segment that reuses motherboards is such a small percentage of the total market that it's probably not even worth considering. Backward and forward compatibility (or the lack thereof) is almost certainly related to something else.
Spunjji - Friday, July 13, 2018 - link
Remember, though, it's not a 15 W CPU. If you want anything like the promised performance out of those 8th-gen cores then you're running at 25 W minimum, usually closer to 35 W - that's why so many Lenovo systems have dog-awful performance under any kind of sustained load.
0ldman79 - Friday, July 13, 2018 - link
That is true. I will say this, though: dropping the voltage on my mobile Skylake chip drops the power usage dramatically, and it will push the turbo a lot harder and keep the temps down quite a bit.
There is a lot of headroom with Intel mobile, if they'd bother to make sure the temps stay below 90°C, which honestly isn't that hard once the voltage is lowered.
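If anyone wants to verify the power claim on Linux, sampling the RAPL package-energy counter before and after an undervolt is a quick check. A minimal sketch, assuming the intel_rapl powercap interface is available (the sysfs path can vary by kernel and package index):

```c
/* Minimal sketch: estimate average package power from the Linux RAPL
 * powercap counter (assumes intel_rapl is loaded; path may vary). */
#include <stdio.h>
#include <unistd.h>

static long long read_energy_uj(const char *path) {
    FILE *f = fopen(path, "r");
    long long uj = -1;
    if (f) {
        if (fscanf(f, "%lld", &uj) != 1) uj = -1;
        fclose(f);
    }
    return uj;
}

int main(void) {
    const char *path = "/sys/class/powercap/intel-rapl:0/energy_uj"; /* package 0 */
    long long a = read_energy_uj(path);
    if (a < 0) { perror(path); return 1; }
    sleep(5);                                   /* run your load meanwhile */
    long long b = read_energy_uj(path);
    if (b < 0) { perror(path); return 1; }
    /* The counter wraps eventually; ignore that for a quick spot check. */
    printf("avg package power: %.2f W\n", (double)(b - a) / 5.0 / 1e6);
    return 0;
}
```

Run it once at stock voltage and once undervolted under the same load and the difference shows up directly.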
blublub - Friday, July 13, 2018 - link
Sorry, but this just isn't entirely true: I have an HP EliteBook G5 with a Ryzen 7 2700U. The Cinebench R15 multi-core score is 650 after one run and averages 545-550 after the third run, and it stays there. That is almost exactly what Intel's flagship 15 W part, the i7-8650U, will score (see NotebookCheck): the difference here is marginal, with the Intel scoring 675. Intel's lead here is almost nonexistent.
Battery runtime is impossible to compare for end users; we have to wait for a proper review here. But I would guess Intel will likely pull ahead there by about 20%.
Ian Cutress - Thursday, July 12, 2018 - link
"Nobody in their right mind wants to drink poisoned coffee if they can wait a year and grab an iced mocha."Consumers yes. Business will get what they need to when they need to, especially when the extra money made from investing now will pay for itself. The only time I hear businesses delay purchasing hardware are big supercomputing projects / datacenter scale-outs waiting on the next generation.
mode_13h - Thursday, July 12, 2018 - link
Yeah, the original comment assumes flexibility in the purchasing timeline. Sometimes people need a [new] PC *now* and just have to choose from what's currently available.
As an aside, I sort of doubt that CPUs with the fixes in hardware are going to be as fast as someone living dangerously with a current-gen CPU in an unpatched system. I wish these mitigations could be limited to rings 0, 1, and 2. Sadly, too much sensitive data is handled in ring 3.
Reflex - Friday, July 13, 2018 - link
This is a point I've tried to make. Hardware fixes for Meltdown and Spectre will, by design, reduce performance. It's unlikely to be significantly different from the current methods of avoiding the problems. There is no reason to wait for hardware that has a fix; it may perform better, but that will be almost entirely because it is simply a new generation of chips, not because the fixes are in hardware.
GreenReaper - Friday, July 13, 2018 - link
Meltdown requires a security check during L1 address translation, while the branch misprediction form of Spectre can be addressed by storing more information about the caller it's for (potentially multiplying cache entries): https://news.ycombinator.com/item?id=16593061
Both of these will require extra transistors, and maybe extra time, but it seems unlikely to be as much time as the software workarounds.
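To make the comparison concrete, here is a rough C sketch of the kind of software workaround involved for the bounds-check variant (Spectre v1). It is only an illustration of the pattern, not any particular kernel's or compiler's code, and the point is that the masking cost is paid on every access:

```c
#include <stdint.h>
#include <stddef.h>

uint8_t bounds_checked[256];
uint8_t probe[256 * 64];

/* Classic Spectre v1 gadget: if the branch is mispredicted, the
 * out-of-bounds load still happens speculatively and leaves a
 * secret-dependent footprint in the cache. */
uint8_t victim_unsafe(size_t idx, size_t len) {
    if (idx < len)
        return probe[bounds_checked[idx] * 64];
    return 0;
}

/* Software workaround: clamp the index with a branchless mask so even the
 * speculative path stays in bounds. Modelled loosely on the Linux
 * array_index_nospec() helper; assumes idx and len are below 2^63. */
static size_t index_mask(size_t idx, size_t len) {
    return (size_t)(~(int64_t)(idx | (len - idx - 1)) >> 63);
}

uint8_t victim_masked(size_t idx, size_t len) {
    if (idx < len) {
        idx &= index_mask(idx, len);   /* extra ALU work on every access */
        return probe[bounds_checked[idx] * 64];
    }
    return 0;
}
```

A hardware check folded into the access itself could hide that extra work instead of serializing it into the instruction stream.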
Indeed, it's another form of specialized hardware for particular tasks, like AES-NI, which can easily be ten times faster than doing the same thing in software.
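For a sense of scale, one AES-128 block encryption via the AES-NI intrinsics looks like this (a minimal sketch, assuming the round keys are already expanded; compile with -maes):

```c
#include <wmmintrin.h>  /* AES-NI intrinsics */

/* Encrypt one 16-byte block with AES-128 given 11 pre-expanded round keys.
 * Each _mm_aesenc_si128 performs a full AES round in a single instruction,
 * versus the table lookups and shifts a pure software implementation needs. */
static __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11]) {
    block = _mm_xor_si128(block, rk[0]);          /* initial AddRoundKey */
    for (int i = 1; i < 10; ++i)
        block = _mm_aesenc_si128(block, rk[i]);   /* rounds 1-9 */
    return _mm_aesenclast_si128(block, rk[10]);   /* final round, no MixColumns */
}
```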
Reflex - Friday, July 13, 2018 - link
I can't agree with that assessment; you are comparing adding more conditions to an operation (which reduces performance) with adding a specialized component that does just one thing well. Branch prediction is used virtually all the time; it does not get 'faster' by adding more requirements, in fact it gets slower. Additional transistors dedicated to the task will not be used only in specific scenarios, because again, branch prediction is used virtually all the time. Instead they will add more heat, more latency and more power consumption.
I don't see doing it 'in hardware' as a way to gain performance out of this.
GreenReaper - Saturday, July 14, 2018 - link
That doesn't make sense. We have to effectively add those conditions (or forgo the benefits of the cache) now *anyway* to ensure security, which imposes a performance penalty. That's the deficit we're trying to avoid. I wouldn't be surprised if such checks could be done in parallel with the hardware access, so it returns an error in the same time that it would have returned a value.
Reflex - Saturday, July 14, 2018 - link
No, the current approach was to disable the functionality, which certainly does cost performance. Doing it securely will add latency compared to the previous implementation, so while you will get some perf back because the feature is usable again, it won't be like before. It's not going to be some huge gain.
And no, you can't do it in parallel, or you open yourself up to another side-channel attack. Parallel operations like this are a big part of the problem in the first place.
iwod - Thursday, July 12, 2018 - link
The DMI limits are getting silly. 4x PCI-E bandwidth for everything from Thunderbolt 3.0, possibly 10G Ethernet, USB 3.1 Gen 2,
shabby - Thursday, July 12, 2018 - link
Ya, I'm surprised the infographic didn't mention UP TO 16 PCIE LANES FOR ESSENTIAL PERFORMANCE AND VISUALS!!!
edzieba - Friday, July 13, 2018 - link
Whether DMI is a limit depends on how much of the data shunted around the PCIe buses actually needs to hop over it to go via the CPU & main memory. With DMI you can go from storage to/from network (or storage to GPU, etc.) just via the PCH without bottlenecking.
TrevorH - Thursday, July 12, 2018 - link
> Intel's new naming scheme for the Xeon platforms has been a relatively haphazard transition
Is that a polite way of saying "makes no sense at all"?
bolkhov - Thursday, July 12, 2018 - link
The previous E3/E5/E7 naming was familiar, but the current scheme is more meaningful, at least for Xeon E: it encodes the generation (2nd digit, now "1") and the number of cores (last digit). The 3rd digit isn't obvious; it seems to encode the "level"/"coolness" somehow.
BTW, the Xeon-SP naming is also more meaningful than the E5/E7 SKU numbering.
edzieba - Friday, July 13, 2018 - link
Yep, the new scheme now aligns properly with socket compatibility. Previously you ended up with the E5 series split between two mutually incompatible sets (some E5s could go into single-socket boards, some were dual-socket only, and E7s were their own special set, with the boards as rare as hen's teeth). Now it's simple: Xeon E is for Socket Hx boards, Xeon W is for Socket Rx boards, and Xeon Scalable is for Socket Px boards. If it fits, it runs.
diehardmacfan - Friday, July 13, 2018 - link
Some of this is actually backwards: in the E5 series, the first digit of the model number indicates the socket count (E5-2600 supports dual socket, E5-4600 supports four). Now there is no indicator in the Xeon Scalable lineup for the number of sockets supported; you have to remember a metal (Bronze and Silver are 2, Platinum is 8, etc.).
RuralJuror - Friday, July 13, 2018 - link
Interestingly, on the Supermicro website they mention that their Intel Xeon E-2100 series motherboards support "Up to 8-core/16 threads Intel® Xeon® E processor".
risa2000 - Friday, July 13, 2018 - link
When I can build a "consumer desktop" with an AMD part with 8 cores/16 threads, what is a "workstation" with 6 cores/12 threads then supposed to be? It sounds like confusing marketing at the very least, not to speak of the Threadripper parts.
Given the PCIe limit imposed by DMI, it does not look much like a workstation part either. And finally, the pricing of the fastest part also does not make sense. It is difficult to see any strategy in Intel's recent actions.
Spunjji - Friday, July 13, 2018 - link
Xeon E has always been their excuse to overcharge for desktop-grade CPUs simply by not disabling ECC support. As such, this is business as usual for Intel.
The same goes for the pricing of the top-end part. It's always bizarre.
diehardmacfan - Friday, July 13, 2018 - link
Because the use cases are different. The Xeon E-2186G is now the best CAD workstation CPU in the world, as most CAD programs (with caveats) rely entirely on single-threaded performance. IPC plus frequency is still king in many fields.
kgardas - Friday, July 13, 2018 - link
Also probably for simple edit/compile/link cycles done in compiled-to-binary languages. Hence a true software-engineering workstation...
peevee - Friday, July 13, 2018 - link
For compiles, cores matter, and the number of memory channels matters. Threadripper is a true workstation CPU now.
kgardas - Friday, July 13, 2018 - link
For bulk builds only, yes. But for the general edit/compile/link cycle, I'm not so sure. You edit a C/C++/Haskell file, you invoke (g)make, and that invokes the compiler and linker. It's just a serial job; there is nothing to parallelize here as long as the compiler and linker are single-threaded apps. Now, as a developer I spend way more time on single edit/compile/link cycles than on bulk builds, hence for me the highest single-threaded perf is preferred over multi-core perf...
risa2000 - Saturday, July 14, 2018 - link
That I did not know. My understanding was that all computationally heavy and algorithmically "homogeneous" tasks (e.g. video rendering, simulations, even 3D modelling) were already parallelized properly in professional tools, which led me to believe that more cores would in general offset higher single-thread IPC.
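To illustrate the kind of workload I mean, the render/simulate hot loops usually look like the sketch below, where every iteration is independent and the work splits cleanly across cores. A minimal C/OpenMP example, illustrative only and not taken from any particular tool (build with -fopenmp):

```c
#include <stdio.h>
#include <omp.h>

#define W 1920
#define H 1080

static float shade(int x, int y) {        /* stand-in for per-pixel work */
    float v = 0.0f;
    for (int i = 0; i < 1000; ++i)
        v += (float)((x * 31 + y * 17 + i) % 255) / 255.0f;
    return v;
}

int main(void) {
    static float frame[H][W];
    double t0 = omp_get_wtime();

    /* Each pixel is independent, so the loop splits cleanly across cores:
     * the "homogeneous" case where core count beats single-thread IPC. */
    #pragma omp parallel for schedule(static)
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            frame[y][x] = shade(x, y);

    printf("%d threads, %.3f s, sample %.2f\n",
           omp_get_max_threads(), omp_get_wtime() - t0, frame[H / 2][W / 2]);
    return 0;
}
```

An incremental compile, by contrast, is a dependency chain (edit, compile, link), which is why the single-thread argument above still holds there.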
Ian, the prices in the first image don't match the prices in your second image. Possibly a copy-paste error from the Coffee Lake review?
Ian Cutress - Friday, July 13, 2018 - link
Yup, you are right. Should be fixed if you refresh the cache.
peevee - Friday, July 13, 2018 - link
"Despite the same pin configuration as the consumer parts (LGA1151), they will require Xeon E-enabled motherboards with the C246 chipset"Purely marketoid decision. MBAs are killing this once-great company...
Ratman6161 - Friday, July 13, 2018 - link
"The C246 motherboards are set to support Xeon E, Pentium, Celeron, and Core i3 processors only, and not Core i5/i7. "Anyone but me find that statement amusing? We will let you put a celeron, our lowest of the low end in your workstation motherboard (as if anyone would want to) but an i5 or i7 won't work.
diehardmacfan - Friday, July 13, 2018 - link
I actually have a feeling this is incorrect; the Precision 3630s have the C246 and can be ordered with i5 and i7 processors.
kepstin - Sunday, July 15, 2018 - link
This list is interesting, because it's basically just a list of processors where ECC support hasn't been disabled.
(Yes, Pentium, Celeron, and Core i3 series processors support ECC. I don't know why, but it's a handy way to save a bit of cash if you are building, e.g., a NAS with ECC RAM.)
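If you do build such a box, it's worth confirming that ECC is actually active and reporting, not just that the modules are ECC. On Linux the EDAC counters are a quick check; a minimal sketch, assuming an EDAC driver for your memory controller is loaded (paths vary by platform):

```c
/* Quick ECC sanity check via Linux EDAC sysfs (driver must be loaded;
 * exact paths vary by platform - this only looks at controller mc0). */
#include <stdio.h>

static int read_line(const char *path, char *buf, size_t n) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    int ok = fgets(buf, (int)n, f) != NULL;
    fclose(f);
    return ok ? 0 : -1;
}

int main(void) {
    char name[128] = "?\n", ce[32] = "?\n", ue[32] = "?\n";
    if (read_line("/sys/devices/system/edac/mc/mc0/mc_name", name, sizeof name) != 0) {
        puts("no EDAC memory controller found - ECC likely not active");
        return 1;
    }
    read_line("/sys/devices/system/edac/mc/mc0/ce_count", ce, sizeof ce);
    read_line("/sys/devices/system/edac/mc/mc0/ue_count", ue, sizeof ue);
    printf("controller: %scorrectable errors: %suncorrectable errors: %s", name, ce, ue);
    return 0;
}
```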
Alsw - Friday, July 13, 2018 - link
Glad they have posted turbo frequencies after all, but there doesn't seem to be much difference between the SKUs; in particular, the only difference between the top two seems to be the TDP? Perhaps some extra headroom to turbo for longer? Still hard to justify; I suspect most users won't notice a lot of difference between the 6-core models. One other thing for workstations now is that we get charged a lot more for Xeons - over £100 list for Windows Pro for Workstations on 4-core-plus machines - so even cheaper Xeons don't necessarily equal a cheaper workstation. As a result we are seeing more and more i5, i7 and i9 workstations for those who can live without ECC.