Comments Locked

83 Comments


  • chuychopsuey - Friday, March 23, 2018 - link

    Wow! That's pretty significant. Looks like it takes you a generation or two backwards.
  • Samus - Saturday, March 24, 2018 - link

    The real damage comes for people with NVMe SSDs or other CPU-centric PCIe hardware. A 29% performance drop...wow.
  • III-V - Saturday, March 24, 2018 - link

    On a single benchmark.

    You might need to see an optometrist.
  • Samus - Sunday, March 25, 2018 - link

    Optometrist, no. Optimist, yes.
  • umano - Sunday, March 25, 2018 - link

    I want to be an optimist, but this is not a very powerful CPU ("This significant performance loss is partly due to the NVMe drive performance now being CPU bound"), so maybe on a more powerful CPU this % will be lower... I hope so
  • linuxgeex - Monday, March 26, 2018 - link

    yes but people with high-end CPUs will tend to have even faster drives, or RAID, or Optane as a disk cache, and I've seen benchmarks where Optane performance dropped >40% on i9. Desktop users won't feel this very much but others who have been paying 300% more for 25% more performance for scale-up business logic are crying.
  • iter - Monday, March 26, 2018 - link

    Ultra high performance is only required in workstations. Workstations are used for important work. It is best to keep such systems offline. No need for antivirus, firewall, updating, patching and all that stuff that impedes performance and introduces downtime.

    The maxim among professionals is "if it works, don't touch it". We are well past the point where it is worth keeping everything up to date with the latest versions. It's been a while since version updates were about making things better for the user; they are mostly about making things better for the big software corporations.

    Use a separate low end system for internet. With nothing important on it.
  • PeachNCream - Monday, March 26, 2018 - link

    I can't think of any situations where I've seen professional workstations that are intentionally kept offline and deprived of software updates. Perhaps that's something that would happen in a small business or SOHO where the end user is also her own IT support and might make such an odd decision, but as a client to a company using that sort of policy, I'd be concerned they were making a larger error of judgement in adhering to automotive or mechanical engineering wisdom of the 20th Century.
  • iter - Monday, March 26, 2018 - link

    That same wisdom that has resulted in countless security breaches and the privacy of billions of people violated.

    Thanks but no thanks. The "standards" are too low.

    The lack of internet connectivity doesn't in any way impede dedicated support personnel from supporting. Those people are supposed to know their biz, not google about it.

    The only error in judgement you should be concerned about at this point is your own. A system that is dedicated to doing work has no business being connected to the internet. There is absolutely no good reason to update it as long as it operates properly. The update will add no value, will only introduce downtime, and is likely to break stuff.
  • PeachNCream - Monday, March 26, 2018 - link

    Despite those strongly expressed thoughts, there are very few workstations that are running as stand-alone systems that don't get vendor software updates.
  • iter - Monday, March 26, 2018 - link

    According to whom? You, the workstation all-seer? Or perhaps some statistics done over the internet?
  • iter - Monday, March 26, 2018 - link

    Also, if a "standalone system" is for you the opposite of "connected to the internet" that is quite indicative... You know there exists this thing called a network, on top of which the internet runs. You can have a load of workstations and servers in a network that is not connected to the outside world.

    Most places that do important work do it this way. Eliminates 99.99% of threats from the outside and from the inside. Just one of many other common-sense things, such as disabling USB storage devices, blocking unauthorized network clients, and whatnot. Machines that do connect to the internet are physically isolated from the secure network. They use secure proprietary interfaces for explicit data transfer between the two networks under tight scrutiny.
  • rhoades-brown - Tuesday, March 27, 2018 - link

    Eh? So, you're saying that you would put your workstations, unpatched and completely unprotected, on a network where other devices can connect to them?

    Did you hear about WannaCrypt? Your network connected workstation would have been easy prey.

    Would you allow these unprotected workstations to share files with other workstations and what about the cheaper machines? I assume that you are either creating or processing content/data of some description. Have a look at MS16-120 - 'The most serious of these vulnerabilities could allow remote code execution if a user either visits a specially crafted website or opens a specially crafted document.'

    What about USB sticks? Something on one machine could easily be spread to another, and people are stupid enough to plug in a USB stick that they found in a car park, etc.

    There are exceptions - air-gapped networks to make things highly secure - but that seems unlikely, and if your workstations are in that rare scenario, have a look at xLED, which uses a compromised switch to flash its status LEDs to share data - crazy, I know; scary, absolutely.
  • Gasaraki88 - Monday, March 26, 2018 - link

    Wow, that's a big exaggeration...
  • Bulat Ziganshin - Friday, March 23, 2018 - link

    >Though there is a certain irony to the fact that taken to its logical conclusion, patching a CPU instead renders storage performance slower, with the most impacted systems having the fastest storage.

    It looks ironic because it was incorrectly attributed as a CPU bug. But the point is that it allows information to be discovered only where the OS allows it, and thus it's an OS bug for not preventing that. As long as you run pure CPU computations, you don't need any mitigations.

    The only thing that needs to be patched is communication between the OS and the application, and therefore you get a larger hit when these communications are more intensive - i.e. on higher-IOPS operations. So, for example, I/O in large blocks (1 MB or so) is unaffected, but 4K I/O is affected, especially with higher-performance drives and higher-QD scenarios.
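[Editor's note] The syscall-intensity effect described above can be illustrated with a quick timing sketch. This is a rough, page-cache-backed illustration rather than a real storage benchmark, and the file size and block sizes are arbitrary; the point is that a 4K-block loop issues 256x as many user-to-kernel transitions as a 1MB-block loop, and KPTI adds a fixed cost to each transition.

```python
import os
import tempfile
import time

# Scratch file: 16 MiB of zeros. Reads will largely come from the page
# cache, which is fine here: the point is syscall overhead, which the
# Meltdown (KPTI) patch amplifies, not raw drive speed.
TOTAL = 16 * 1024 * 1024
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * TOTAL)
    path = f.name

def read_all(block_size):
    """Read the whole file in block_size chunks; return (seconds, syscalls)."""
    calls = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as fh:  # unbuffered: one read() per call
        while fh.read(block_size):
            calls += 1
    return time.perf_counter() - start, calls

t_4k, n_4k = read_all(4 * 1024)      # 4096 read() syscalls
t_1m, n_1m = read_all(1024 * 1024)   # 16 read() syscalls
print(f"4K blocks: {n_4k} syscalls, {t_4k:.4f}s")
print(f"1M blocks: {n_1m} syscalls, {t_1m:.4f}s")
os.unlink(path)
```

Running this before and after enabling the OS patch would show the 4K loop slowing down far more than the 1MB loop, since its runtime is dominated by per-syscall overhead.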
  • jordanclock - Friday, March 23, 2018 - link

    It is a CPU bug. The speculative execution is faulty and that is a CPU feature. The OS patches are simply workarounds to prevent certain kinds of speculative execution.
  • Reflex - Friday, March 23, 2018 - link

    It is not a bug at all at either level. It is a feature that was found to be able to be abused. That happens all the time. Once found, it was mitigated, in this case by disabling the feature (Meltdown) or mitigating the impact (Spectre). In future designs it will be mitigated or eliminated.

    There are all sorts of features your CPU is capable of utilizing that can compromise your data or stability (hey, you can still run in unprotected mode for memory!). When one is found to be a problem, it is typically disabled at the appropriate level (microcode/firmware/OS).
  • bji - Friday, March 23, 2018 - link

    Uh, no. It's a feature that comes with an unintended side effect of allowing data reads that should be disallowed. That part of it is a bug, plain and simple. I guess you are the kind of person that would call a bug that crashes the computer a "feature" because "it saves you power when your PC is off because it crashed".
  • PixyMisa - Friday, March 23, 2018 - link

    So it's a bug.
  • yeeeeman - Saturday, March 24, 2018 - link

    A bug is something that doesn't work as designed. I am pretty sure that they designed and verified it this way. These vulnerabilities are not bugs, they are just security loopholes.
  • Samus - Saturday, March 24, 2018 - link

    It isn't a bug, or a design flaw. It's just an exploit of the architecture.

    Saying otherwise is like saying houses in New York not built for category 3 hurricanes have a design flaw when the area has never needed construction to that spec. But with climate change, that need is emerging, as the architecture is no longer fit for the climate.

    Not a great analogy, but in the same example, neither Intel nor construction designers anticipated the architecture would become flawed due to unforeseen circumstances.
  • Alexvrb - Saturday, March 24, 2018 - link

    It's definitely a flaw. Unintentional security flaws are still flaws. It's an exploit of a security flaw in the architecture. They'll release a CPU in the future that still has speculative execution and isn't vulnerable to this flaw at an architecture level. It might have other flaws, however. CPU flaws, bugs, and errata are very common.
  • HStewart - Saturday, March 24, 2018 - link

    "It isn't a bug, or a design flaw. It's just an exploit of the architecture."

    It is an attempt to cast a shadow on Intel products, one that also backfired because it also affects ARM and AMD.
  • bji - Sunday, March 25, 2018 - link

    That's not really true. There is a class of very hard to exploit design flaws in most implementations of speculative execution that seem to be systemic to almost all chips, that is true. These are so hard to exploit as to be nearly unexploitable in my opinion. These are called Spectre.

    But there is also a much more significant and easy to exploit design flaw. This affects only Intel chips (and I guess some ARM chips too -- but not AMD). This is called Meltdown.

    So there is no distracting going on here. Almost all chips are affected by Spectre, but it's so hard to exploit as to almost be irrelevant. Meltdown is serious, and it's Intel-only.
  • Manch - Monday, March 26, 2018 - link

    HStewart is an Intel shill/fan boy. In his mind Intel can do no wrong. You're wasting your breath arguing with him.
  • bcronce - Sunday, March 25, 2018 - link

    A bug is when something does not work to spec.
  • boozed - Sunday, March 25, 2018 - link

    I bet you $10 the spec doesn't say "our branch prediction should have this massive security flaw".
  • linuxgeex - Monday, March 26, 2018 - link

    Correct, it's a design flaw not a bug. When something operates as designed it isn't a bug, but that doesn't make it correct either. Intel is rightly getting sued by the people who are significantly affected and who can afford to battle Intel in court... that will be people who forked over millions to obtain small performance improvements based on Intel's claims of security, performance, and fitness for the purpose for which it was sold, which later was found to be false. They are being sued exactly the same way that Honda would be sued if they supplied a Formula One team with an engine that performed 30% slower than advertised.
  • Alexvrb - Saturday, March 24, 2018 - link

    It's a CPU vulnerability.
  • willis936 - Friday, March 23, 2018 - link

    The power consumption drop with performance is interesting. I'd be interested in seeing a comparison of efficiency pre and post patch. At a glance it's difficult to tell if efficiency has gone up, down, or stayed the same. I'm under the impression that this patch disables speculative execution entirely, in which case efficiency should go down unless speculation takes a lot more power than I think.
  • boeush - Friday, March 23, 2018 - link

    Speculative execution uses up compute cycles and can cause excessive memory loads and cache thrashing - which amount to wasted power and in some cache-sensitive cases, possibly even a drop in performance - when the speculation is frequently-enough incorrect (i.e. when the actual branch taken doesn't match the CPU's guess.)

    I'd expect that disabling speculative execution under high load (e.g. benchmarking scenarios) should normally result in improved power efficiency (avoiding wasted computation and I/O) - but at the cost of raw compute performance. In less intense, more 'bursty' scenarios, where the CPU spends a lot of time in an idle state, the "hurry up and rest" dynamic might strongly reduce the overall power waste of speculative execution, as the CPU would spend less time in an active-but-stalled state while spending more time in a sleep state...
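[Editor's note] The efficiency question raised above boils down to simple arithmetic: efficiency is work done per watt, so it falls only if performance drops by a larger fraction than power does. A sketch with made-up numbers (purely illustrative, not taken from the article's results):

```python
def efficiency(score, avg_watts):
    """Efficiency as benchmark score per watt of average power draw."""
    return score / avg_watts

# Hypothetical pre/post-patch numbers, for illustration only:
pre = efficiency(1000, 25.0)  # 40.0 points per watt
post = efficiency(850, 22.0)  # ~38.6 points per watt

# Performance fell ~15% but power fell only ~12%, so efficiency
# drops slightly -- consistent with willis936's intuition above.
print(f"pre={pre:.1f} post={post:.1f} ratio={post / pre:.3f}")
```

Plugging the article's actual scores and power figures into the same ratio would answer the question directly for each benchmark.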
  • Cravenmor - Friday, March 23, 2018 - link

    The thing that caught my eye was the reduction in power from the patches. I wonder what to deduce about speculative execution and whether it's inefficient.
  • Lord of the Bored - Saturday, March 24, 2018 - link

    Speculative execution does add somewhat to the power load. That's why Atom parts were in-order for a long time, and many ARM parts still are.
  • Cravenmor - Friday, March 23, 2018 - link

    willis936 beat me to it by a nose
  • eva02langley - Friday, March 23, 2018 - link

    It is interesting nonetheless. The storage data is absolutely devastating. Can we draw conclusions about the server world from this? I don't know, since servers are still using hard drives. However, it might force companies to switch to Epyc or to upgrade to Cannon Lake. It would be interesting.
  • boeush - Friday, March 23, 2018 - link

    The CPU used in these tests was a low-power 2-core - pretty weak to begin with. Knock some performance off the top, and you have detectable impact on I/O.

    Probably the impact would be much less severe with a more powerful CPU, where the test scenario would again 'flip' from CPU-bound to bus/storage-device limited.
  • Reflex - Friday, March 23, 2018 - link

    Also, servers are usually only using NVMe drives as cache; SAS is less likely to see a significant impact.
  • Drazick - Friday, March 23, 2018 - link

    This is a great analysis.
    We'd be happy to have more like this (on various performance-impacting situations).

    I'd be happy to have a guide on how to prevent the patching for each OS (Windows, macOS, Linux), as private users mostly have no reason to be afraid of those.

    Thank You!
  • ZolaIII - Friday, March 23, 2018 - link

    I found this comparison much more interesting.
    https://www.phoronix.com/scan.php?page=article&...
    It's done on a much more capable system, which was hit harder in the first place, & at least some benchmarks are representative of real-usage workloads. Seems M$ again did a bad job, & chubby Linus is still not satisfied with the results to date, so work still carries on.
  • Klimax - Sunday, March 25, 2018 - link

    Not really correct...
  • Klimax - Sunday, March 25, 2018 - link

    Just a well-chosen set of likely badly written projects to get the wanted conclusion...
  • ZolaIII - Sunday, March 25, 2018 - link

    What isn't correct there? Why don't you write a better one, since the source is there? The general tests such as SQL, web serving, or GitHub init/create & compile times are among many real-use & not synthetic benchmarks. You may not find those crucial on a standard desktop; on workstations and small servers they are, & the OS choice is dictated by them & the system's purpose. In the large server space you won't find anything called Windows; even M$ embraced Linux there (Azure platform).
    Now try to explain what your so-called "wanted conclusion" is. While Linux makes his small talks about how he still isn't satisfied with the fixes, the person most struck by the regressions & the one most responsible for getting Linux to be the performance-leading platform (& on many more architectures than any other OS ever supported) is Peter Zijlstra.
    Now buzz off!
  • ZolaIII - Sunday, March 25, 2018 - link

    Linus*
  • nismotigerwvu - Friday, March 23, 2018 - link

    Wow, it looks like this puts Intel behind AMD on IPC for the first time since 2006, at least until new SKUs come to market.
  • mkaibear - Saturday, March 24, 2018 - link

    Nah, Zen was about 7% behind on IPC on average; this gets AMD closer but not quite there yet.

    Means picking a system based on use case has got more interesting though.
  • Klimax - Sunday, March 25, 2018 - link

    Since AMD doesn't have patches for Spectre ready, such a conclusion is not warranted yet.
  • phoenix_rizzen - Friday, March 23, 2018 - link

    Are the OS patch levels listed correct?

    .125 is unpatched.
    .309 is Meltdown-only patched.
    .214 is both patched.

    Should that last one be .314?
  • ganeshts - Saturday, March 24, 2018 - link

    Initially, I wanted to present only the unpatched and fully patched results.

    Unfortunately, I have a number of PCs pending review that don't seem to be receiving BIOS updates anytime soon (despite Intel having released the final microcode for its patching). Hence, I had to add the OS-patch only scenario at the last minute after rolling back the BIOS.

    So, the order of testing was:

    1. Unpatched
    2. Both Patched
    3. Only OS patched
  • Drazick - Saturday, March 24, 2018 - link

    How did you make your system unpatched?
  • Ryan Smith - Saturday, March 24, 2018 - link

    There are registry settings available in Windows to turn off the OS patches. Steve Gibson's InSpectre can twiddle the necessary bits rather easily: https://www.grc.com/inspectre.htm
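[Editor's note] For anyone who would rather flip the bits by hand: InSpectre toggles two documented DWORDs under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, named FeatureSettingsOverride and FeatureSettingsOverrideMask. A sketch of the bit meanings as I read Microsoft's KB4073119 guidance (verify against the current KB before relying on this):

```python
# Bit meanings for the FeatureSettingsOverride DWORD, per my reading of
# Microsoft's KB4073119 guidance (an assumption -- verify before use):
#   bit 0 (value 1) -- disable the Spectre variant 2 mitigation
#   bit 1 (value 2) -- disable the Meltdown (kernel VA shadow) mitigation
DISABLE_SPECTRE_V2 = 0b01
DISABLE_MELTDOWN = 0b10

def override_value(disable_spectre, disable_meltdown):
    """Compute FeatureSettingsOverride; the companion
    FeatureSettingsOverrideMask should be set to 3 in every case."""
    value = 0
    if disable_spectre:
        value |= DISABLE_SPECTRE_V2
    if disable_meltdown:
        value |= DISABLE_MELTDOWN
    return value

print(override_value(True, True))    # 3 -> both mitigations off
print(override_value(False, False))  # 0 -> both mitigations on
```

A reboot is required after changing the values; InSpectre automates exactly this.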
  • Drazick - Saturday, March 24, 2018 - link

    This is perfect!

    Thank You.
  • nocturne - Friday, March 23, 2018 - link

    I'm wondering why there were different builds of windows tested, when the patches can be disabled via a simple powershell command. Performance can vary wildly for synthetic tests across subsequent builds, especially with insider builds.

    I can understand how this comparison gives you the /before and after/, but testing across different builds doesn't show you anything about the performance impact of the patches themselves.
  • ganeshts - Saturday, March 24, 2018 - link

    BIOS patches (CPU microcode) can't be turned off from within the OS. But, I did use the InSpectre utility to do quick testing of the extensively affected benchmarks across all the builds (as applicable). The performance loss in those benchmarks was consistent with what we got with the final build (309) in a fully patched state (BIOS v0062).

    By the way, none of these builds are insider builds.

    The reason we have listed these versions is just to indicate the build used to collect comprehensive data for the configuration.

    The builds vary because the testing was done over the course of two months, as Intel kept revising their fix and MS also had to modify some of their patches.
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Saturday, March 24, 2018 - link

    Futuremark Storage Bench:
    Why are you getting 312MB/s (unpatched) bandwidth for a drive that has an average read speed of 1000MB/s?

    Please clarify why this synthetic test has any basis in fact
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Saturday, March 24, 2018 - link

    I'm only asking this because I have been getting real-world results that have little relationship to synthetic tests

    For example, simply swapping a CPU from a 2.6GHz dual-core to a 3.3GHz quad-core while keeping all other hardware and software the same will add a couple of seconds to my boot times (same O.S.)

    Now, I never expected a faster quad-core to take longer to boot, but it does

    Is there more overhead as you add cores and could this be measured with a synthetic test?

    Do you believe the synthetic test is actually measuring the bandwidth of the SSD, or how fast the CPU can process the data coming from the SSD?

    How would this differ from a real world test?
  • hyno111 - Sunday, March 25, 2018 - link

    Futuremark's Storage Benchmark uses real-world loads to test the overall disk throughput. The official sequential r/w speed does not represent actual use cases and is used mainly for advertising.
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Sunday, March 25, 2018 - link

    "Futuremark Storage Benchmark used real world load to test the overall disk throughput."
    ----------------------------------------------------------------------------------------------------------------------
    O.K., except my point was you are not measuring the disk throughput, which would stay the same regardless of slowdowns in the processor

    You are testing how fast the processor can handle the data coming from the disk "sorta"

    The synthetic test would still not tell me that my faster quadcore would boot slower than my dualcore in the example given, therefore it also does not directly relate to a real world test

    The disk hasn't changed and neither has its actual throughput
  • ಬುಲ್ವಿಂಕಲ್ ಜೆ ಮೂಸ್ - Sunday, March 25, 2018 - link

    "The synthetic test would still not tell me that my faster quadcore would boot slower than my dualcore in the example given, therefore it also does not directly relate to a real world test"
    -----------------------------------------------------------------------------------------------------------
    Before you answer, I admit that the example above does not tell me the actual throughput of the disk.
    It is used to show that the synthetic test does not directly relate to the results you might get in a real world test, yet both my example and AnandTech's example do not show the actual disk throughput, which stays the same
  • akula2 - Saturday, March 24, 2018 - link

    I do not have an iota of doubt that all these so-called vulnerabilities were well-thought-out and deliberately pre-planned by the Deep State during the CPU architecture design stage. The result is a huge loss of trust in brands like Intel who were/are part of this epic shamelessness! I'm pretty sure some of the tech media houses are part of this syndicate, willingly or not. Now, I do not give any benefit of the doubt to AMD either.

    The gigantic problem: what is the alternative? The answer lies in nations taking the lead to set up companies away from the influence of the Deep State, ideally in Asia.
  • FullmetalTitan - Saturday, March 24, 2018 - link

    I thought I only had to deal with everyone's favorite Anandtech loony, but now we have the conspiracy nuts in here too?
    Can we get some forum moderation please?
  • akula2 - Sunday, March 25, 2018 - link

    Really, are you stupid or what?

    Don't you know what is happening, with Snowden and Wikileaks? You have the audacity to call me a conspiracy theorist? You moron, where were you when the Dual_EC_DRBG "vulnerability" was discovered? Who planted that backdoor? Don't you know about the collusion of companies behind the epic data collection programme launched by the NSA? Do you really think people on this planet are idiots?
  • boeush - Monday, March 26, 2018 - link

    Yes, Asia - that global bastion of freedom, democracy, open-source transparency, and total absence of corruption. From China, to Vietnam, to North and South Korea, to India and Pakistan, to Myanmar and Indonesia, Afghanistan, Iran, the Philippines, Japan, oh my... LMAO
  • Matthmaroo - Saturday, March 24, 2018 - link

    Someone forgot some meds today
  • TrevorH - Saturday, March 24, 2018 - link

    I would love to see some Linux testing added to this set up. I work for a VoIP provider and run CentOS on a bunch of servers, and the result I saw from adding the initial RH patches was an approximate 30% increase in CPU time. Adding the microcode patch and enabling the IBRS mitigation on top of that resulted in a 100% increase in CPU usage for our workload. Yes, a 100% increase - so a machine with 20 cores that was running at 800% CPU usage before the patches was using 1600% (16 cores at 100% each) after both the PTI and IBRS mitigations were turned on. Now, our workload is probably quite unusual in that it uses both KVM virtualization and does lots and lots of small-packet UDP network I/O, but it does mean that with the mitigations in place, in order to run the same workload that we did before, we'd need to buy just about double the hardware we currently have in use.
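[Editor's note] On Linux, the kernel reports exactly which mitigations are active through a standard sysfs interface (present since kernel 4.15 and in the Red Hat backports). A small sketch that reads and classifies those files:

```python
import glob
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def mitigation_status(base=VULN_DIR):
    """Map vulnerability name -> kernel status line. Returns an empty
    dict if the sysfs directory is absent (unpatched or non-Linux)."""
    status = {}
    for path in glob.glob(os.path.join(base, "*")):
        with open(path) as f:
            status[os.path.basename(path)] = f.read().strip()
    return status

def is_mitigated(line):
    """The kernel reports e.g. 'Mitigation: PTI' or 'Not affected' when
    safe, and a line starting with 'Vulnerable' otherwise."""
    return not line.startswith("Vulnerable")

for name, line in sorted(mitigation_status().items()):
    print(f"{name}: {line}")
```

On a patched CentOS box this prints entries like `meltdown: Mitigation: PTI` and `spectre_v2: Mitigation: IBRS`, which is a quick way to confirm which of the costly mitigations are actually enabled before benchmarking.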
  • timecop1818 - Sunday, March 25, 2018 - link

    Hey, but why the fuck did you even install the patches or enable these "fixes"? You just said it: you are running a closed VoIP routing system. Why do you need to care about either of these non-problems on those servers? Why do people running Windows on a desktop in a single-user setup need to care about any of this? In an earlier comment Ryan mentioned there are registry settings to disable this; guess what I'm doing as soon as I'm home?
  • Alexvrb - Saturday, March 24, 2018 - link

    You need to test with older architectures, pre-Broadwell.
  • kn0w1 - Sunday, March 25, 2018 - link

    Here is one limited comparison for Ivy Bridge and Y-Series Broadwell for good measure.
    https://www.smajumdar.com/2018/03/musing-48-impact...
  • HStewart - Saturday, March 24, 2018 - link

    Well, first of all, I'm not sure the average customer will even notice these changes.

    Here are two things I'm thinking of:

    1. Has there been an actual virus / attack using this stuff, or is all of this hypothetical?
    2. It's odd that power consumption actually improved with the patch.

    The good news is this stuff is finally over with - move on.
  • satai - Sunday, March 25, 2018 - link

    "The good news this stuff is final over with."

    Probably not. We can expect more types of Spectre and similar attacks to come...
  • 29a - Saturday, March 24, 2018 - link

    I have to give props to ASRock for releasing a new BIOS for my Z170 Extreme3 motherboard dated 2018/3/12, I wasn't sure if they would update a budget Z170 board. This along with every ASRock MB I have owned being super stable has made me a very loyal customer.
  • rocky12345 - Sunday, March 25, 2018 - link

    What I would like to see is tests done on older CPUs like Ivy Bridge and Sandy Bridge, since that is where Intel said the biggest hits in performance would be.
  • Ryan Smith - Monday, March 26, 2018 - link

    It's on the list. The microcode is shipping, but we can't actually cover it until either an updated BIOS lands for one of the old mobos we still have, or MS publishes the microcode through Windows Update.
  • CircuitBoard - Sunday, March 25, 2018 - link

    Well, looks like the storage trend of going to NVMe will be slower than I've expected...
  • piasabird - Sunday, March 25, 2018 - link

    It should be illegal for Intel to continue to sell defective parts.
  • HStewart - Sunday, March 25, 2018 - link

    Someone purposely creating a software virus that affects the security of a system does not mean the system is defective - the real problem here is the people that create the virus. That does not mean the system shouldn't be updated to prevent such attacks in the future.

    Keep in mind these issues are not just on Intel - they're also on ARM and AMD. And stating that it should be illegal for Intel and not including the others shows complete bias against Intel.
  • mkaibear - Monday, March 26, 2018 - link

    It should also be illegal for Ford to sell cars which can go faster than the speed limit. And it should be illegal for Marshall to sell guitar amps which can go loud enough to damage hearing. And it should be illegal for Cisco to sell border routers which can pass traffic to Tor.

    ... Or alternatively we can accept that the problem is people using what is sold for an illegal purpose, like rational humans...
  • FourEyedGeek - Monday, March 26, 2018 - link

    They are not defective, a flaw / exploit has been discovered but they work as intended.
  • Duncan Macdonald - Monday, March 26, 2018 - link

    Can you do a similar test on an AMD Ryzen system? I would be interested to see the results there (especially whether Microsoft enables the performance-sapping Meltdown fixes on a CPU that does not need them).
  • casperes1996 - Monday, March 26, 2018 - link

    Nice read as always. I'd like to suggest a follow-up looking at how big the performance impact is on Linux/macOS, to see if the other OSes are hit in the same way as Windows
  • Adam Slivinsky - Monday, March 26, 2018 - link

    Seeing as you are using Steve Gibson's InSpectre tool https://www.grc.com/inspectre.htm in the screenshots, it might be nice to mention it and link to it instead of just showing it in the images, which spiders cannot read.
  • Manch - Monday, March 26, 2018 - link

    As long as it can play Crysis, IDGAF
  • marxzae - Friday, April 27, 2018 - link

    How does performance change if all patches (OS and BIOS) are applied but disabled via registry?

    Would be nice to know if one could apply all patches but also disable them when maximum performance is needed...
  • thuckabay - Tuesday, March 26, 2019 - link

    Virtual machines are particularly affected by the Spectre and Meltdown patches, as VMs are heavy on disk I/O. This impact is not just magnified in VMs, but for those depending upon NVME drives to get the best VM performance, the impact is really awful and very noticeable. Hopefully, future in-hardware mitigations will alleviate this situation.
