60 Comments

  • yankeeDDL - Monday, April 22, 2019 - link

    Is it just me or is it, generally speaking, noticeably slower than the 970 Evo?
  • DanNeely - Monday, April 22, 2019 - link

    The 970 can make use of 4 lanes; with only 2 effective lanes in most scenarios, any good x4 drive is going to be able to smoke the H10.
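    Quick sanity check on the lane math (the per-lane figures below are just the standard PCIe 3.0 rates, not anything measured from the H10):

```python
# Theoretical usable bandwidth of a PCIe 3.0 link: 8 GT/s per lane
# with 128b/130b line encoding, i.e. roughly 0.985 GB/s per lane.
GT_PER_S = 8.0
ENCODING = 128 / 130

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Usable bandwidth in GB/s for a PCIe 3.0 link of the given width."""
    return GT_PER_S * ENCODING * lanes / 8  # transfers/s -> bytes/s

print(f"x2: {pcie3_bandwidth_gbs(2):.2f} GB/s")  # each half of the H10
print(f"x4: {pcie3_bandwidth_gbs(4):.2f} GB/s")  # a full x4 drive like the 970
```

    So each half of the H10 tops out around 2 GB/s before protocol overhead, while a good x4 drive has roughly double the ceiling.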
  • yankeeDDL - Monday, April 22, 2019 - link

    I still remember that Optane was supposed to be 1000x faster and 1000x cheaper. It does seem to be faster, albeit by a much lower factor ... so why hamper it with a slower bus? I mean, I came to read the review thinking it could be a nice upgrade, and then I see it beaten handily by the 970 Evo. What's the point of such a device? It is clearly more complex, so I doubt it'll be cheaper than the 970 Evo...
  • Alexvrb - Monday, April 22, 2019 - link

    Wait, did they say it would be cheaper? I don't remember that. I know they thought it would be a lot faster than it is... to be fair they seemed to be making projections like NAND based solutions wouldn't speed up at all in years LOL.

    It can be a lot faster in certain configs (the high end PCIe add-on cards, for example) but it's insanely expensive. Even then it's mainly faster for low QDs...
  • kgardas - Tuesday, April 23, 2019 - link

    Yes, but only in comparison with DRAM prices. E.g. a large NVDIMM is cheaper than a DIMM of the same size.
  • Irata - Tuesday, April 23, 2019 - link

    It was supposed to be 1000x faster and have 1000x the endurance of NAND as per Intel's official 2016 slides.

    It may be slightly off on those promises - would have loved for the article to include the slide with Intel's original claims.

    Price wasn't mentioned.
  • yankeeDDL - Tuesday, April 23, 2019 - link

    You're right. They said 1000x faster, 1000x endurance and 10x denser, but they did not say cheaper, although the 10x denser somewhat implies it (https://www.micron.com/~/media/documents/products/... Still, this drive is not faster, nor does it have significantly higher endurance. Let's see if it is any cheaper.
  • Valantar - Tuesday, April 23, 2019 - link

    Denser than DRAM, not NAND. Speed claims are against NAND, price/density claims against DRAM - where they might not be 1/10th the price, but definitely cheaper. The entire argument for 3D Xpoint is "faster than NAND, cheaper than DRAM (while persistent and closer to the former than the latter in capacity)", after all.
  • CheapSushi - Wednesday, April 24, 2019 - link

    I think this is why there's still negative impressions around 3D Xpoint. Too many people still don't understand it or confuse the information given.
  • cb88 - Friday, May 17, 2019 - link

    Optane itself is *vastly* faster than this... on an NVDIMM it rivals DDR4 with latencies in hundreds of ns instead of micro or milliseconds. And bandwidth basically on par with DDR4.

    I think it's some marketing BS that they don't use 4x PCIe on their M.2 cards ... perhaps trying to avoid server guys buying them up cheap and putting them on quad M.2 to PCIe adapters.
  • Valantar - Tuesday, April 23, 2019 - link

    "Why hamper it with a slower bus?": cost. This is a low-end product, not a high-end one. The 970 EVO can at best be called "midrange" (though it keeps up with the high end for performance in a lot of cases). Intel doesn't yet have a monolithic controller that can work with both NAND and Optane, so this is (as the review clearly states) two devices on one PCB. The use case is making a cheap but fast OEM drive, where caching to the Optane part _can_ result in noticeable performance increases for everyday consumer workloads, but is unlikely to matter in any kind of stress test. The problem is that adding Optane drives up the price, meaning this doesn't compete against QLC drives (which it would beat in terms of user experience) but against TLC drives, which would likely be faster in all but the most cache-friendly, bursty workloads.

    I see this kind of concept as the "killer app" for Optane outside of datacenters and high-end workstations, but this implementation is nonsense due to the lack of a suitable controller. If the drive had a single controller with an x4 interface, replaced the DRAM buffer with a sizeable Optane cache, and came in QLC-like capacities, it would be _amazing_. Great capacity, great low-QD speeds (for anything cached), great price. As it stands, it's ... meh.
  • cb88 - Friday, May 17, 2019 - link

    Therein lies the BS... Optane cannot compete as a low-end product as it is too expensive, so they should have settled for being the best premium product with 4x PCIe... probably even maxing out PCIe 4.0 easily once it launches.
  • CheapSushi - Wednesday, April 24, 2019 - link

    I think you're mixing up why it would be faster. The lanes are the easier part. It's inherently faster. But you can't magically make x2 PCIe lanes push more bandwidth than x4 PCIe lanes on the same standard (3.0 for example).
  • twotwotwo - Monday, April 22, 2019 - link

    Prices not announced, so they can still make it cheaper.

    Seems like a tricky situation unless it's priced way below anything that performs similarly though. Faster options on one side and really cheap drives that are plenty for mainstream use on the other.
  • CaedenV - Monday, April 22, 2019 - link

    lol cheaper? All of the parts of a traditional SSD, *plus* all of the added R&D, parts, and software for the Optane half of the drive?
    I will be impressed if this is only 2x the price of a Sammy... and still slower.
  • DanNeely - Monday, April 22, 2019 - link

    Ultimately, to scale this I think Intel is going to have to add an on-card PCIe switch. With the company currently dominating the PCIe switch market setting prices to fleece enterprise customers, I suspect that means they'll need to design something in house. PCIe 4 will help some, but normal drives will get faster too.
  • kpb321 - Monday, April 22, 2019 - link

    I don't think that would end up working out well. As the article mentions, PCI-E switches tend to be power hungry, which wouldn't work well here and would add yet another part to the drive, pushing the BOM up even higher. For this to work you'd need to deliver TLC-level performance or better, but at a lower cost. Ultimately the only way I can see that working would be moving to a single integrated controller. From a cost perspective, eliminating the DRAM buffer by using a combination of the Optane memory and HMB should probably work. This would probably push it into a largely or completely hardware-managed solution and would improve compatibility and eliminate the issues with PCI-E bifurcation and bottlenecks.
  • ksec - Monday, April 22, 2019 - link

    Yes, I think we will need a Single Controller to see its true potential and if it has a market fit.

    Cause right now I am not seeing any real benefits or advantage of using this compared to decent M.2 SSD.
  • Kevin G - Monday, April 22, 2019 - link

    What Intel needs to do for this to really take off is to have a combo NAND + Optane controller capable of handling both types natively. This would eliminate the need for a PCIe switch and free up board space on the small M.2 sticks. A win-win scenario if Intel puts forward the development investment.
  • e1jones - Monday, April 22, 2019 - link

    A solution for something in search of a problem. And, typical Intel, clearly incompatible with a lot of modern systems, much less older systems. Why do they keep trying to limit the usability of Optane!?

    In a world where each half was actually accessible, it might be useful for ZFS/NAS apps, where the Optane could be the log or cache and the QLC could be a WORM storage tier.
  • Flunk - Monday, April 22, 2019 - link

    This sounded interesting until I read "software solution" and "split bandwidth". Intel seems really intent on forcing Optane into products regardless of whether they make sense.

    Maybe it would have made sense at the SSD price points of this time last year, but now it just seems like a pointless exercise.
  • PeachNCream - Monday, April 22, 2019 - link

    Who knew Optane would end up acting as a bandage fix for QLC's garbage endurance? I suppose it's better than nothing, but 0.16 DWPD is terrible. The 512GB model would barely make it to 24 months in a laptop without significant configuration changes (caching the browser to RAM, disabling the swap file entirely, etc.)
  • IntelUser2000 - Monday, April 22, 2019 - link

    The H10 is a mediocre product, but endurance claims are overblown.

    Even if the rated lifespan is a total of 35TB, you'd be perfectly fine. The 512GB H10 is rated for 150TB.

    The number of users who would even reach 20TB in 5 years is a small minority. When I was actively using the system, my X25-M registered less than 5TB in 2 years.
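    For what it's worth, the two figures in this sub-thread are consistent with each other. Quick check, assuming the usual 5-year warranty window:

```python
# DWPD (drive writes per day) implied by a TBW rating,
# using vendor-style decimal units (1 TB = 1000 GB).
def dwpd(tbw: float, capacity_gb: float, warranty_years: float = 5) -> float:
    return (tbw * 1000) / (capacity_gb * warranty_years * 365)

print(round(dwpd(150, 512), 2))    # 150TBW on the 512GB H10 -> ~0.16 DWPD
print(round(150_000 / (5 * 365)))  # write budget: ~82 GB every day for 5 years
```

    So the 0.16 DWPD figure quoted earlier follows directly from the 150TBW rating; whether ~82 GB/day is "plenty" is the actual disagreement here.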
  • PeachNCream - Monday, April 22, 2019 - link

    Your usage is extremely light. Endurance is a real-world problem. I've already dealt with it a couple of times with MLC SSDs.
  • IntelUser2000 - Monday, April 22, 2019 - link

    SSDs are over 50% of the storage sold in notebooks. It's firmly reaching mainstream there.

    I would say instead that most of *your* customers are too demanding. The vast majority of folks would write less than me.

    The market agrees too, which is why we went from MLC to TLC, and now we have QLCs coming.

    Perhaps you are confusing write-endurance with physical stress endurance, or even natural MTBF related endurance.
  • PeachNCream - Monday, April 22, 2019 - link

    I haven't touched on any usage but my own so far. The drives' own software identified the problems, so if there is confusion about failures, that's in the domain of the OEM. (Note, those drives don't fail gracefully in a way that lets data be recovered. It's a pretty ugly end to reach.) As for the move from MLC to TLC and now QLC -- that's driven by cost sensitivity for given capacities and largely ignores endurance.
  • IntelUser2000 - Monday, April 22, 2019 - link

    I get the paranoia. The world does that to you. You unconsciously become paranoid in everything.

    However, for most folks endurance is not a problem. The circuitry in the SSD will likely fail from natural causes before write endurance is reached. Everything dies. But people are excessively worried about NAND SSD write endurance because it's a fixed metric.

    It's like knowing the date of your death.
  • PeachNCream - Friday, May 3, 2019 - link

    That's not really a paranoia thing. Your attempt to bait someone into an argument where you can then toss out insults is silly.
  • SaberKOG91 - Monday, April 22, 2019 - link

    That's a naive argument. Most SSDs of 250GB or larger are rated for at least 100TBW on a 3 year warranty. 75TBW on a 5 year warranty is an insult.

    I think you underestimate how much demand the average user makes of their system. Especially when you have things like anti-virus and web browsers making lots of little writes in the background, all the time.

    The market is going from TLC to QLC because of density, not reliability. We had all the same reliability issues going from MLC to TLC and from SLC to MLC. Each transition took years for manufacturers to reach the durability of the previous technology, all while the previous generation continued to improve even further. Moving to denser tech means smaller dies for the same capacity, or higher capacity per unit area, which is good for everyone. But these drives don't even look to have the 0.20 DWPD or 5-year warranty of other QLC flash products.

    I am a light user who doesn't keep a lot of photos or video, and this laptop has already seen 1.3TBW in only 3 months. My work desktop has over 20TBW from the last 5 years. My home desktop, where I compile software, has over 12TBW in its first year. My gaming PC has 27TBW on a 5-year-old drive. So while I might agree that 75TBW seems like a lot, if I were to consolidate down to one machine I'd easily hit 20TBW a year, or 8TBW a year even without the compile machine.

    That all said, you're still ignoring that many Micron and Samsung drives have been shown to go way beyond their rated lifespan whereas Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them. Since the Optane is acting as a persistent cache, what happens to these drives when the Optane dies? At the very least performance will tank. At the worst the drive is hosed.
  • IntelUser2000 - Monday, April 22, 2019 - link

    Something is very wrong with your drive or you are not really a "light user".

    1300GB in 3 months equals about 14GB of writes per day. That means if you use your computer 7 hours a day, you'd be writing about 2GB per hour. The computer I had the SSD in, I used for 8-12 hours every day for two years, and it was a gaming PC and a primary one at that.

    Perhaps the X25-M drive I had is particularly good at this aspect, but the differences seem too much.

    Anyways, moving to denser cells just means consumer-level workloads do not need the write endurance MLC offers, and lower prices are preferred.

    "Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."

    Maybe you are referring to the few faulty units in the beginning? Any device can fail in the first 30 days. That's completely unrelated to *write endurance*. The first-gen modules are rated for 190TBW. Even if they had played around with one for a whole year (which is unrealistic for a benchmark), they would have had to write over 500GB per day. Maybe you want to verify your claims yourself.
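    Checking the back-of-envelope numbers traded in this sub-thread (figures as quoted above):

```python
# SaberKOG91's laptop: 1.3TBW in ~3 months, spread over ~7 hours of use per day.
gb_per_day = 1300 / 90
gb_per_hour = gb_per_day / 7
print(round(gb_per_day, 1), round(gb_per_hour, 1))  # ~14.4 GB/day, ~2.1 GB/hour

# Burning through a 190TBW rating in a single year of benchmarking would need:
print(round(190_000 / 365))  # ~521 GB written per day, every day
```

    Both sides' arithmetic holds up; the dispute is really over whether 14GB/day is plausible for a "light user".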
  • SaberKOG91 - Monday, April 22, 2019 - link

    Nothing special about my usage on my laptop. Running linux so I'm sure journals and other logs are a decent portion of the background activity. I also consume a fair bit of streaming media so caching to disk is also very likely. This machine gets actively used an average of 10-12 hours a day and is usually only completely off for about 8-10 hours. I also install about 150MB of software updates a week, which is pretty on par with say windows update. I also use Spotify which definitely racks up some writes.

    I can't speak to the endurance of that drive, but it is also MLC instead of TLC.

    I would argue that it means that the cost per GB of QLC is now low enough that the manufacturing benefit of smaller dies for the same capacity is worth it. Most consumer SSDs are 250-500GB regardless of technology.

    I'm not referring to a few faulty units or infant mortality. I can't remember the exact news piece, but there were reports of unusually high failure rates in the first generation of Optane cache modules. I also wasn't amused when Anandtech's review sample of the first consumer cache drive died before they finished testing it. You're also assuming that the only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor leads to premature failure, regardless of TBW. It's also worth noting that you may accelerate drive death if you exceed the rated DWPD.
  • RSAUser - Tuesday, April 23, 2019 - link

    I'm at about 3TB after nearly 2 years, and that's with adding new software like Android etc., swapping between technologies constantly, and wiping my drive once every year.
    I also have Spotify, game on it, etc.

    There is something wrong with your usage if you have that many writes. I have 32GB of RAM, so there's very little caching to disk, though; that could be the difference.
  • IntelUser2000 - Tuesday, April 23, 2019 - link

    "You're also assuming that the only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor leads to premature failure, regardless of TBW."

    I certainly did not. It was in reply to your original post.

    Yes, write endurance is only a small part of why a drive fails. If drives are failing for other reasons well before the warranty is up, then manufacturers should move to remedy that.
  • Irata - Tuesday, April 23, 2019 - link

    You are forgetting the sleep state on laptops. That alone will result in a lot of data being written to the SSD.
  • jeremyshaw - Sunday, July 14, 2019 - link

    Or they have a laptop with "Modern Standby," which is code for:

    a subpar idle state that falls back to hibernation (flushing RAM to SSD - I have 32GB of RAM) whenever the system drains too much power in this "Standby S3 replacement."
  • voicequal - Monday, April 22, 2019 - link

    "Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."

    What is your source for this comment?
  • SaberKOG91 - Monday, April 22, 2019 - link

    Anandtech killed their review sample when Optane first came out. Happened other places too.
  • voicequal - Tuesday, April 23, 2019 - link

    Link? Anandtech doesn't do endurance testing, so I don't think it's possible to conclude that failures were the result of worn out media.
  • FunBunny2 - Wednesday, April 24, 2019 - link

    "Since our Optane Memory sample died after only about a day of testing, we cannot conduct a complete analysis of the product or make any final recommendations. "

    here: https://www.anandtech.com/show/11210/the-intel-opt...
  • Mikewind Dale - Monday, April 22, 2019 - link

    I don't understand the purpose of this product. For light duties, the Optane will be barely faster than the SLC cache, and the limitation to PCIe x2 might make the Optane slower than a x4 SLC cache. And for heavy duties, the PCIe x2 is definitely a bottleneck.

    So for light duties, a 660p is just as good, and for heavy duties, you need a Samsung 970 or something similar.

    Add in the fact that this combo Optane+QLC has serious hardware compatibility problems, and I just don't see the purpose. Even in the few systems where the Optane+QLC worked, it would still be much easier to just install a 660p and be done with it. Adding an extra software layer is just one more potential point of failure, and there's barely any offsetting benefit.
  • The_Assimilator - Tuesday, April 23, 2019 - link

    > I don't understand the purpose of this product.

    It's Intel still trying, and still failing, to make Optane relevant in the consumer space.
  • tacitust - Tuesday, April 23, 2019 - link

    It works in the sense that the OEMs who use this drive will be able to advertise that customers are getting cutting-edge Optane storage. As the review says, this is a low-effort solution, so it likely didn't cost much to develop, and they won't need too many design wins to recoup their costs. It also gets Optane into many more consumer devices, which helps in the long run in terms of perception, if nothing else.

    Note: most users won't know or even care that the drive itself doesn't provide faster performance than other solutions, so it doesn't really matter to Intel either. If they get the design win, Optane does gain relevance in the consumer space, just not with the small segment of power users who read AnandTech for the reviews.
  • ironargonaut - Monday, April 29, 2019 - link

    Seems it does provide faster performance in some usage cases.
    https://www.pcworld.com/article/3389742/intel-opta...
  • CheapSushi - Wednesday, April 24, 2019 - link

    I can't stand these dumb posts where people shut down the usage for consumers. I use it all the time for OS and other programs/files. I use it as cache. I use it for different reasons. Even the cheap early x2 laned variants. I'm not in IT or anything enterprise.
  • name99 - Thursday, April 25, 2019 - link

    It's worse than that.
    The OPTANE team clearly want to sell as many Optanes as they can.
    But INTC management has decided that they can extract maximal money from enterprise by limiting the actually sensible Optane uses (in the memory system, either as persistent memory ---for enterprise, or as a good place to swap to, for consumers).

    And so we have this ridiculous situation where the Optane team keeps trying to sell Optane in ways that make ZERO sense because the way that makes by far the most sense (sell a 16 or 32 GB or 64GB DIMM that acts as the swap space) is prevented by Intel high management (who presumably are scared that if cheap CPUs can talk to Optane DIMMs, then someone somewhere will figure out how to use them in bulk rather than super expensive special Xeons).
    Corporate dysfunction at its finest...
  • Billy Tallis - Friday, April 26, 2019 - link

    I think it's too soon to say that Intel's artificially holding back Optane DIMMs from market segments where they might have a chance. They had initially planned to have Optane DIMM support in Skylake-SP but couldn't get it working until Cascade Lake, which has only been shipping in volume for a few months. Now that they have got one working Optane-compatible memory controller out the door, they can consider bringing those memory controller features down to other product segments. But we've seen that they have given up on updating the memory controllers on their 14nm consumer parts even to provide LPDDR4 support, which certainly is a more compelling and widely-demanded feature than Optane support. I wouldn't expect Intel to be able to introduce Optane support to their consumer CPUs until their second generation of 10nm (not counting CNL) processors at the earliest. Trying to squeeze it into their first mass-market 10nm would be unreasonable since they should be trying at all costs to avoid feature creep on those parts and just ship something that works and isn't still Skylake.
  • ironargonaut - Monday, April 29, 2019 - link

    Read here for an actual real-world usage test: two systems differing only in memory, given the same inputs, sometimes show significantly different results.
    https://www.pcworld.com/article/3389742/intel-opta...
    3X speedup for some tasks. I don't know about y'all, but I multitask a lot at work, so I'll let background stuff run while I do something else that's in front of me.
  • weevilone - Monday, April 22, 2019 - link

    That's too bad. I tried to tinker with the Optane caching when it launched and it was a software disaster. I wrote it off to early days stuff and put it in my kids' PC when they began to allow non-boot drives to be cached. It was another disaster and Intel's techs couldn't figure it out.

    I wound up re-installing Windows the first time and I had to redo the kids' game drive the second time. No thanks.
  • CheapSushi - Wednesday, April 24, 2019 - link

    The problem is you were using the proprietary HDD-caching software they marketed. There are many ways to do drive caching on Windows that don't involve that Intel software, and they're way better and smoother, even if still software. Software RAID and cache is superior to hardware cache unless you're using $1K+ add-on cards.
  • Alexvrb - Monday, April 22, 2019 - link

    "The caching is managed entirely in software, and the host system accesses the Optane and QLC sides of the H10 independently. "

    So, it's already got serious baggage. But wait, there's more!

    "In practice, the 660p almost never needed more bandwidth than an x2 link can provide, so this isn't a significant bottleneck."

    Yeah OK, what about the Optane side of things?
  • Samus - Tuesday, April 23, 2019 - link

    They totally nerf'd this thing with 2x PCIe.
  • PeachNCream - Tuesday, April 23, 2019 - link

    Linux handles Optane pretty easily without any Intel software through bcache. I'm not sure why Anandtech can't test that, but maybe just a lack of awareness.

    https://www.phoronix.com/scan.php?page=article&...
  • Billy Tallis - Tuesday, April 23, 2019 - link

    Testing bcache performance won't tell us anything about how Intel's caching software behaves, only how bcache behaves. I'm not particularly interested in doing a review that would have such a narrow audience. And bcache is pretty thoroughly documented so it's easier to predict how it will handle different workloads without actually testing.
  • easy_rider - Wednesday, April 24, 2019 - link

    Is there a reliable review of the 118GB Intel Optane SSD in the M.2 form factor? Does it make sense to hunt one down and put it in as the system drive in a dual-M.2 laptop?
  • name99 - Thursday, April 25, 2019 - link

    "QLC NAND needs a performance boost to be competitive against mainstream TLC-based SSDs"

    The real question is what dimension, if any, does this thing win on?
    OK, it may not be the fastest out there. But does it, say, provide approximately leading-edge TLC speed at QLC prices, so that it wins by being cheap?
    Because just having a cache is meaningless. Any QLC drive that isn't complete garbage will have a controller-managed cache created by treating the QLC flash as SLC; and the better controllers will degrade gracefully across the entire drive, always maintaining an SLC cache: using the entire drive (till it's filled up) as SLC, then switching blocks to MLC, then to TLC, and only when the drive is approaching capacity, using blocks as QLC.

    So the question is not "does it give cached performance to a QLC drive", the question is does it give better performance or better price than other QLC solutions?
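    The degrading-cache behavior described above could be sketched roughly like this (the thresholds and names are made up for illustration, not any vendor's actual firmware policy):

```python
# Toy model of a controller that programs free blocks as densely as the
# drive's fullness forces it to: SLC while mostly empty, QLC when nearly full.
def pick_block_mode(used_fraction: float) -> str:
    """Choose how densely to program the next free block, given drive fullness."""
    if used_fraction < 0.25:
        return "SLC"   # mostly empty: run everything as fast SLC
    if used_fraction < 0.50:
        return "MLC"
    if used_fraction < 0.75:
        return "TLC"
    return "QLC"       # approaching capacity: maximize density

for frac in (0.10, 0.40, 0.60, 0.95):
    print(f"{frac:.0%} full -> {pick_block_mode(frac)}")
```

    Under a policy like this, a half-empty QLC drive already behaves like an SLC/MLC drive for fresh writes, which is why a bolted-on Optane cache has to beat that baseline, not raw QLC, to justify its cost.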
  • albert89 - Saturday, April 27, 2019 - link

    Didn't I tell ya? Optane's capacity was too small for many years, and it was compatible with only a tiny number of devices/hardware/OSes. She played the game of hard to get and now no guy wants her.
  • peevee - Monday, April 29, 2019 - link

    "The caching is managed entirely in software, and the host system accesses the Optane and QLC sides of the H10 independently. Each half of the drive has two PCIe lanes dedicated to it."

    Fail.
  • ironargonaut - Monday, April 29, 2019 - link

    "While the Optane Memory H10 got us into our Word document in about 5 seconds, the TLC-based 760P took 29 seconds to open the file. In fact, we waited so long that near the end of the run, we went ahead and also launched Google Chrome with it preset to open four websites. "

    https://www.pcworld.com/article/3389742/intel-opta...

    Win
  • realgundam - Saturday, November 16, 2019 - link

    What if you have a normal 660p and an Optane stick? would it do the same thing?
