powerarmour - Tuesday, May 18, 2021 - link
QLC garbage again, I can hardly contain myself.
Samus - Wednesday, May 19, 2021 - link
Understanding QLC's place in the market (cheap bulk flash storage), I'm still struggling to understand who these premium-priced QLC products are for. Seriously, who is going to pay 23-25¢/GB for something like this when its only crutch is high read throughput, which has zero real-world advantage for virtually all PC users.
Wereweeb - Wednesday, May 19, 2021 - link
These products are both proofs of concept and an advertisement for the importance of caching/tiering. Enmotus managed to get 3,600 TBW out of a 2TB QLC SSD by reducing its available capacity a bit and using their software.
philehidiot - Wednesday, May 19, 2021 - link
There is definitely the endurance advantage, but you don't need a commercial product for a proof of concept. Indeed, I'd say releasing a commercial product just to prove it can be done, where there is no real use for it, is a bit daft. Unless they plan to inflict it upon customers in a data collection exercise, using their muscle to force it into laptops. We have already seen the advantages of this kind of tech when smaller SSDs were placed as a cache / tier in front of HDDs.
If their plan is to build this into an industrial product, their proof of concept should be a bunch of engineering samples tested for endurance, not a bodged consumer-grade product which seems as though it's going to do more to show that you can have a very complex, bodged product that just about competes with what's already established on the market.
As for advertising, I'd say this is a pretty poor advert. Someone mentioned that Intel's storage division has been held back and it strikes me this is the case. This isn't a new and exciting product, it's two technologies being put together with an inadequate hardware interface and terrible software.
It has potential, but the people who will accept QLC NAND won't know or care what this is and the people who might benefit from the high DWPD won't touch it with a barge pole.
This should have stayed in R&D until it could add something to the market.
Samus - Thursday, May 20, 2021 - link
I'll believe it when it's independently tested. No level of software trickery will enable massive gains in TBW. If you fully write to a drive, the physical cells are fully utilized. Sure, you can mask this with a large spare area and aggressive wear leveling, but even a 2TB QLC SSD with 4TB of physical NAND (so 2TB of spare area) will only yield 4x the endurance, and that's the best-case scenario.
Enmotus can't break the laws of physics with intelligent software unless they've come up with some revolutionary hardware deduplication/compression algorithm that limits physical changes to the NAND by many orders of magnitude, while also eliminating the write amplification that comes with the ECC modern drives rely on for data integrity.
Billy Tallis - Thursday, May 20, 2021 - link
The key advantage the Enmotus drive has over regular QLC drives is that the static SLC portion can be used for far more P/E cycles. On a regular QLC drive, which blocks are used for the dynamic SLC cache is constantly changing, and the fact that a block that's currently operating as SLC may soon be repurposed as QLC effectively prevents it from being rated for more P/E cycles than QLC usage can permit. But with a large pool of permanent SLC, the drive can safely re-use those cells long past the point where they would be unusable as QLC. 128GiB at 30k P/E cycles can on its own handle more total writes than the drive as a whole is rated for.
As long as the tiering software does a good job of preventing most writes and write amplification from ever getting to the QLC part of the drive, the endurance rating is completely realistic. The tiering software won't be able to keep the wear confined to the SLC if you are using the drive as a giant circular buffer for video recording or something else that keeps the drive full and constantly modifies all of the data. But most real consumer workloads have a small amount of hot data that's frequently changing and a large amount of cold data that doesn't get rewritten often enough to pose a problem for QLC.
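A rough back-of-the-envelope check of that last claim (a sketch only: it uses the 128 GiB / 30K P/E figures above and the 3,600 TBW figure quoted earlier for the 2TB Enmotus drive, and it ignores any write amplification inside the SLC tier):

```python
# Rough endurance estimate for the static SLC tier alone (illustrative only).
slc_capacity_gb = 128 * 2**30 / 1e9    # 128 GiB expressed in decimal GB (~137.4 GB)
pe_cycles_slc = 30_000                 # assumed P/E rating for SLC-mode blocks

slc_tbw = slc_capacity_gb * pe_cycles_slc / 1000    # terabytes written
print(f"SLC tier alone: ~{slc_tbw:,.0f} TBW")       # ~4,123 TBW

# The whole 2TB drive was quoted at 3,600 TBW, so the static SLC pool by itself can
# absorb more writes than the drive-level rating, provided the tiering software keeps
# the bulk of the write traffic out of the QLC.
```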
Spunjji - Wednesday, May 19, 2021 - link
Agreed - this would really need to show a serious performance benefit at a similar cost to a TLC drive, or lower cost and similar performance. As it is, it does neither. I'm sure OEMs will lap it up at whatever knockdown price Intel offers it to them to clear the shelves.
Spunjji - Wednesday, May 19, 2021 - link
Derped there and confused the price of the Enmotus with the H20... the Enmotus product really does seem to be in a bad place for price vs. consumer appeal without the benefit of Intel's cosy relationship with OEMs.
Morawka - Friday, May 21, 2021 - link
The Enmotus product is perfect for Chia miners. Plotting on Chia absolutely destroys consumer-grade SSDs. A 980 Pro will get smoked in around 3 months, whereas this Enmotus drive, even though it's pricier, will last 3-5x longer.
Billy Tallis - Friday, May 21, 2021 - link
I think Chia plotting requires more space than the SLC portion of the Enmotus drive, and plotting is an example of the kinds of workloads that would not be handled well by the Enmotus tiering software unless the plotting could fit entirely in the SLC tier.
haukionkannel - Wednesday, May 19, 2021 - link
All "high-end" SSDs are soon gonna be QLC, and the middle and low range will go PLC… So just wait for things to get even worse!
;)
edzieba - Friday, May 21, 2021 - link
Ah, the QLC brigade is here, with the same Dire Warnings Of Horrible Doom that previously fell flat for MLC and TLC, but THIS time will totally come true (or we'll cross out the Q and put P and protest against the evils of PLC next year!).
kepstin - Tuesday, May 18, 2021 - link
If you were to somehow get one of these Intel drives and plug it into an unsupported system, will it just show up as 2 separate NVMe drives? Would you be able to use it with hardware agnostic caching solutions like PrimoCache on Windows or bcache/dm-cache on Linux?
drexnx - Tuesday, May 18, 2021 - link
Sounds like the host system just sees it as a 32GB Optane SSD.
Billy Tallis - Tuesday, May 18, 2021 - link
Depends on what the host system is, and what kind of slot. Only the supported Intel systems can initialize PCIe links to both sides. For the H10 review, I made a chart of all the systems I'd tried: https://www.anandtech.com/show/14249/the-intel-opt...
If the slot is only PCIe x1 or x2, you get the NAND. If it's x4, you might get the NAND or you might get the 3DXP.
kepstin - Tuesday, May 18, 2021 - link
Ah, so there's no PCIe bridge/switch on the device itself? I guess they're relying on the upstream bridge of the M.2 slot supporting bifurcating the 4× link into 2×/2×.
Billy Tallis - Tuesday, May 18, 2021 - link
Correct. The H10 and H20 rely on upstream port bifurcation support. I think there's also a proprietary element to it, but bifurcation down to x2 links is less widely supported than bifurcation down to x4 links anyways.
A PCIe switch would have been nice, but wouldn't fit. And this product line isn't important enough for Intel to make a big new custom ASIC for, either an SSD controller that can speak to both 3DXP and QLC, or adding PCIe switch/passthrough support to one of the two controllers.
Kurosaki - Tuesday, May 18, 2021 - link
Maybe next gen then?...
Tomatotech - Tuesday, May 18, 2021 - link
The H20 is just a placeholder while the Optane team treads water and begs the rest of Intel to let them release a proper drive. This H20 looks like a greyhound with bricks tied to its neck. Absolutely lovely latency and random 4K performance would be a credit to any high-end workstation. But it's crippled by a shit implementation.
Apple was doing tiered drive storage nearly 10 years ago with their Fusion drives, and as the Enmotus tiered drive shows, it can do amazing things. This is how the H20 should be set up.
I say the H20 is treading water; it should be on PCIe 4.0 but because of Intel's shenanigans with PCIe 4.0 the Optane team are crippled and can't release this drive with the backing support it needs. Hopefully the next model, maybe the H30, will have PCIe 4.0 and then it'll finally be a decent overall drive.
Probably not, given the sad history of Intel shooting Optane in the foot. They could have released this drive several years ago, and it would have been excellent then, but normal NAND drives are improving all the time and it's just too little, too late.
Spunjji - Wednesday, May 19, 2021 - link
"a greyhound with bricks tied to its neck" - succinct. 👍haukionkannel - Wednesday, May 19, 2021 - link
Most likely PCIe 5.0 or 6.0 in reality… and a bigger Optane part. Much bigger!
tuxRoller - Friday, May 21, 2021 - link
You made me curious about the history of HSM (hierarchical storage management). The earliest one seems to be the IBM 3850 in the '70s.
So. Yeah. It's not exactly new tech:-|
Monstieur - Tuesday, May 18, 2021 - link
VMD changes the PID & VID so the NVMe drive will not be detected with generic drivers. This is the same behavior on X299, but those boards let you enable / disable VMD per PCIe slot. There is yet another feature called "CPU Attached RAID" which lets you use RST RAID or Optane Memory acceleration with non-VMD drives attached to the CPU lanes and not chipset lanes.
Monstieur - Tuesday, May 18, 2021 - link
500 Series:
VMD (CPU) > RST VMD driver / RST Optane Memory Acceleration with H10 / H20
Non-VMD (CPU) > Generic driver
CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
AHCI (PCH) > Generic driver
X299:
VMD (CPU) > VROC VMD driver / VROC RAID
Non-VMD (CPU) > Generic driver
CPU Attached RAID (CPU) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
RAID (PCH) > Generic or RST driver / RST RAID / RST Optane Memory Acceleration with H10 / H20 / 900p / 905p
AHCI (PCH) > Generic driver
dwillmore - Tuesday, May 18, 2021 - link
This really looks like a piece of hardware to avoid unless you run Windows on the most recent generation of Intel hardware. So, that's a double "nope" from me. Thanks for the warning!
Billy Tallis - Tuesday, May 18, 2021 - link
VMD has been an important feature of Intel server platforms for years. As a result, Linux has supported VMD for years. You may not be able to do a clean install of Windows onto this Tiger Lake laptop without loading extra drivers, but Linux has no problem.
I had a multi-boot setup on a drive that was in the Whiskey Lake laptop. When I moved it over to the Tiger Lake laptop, grub tried to load its config from the wrong partition. But once I got past that issue, Linux booted with no trouble. Windows could only boot into its recovery environment. From there, I had to put RST drivers on a USB drive, load them in the recovery environment so it could detect the NVMe drive, then install them into the Windows image on the NVMe drive so it could boot on its own.
dsplover - Tuesday, May 18, 2021 - link
Great read, thanks. Love how the benefits of the combination are explained so well.
CaptainChaos - Tuesday, May 18, 2021 - link
The phrase "putting lipstick on a pig" comes to mind for Intel here!Tomatotech - Wednesday, May 19, 2021 - link
Other way round. Optane is stunning, but Intel has persistently shot it in the foot for almost all their non-server releases.
In Intel's defence, getting it right requires full-stack cooperation between Intel, Microsoft, and motherboard makers. You'd think they should be able to do it, given that this kind of cooperation is the basis of their existence, but in Optane's case it hasn't been achievable.
Only Apple seems to be achieving this full stack integration with their M1 chip & unified memory & their OS, and it took them a long time to get to this point.
CaptainChaos - Wednesday, May 19, 2021 - link
Yes... I meant that Optane is the lipstick & QLC is the pig, Tomatotech dude! I use several Optane drives but see no advantage at this point for QLC! It's just not priced properly to provide a tempting alternative to TLC.
deil - Wednesday, May 19, 2021 - link
I still feel this is a lazy solution. QLC for data storage, Optane for file metadata storage is the way: instant search and big size, best of both worlds.
Wereweeb - Wednesday, May 19, 2021 - link
What you're describing is inferior to current QLC SSDs. Optane is still orders of magnitude slower than RAM, and I bet it would still be slower than just using system RAM the way many DRAMless drives do. Plus, it's expensive for a consumer product.
Optane's main use is to add terabytes of low-cost, low-latency storage to workstations (that's how Intel uses it, to sell said workstations), and today both RAM and SLC drives are hot on its heels.
jabber - Wednesday, May 19, 2021 - link
All I want is an OS file system that can handle microfiles without grinding down to KB/s all the time. There's nothing I love more than seeing my super-fast storage grind to a halt when I do large user data file copies.
Tomatotech - Wednesday, May 19, 2021 - link
Pay for a 100% Optane SSD then. Or review your SSD / OS choices if this aspect is key to your income.
haukionkannel - Wednesday, May 19, 2021 - link
If only there were a pure Optane M.2 SSD of about 500GB to 1TB… and I know… it would cost at least $1000 to $2000, but that would be quite useful in high-end NAS storage or even as a main PC system drive.
Fedor - Sunday, May 23, 2021 - link
There are, and have been for quite a few years. See the 900p, 905p (discontinued) and enterprise equivalents like 4800X and now the new 5800X.
jabber - Wednesday, May 19, 2021 - link
They ALL grind to a halt when they hit thousands of microfiles.
ABR - Wednesday, May 19, 2021 - link
As can be seen from the actual application benchmarks, these caching drives add almost nothing to (and sometimes take away from) performance. This matches my experience with a hybrid SSD/hard drive a few years ago on Windows, which also had 16 or 32 GB for the fast part – it was indistinguishable from a regular hard drive in performance. Upgrading the same machine to a full SSD, on the other hand, was night and day. Basically, software doesn't seem to be able to do a good job of determining what to cache.
lightningz71 - Wednesday, May 19, 2021 - link
I see a lot of people bagging on Optane in general, both here and at other forums. I admit to not being a fan of it for many reasons; however, when it works, and when it's implemented with very specific goals, it does make a big difference. The organization I work at got a whole bunch (thousands) of PCs a few years ago that had mechanical hard drives. Over the last few years, different security and auditing software has been installed on them that has seriously impacted their performance. The organization was able to bulk buy a ton of the early 32GB Optane drives and we've been installing them in the machines as workload has permitted. The performance difference when you get the configuration right is drastically better for ordinary day-to-day office workers. This is NOT a solution for power users. This is a solution for machines that will be doing only a few, specific tasks that are heavily access-latency bound and don't change a lot from day to day. The caching algorithms figure out the access patterns relatively quickly and it's largely indistinguishable from the newer PCs that were purchased with SSDs from the start.
As for the H20, I understand where Intel was going with this, and as a "minimum effort" refresh on an existing product, it achieves its goals. However, I feel that Intel has seriously missed the mark in furthering the product itself.
I suggest that Intel should have invested in their own combined NVMe/Optane controller chip that would do the following:
1) Use PCIe 4.0 on the bus interface with a unified 4x setup.
2) Instead of using regular DRAM for caching, use the Optane modules themselves in that role. Tier the caching with host-based caching like the DRAMless controller models do, then tier that down to the Optane modules. They can continue to use the same strategies that regular Optane uses for caching, but have it implemented on the on-card controller instead of in the host operating system. A lot of the features that were the reason the Optane device needed to be its own PCIe device separate from the SSD were addressed in NVMe spec 1.4 (a and b), meaning that a lot of those things can be done through the unified controller. A competent controller chip should have been achievable that would have realized all of the features of the existing product, but with much better I/O capabilities. (A rough sketch of this kind of tiering follows below.)
Maybe that's coming in the next generation, if that ever happens. This... this was a minimum effort to keep a barely relevant product... barely relevant.
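To make the tiered-caching idea in point 2 concrete, here is a minimal, hypothetical sketch of promote-on-write placement between a small fast tier and a large slow tier (purely illustrative: the class, names, and tier sizes are invented, and this is not Intel's RST logic or Enmotus's actual algorithm):

```python
# Minimal two-tier placement sketch: recently written blocks live in a small fast
# tier; cold blocks get demoted to the large slow tier. Illustrative only.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_blocks=4):
        self.fast = OrderedDict()   # block -> data, ordered by recency (coldest first)
        self.slow = {}              # unbounded slow tier
        self.fast_blocks = fast_blocks
        self.slow_writes = 0        # writes that actually land on the slow tier

    def write(self, block, data):
        # New and rewritten blocks always land in the fast tier first.
        if block in self.slow:
            del self.slow[block]
        self.fast[block] = data
        self.fast.move_to_end(block)
        # Demote the coldest block when the fast tier overflows.
        if len(self.fast) > self.fast_blocks:
            cold_block, cold_data = self.fast.popitem(last=False)
            self.slow[cold_block] = cold_data
            self.slow_writes += 1

    def read(self, block):
        if block in self.fast:
            self.fast.move_to_end(block)   # keep hot blocks hot
            return self.fast[block]
        return self.slow.get(block)

store = TieredStore(fast_blocks=4)
for i in range(10):            # write 10 blocks of cold data once
    store.write(i, f"cold-{i}")
for _ in range(100):           # then rewrite the same 2 hot blocks over and over
    store.write(0, "hot")
    store.write(1, "hot")
print(store.slow_writes)       # prints 8: after warm-up, the hot rewrites never reach the slow tier
```

The point is the same one made about the Enmotus drive above: as long as the hot set fits in the fast tier, the slow (QLC) tier only ever sees the initial cold data, not the repeated rewrites.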
zodiacfml - Thursday, May 20, 2021 - link
I don't quite get the charts. I don't see any advantage except when the workload fits in the Optane, is that correct?
Billy Tallis - Thursday, May 20, 2021 - link
It's a general property of caching that if your workload doesn't actually fit in the cache, then it will run at about the same speed as if that cache didn't exist. This is as true of storage caches as it is of a CPU's caches for RAM. Of course, defining whether your workload "fits" in a cache is a bit fuzzy, and depends on details of the workload's spatial and temporal locality, and the cache replacement policy.
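A toy illustration of that point (a sketch only, assuming a plain LRU policy and a uniformly random working set; real workloads and real replacement policies are messier):

```python
# Toy demo: an LRU cache helps a lot when the working set fits, and barely at all
# once it doesn't. Illustrative only.
import random
from collections import OrderedDict

def hit_rate(cache_size, working_set, accesses=100_000, seed=0):
    rng = random.Random(seed)
    cache, hits = OrderedDict(), 0
    for _ in range(accesses):
        block = rng.randrange(working_set)
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least-recently-used block
    return hits / accesses

for ws in (500, 1_000, 2_000, 10_000):      # working set sizes; cache holds 1,000 blocks
    print(f"working set {ws:>6}: hit rate {hit_rate(1_000, ws):.0%}")
# Roughly: ~100% while the working set fits in the cache, then falling toward the
# cache_size / working_set ratio once it doesn't (about 10% at 10,000 blocks).
```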
scan80269 - Thursday, May 20, 2021 - link
That Intel Optane Memory H20 stick may be the source of the "coil whine". Don't be so sure about this noise always coming from the main board. A colleague has been bothered by a periodic high-pitched noise from her laptop, up until the installed Optane Memory H10 stick was replaced by a regular m.2 NAND SSD. The noise can come from a capacitor or inductor in the switching regulator circuit on the m.2 stick.
scan80269 - Thursday, May 20, 2021 - link
Oh, and Intel Optane Memory H20 is spec'ed at PCIe 3.0 x4 for the m.2 interface. I have the same HP Spectre x360 15.6" laptop with Tiger Lake CPU, and it happily runs the m.2 NVMe SSD at PCIe Gen4 speed, with a sequential read speed of over 6000 MB/s as measured by winsat disk. So this is the H20 not supporting PCIe Gen4 speed as opposed to the HP laptop lacking support of that speed.
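For reference, the raw link-rate math is consistent with that 6000 MB/s reading (a quick sketch using the standard per-lane signalling rates and 128b/130b encoding, ignoring protocol overhead):

```python
# Theoretical per-direction PCIe x4 throughput: 6000+ MB/s is only reachable on a Gen4 link.
def pcie_x4_gb_per_s(gt_per_s, lanes=4, encoding=128 / 130):
    return gt_per_s * encoding * lanes / 8   # GT/s per lane -> GB/s across the link

print(f"PCIe 3.0 x4: ~{pcie_x4_gb_per_s(8):.2f} GB/s")    # ~3.94 GB/s
print(f"PCIe 4.0 x4: ~{pcie_x4_gb_per_s(16):.2f} GB/s")   # ~7.88 GB/s
```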
Billy Tallis - Thursday, May 20, 2021 - link
I tested the laptop with 10 different SSDs. The coil whine is not from the SSD.
I tested the laptop with a PCIe gen4 SSD, and it did not operate at gen4 speed. I checked the lspci output in Linux and the host side of that link did not list 16 GT/s capability.
Give me a little credit here, instead of accusing me of being wildly wrong about stuff that's trivially verifiable.
Polaris19832145 - Wednesday, September 22, 2021 - link
What about using an Intel 660p Series M.2 2280 2TB PCIe NVMe 3.0 x4 3D2 QLC SSD (SSDPEKNW020T8X1) as extra CPU L2 or even L3 cache, at 1-8TB going forward, in a PCIe 4.0 slot, if Intel and AMD will allow it, to get rid of any GPU and HDD bottlenecking on the PCH and CPU lanes of the motherboard? Is it even possible for the CPU to access that sort of additional cache by formatting the SSD for use as added L2/L3 cache, to speed up the graphics on an APU or a CPU using an iGPU, or even GPUs running in mGPU on AMD or SLI on Nvidia, and so help kill the CPU bottlenecking issues, if the second M.2 PCIe 4.0 SSD slot could be modded for this sort of additional CPU cache?