I'd personally rather stick with MLC. I still think TLC is a bit too exotic, and only for systems that aren't critical. QLC is sub-spinning-rust in both speed and reliability.
Once you get past the SLC cache, reviews make it seem like sequential writes to the QLC itself can often be slower than a 7200 RPM HDD. I know that's not a typical case for a desktop user, but in my book, no SSD should ever come close to HDD levels of speed or responsiveness.
1TB and larger QLC drives seem to have post-SLC sequential write speeds that are faster than gigabit Ethernet, so even though it's a lot slower than a good TLC drive it's still fast enough for most use cases. Moving tens of GBs between local drives only happens frequently for video editors. And the QLC write speed of an Enmotus MiDrive should be a bit faster than existing QLC drives, since there won't be SLC cache management in the background to slow things down.
On my full-drive sequential write test, the 660p's overall average is 118.5 MB/s and a 1TB 7200 RPM WD Black does 141.9 MB/s. Peak numbers are 10x better for the QLC SSD than the hard drive, but the worst slowdowns for the 660p after filling the SLC cache definitely take it below 7200RPM performance.
I've never been able to get a 7200 RPM hard drive to hit above 100MB/s in real life, but alright.
Based on your test, the random performance of the 660p when full is still far superior to a hard disk, so "responsiveness" doesn't drop to hard disk levels. Hard disks still can't hit above, what, 1MB/s? The 660p hits 30 or 40MB/s when full?
Simply stating that QLC NAND hits 7200 RPM hard disk territory is disingenuous at best and it's not the entire story.
This is interesting, but does anybody remember the abortion that was the WD Black DUO? An SSD and a hard drive in one box: badly thought out, weird, non-standard.
What? You must be joking. I have an 8TB WD Elements external drive and it gets 170MB/s sequential reads even over USB. HDDs have been hitting 100-200MB/s for quite some time.
That's partially true. Tech moves on, and sticking to OLD products keeps you from getting 3D NAND, which yielded nice upgrades, for example. There is literally ONE drive (the 960 Pro), and now its successor announced at CES 2020, the Samsung 980 PRO PCIe 4.0, that offer both MLC and the nice NVMe format.
99% of people never reach the point where TLC is the problem. And for my father and niece, who both just care that the OS boots and the browser loads funny animals, QLC is cheap and good enough. That laptop won't survive longer than the drive, I can assure you of that. The world consists of roughly 70% of that kind of client, 25% power users who can strain a TLC drive, and 5% who need SLC.
I personally care more about how things work in practice rather than the exact technology. If QLC with a smart SLC cache ends up providing enough performance and endurance for my needs, all at a low price, I'd certainly welcome it.
QLC sucks, and TLC is not-so-great. There is no getting around the fact that adding additional measurable states to very tiny NAND cells results in ever-declining durability and longevity. The trouble is that you're swimming against the current from both the consumer and the OEM perspectives. OEMs want to maximize profits, so they sell you low-endurance, slow TLC, and unfortunately now QLC, at only marginally lower costs, pocketing the difference. Consumers eagerly snap up the little price decreases they see in order to obtain higher capacities, "logic" away Q/TLC's poor useful lifespan by saying the endurance is "good enough" or "you'll never wear out a drive, because my OEM-provided software insists I've only inflicted 3% wear over the last year," and then talk with their wallets, buying up crappier NAND that further encourages OEMs to keep screwing consumers over. This silly cycle has caused NAND to circle the proverbial drain in the toilet of life for years now. Without any viable alternatives on the horizon, we are entering a point where people like you and me will have to sit here enjoying our QLC NAND with an optimistic 500 P/E cycles per cell, telling ourselves that the OEM drive software honestly represents drive lifespan and that everything will be okay, right up to the point when our storage stops working. Yay!
I am sure, then, that you are willing to pay what we used to pay for SLC drives back when they were the only option, right? Will you be happy to pay the same for a 256 GB SLC drive as we pay for a 1 TB QLC? I for one salute the companies for providing a cheap enough alternative (TLC and QLC) to spinning rust for 99% of PC users, and also for providing expensive SSDs with MLC for the "pros".
I am sure that if you are willing to pay enough, you can also get an ultra-expensive SLC enterprise drive that will satisfy your personal needs.
> TLC is not-so-great

My Samsung 970 EVO would beg to differ. I've had it for a year and a half and its performance is nothing short of amazing. TLC may have been bad when it first came out, but now that it's been on the market for some time, the manufacturers know its limitations and can work around them so that the penalties aren't nearly as bad or as noticeable to the end user.
Mainly, my problem with TLC and QLC is endurance. Read performance, where most client workloads reside, leaves end users with the impression of high system responsiveness. Write performance is another story, but as you and the article above have already mentioned, pseudo-SLC cache modes mask most of the hit.
I won't go with a QLC drive until they somehow improve QLC performance, and this caching just likely is not "good enough" for me. Maybe it is, maybe it isn't. My uses, though, are for a system/application disk where honestly I want everything to be very snappy, and for bulk storage. Well, SSD prices aren't low enough for the bulk storage I am using today. They are getting close, but not there. My bulk storage use case also means caching is likely to fall on its face at some point. The type of caching presented here is great for a "one disk to rule them all" that will of course have some amount of compromise, but it isn't great as a system/application disk or a bulk disk.
With bulk storage, it sounds like nothing is going to hit the SLC cache, or at least most likely not. On a pure system/application disk, you are likely to have a lot of SLC cache misses if the cache is size-limited and data is often not demoted to QLC to make room for writes.
My bulk use case is storing movies, music, photos, and application installation files, on the order of about 3.4TiB of data or a bit more. But every once in a while I have to move a full disk image, where a QLC drive of ANY type is going to run out of SLC cache and fall back to slow writes to QLC, or hang up while it evicts things to QLC and empties the SLC. Right now that goes through a pair of 3TB HDDs in RAID0: a set in my server, a set in my desktop (and a 6TB USB HDD for offline backup of it all). I've got 2x1GbE between my desktop and my server, which means I am network-limited to about 235MB/sec transfer speeds when I need to do a full disk copy.
If QLC drives were in there, some of the smaller files might transfer a little faster (the RAID0 array and network link slow to around 80MiB/sec once they're copying photo/music directories with smaller file sizes), but larger files? Once that cache fills up, from everything I've seen it's going to slow down to about 70MB/sec.
At least with TLC you are talking more on the order of 200MB/sec once the cache fills.
A RAID0 of a pair of 2TB QLC drives could fit all my files, but still, you are talking ~140MB/sec writes once the cache fills. RAID0 with a pair of 2TB TLC drives and you are pushing 400MB/sec, which is well over what my network connection can manage. It would barely be disk limited if I had a pair of 2.5GbE links or a 5GbE link, which I am hoping I'll have (at least a single 2.5GbE link) in the next couple of years if switch prices would come down a bit more with some more players on the market.
I don't need 10,000,000MB/sec transfers, or RAM-disk speeds or anything else like that. But I'd really, really like to have at least the performance to saturate a single 2.5GbE link with whatever I implement. Bonus points if it COULD be a single TLC drive of >4TB capacity. I am willing to run a pair of SSDs in RAID0 to get the performance I want, which TLC drives can do in spades even once their caches are filled. But QLC can't. Not even close.
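The link-versus-drive bottleneck reasoning above can be sketched as a quick back-of-envelope script. The throughput figures are the rough post-cache numbers quoted in this thread, not measurements, and the ~94% efficiency factor is an assumption chosen to match the 235MB/sec figure given for 2x1GbE:

```python
# Back-of-envelope check of the transfer-speed claims in this thread.
# All drive speeds are the rough post-SLC-cache write speeds quoted
# above, not measured values.

def link_limit_mb_s(gbits: float, efficiency: float = 0.94) -> float:
    """Approximate usable throughput of a network link in MB/s.

    The efficiency factor is an assumed allowance for protocol overhead.
    """
    return gbits * 1000 / 8 * efficiency

post_cache_write_mb_s = {
    "QLC x1": 70,        # single QLC drive once the cache fills
    "QLC x2 RAID0": 140,
    "TLC x2 RAID0": 400,
}

for link_gbits in (2.0, 2.5, 5.0):   # 2x1GbE, 2.5GbE, 5GbE
    limit = link_limit_mb_s(link_gbits)
    for name, speed in post_cache_write_mb_s.items():
        bottleneck = "disk" if speed < limit else "network"
        print(f"{link_gbits}Gb link ({limit:.0f}MB/s) vs {name}: "
              f"bottleneck is the {bottleneck}")
```

On these assumed numbers, a TLC RAID0 pair stays network-limited even at 2.5GbE, while either QLC option is disk-limited once the cache is exhausted.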
But I absolutely see the use case for 80+% of users. Not sure what the exact management strategy will be, but TBH it seems like the smartest way to do it would be 32-64GB of SLC acting as a combined page file and most-frequently-accessed-file cache. It would still make sense to have at least 8-16GB of SLC as a pure write buffer for the QLC. That would likely satisfy 90% of users (or more!) who would never, ever notice the slow performance of raw QLC writes.
That being said, at that point, unless you need a huge drive, why not TLC? It looks like you'd be taking a QLC drive and making it 32+400GB capacity or similar, whereas with a TLC drive you'd have a 500/512GB drive with a dynamic SLC cache, and more frequent "good" performance for all files rather than just the commonly used ones. Sure, the cost might be somewhat higher, but you get 18% more storage at that tier with the TLC drive, likely better performance for edge cases, and probably pretty similar performance for average use.
That to me says that a QLC drive with this technology probably needs to be at least 18% cheaper to maybe be worthwhile.
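A rough sketch of the capacity math behind that discount threshold, using the hypothetical 32+400GB QLC split and 512GB TLC tier discussed above. Note the same ratio reads as ~18.5% extra TLC capacity or, equivalently, a ~15.6% required QLC discount for price parity per usable GB, depending on which way you divide:

```python
# Price-per-usable-GB comparison behind the "18% cheaper" figure.
# Capacities are the hypothetical tiers discussed above, not real SKUs.

qlc_usable_gb = 32 + 400     # static SLC tier + QLC tier
tlc_usable_gb = 512          # TLC drive with dynamic SLC cache

# How much more usable capacity the TLC drive offers:
capacity_advantage = tlc_usable_gb / qlc_usable_gb - 1
print(f"TLC offers {capacity_advantage:.1%} more usable capacity")

# Discount the QLC drive needs for equal price per usable GB:
required_discount = 1 - qlc_usable_gb / tlc_usable_gb
print(f"QLC needs to be at least {required_discount:.1%} cheaper")
```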
You're entirely right, and you also need to bear in mind the labour costs of swapping out the QLC drives more frequently in an organization where the DWPD will result in premature failures for some users compared to TLC drives.
Look at Samsung QVO vs EVO pricing. Their QLC are about 25% cheaper. AData on the other hand isn't differentiating as much between SU800 and SU630 pricing. So I wouldn't recommend opting for AData QLC drives at this point in time.
What an overly simplistic and ridiculous paragraph. QLC NAND is a necessary step. Sticking to 2-bit MLC or 3-bit TLC forever is not practical; it's a waste of wafers and money to stifle innovation just because you want "MOAR ENDURANCE!". Even planar 2D TLC NAND was more than acceptable for 99 percent of consumers. 3D TLC NAND is practical short-term for certain workloads, but it's not practical to continue forever, even after hitting 72+ layers.
With so much competition in the market, what do you expect companies to do? Just stop adding bits? Just keep selling expensive NAND that consumers will never exhaust and continue the e-waste and then lose the competitive edge because they refused to innovate? Do you even understand how this works?
64-layer QLC NAND is more than adequate for the majority of consumers, period. Even the Intel 660p, with a TBW rating of 150, would take the average gamer/PC user 10+ years to exhaust. What's comical about all of this is that hard disks have never come with endurance ratings and naturally have far higher failure rates.
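The "10+ years" claim is simple arithmetic from the TBW rating. A minimal sketch, where the daily write volume is an assumption on my part (client workloads are often cited around 20-40 GB/day), not a figure from the thread:

```python
# Lifespan estimate implied by a TBW (terabytes written) rating.
# The daily write volume is an assumed typical desktop figure.

def years_to_exhaust(tbw_tb: float, gb_per_day: float) -> float:
    """Years until the rated write endurance is used up."""
    return tbw_tb * 1000 / gb_per_day / 365

for daily_gb in (20, 40):
    years = years_to_exhaust(150, daily_gb)   # Intel 660p: 150 TBW
    print(f"{daily_gb} GB/day -> {years:.1f} years")
```

Even at the heavier assumed rate of 40 GB/day, the 150 TBW rating works out to just over a decade.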
It isn't simply a case of endurance. For the vast majority of users, yes, QLC endurance is just fine. The issue is performance. SLC caching of some sort hides the true performance for a lot of use cases for QLC drives, other than very large writes. The issue though is, if you exhaust the cache, the performance for large writes is significantly slower than a HARD DRIVE. You are looking at ~70MB/sec performance, compared to a recent hard drive which is going to be in the 140-180MB/sec range.
TLC drives at least can hit around 170-200MB/sec for large writes once their cache is exhausted.
Of course QLC drives have better small file performance compared to a HDD once the cache is exhausted, but even there, they perform massively worse than a TLC drive in small writes if SLC caching cannot be used.
I DO think QLC drives are fine for 95% of consumers, and okay for bulk storage for about half of the folks who are left, at least if they can be made cheap enough relative to TLC to make sense (at least a 15-20% discount).
My concern though are things like PLC flash, which we know is coming. TLC->QLC at least nets a 33% increase in storage and seems to be around 20% or so cheaper.
PLC only increases storage density over QLC by 25%. I don't think I've seen anything definitive on endurance or performance, but if TLC->QLC is any guide at all, endurance will actually become an issue for some common consumer end users, and performance is going to be horrible once the SLC cache is exhausted, which some common users might hit with not-unrealistic workloads.
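The 33% and 25% density figures above fall straight out of bits per cell, and show why each added bit buys less while the number of voltage states the cell must distinguish keeps doubling:

```python
# Density gain per added bit level: the relative capacity increase
# shrinks with each step, while voltage states per cell double.

levels = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

names = list(levels)
for prev, cur in zip(names, names[1:]):
    gain = levels[cur] / levels[prev] - 1     # extra bits per cell
    states = 2 ** levels[cur]                 # voltage states to resolve
    print(f"{prev}->{cur}: +{gain:.0%} capacity, {states} voltage states")
```

TLC to QLC adds a third more capacity for double the states to resolve; QLC to PLC adds only a quarter more capacity for another doubling, which is the diminishing-returns point being made above.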
One consideration is that the lower the bare-metal performance of the NAND, the longer it takes to flush the SLC cache to TLC/QLC/PLC. Depending on how that is managed, it can also create pretty drastic swings in performance if the drive sees accesses during the flushing process.
If QLC is ~70MB/sec writes for the underlying flash, PLC is likely in mid-2010s eMMC territory of 30-40MB/sec. Endurance is likely to be less than half of QLC's. A lot of consumers end up using a device like a laptop for 4 or 5 years or longer, and entry-level storage capacities are typically the most commonly purchased...
"TLC drives at least can hit around 170-200MB/sec" 64-layer 3D TLC, yes. Not 32-layer 3D TLC nor planar TLC.
"The issue though is, if you exhaust the cache, the performance for large writes is significantly slower than a HARD DRIVE. You are looking at ~70MB/sec performance, compared to a recent hard drive which is going to be in the 140-180MB/sec range."
For large writes? No. Not automatically. Why not give the entire story instead of just half-assing it? In sequential file transfers DRIVE TO DRIVE, you MAY surpass the pSLC cache IF you can copy to it fast enough. IF. Have you even read the reviews?
Additionally, I'd love to see a modern hard drive that can write/read sequential and random data faster than the 660p. Even modern hard disks can't write random data faster than 1MB/s, and sequential MAY hit above 100MB/s IF the drive isn't full and IF there aren't a lot of requests.
You WILL NOT hit the QLC NAND under the majority of workloads, period. End of story.
The MLC 970 Pro has lower performance specs in every category than the TLC 970 Evo Plus, excepting when the SLC cache on the Evo Plus is full. MLC with no SLC cache is not a clear win over TLC with an SLC cache.
I had a quite old OCZ Agility 3 drive (gave it to Mom, but it was starting to "forget" things... then again, Mom did not use the system that often, so it likely wasn't powered on as often as it should have been, which doesn't help).
(I fib: I have a 970 Pro M.2, which is either 2-bit or 3-bit MLC; the information Samsung posts about it is confusing, LOL.) The others (Crucial MX100 256GB, MX200 500GB, MX500 1TB) all use varying SLC/MLC/TLC styles.
TLC works well; pretty much all of them use a fancier caching method to speed things up. Wouldn't it be great if we could have "modern" MLC or SLC at the pricing they offer TLC at?
QLC, no thanks: higher chance of failure in a shorter amount of time (robustness), unless the drive in question is intelligent enough to move written data around to avoid loss. I'm sure the makers take all the time in the world to design properly against data loss, but then again, at the price point of pretty much all QLC, you might as well stick with MLC or TLC.
I don't think you understand how hard drives work. HDDs suck at random access, and that's what slows down a computer. I've never seen a modern SSD that is anything less than 10x faster at random read/write than an HDD. To be fair, most are around 100x faster, even the QLC ones you seem to hate so much.
Tell you what, I'll happily take these horrible QLC SSDs off your hands. I'll even give you a wonderful 1TB HDD for free in exchange for each one (250GB+ please) you send me. I'll even pay postage both ways!
How's that for a bargain? I have a stack of HDDs from work computers just waiting to be sent to you, and several dozen staff who will be overjoyed to have these nasty old QLCs you don't want and will happily donate their speedy HDDs to you. Happy yet?
People get obsessed with sequential read/write and forget how terrible HDDs are as a main OS disk, especially when you start adding resource-intensive applications like aggressive antivirus programs or the ridiculous number of services running in Win10 now.
My personal view is that QLC provides two key functional roles. First, it's an alternative technology to HDD for long-term storage, i.e., backups of critical files; using two different technologies is highly recommended to avoid a single point of failure. To get the same reliability over 20 years of storage on HDD, you would need a NAS or regular human interaction, both of which escalate the HDD costs beyond the QLC SSD cost. The second role is as a front cache for a cold-storage NAS where you expect less than one front-cache drive write per week. At that rate it will last 6 years, which is right on target.
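The "6 years at one drive write per week" figure is easy to sanity-check against a program/erase cycle budget. A minimal sketch, where the ~312-cycle budget is my assumption (consumer QLC is often cited in the low hundreds of cycles), not a number from this thread:

```python
# Sanity check of the "1 drive write per week lasts ~6 years" claim.
# Each full drive write consumes roughly one P/E cycle per cell, so
# the cycle budget divided by writes per week gives the lifespan.

pe_cycles = 312            # assumed usable program/erase cycles per cell
drive_writes_per_week = 1  # front-cache workload described above

weeks = pe_cycles / drive_writes_per_week
print(f"~{weeks / 52:.0f} years at {drive_writes_per_week} DW/week")
```

With that assumed budget, the arithmetic lands exactly on the 6-year figure quoted for the front-cache role.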
It might not. When you try to make something fast out of slow components it is basically a losing game. If the OS and the drivers are smart about telling the drive where to locate files it would be better, if they make bad decisions it could be horribly worse.
In real life Windows machines run virus checking and often have processes running on them that do a lot of I/O. For instance, my work computer has a client on it that scans the disk periodically looking for bank account numbers, credit card numbers, and PDF files that have lists of coordinates that look like "#### #### #### ####". If those programs interact badly with the disk, then you'll have problems.
What's guaranteed to work is something that is fast all the time, and above all you need to pay attention to 95th-99th percentile latency, because what drives you nuts as a computer user is not the median case; it's the occasional slow case that has you staring at a spinner.
Background file scanning software like you describe shouldn't cause too much interference with tiered storage like what Enmotus does. It should be reading most files with about the same frequency, and thus wouldn't have much impact on which files are identified as hot data. The Enmotus tiering software is deliberately conservative about automatically promoting data to the fast tier, to avoid cache thrashing.
Heh, reading this MLC/TLC/QLC debate is funny... At the end of the day, do you have the data from your MLC/TLC/QLC drive backed up anywhere else? If you do, then get the drive that fits your usage scenario.
I'm intrigued by the concept and hope you can get a device to test in the near future. Anything that can make QLC less terrible under any but very light loads is a good thing; and there doesn't seem to be any reason this couldn't also be used to enhance the cache on TLC drives too.
Have any of you people even used a QLC drive? They are nowhere near as bad as you people are making them out to be, and they are fucking awesome compared to a hard drive.
Yes, I have. Compared to a hard drive, they are quicker. Compared to TLC, MLC, and SLC, they are slower. Perception of performance has a lot to do with the sort of comparisons you make, but you should already know that without someone having to tell you.
QLC may be slower but to the average user, they wouldn't be able to tell the difference between it and TLC. Other than showing benchmark numbers, real-world performance isn't going to show much difference.
This looks, I gotta say, like a copy of Apple's Fusion drive along pretty much every dimension, from the static partitioning to the host side decisions to the simplified setup. Main thing missing is the tight integration with the file system.
Which is fine --- if you're going to copy, copy from the best!
(I do wonder if, at some point, Apple will offer an SLC+QLC Fusion solution for low-end Macs. Honestly it makes more sense than continuing with a hard drive. Maybe it's one more thing waiting for the grand realignment once the ARM mac arrives, which presumably will include more Apple control over the flash controller?)
"Any compatibility issue or other glitch can easily render a PC unbootable, and data recovery isn't as straightforward as for a single drive."
Well said. I decided I couldn't trust the Optane Memory H10 that came with a laptop, and replaced it with TLC 3D NAND that is always fast, without the need for hidden Intel caching software running in the background, tying up the CPU, spinning up the fan, and draining the battery with no indication of what it is actually doing and why.
This sounds like a very interesting concept. Looking forward to reviews, and if prices, availability and performance are all good I'll definitely consider a drive like this for one of the builds/upgrades that are on the books for the near(-ish) future, especially the one(s) where QLC drives with traditional caching were the most likely candidates. I'd definitely be willing to pay, say, 20% over a 660p-class drive for something like this. Having frequently accessed OS files and applications permanently in SLC (but also monitored) sounds like an excellent idea.
Also looking forward to Intel getting their thumbs out of their collective asses and making a hybrid Optane+NAND controller so they can make an actually usable follow-up to the H10. Two discrete drives on one m.2 stick is a _terrible_ idea. An m.2 2280 with 64GB of Optane and 1-2TB of NAND on the other hand? Yes, please.
yetanotherhuman - Thursday, January 30, 2020 - link
I'd personally rather stick with MLC. I still think TLC is a bit too exotic, and only for systems that aren't critical. QLC is sub-spinning rust speed & reliability.29a - Thursday, January 30, 2020 - link
I have a QLC drive and it is much faster than a hard drive.yetanotherhuman - Thursday, January 30, 2020 - link
Once you get past the SLC cache, reviews make it seem like sequential writes once you hit QLC, it can often be slower than a 7200 RPM HDD. I know that's not a typical case for a desktop user, but in my books, no SSD should ever be coming close to HDD levels of speed or responsiveness.Billy Tallis - Thursday, January 30, 2020 - link
1TB and larger QLC drives seem to have post-SLC sequential write speeds that are faster than gigabit Ethernet, so even though it's a lot slower than a good TLC drive it's still fast enough for most use cases. Moving tens of GBs between local drives only happens frequently for video editors. And the QLC write speed of an Enmotus MiDrive should be a bit faster than existing QLC drives, since there won't be SLC cache management in the background to slow things down.DyneCorp - Thursday, January 30, 2020 - link
Please cite exactly where the 660p hits "7200 RPM HDD" speeds. I haven't seen a single review showing this.Billy Tallis - Thursday, January 30, 2020 - link
On my full-drive sequential write test, the 660p's overall average is 118.5 MB/s and a 1TB 7200 RPM WD Black does 141.9 MB/s. Peak numbers are 10x better for the QLC SSD than the hard drive, but the worst slowdowns for the 660p after filling the SLC cache definitely take it below 7200RPM performance.DyneCorp - Thursday, January 30, 2020 - link
I've never been able to get a 7200 RPM hard drive to hit above 100MB/s in real life, but alright.Based on your test, the random performance of the 660p when full is still far superior to a hard disk, so "responsiveness" doesn't drop to hard disk levels. Hard disks still can't hit above, what? 1MB/s? The 660p hits 30 or 40MB when full?
Simply stating that QLC NAND hits 7200 RPM hard disk territory is disingenuous at best and it's not the entire story.
dromoxen - Sunday, February 2, 2020 - link
I have 5900/5400rpm drives (over USb) that regularly hit 160-170Mbps , even writing over the network (gigabit) they get 110-113Mbps.dromoxen - Sunday, February 2, 2020 - link
This is interesting but anybody remember the abortion that was the WD Black DUO? an SSD and a hard drive in one box..badly thought out, weird ,non-standard.whatthe123 - Wednesday, June 3, 2020 - link
What? You must be joking. I have an 8tb WD elements external drive and it gets 170MB/s sequential reads even over USB. HDDs have been hitting 100~200MB/s for quite some time.Samus - Sunday, February 2, 2020 - link
Hard disks become incredibly slow as they fill up too, easily sub-100MB/sec, and as they become fragmented, access times can be awful.deil - Thursday, January 30, 2020 - link
that's partially true. Tech goes on and sticking to OLD product keeps you from getting 3D nands that yelded nice upgrades for example. there is literally ONE drive (960 pro) and its now successor CES 2020: Samsung 980 PRO PCIe 4.0 that have both MLC and nice nvme format.99% of people never reach the point where TLC is the problem.
AND
for my father/niece that both care for OS to boot and browser to load funny animals QLC is just cheap good enough.
that laptop wont survive longer than the drive, I can assure you about it.
world exists of ~70% of that kind of clients. 25% of power users that can strain TLC drive and that 5% that need SLC.
ET - Thursday, January 30, 2020 - link
I personally care more about how things work in practice rather than the exact technology. If QLC with a smart SLC cache ends up providing enough performance and endurance for my needs, all at a low price, I'd certainly welcome it.PeachNCream - Thursday, January 30, 2020 - link
QLC sucks, TLC is not-so-great. There is no getting around the fact that adding additional measurable states to very tiny NAND cells results in ever declining durability and longevity. The trouble is that you're swimming against the current from both the consumer and the OEM perspectives. OEMs want to maximize profits so they sell you low endurance, slow TLC and unfortunately now QLC at only marginally lower costs, pocketing the profits. Consumers eagerly snap up the little price decreases they see in order to obtain higher capacities and "logic" Q/TLC's poor useful lifespan away by saying the endurance is "good enough" or "you'll never wear out a drive because my OEM-provided software insists I've only inflicted 3% wear over the last year" and then they talk with their wallets, buying up crappier NAND that further encourages OEMs to keep screwing consumers over. Thsi silly cycle has caused NAND to circle the proverbial drain in the toilet of life for years now and without any viable alternatives on the horizon, we are entering a point where people like you and me are going to have to sit here enjoying our QLC NAND with an optimistic 500 P/E cycles per cell while telling ourselves that the OEM drive software is honestly representing drive lifespan and that everything will be okay right up to the point when our storage stops working. Yay!valinor89 - Thursday, January 30, 2020 - link
I am sure then that you are willing to pay what we used to pay for SLC drives back when they were the only option, right? Will you be happy to pay the same for a 256 GB SLC drive than we pay for a 1 TB QLC?I for once salute the companies for providing a cheap enought (TLC and QLC) alternative to spinning rust for 99% of PC users and also for providing expensive SSDs with MLC for the "PRO's".
I am sure that if you are willing to pay enought you can also get an ultra expensive SLC Enterprise drive that will satisfy your personal needs.
TLDR: You pay for what you get.
trparky - Thursday, January 30, 2020 - link
> TLC is not-so-greatMy Samsung 970 EVO would beg to differ. I've had it for a year and a half and its performance is nothing short of amazing. TLC may have been bad when it first came out but now that it's been in the market for some time the manufacturers know the limitations of it and can work around them to make it so that penalities aren't nearly as bad or as noticable to the end user.
PeachNCream - Thursday, January 30, 2020 - link
Mainly my problem with TLC and QLC is endurance. Read performance, where most client workloads reside leaves end users with the impression of high system responsiveness. Write performance is another story, but as you and the article above have already mentioned, pseudo-SLC cache modes mask most of the hit.azazel1024 - Thursday, January 30, 2020 - link
I won't go QLC drive until they do somehow improve QLC performance, and this caching just likely is not "good enough" for me. Maybe it is, maybe it isn't. My uses though are for a system/application disk where honestly I want everything to be very snappy. The other is for bulk storage. Well, SSD prices aren't enough for the bulk storage I am using today. They are getting close, but not there. My use cases with bulk storage also means caching is likely to fall on its face at some point. They type of caching presented here is great for a "one disk to rule them all", that will of course have some amount of compromises. But isn't great as a system/application disk or a bulk disk.Bulk sounds like nothing is going to hit the SLC cache. At least most likely not. Pure system/application disk, you are likely to have a lot of misses on the SLC cache then if it is size limited and things are often not bumped to QLC to make room for writes.
My bulk use case is for storing movies, music, photos and application installation files on the order of about 3.4TiB of data or a bit more. But every once in awhile I have to move a full disk image. Where QLC drive of ANY type is going to run out of SLC cache and result in slow writes to QLC. That or it hanging up while it evicts things to QLC and empties the SLC. Right now that is through a pair of 3TB HDDs in RAID0. A set in my server, a set in my desktop (and a 6TB USB HDD for offline backup of it all). I've got 2x1GbE between my desktop and my server, which means I am network limited to about 235MB/sec transfer speeds when I need to do a full disk copy.
If QLC drives were in there, some of the smaller files might transfer a little faster (the RAID0 array and network link slows to around 80MiB/sec once it is copying photo/music directories with the smaller file sizes), but larger files? That cache gets filled up, from everything I've seen its going to slow down to about 70MB/sec.
At least with TLC you are talking more on the order of 200MB/sec once the cache fills.
A RAID0 of a pair of 2TB QLC drives could fit all my files, but still, you are talking ~140MB/sec writes once the cache fills. RAID0 with a pair of 2TB TLC drives and you are pushing 400MB/sec, which is well over what my network connection can manage. It would barely be disk limited if I had a pair of 2.5GbE links or a 5GbE link, which I am hoping I'll have (at least a single 2.5GbE link) in the next couple of years if switch prices would come down a bit more with some more players on the market.
I don't need 10,000,000MB/sec transfers. Or RAM disk speeds or anything else like that. But I'd really, really like to have at least the performance to saturate a single 2.5GbE link with whatever I implement. Bonus points if it COULD be a single TLC drive of >4TB capacity. I am willing to do a pair of SSD's in RAID0 to get the performance I want. Which TLC drives can do in spades even once their caches are filled. But QLC can't. Not even close.
But I absolutely see the use case for 80+% of users. Not sure what the exact management strategy will be, but TBH it seems like the smartest way to do it would be 32-64GB of SLC acting as combo page file and most frequently accessed file cache. But it would still make sense to have at least 8-16GB of SLC cache as a pure write buffer for the QLC. That would likely satisfy 90% of users (or more!) who would never, ever notice performance degradation of pure QLC writes which are slow.
That being said, at that point, unless you need a huge drive, why not TLC? It looks like you'd be taking a QLC drive and making it 32+400GB capacity or similar. Where as for a TLC drive, you'd have a 500/512GB drive with dynamic SLC cache. With more frequent "good" performance for all files, versus just the commonly used ones. Sure the cost might be somewhat higher, but you get 18% more storage at that tier for the TLC drive and likely better performance for edge cases and for average use cases you'd probably have pretty similar performance.
That to me says that a QLC drive with this technology probably needs to be at least 18% cheaper to maybe be worthwhile.
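The "18% more storage" figure works out from a hypothetical capacity split (the 32+400GB layout is the commenter's example, not a shipping product):

```python
# Hypothetical comparison: a drive with 32GB of static SLC plus 400GB of QLC
# exposes 432GB usable, vs. the full 512GB of a TLC drive with dynamic SLC.
hybrid_capacity = 32 + 400        # GB usable on the SLC+QLC hybrid
tlc_capacity = 512                # GB usable on the plain TLC drive

advantage = tlc_capacity / hybrid_capacity - 1
print(round(advantage * 100, 1))  # ~18.5% more usable space on the TLC drive
```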
linuxgeex - Thursday, January 30, 2020 - link
You're entirely right, and you also need to bear in mind the labour costs of swapping out the QLC drives more frequently in an organization, where the DWPD will result in premature failures for some users compared to TLC drives. Look at Samsung QVO vs EVO pricing: their QLC are about 25% cheaper. AData, on the other hand, isn't differentiating as much between SU800 and SU630 pricing, so I wouldn't recommend opting for AData QLC drives at this point in time.
DyneCorp - Thursday, January 30, 2020 - link
What an overly simple and ridiculous paragraph. QLC NAND is a necessary step. Sticking to 2-bit MLC or 3-bit TLC is not practical; it's a waste of wafers and money to stifle innovation just because you want "MOAR ENDURANCE!". Even "planar" 2D TLC NAND was more than acceptable for 99 percent of consumers. 3D TLC NAND is practical short term for certain workloads, but it's not practical to continue forever, even after hitting 72+ layers. With so much competition in the market, what do you expect companies to do? Just stop adding bits? Just keep selling expensive NAND that consumers will never exhaust, continue the e-waste, and then lose the competitive edge because they refused to innovate? Do you even understand how this works?
64-layer QLC NAND is more than adequate for the majority of consumers, period. Even the Intel 660p with a TBW of 150 would take the average gamer/ PC user 10+ years to exhaust. What's comical about all of this is that hard disks have never come with endurance ratings and naturally have far higher failure rates.
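The "10+ years" claim checks out arithmetically; the daily-write figure below is an assumed typical gamer/desktop workload, not a measured one:

```python
# Sanity check on exhausting a 150 TBW endurance rating (e.g. Intel 660p 512GB).
# The daily host-write figure is an assumption for an average gamer/PC user.
tbw_tb = 150                # rated Terabytes Written
daily_writes_gb = 30        # assumed average host writes per day

years = tbw_tb * 1000 / daily_writes_gb / 365
print(round(years, 1))      # ~13.7 years before the rating is exhausted
```

Even doubling the assumed write rate still leaves the rating intact for well over six years.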
azazel1024 - Monday, February 3, 2020 - link
It isn't simply a case of endurance. For the vast majority of users, yes, QLC endurance is just fine. The issue is performance. SLC caching of some sort hides the true performance of QLC drives for a lot of use cases, other than very large writes. The issue though is, if you exhaust the cache, the performance for large writes is significantly slower than a HARD DRIVE. You are looking at ~70MB/sec, compared to a recent hard drive, which is going to be in the 140-180MB/sec range. TLC drives at least can hit around 170-200MB/sec for large writes once their cache is exhausted.
Of course QLC drives have better small file performance compared to a HDD once the cache is exhausted, but even there, they perform massively worse than a TLC drive in small writes if SLC caching cannot be used.
I DO think QLC drives are fine for 95% of consumers and okay for bulk storage for about half of the folks who are left, at least if they can be made cheaper enough than TLC to make sense (at least a 15-20% discount).
My concern, though, is things like PLC flash, which we know is coming. TLC->QLC at least nets a 33% increase in storage and seems to be around 20% or so cheaper.
PLC only increases storage density over QLC by 25%. I don't think I've seen anything definitive on endurance or performance, but if TLC->QLC is any guide at all, endurance will end up actually being an issue for some common consumer end users, and performance is going to be horrible once the SLC cache is exhausted, which some common users might end up hitting with not-unrealistic workloads.
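The diminishing returns are just the bits-per-cell ratio, which is why each added bit buys less capacity than the one before:

```python
# Density gain from adding one bit per cell: the same physical cell count
# stores bits_per_cell * cells bits, so the relative gain shrinks each step.
def density_gain(bits_before: int, bits_after: int) -> float:
    return bits_after / bits_before - 1

print(round(density_gain(3, 4) * 100))  # TLC -> QLC: ~33% more capacity
print(round(density_gain(4, 5) * 100))  # QLC -> PLC: only 25% more
```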
One of the considerations is that the lower the bare-metal performance of the NAND, the longer it takes to flush the SLC cache to TLC/QLC/PLC. Depending on how that is managed, it can also create pretty drastic changes in performance if the drive sees access during the flushing process.
If QLC is ~70MB/sec writes for the underlying flash, PLC is likely in mid-2010s eMMC territory of 30-40MB/sec. Endurance is likely to be less than half of QLC. A lot of consumers end up using a device like a laptop for 4, 5, or more years, and entry-level storage capacities are typically the most commonly purchased...
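To put the flush-time concern in concrete terms (cache size and NAND speeds here are the estimates from this thread, not specs from any particular drive):

```python
# How long emptying a full SLC cache takes at the underlying NAND's raw
# write speed. Speeds are this thread's estimates for QLC and hypothetical PLC.
def flush_seconds(cache_gb: float, nand_mb_per_s: float) -> float:
    return cache_gb * 1000 / nand_mb_per_s

print(round(flush_seconds(32, 70)))   # ~457 s to fold a 32GB cache into QLC
print(round(flush_seconds(32, 35)))   # ~914 s at the assumed PLC speed
```

That is seven-plus minutes of degraded performance for QLC, and roughly double that for the assumed PLC speed, during which any foreground I/O contends with the flush.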
DyneCorp - Friday, February 7, 2020 - link
"TLC drives at least can hit around 170-200MB/sec"
64-layer 3D TLC, yes. Not 32-layer 3D TLC nor planar TLC.
"The issue though is, if you exhaust the cache, the performance for large writes is significantly slower than a HARD DRIVE. You are looking at ~70MB/sec performance, compared to a recent hard drive which is going to be in the 140-180MB/sec range."
For large writes? No. Not automatically. Why not give the entire story instead of just half-assing it? In sequential file transfers DRIVE TO DRIVE you MAY surpass the pSLC cache IF you can copy to it fast enough. IF. Have you even read the reviews?
Additionally, I'd love to see a modern hard drive that can write/ read sequential and random data faster than the 660p. Even modern hard disks can't write random data faster than 1MB/s, and sequential MAY hit above 100MB/s IF the drive isn't full and IF there aren't a lot of requests.
You WILL NOT hit the QLC NAND under the majority of workloads, period. End of story.
extide - Thursday, January 30, 2020 - link
Go buy MLC today... OH CRAP, you can't
R0H1T - Thursday, January 30, 2020 - link
Technically these are MLC; what you are referring to is probably "DLC" 😅
yetanotherhuman - Thursday, January 30, 2020 - link
Samsung 970 Pro.
Guspaz - Tuesday, February 4, 2020 - link
The MLC 970 Pro has lower performance specs in every category than the TLC 970 Evo Plus, except when the SLC cache on the Evo Plus is full. MLC with no SLC cache is not a clear win over TLC with an SLC cache.
Dragonstongue - Thursday, January 30, 2020 - link
I had a quite old OCZ Agility 3 drive (gave it to mom, but it was starting to "forget" things... then again, mom did not use the system that often, so it likely wasn't powered on as often as it should have been, which doesn't help). (I fib, I also have a 970 Pro M.2, which is either 2-bit or 3-bit MLC.. confusing Samsung information they post about it LOL.) The others (Crucial MX100 (256GB), MX200 (500GB), MX500 (1TB)) all use varying SLC/MLC/TLC cache styles.
TLC works well; pretty much all of them use a fancier cache method to speed up what they do. Wouldn't it be great if we could have "modern" MLC or SLC at the pricing they offer TLC at?
QLC, no thanks: higher chance of failure in a shorter amount of time (robustness), unless the drive in question is intelligent enough to bounce around what is "written" to avoid data loss. I am sure the makers take all the time in the world to design properly against data loss, but then again, at the price point of pretty much all of the QLC drives, you might as well stick with MLC or TLC.
^.^
Tomatotech - Thursday, January 30, 2020 - link
I don’t think you understand how hard drives work. HDDs suck at random access and that’s what slows down a computer. I’ve never seen a modern SSD be anything less than 10x faster at random r/w than a HDD. To be fair, most are around 100x faster, even the QLC ones you seem to hate so much.
Tell you what, I’ll happily take these horrible QLC SSDs off your hands. I’ll even give you a wonderful 1TB HDD for free in exchange for each one (250GB+ please) you send me. I’ll even pay postage both ways!
How’s that for a bargain? I have a stack of HDDs from work computers just waiting to be sent to you, and several dozen staff who will be overjoyed to have these nasty old QLCs you don’t want and will happily donate their speedy HDDs to you. Happy yet?
Farfolomew - Sunday, February 2, 2020 - link
+1
People get obsessed with seq. read/write and forget how terrible HDDs are as a main OS disk, especially when you start adding in resource-intensive applications like aggressive antivirus programs or the ridiculous number of services running in Win10 now.
linuxgeex - Thursday, January 30, 2020 - link
My personal view of QLC is that it provides 2 key functional roles. First, it's an alternative technology to HDD for long-term storage, i.e. backups of critical files. Using 2 different technologies is highly recommended to avoid a SPOF. To get the same reliability for 20 years of storage on HDD you would need to use a NAS or rely on regular human interaction, both of which escalate the HDD costs beyond the QLC SSD cost. The second role is as a frontcache for a cold-storage NAS where you expect less than 1 frontcache DW per week. At that rate it will last 6 years, which is right on target.
sheh - Friday, January 31, 2020 - link
QLC isn't for long-term storage, due to bad retention.
PaulHoule - Thursday, January 30, 2020 - link
It might work out. It might not. When you try to make something fast out of slow components, it is basically a losing game. If the OS and the drivers are smart about telling the drive where to locate files, it would be better; if they make bad decisions, it could be horribly worse.
In real life Windows machines run virus checking and often have processes running on them that do a lot of I/O. For instance, my work computer has a client on it that scans the disk periodically looking for bank account numbers, credit card numbers, and PDF files that have lists of coordinates that look like "#### #### #### ####". If those programs interact badly with the disk, then you'll have problems.
What's guaranteed to work is something that is fast all the time, and most of all you need to pay attention to 95th-99th percentile latency, because what drives you nuts as a computer user is not the median case, it's the occasional slow case that has you looking at a spinner.
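That median-vs-tail distinction is easy to see with a toy sketch; the latency numbers below are invented for illustration, with the slow 5% standing in for cache-flush or contention stalls:

```python
# Illustration of why median latency hides the stalls users actually notice.
# Synthetic I/O latencies in ms; the 5% slow ops mimic cache-flush stalls.
import statistics

latencies = sorted([1] * 95 + [250] * 5)  # 100 samples: 95 fast, 5 stalled

median = statistics.median(latencies)
p99 = latencies[98]                       # 99th percentile of 100 sorted samples
print(median)  # 1.0 ms: the "typical" case looks instant
print(p99)     # 250 ms: the tail is what makes a drive feel slow
```

A drive can post a great median and still feel awful if the tail is dominated by multi-hundred-millisecond stalls.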
Billy Tallis - Thursday, January 30, 2020 - link
Background file scanning software like you describe shouldn't cause too much interference with tiered storage like what Enmotus does. It should be reading most files with about the same frequency, and thus wouldn't have much impact on which files are identified as hot data. The Enmotus tiering software is deliberately conservative about automatically promoting data to the fast tier, to avoid cache thrashing.
Drkrieger01 - Thursday, January 30, 2020 - link
Heh, reading this MLC/TLC/QLC debate is funny... At the end of the day, do you have the data from your MLC/TLC/QLC drive backed up anywhere else? If you do, then get the drive that fits your usage scenario.
extide - Thursday, January 30, 2020 - link
BINGO, this is the answer -- get what best meets your I/O-to-dollar capability, AND back up.
DanNeely - Thursday, January 30, 2020 - link
I'm intrigued by the concept and hope you can get a device to test in the near future. Anything that can make QLC less terrible under any but very light loads is a good thing; and there doesn't seem to be any reason this couldn't also be used to enhance the cache on TLC drives too.
29a - Thursday, January 30, 2020 - link
Have any of you people even used a QLC drive? They are nowhere near as bad as you people are making them out to be, and they are fucking awesome compared to a hard drive.
PeachNCream - Thursday, January 30, 2020 - link
Yes, I have. Compared to a hard drive, they are quicker. Compared to TLC, MLC, and SLC they are slower. Perception of performance has a lot to do with the sort of comparisons you make, but you should already know that without someone having to tell you.
trparky - Thursday, January 30, 2020 - link
QLC may be slower, but the average user wouldn't be able to tell the difference between it and TLC. Other than benchmark numbers, real-world performance isn't going to show much difference.
name99 - Thursday, January 30, 2020 - link
This looks, I gotta say, like a copy of Apple's Fusion drive along pretty much every dimension, from the static partitioning to the host side decisions to the simplified setup. Main thing missing is the tight integration with the file system.Which is fine --- if you're going to copy, copy from the best!
(I do wonder if, at some point, Apple will offer an SLC+QLC Fusion solution for low-end Macs. Honestly it makes more sense than continuing with a hard drive.
Maybe it's one more thing waiting for the grand realignment once the ARM mac arrives, which presumably will include more Apple control over the flash controller?)
voicequal - Thursday, January 30, 2020 - link
"Any compatibility issue or other glitch can easily render a PC unbootable, and data recovery isn't as straightforward as for a single drive."
Well said. I decided I couldn't trust the Optane Memory H10 that came with a laptop, and replaced it with TLC 3D NAND that is always fast, without the need for hidden Intel caching software running in the background: tying up the CPU, spinning up the fan, and draining the battery with no indication of what it is actually doing and why.
Valantar - Tuesday, February 4, 2020 - link
This sounds like a very interesting concept. Looking forward to reviews, and if prices, availability and performance are all good I'll definitely consider a drive like this for one of the builds/upgrades that are on the books for the near(-ish) future, especially the one(s) where QLC drives with traditional caching were the most likely candidates. I'd definitely be willing to pay, say, 20% over a 660p-class drive for something like this. Having frequently accessed OS files and applications permanently in SLC (but also monitored) sounds like an excellent idea.Also looking forward to Intel getting their thumbs out of their collective asses and making a hybrid Optane+NAND controller so they can make an actually usable follow-up to the H10. Two discrete drives on one m.2 stick is a _terrible_ idea. An m.2 2280 with 64GB of Optane and 1-2TB of NAND on the other hand? Yes, please.