GhostOfAnand - Monday, January 29, 2018 - link
I want z nand! Where is z nand?

woggs - Monday, January 29, 2018 - link
So this is SLC 3D nand with bigger DRAM cache and very fast indirection system (possibly HW automated)? Obviously have to really squint hard, reading between the lines.

aryonoco - Tuesday, January 30, 2018 - link
I think the article implies that this is NOT SLC.

IntelUser2000 - Tuesday, January 30, 2018 - link
Yes, it is actually. Based on Anandtech's own article: https://www.anandtech.com/show/11703/samsung-at-fl...

2nd generation moves to MLC to lower cost.
Looking at the specs, this is actually close to Intel's SSD DC P3700 SSD.
CheapSushi - Tuesday, January 30, 2018 - link
The whole point of Z-NAND is to get SLC performance for cheaper by using MLC & TLC NAND in SLC mode.

iter - Tuesday, January 30, 2018 - link
You can use tlc flash in slc mode about as much as you can use a moped in harley davidson mode.

XabanakFanatik - Tuesday, January 30, 2018 - link
That's exactly how SLC cache is done, they use some of the TLC as SLC. Educate yourself.

iter - Tuesday, January 30, 2018 - link
Slc cache is slc memory. Different chip than the main tlc memory. Which is why those drives can only continuously write at a high speed until that cache is filled, and afterwards the drive performs at a much lower performance level, because it is using the much slower main flash memory. Adoy!

Lolimaster - Tuesday, January 30, 2018 - link
There is no SLC memory in any SSD that is not an actual SLC SSD. They simply simulate it the same way your HDD pagefile "simulates" ram.

karatekid430 - Tuesday, May 22, 2018 - link
Multi-level cell is just dividing the flash up into more than two levels. SLC is read as 0 or 1. MLC has 0, 1, 2, 3 (2-bit) and TLC 0, 1, 2, 3, 4, 5, 6, 7 (3-bit). Although each might be optimised and manufactured slightly differently, most of the characteristics arise from how the flash is used by the controller. TLC does not wear faster - but because it needs to be more precise than SLC, it is less tolerant to wear before it becomes too unreliable, and the controller retires the cell. So if you write to TLC with only two levels (0, 1) like SLC, you should get similar characteristics to SLC, including the controller needing less time to read or write to it, because there is more margin for error. Think of it as either completely filling or emptying a bucket of water (fast) rather than having to precisely top it up or remove water to a certain partial capacity (slow and needs precision). Normally I would agree that TLC is not SLC, but in this case, it is more about how it is used, and not the capacity.

You might not be able to use SLC flash with a TLC controller - it might not be designed to be partially filled or emptied to that precision. But using TLC as SLC? I would believe that.
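The bucket analogy above can be put in numbers with a toy sketch. The 6400 mV window below is an invented figure for illustration, not a real device parameter:

```python
# Toy model: a flash cell stores charge within a fixed voltage window.
# Splitting that window into 2^bits levels shrinks the spacing between
# adjacent levels, which is why more bits per cell demand more precision.
def read_margin(window_mv: float, bits_per_cell: int) -> float:
    levels = 2 ** bits_per_cell
    return window_mv / (levels - 1)  # spacing between adjacent levels

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: {2 ** bits} levels, ~{read_margin(6400, bits):.0f} mV apart")
```

With the invented window, SLC gets the full span between its two states while TLC has to distinguish levels roughly a seventh as far apart, which matches the "more margin for error" point above.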
iter - Tuesday, January 30, 2018 - link
I guess that CPU cache is just "some of the dram used as sram", right?

PeachNCream - Tuesday, January 30, 2018 - link
At least one SSD manufacturer states they simulate SLC functionality using a portion of the TLC capacity. It appears that XabanakFanatik's statement is accurate.

https://support.ocz.com/customer/en/portal/article...
iter - Tuesday, January 30, 2018 - link
If that is the case, where is the benefit of launching znand as a "mlc used in slc mode" when:

1 - it doesn't seem to offer any capacity benefit - 800 gb is quite modest for mlc, we've seen 2tb in a tiny m2 drive already
2 - it doesn't seem to offer any performance benefit - it is barely faster than mlc, and the advantage can be attributed to a better controller; it is nowhere near as fast as slc.
As for toshiba's "explanation" - it is either a technically inaccurate layman's version of what is going on, or a bad case of cutting corners, because even though technically it is possible to save some time writing only one bit, the writing process is still significantly slower than slc due to the cell design. Current drives that employ slc caching certainly behave like they have a discrete and static amount of cache rather than using some portion of the available main memory.
PeachNCream - Tuesday, January 30, 2018 - link
I can't speak for Samsung when it comes to justification regarding the release of Z-NAND so I won't address your concerns about it or about whether or not the performance justifies a new product release.

As for SLC caching, the amount of TLC assigned as cache is static. A portion of non-resizing TLC is dedicated to SLC operating mode, which explains why you can reliably replicate benchmark results to demonstrate a point in time when performance declines as write activity is forced outside the SLC mode space. The information in the link I provided is accurate though. There is no special SLC only memory in a modern TLC drive.
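That reproducible performance cliff can be modeled with a quick sketch. All speeds and the cache size here are invented numbers, not any real drive's specs:

```python
# Toy model of a fixed-size SLC cache: writes land at SLC speed until the
# cache region fills, then fall to native TLC speed -- the cliff that
# sustained-write benchmarks reliably reproduce.
def write_time_s(total_gb: float, cache_gb: float,
                 slc_mb_s: float, tlc_mb_s: float) -> float:
    fast = min(total_gb, cache_gb) * 1024 / slc_mb_s       # inside SLC mode
    slow = max(0.0, total_gb - cache_gb) * 1024 / tlc_mb_s  # past the cache
    return fast + slow

# 100 GB written to a drive with a 20 GB SLC-mode region (figures invented)
print(round(write_time_s(100, 20, 2000.0, 500.0), 1))
```

Because the cache region is static, the knee in the curve appears at the same written volume every run, which is exactly the repeatability described above.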
Lolimaster - Tuesday, January 30, 2018 - link
CPU cache is actual low latency/high performance/costly memory totally different from dram, which is why it's used at KB sizes for L1 and a few MBs for L2-L3.

EDRAM is a bit slower than L3 but still much faster than typical DRAM, which is why the Broadwell chips with GT4e gpu's using the built-in 128MB of EDRAM as "L4 cache" were performing miles ahead of any other cpu at certain tasks like file compression & encryption (like 5-10x).
Flunk - Tuesday, January 30, 2018 - link
No, DRAM and SRAM are totally different. "SLC Cache" as the device makers call it is when they store only one bit per MLC or TLC cell for better performance. It's a pretty standard industry practice.

chrnochime - Wednesday, January 31, 2018 - link
Who's the one who needs to be educated here anyway LOL

tuxRoller - Tuesday, January 30, 2018 - link
Since the manufacturers absolutely can (and do) operate their mlc cells in SLC mode (called pseudoSLC, but the reason why is a bit subtle and maybe not too meaningful), then the Harley Davidson sorts had best be cautious of that Vespa!

From everything I've read there seem to only be a few differences between SLC and the others. Those differences are: process node size (SLC is usually built using larger, 2X or higher, process nodes, which allows for larger voltage differences) and the need to differentiate between only two states. Moving an M/TLC cell to SLC means you get as large a voltage difference between states as if the cell had been made SLC, at the same process, from the start. There's some complications that I'm ignoring (mostly involving the more sophisticated error correction/detection mechanisms, and the idea of partial writes with the corresponding need to detect the smaller voltage differences of each state and write disturb) but this is what I'd expect Samsung to have here (the degree to which they've changed the internal and external controllers is the real unknown here).
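The larger-voltage-difference point can be sketched numerically; the voltage window and drift figures below are invented for illustration only:

```python
# Sketch: wear makes a cell's stored voltage drift, and a read fails once
# the drift exceeds half the spacing between adjacent levels. The same
# drift that defeats a cell read as TLC is harmless read in SLC mode.
def still_readable(drift_mv: float, window_mv: float, bits: int) -> bool:
    spacing = window_mv / (2 ** bits - 1)
    return drift_mv < spacing / 2

drift = 500.0  # hypothetical wear-induced drift
print("in SLC mode:", still_readable(drift, 6400, 1))  # half-spacing 3200 mV
print("as TLC:", still_readable(drift, 6400, 3))       # half-spacing ~457 mV
```

The same physical cell tolerates far more drift when it only has to distinguish two states, which is the pseudoSLC advantage in a nutshell.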
Lolimaster - Tuesday, January 30, 2018 - link
No, you don't get it. Don't fall for snake oil tactics.

SunnyNW - Tuesday, January 30, 2018 - link
Billy, is there a way to test how many voltage levels there are per cell?

Billy Tallis - Tuesday, January 30, 2018 - link
It's really difficult. TechInsights/Chipworks does teardowns and reverse engineering where they'll decap the chips and probe them while the drive is running to see how it programs cells, and then dig into the construction of the chip with an electron microscope. They often release a few headline details of their findings as enticement to buy their extremely expensive full reports.edzieba - Tuesday, January 30, 2018 - link
"However, both drives still fall short of the long-gone Micron P320h SLC NAND SSD, in both performance and endurance (though Intel has at least exceeded the random write speed of the P320h)."

The usual suspect who will come here and bitch about "Intel Optane" not crediting Micron, or 3D Xpoint being a scam to suppress SLC or similar nonsense will likely latch onto this, so pre-emptively: that 350GB Micron drive launched at $4000, or ~$4250 today. A 280GB Optane 900p goes for $390. That ~10x price difference is why nobody makes SLC NAND drives any more.
iter - Tuesday, January 30, 2018 - link
That's a rather silly way to look at it:

1 - SSD prices have dropped about 5 fold since 2012
2 - the micron drive is an enterprise product, and those come at a premium, anywhere from 2x to 5x more expensive than consumer products
3 - the 375 gb enterprise version of optane costs a whopping $1500, given the nand flash pricing trends, it would have cost $7500 in 2012
4 - if an slc drive from 2012 beats an optane drive from 2017, it is a pretty safe bet that had someone bothered to make an slc drive in 2017, it would definitely wipe the floor with optane
5 - density wise, given that there was a 2 tb mlc product in m2 form factor in 2017, half the density of slc would make a terabyte drive possible using slc, whereas intel will not be able to launch anything over 256 gb in that form factor any time soon, so that would give slc at least 4x density advantage.
So yes, slc from 5 years ago is vastly superior to optane, which is only able to match it in one single metric. It is pretty obvious that a modern slc drive will massacre optane. And it wouldn't cost more either; it would likely be cheaper than optane due to its significant density advantage.
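For what it's worth, the arithmetic behind these points can be checked directly; every figure below is the commenter's own claim, not verified pricing:

```python
# Sanity-checking the numbers in the points above (all claimed, unverified).
p320h_2012 = 4000   # claimed 2012 launch price of the 350 GB Micron P320h
price_drop = 5      # claimed fall in SSD prices since 2012
optane_375 = 1500   # claimed price of the 375 GB enterprise Optane

print(p320h_2012 / price_drop)   # P320h price scaled to today's market
print(optane_375 * price_drop)   # Optane price back-projected to 2012
print(p320h_2012 / 350, optane_375 / 375)  # launch $/GB of each drive
```

Even granting those inputs, the comparison only shows relative price trends; it says nothing about performance, which is the separate claim being argued here.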
CajunArson - Tuesday, January 30, 2018 - link
Oh look, DDRiver has a sockpuppet account.

Fun test to run to show how "crappy" Xpoint is: Turn off the 1.5GB RAM cache that Samsung had to put onto these drives to get at least paper performance levels that are near Optane drives that don't include a single kilobyte of RAM cache. Then see how "insanely superior" ordinary NAND is compared to Xpoint in a real head-to-head comparison instead of relying on large pools of high-speed RAM that Intel has even gone out of its way to say is faster than Xpoint.
Anything else is just stupid uninformed drivel from a technically ignorant fanboy who sees "Intel" and has to hurl lies that are easily disproven by facts.
iter - Tuesday, January 30, 2018 - link
How does this address the fact that even the article acknowledges the FACT that a 5 year old slc drive is better? Oh that's right, intel fanboys do not concern themselves with facts.

Lolimaster - Tuesday, January 30, 2018 - link
Don't really think so. SLC pretty much hit its top performance by innate properties of the technology. MLC is way more crappy but modern techniques/algorithms made them look somewhat "ok". Just check when many sites were complaining years ago the moment MLC drives arrived with shitty 2k-4k cycles on 30-60GB MLC SSD's.

iter - Tuesday, January 30, 2018 - link
Nonsense, back when consumer slc was a thing, controllers weren't anywhere near to pushing the process to its peak performance. In reality it is the controllers that contribute to pretty much ALL of the performance increase in ssd drives, the actual memory medium has only gotten worse as it shrunk down, it improves density at the expense of every performance metric.

edzieba - Wednesday, January 31, 2018 - link
"it is a pretty safe bet that had someone bothered to make an slc drive in 2017, it would definitely wipe the floor with optane"

Then why is every NAND manufacturer on the planet apparently dumber than a random comment-section ranter, and failing to produce any SLC whatsoever? Surely if - as you claim - it is sooooo superior to everything else, and soooo easy to make, and costs a mere 3x/4x the cost of other NAND, then they would be raking in the cash of high performance high endurance premium drives? Intel/Micron are not the only NAND manufacturers after all, so even if the "artificially pump 3D XPoint" conspiracy is somehow correct, it doesn't explain how everyone else is refraining from eating their lunch with cheaper, better parts.
Or, just maybe, the people who make these parts know that it's not as simple as "half capacity double cost double performance!".
peevee - Tuesday, January 30, 2018 - link
You compare 2012 prices to 2018 prices?

SLC NAND chips are precisely 2 times more expensive than MLC and 3 times more than TLC.
iter - Tuesday, January 30, 2018 - link
Intel and micron were extremely lucky that the industry walked away from slc. Had this not happened, their years-long and costly investment in xpoint would be diminished significantly. Thanks to the absence of slc drives, they can hide the fact that xpoint was pretty much a waste, considering the fact they could have gotten better performance out of good old slc and saved a lot of time and money on developing a redundant technology.

Granted, xpoint improves on the few weaknesses of flash nand, but even so, the improvements are not really worth the investment. The reason the industry walked away from slc was that it was too good, even for the enterprise; they could get more than enough performance from denser and more profitable nand varieties.
For a consumer usage scenario, even the difference between a sata and a nvme drive is negligible, and for most of the prosumers too. So even more performance is entirely redundant, and intel only went for it because it had to present something to show for their investment.
It is quite clear that optane is not selling out even in the enterprise, where it actually makes some sense in a few niche workload scenarios. Because if it did sell out, intel wouldn't be marketing it 3 times cheaper on ... friggin gamers, who have exactly zero use of it. It goes without saying, it is not some unprecedented, newly found love for consumers that is forcing intel to sell xpoint at a much lower profit target, they literally don't have anything better to do with it.
Which also explains why znand is in reality mediocre mlc + improved controller + marketing nonsense. There is no need for something as fast as slc, and samsung are simply following intel's example, as quite obviously, a little hype can go a long way.

romrunning - Tuesday, January 30, 2018 - link
romrunning - Tuesday, January 30, 2018 - link
@iter: You mention consumer a lot, but the target of this Samsung drive is literally written in the article: "The SZ985 is a high-performance, high-endurance enterprise NVMe SSD." Enterprise is the target here, not consumer.
In regards to Optane in the enterprise, the problem I see is the capacity isn't there yet. It also needs more models in U.2 form factor to aggregate to higher total capacity; AIC limits you to that single slot solution. That's limiting sales to the enterprise more than any fault due to the technology itself. So if Intel makes profit selling the smaller capacity Optane drives to consumers, it's because they are just making good use of their existing product line while they continue to work on the overall capacity.
I can definitely agree that SLC would have been nice to keep around & continue to improve upon. The performance was certainly there. But the mfgs wanted more profit, and the buyers wanted more capacity. So that's why we got MLC/TLC, and sadly, probably even QLC. It does make me wonder where we would be with SLC now, though, if continual improvements were made there.
However, I definitely disagree with your statement that the "reason the industry walked away from slc was that it was too good, even for the enterprise... " Enterprise loads have always needed as much storage performance as possible, as storage is almost always the biggest bottleneck for performance. Whether it's OLTP, massive databases, huge virtual hosts - more storage performance is always needed. The consistency & low latency are also benefits of Xpoint, so that's welcome as well.
romrunning - Tuesday, January 30, 2018 - link
Now that I think about it, maybe this Samsung drive is the successor to SLC based upon continual R&D Samsung made even after transitioning to MLC/TLC & 3D NAND.

Lolimaster - Tuesday, January 30, 2018 - link
Seems even Samsung forgets about MLC and uses shitty TLC for comparison.

Why can't they simply use SLC again? Not simulated crap with MLC?
Flunk - Tuesday, January 30, 2018 - link
Because hilariously, that would cost more.

iter - Tuesday, January 30, 2018 - link
Well, once invested in the production lines for slc, it will actually be a tad more affordable than mlc per cell, as the cell design is simpler, and it would perform much better for that same reason.

The only logical reason for samsung to go for that nonsense is the fact they don't have slc production lines, and it would probably cost more money than they will make on overpriced niche market products.
Reflex - Tuesday, January 30, 2018 - link
People do not buy their storage 'per cell'. SLC has exactly half the potential capacity of MLC, and a third that of TLC, in any given implementation. While NAND is not the entire cost of a drive, it is by far the largest driver of cost, and as such there is no world where simply 'investing more' in production lines brings its prices even close to MLC or TLC. Virtually every improvement that would apply to SLC also applies to MLC and TLC, making it always the most expensive route to go.

SLC is not coming back because its gains are marginal for most use cases while its cost is always going to be substantially higher than price/performance dictates is reasonable.
Lolimaster - Tuesday, January 30, 2018 - link
Random Reads are way more important than random writes since the former is basically the operation mode of a drive.

Lolimaster - Tuesday, January 30, 2018 - link
And no, this thing can not "offer the performance" of a proper SLC, it can try to bruteforce random workloads but endurance is simply not possible by definition.

FunBunny2 - Tuesday, January 30, 2018 - link
"endurance is simply not possible by definition"

yes, I suppose, at a fixed node size, but the lure of 3D/Z NAND is that cells are made on much larger node size than current SotA. 40nm (or a bit more), IIRC, so endurance is, mostly (modulo multi-voltage effects, if any) what one might expect historically at that node size(s).
FunBunny2 - Tuesday, January 30, 2018 - link
as to SLC/MLC/TLC: back when the move was from SLC to MLC, the conventional wisdom (even here, IIRC) was that the NAND cells were the same. the chip was the same. the difference was in what voltage(s) the controller read/wrote to the cells.

are there some docs on why, today, TLC for example is physically different from SLC???
iter - Tuesday, January 30, 2018 - link
I'd say there is definitely a difference in the cells, for one thing, why would endurance drop from writing intermediate values? If anything, conventional wisdom dictates that writing voltage below the cell maximum should introduce less wear.

MamiyaOtaru - Tuesday, January 30, 2018 - link
it's not that endurance drops from writing intermediate values, it's that the difference between said intermediate values is smaller than that between "off" and "on" and the cell can sustain less wear before it becomes hard to tell if the value is meant to be 3 or 4.

MrSpadge - Wednesday, January 31, 2018 - link
Iter, you were very vocal in this thread, even to the level that someone suspected you're ddriver's new apprentice. Yet you didn't know this? If I were you I'd reconsider some of the posts you made here.

Kamgusta - Friday, October 19, 2018 - link
The problem lies with the silicon oxide insulator, which wears out with every cycle of erase/rewrite. Yes, the applied voltage has minimum and maximum values and the cell has less wear when a lower voltage is applied.

But over a cell's lifespan you randomly cycle through every voltage level. So, for every cycle of erase/rewrite you can consider the voltage applied to the cell as the mean of its maximum and minimum values.
KindOne - Tuesday, January 30, 2018 - link
Your "Source: Samsung" link is broken.