DigitalFreak - Wednesday, August 1, 2018 - link
One thing has always confused me about these benchmarks. Does performance get progressively worse as the drive fills up? For example, the ATSB - Light average latency for the drive is 48 µs empty and 330 µs full. Does that mean when the drive is 50% full the latency would be around 189 µs? Or does it run at 48 µs until the drive hits 100% full? Same for the average data rate.
Billy Tallis - Wednesday, August 1, 2018 - link
I think there's usually a threshold at which performance drops pretty rapidly because the SLC cache or spare area is no longer large enough. Unfortunately, determining the shape of the curve and where the threshold is (if there is one) is extremely time-consuming, and the tools used for the ATSB tests don't make it easy to test multiple drives in parallel.
I did run the Heavy and Light tests on this drive with it 80% full and the results were similar to the 100% full case. But manual overprovisioning like that doesn't necessarily have the same impact that re-tuning the firmware would. A typical variable-size SLC cache won't be significantly larger for an 80% full drive than for a 100% full drive.
And there's still the problem that the ATSB tests don't give the drive any long idle periods to flush the SLC cache. The SLC cache on a full drive might be large enough to handle the Heavy test reasonably well if it gets a realistic amount of idle time to flush the cache mid-test. But that would take the Heavy test from 1.5 hours to a full day.
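As a concrete illustration of what mapping that degradation curve would involve, here is a minimal sketch of a fullness sweep using fio. It is not AnandTech's methodology: the device path, fill steps, runtimes, and fio 3.x JSON field names are all assumptions, and the script is destructive to the target drive.
```python
# Sketch: sweep fill levels on a scratch NVMe device and record random-read
# latency at each step. Requires root and DESTROYS all data on DEV.
import json
import subprocess

DEV = "/dev/nvme1n1"                      # hypothetical scratch drive
FILL_STEPS = [0.25, 0.50, 0.75, 0.90, 1.00]

def run_fio(args):
    out = subprocess.run(["fio", "--output-format=json"] + args,
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

for frac in FILL_STEPS:
    pct = f"{int(frac * 100)}%"
    # Fill the drive sequentially up to the target fraction.
    run_fio(["--name=fill", f"--filename={DEV}", "--rw=write",
             "--bs=128k", "--direct=1", f"--size={pct}"])
    # Probe 4KB random-read latency over the written region for 60 seconds.
    rep = run_fio(["--name=probe", f"--filename={DEV}", "--rw=randread",
                   "--bs=4k", "--direct=1", "--iodepth=1",
                   "--time_based", "--runtime=60", f"--size={pct}"])
    mean_us = rep["jobs"][0]["read"]["clat_ns"]["mean"] / 1000  # fio 3.x JSON layout
    print(f"{pct} full: mean 4K read latency {mean_us:.1f} µs")
# A real methodology would also secure-erase between steps and allow idle
# time for the SLC cache to flush, which is part of what makes this so slow.
```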
DigitalFreak - Wednesday, August 1, 2018 - link
Understandable. With the huge performance difference between empty and full with this controller, I was just curious at what percentage full the drive's performance tanked. Based on your test we already know that 80% full is just as bad as 100%. Hopefully it's not any lower than that.
justaviking - Wednesday, August 1, 2018 - link
I had the exact same question. How full is full?
If the performance hit did not occur until 95% full or more, then it would be easily avoidable and acceptable (to me). If it happens at 30% full, it's a deal breaker. A linear degradation would also be unacceptable to me, since the degradation is so extreme.
I STRONGLY ENCOURAGE taking the time to explore the "degradation curve" relative to "fullness" for this drive, since it is so dramatic. It could make a special article of the type AnandTech excels at.
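For what it's worth, the two hypotheses in this sub-thread (a linear ramp versus a cliff near full) are easy to state numerically. The toy sketch below uses the 48 µs and 330 µs ATSB Light latencies quoted at the top of the thread; the cliff location is a placeholder, since Billy's 80%-full run only tells us any threshold sits at or below that point.
```python
# Toy comparison of the two degradation hypotheses discussed above.
EMPTY_US, FULL_US = 48.0, 330.0   # ATSB Light latencies quoted in the thread

def linear_model(fill):
    """Latency rises in proportion to how full the drive is."""
    return EMPTY_US + (FULL_US - EMPTY_US) * fill

def cliff_model(fill, threshold=0.80):
    """Latency stays near the empty-drive figure until a threshold, then
    jumps to the full-drive figure. The 0.80 value is a placeholder; Billy's
    80%-full result only shows any real threshold is at or below that."""
    return EMPTY_US if fill < threshold else FULL_US

for fill in (0.25, 0.50, 0.80, 0.95):
    print(f"{fill:.0%} full: linear {linear_model(fill):6.1f} µs, "
          f"cliff {cliff_model(fill):6.1f} µs")
```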
29a - Wednesday, August 1, 2018 - link
I agree.
jtd871 - Wednesday, August 1, 2018 - link
How long of a "long idle time" do you need? Are you talking about taking the ATSB run time from 1.5h up to 8h or 24h with sufficiently long "long idle times"?
Billy Tallis - Wednesday, August 1, 2018 - link
Currently, the ATSB tests cut all idle times down to a maximum of 25ms. I suspect that idle times on the order of seconds would be sufficient, but I don't think we even still have the original traces with the full idle times. In the near future I'll do some SYSmark runs with a mostly-full drive; that's a similar intensity of storage workload to the ATSB Light, but with fairly realistic pacing, including idle.
I'll also try to compare the power data against the performance test duration for the synthetic tests. That should reveal how long the drive took to return to idle after the writing stopped, and give us a pretty good idea of how quickly the drive can empty the SLC cache and how high of a duty cycle it can sustain for writes at full speed.
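A minimal sketch of the post-processing described here, i.e. using the power trace to see how long the drive keeps working after host writes stop. The sampling interval, idle threshold, and example trace are illustrative placeholders, not measured data.
```python
SAMPLE_PERIOD_S = 0.1      # assumed sampling interval of the power logger
IDLE_THRESHOLD_W = 0.05    # assumed power level that counts as "back to idle"

def time_to_idle(power_trace, write_end_idx):
    """Seconds between the end of host writes and the last sample where
    drive power is still above the idle threshold."""
    last_busy = write_end_idx
    for i in range(write_end_idx, len(power_trace)):
        if power_trace[i] > IDLE_THRESHOLD_W:
            last_busy = i
    return (last_busy - write_end_idx) * SAMPLE_PERIOD_S

# Illustrative trace: ~3 W while the host writes (50 samples), ~2 W while the
# drive folds its SLC cache into TLC on its own (120 samples), then ~0.03 W idle.
trace = [3.0] * 50 + [2.0] * 120 + [0.03] * 200
print(f"Drive returned to idle {time_to_idle(trace, 50):.1f} s after writes stopped.")
```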
Dark_wizzie - Wednesday, August 1, 2018 - link
A larger drive helps mitigate the issues because 1) larger drives tend to have a larger SLC cache? Or 2) there is more normal free space for the drive?
Billy Tallis - Wednesday, August 1, 2018 - link
Both, in a big way when it's 2TB, and especially when you have a variable-size SLC cache. A mostly-empty 2TB drive can have over 100GB of SLC cache, which is absolutely impossible to fill up with any real-world client workload.
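A rough back-of-the-envelope for that cache-size claim, assuming the common variable-cache scheme in which a TLC block operated in SLC mode holds one-third of its normal capacity; the fraction of free space the firmware is willing to convert is a guess, not an SM2262EN specification.
```python
def slc_cache_gb(capacity_gb, used_gb, convert_fraction=0.5):
    """Each TLC block run in SLC mode stores one bit per cell, i.e. a third of
    its TLC capacity. convert_fraction (how much free space the firmware will
    dedicate to the cache) is an assumption, not a spec."""
    free_gb = capacity_gb - used_gb
    return free_gb * convert_fraction / 3

for used_gb in (0, 500, 1500, 1900):
    print(f"2 TB drive with {used_gb} GB used: "
          f"~{slc_cache_gb(2000, used_gb):.0f} GB of SLC cache")
```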
mattrparks - Wednesday, August 1, 2018 - link
I wonder if... I think you could get similar results (stellar performance characteristics at low drive usage) by using a larger DRAM read/write cache when the drive mapping table is not taking up as much RAM. With 2GB of DDR4, let's say arbitrarily that 1.5GB of that is used by FTL page mapping tables when the drive is full. What if you found a way in firmware to manage your memory such that, when most of the drive's FTL is unmapped, you could use only 0.5GB for the mapping table and have an extra 1GB available for caching? Many of the synthetic tests could be gamed by keeping that much drive cache. I don't remember your drive testing methodology fully, but perhaps a full power cycle of the drive after the data is written, before the read tests, would make sure that all the performance is indeed SLC speed and not just enormous amounts of caching.
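These numbers line up with the usual page-level FTL rule of thumb of a 4-byte mapping entry per 4 KiB logical page, i.e. roughly 1 GiB of DRAM per 1 TiB of mapped capacity. A small sketch of that arithmetic follows; the entry and page sizes are the typical scheme, not a confirmed SM2262EN detail.
```python
ENTRY_BYTES = 4     # typical page-level FTL mapping entry size
PAGE_BYTES = 4096   # typical logical page size

def mapping_table_gib(mapped_tib):
    """DRAM needed to map `mapped_tib` TiB of flash at 4 bytes per 4 KiB page."""
    return mapped_tib * 2**40 / PAGE_BYTES * ENTRY_BYTES / 2**30

for mapped_tib in (0.5, 1.0, 2.0):
    print(f"{mapped_tib:.1f} TiB mapped -> ~{mapping_table_gib(mapped_tib):.2f} GiB of tables")
# A mostly-empty 2 TB drive maps far less data than a full one, which is
# where the idea of reclaiming DRAM for a user-data cache comes from.
```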
Billy Tallis - Wednesday, August 1, 2018 - link
The sustained I/O synthetic tests move far too much data for DRAM caching of user data to have much impact. The burst I/O tests could theoretically benefit from using DRAM as a write cache, but it doesn't look like that's the case based on these results, and I don't think Silicon Motion would really want to add such a complication to their firmware.
leexgx - Saturday, August 4, 2018 - link
I don't think any SSD has used the DRAM as a data cache (it's only used for the page mapping table). It could speed things up a little, but you're still limited by the NAND speed anyway; writing directly to NAND makes more sense.
Mikewind Dale - Thursday, August 2, 2018 - link
That drop in performance in the Heavy test, going from empty to full, was horrifying. I'd like to see some additional tests where the drive gets progressively closer to full. At what point does the drive's performance plummet? Is it gradual or sudden?
With other drives, it doesn't matter so much. Most of them have approximately (within 10-20%) the same performance when empty or full, so a person using a full drive will still get approximately the same experience no matter how much they use the drive. But the SM2262EN loses about 80%(!!!!) of its performance when full. So it would be important to know how quickly or gradually this loss occurs as the drive fills.
jjj - Thursday, August 2, 2018 - link
Any chance you are going to the Flash Memory Summit? Might be an interesting year.
Billy Tallis - Thursday, August 2, 2018 - link
Yep, we'll be at FMS next week. Tuesday is going to be a very busy day.
jjj - Thursday, August 2, 2018 - link
Great, looking forward to your reports!
Death666Angel - Thursday, August 2, 2018 - link
Considering this thing is still in a beta state, I don't think any further investigation into the full-state performance is beneficial to us consumers. But if an SM2262EN SSD hits the shelves and is actually buyable, then a look into different states of fullness and the corresponding performance will be greatly appreciated. :D Good test and SSD controller so far.
DigitalFreak - Thursday, August 2, 2018 - link
I would definitely like to see this with a retail drive.
iwod - Thursday, August 2, 2018 - link
So have we reached peak SSD? If even Optane doesn't give us any user-perceivable performance gain, then surely users would choose a larger-capacity SSD over the difference between a 3GB/s and a 2GB/s SSD.
Right now we need prices to drop faster. A 500GB PCIe SSD with 1GB/s+ speeds should be under $100.
rpg1966 - Thursday, August 2, 2018 - link
"Silicon Motion's second-generation NVMe SSD controllers have all but taken over the consumer NVMe SSD market. Drives like the HP EX920 and ADATA SX8200 currently offer great performance at prices that are far lower than what Samsung and Western Digital are charging for their flagship products."This (kind of) implies that the controller is the biggest cost element of a drive. Does anyone have a rough breakdown of parts costs for a drive like this, i.e. controller, DRAM, NAND, and the board+ancillaries?
Death666Angel - Thursday, August 2, 2018 - link
I don't read it that way, but okay. :) I don't have a definitive cost breakdown of an SSD. But my best guess is NAND is still factor #1 and goes up with capacity. #2 would be the controller or the RAM, depending on the size of the SSD, which usually correlates with the size of the RAM. But controllers can cost a few dollars or a few tens of dollars, so that is still a relevant number in the pricing of an SSD. Samsung and WD price their drives that way because they can, so far.
FunBunny2 - Friday, August 3, 2018 - link
well, here's the problem. if you're an economist, then marginal cost is the driver of price in a competitive market. whether that's true for SSD/SSDparts is murky. for the accountant/MBA types, average cost drives price, regardless of market.
now, the crunchy aspect of correlating cost to price is the production process. in the 19th century, labor was a significant component of cost and thus price. demand slackens, fire people to keep both costs more or less stable. demand increases, hire for the same effect.
in the 21st century, with SSD/SSDparts, there's virtually no labor in direct production, so marginal cost is near 0; ergo the econ types say to drop price to move more product. the accountant/MBA types recognize that most of average cost, while higher than marginal, is mostly amortization of R&D and capital equipment (all those new fabs AT has been reporting on, recall?). even they understand that the decision is the same as the econ types', a very rare event: the only way to make money is to move more product and drive down average cost. but they can only do this if demand increases. and that can only happen if 1) end-user product vendors find more ways to use the parts, and 2) people have more money to buy the end-user product.
1) is largely a substitution exercise; i.e. a zero-sum game among end-user product vendors. there's no growth in aggregate demand for end-user product, thus none for SSD/SSDparts. nobody wins.
2) is a purely macro-economic phenomenon, and thus dependent on the 'middle class' having more moolah to spend on more bling. you can see where this is going? with right-wing governments driving income concentration, aggregate demand eventually collapses. this is exactly what created the Great Recession.
end-user product vendors can't directly move 2), all they can do is encourage their governments to spread the wealth so that aggregate demand can grow, and they can sell more product. on the whole, they haven't shown the smarts to see where their bread is buttered. as labor cost diminishes, just firing bodies gains you less and less until it gains you nothing. growth in highly capitalized production economies of the 21st century doesn't work as it did in the primitive 19th.
greggm2000 - Thursday, August 2, 2018 - link
What I'd really like to see are SSD tests done on a (user-)encrypted drive. Would performance be equivalent to a fully filled drive? I imagine this would be a fairly common use case?
Billy Tallis - Thursday, August 2, 2018 - link
If the encryption layer doesn't pass TRIM through, the drive does end up looking full. Software encryption does technically leak information if it uses TRIM commands or otherwise signals to the drive what data is and isn't valid. It also imposes performance overhead from doing the encryption on the CPU. There aren't many reasons to justify using software full-drive encryption on an SSD when self-encrypting SSDs are so common (Samsung, Crucial MX, etc.).
Icehawk - Saturday, August 4, 2018 - link
Is Opal affected by this? What performance cost is there? We've got a whopping one laptop at work with it enabled, but I'd like to push us in a more secure direction. Would probably help our PCI score too.
Chaser - Sunday, August 5, 2018 - link
I wish someone would build a review site that includes SSDs and writes reviews from an average PC gamer's performance perspective. I myself have tested the Evo 860, the 970 EVO, the Optane 900, the XPG SX8200, and the Patriot Hellfire. As many revealing YouTube videos comparing these drives have shown, most often the Evo 860 is either faster at loading a game, the same, or very slightly slower. While I understand that AnandTech has readers who are looking at higher usage scenarios, I'd venture to say MOST of their readers are in the former category.
As it stands today with most similar sites, we see chart after chart of benchmarks on multiple pages. We read about accolades for random and sequential performance. Some sites rank the drives from 1-10. But in the end, the user experience differences prove to be negligible for most users, and a simple article like that would probably entice site visitors to read through the hair-splitting benchmarks.
KAlmquist - Sunday, August 5, 2018 - link
I'll repeat something Billy Tallis stated in a comment, which should probably be incorporated into the text of the review: “I did run the Heavy and Light tests on this drive with it 80% full and the results were similar to the 100% full case.”
When I partition an SSD, I've always left a bit of space unused in order to effectively increase the spare area to 20% or so. That improved performance consistency with older SSD designs. With the SM2262EN, it might still reduce write amplification, but not enough to substantially affect performance.
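The arithmetic behind that kind of manual overprovisioning is straightforward. The sketch below uses the common definition OP = (raw NAND − exposed capacity) / exposed capacity and assumes the typical ~7% factory spare area that comes from the GiB-vs-GB gap; the actual raw capacities of SM2262EN drives aren't confirmed here.
```python
RAW_GB = 512 * 2**30 / 1e9   # 512 GiB of raw NAND expressed in decimal GB (~549.8 GB)
USER_GB = 512.0              # advertised capacity of a hypothetical 512 GB drive

def effective_op(partitioned_fraction):
    """Spare area relative to the capacity actually exposed to the OS."""
    exposed_gb = USER_GB * partitioned_fraction
    return (RAW_GB - exposed_gb) / exposed_gb

for frac in (1.0, 0.9, 0.8):
    print(f"Partition {frac:.0%} of the drive -> ~{effective_op(frac):.0%} effective OP")
# Note: the unpartitioned range only works as extra spare area if it has been
# trimmed or never written since the last secure erase.
```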
kensiko - Wednesday, January 9, 2019 - link
I'm hesitating between the ADATA XPG SX8200 (SM2262) and the Pro one (SM2262EN); there's a CAD$50 difference. Any opinion?