Solid State Brain - Tuesday, April 22, 2014 - link
Enterprise SSDs usually have their endurance rated at 3 months of residual data retention capability, vs 1 year for consumer models. Since data retention time decreases with NAND wear, this allows manufacturers to claim, almost "for free", a higher endurance than what the P/E limit for consumer NAND memory would suggest, even though it might be the exact same memory (just with different requirements).

Most likely, the rated endurance for these drives is at a much higher number of P/E cycles than 3000.
Kristian Vättö - Tuesday, April 22, 2014 - link

"Most likely, the rated endurance for these drives is at a much higher number of P/E cycles than 3000."

I don't think that is necessarily the case. If you look at my calculations on the "Endurance Ratings" page, the combined WLF and WAF is already only 1.24x when using the raw NAND capacity to calculate the endurance at 3,000 P/E cycles. 1.24x is excellent, so I find it hard to believe that the NAND would be rated higher than 3,000 cycles, as the combined WLF and WAF would then have to be about 1.00 (which is basically impossible without compression). Also, Micron specifically said that this is a 3,000 P/E cycle part.
Solid State Brain - Tuesday, April 22, 2014 - link
As the endurance rating for enterprise drives is usually intended for a typical steady random workload (and no trim to help), the write amplification factor should be higher than the rather low value you used for your calculation. You can see that endurance figures (not just in this case, but for most other enterprise drives as well) start to make more sense when the actual P/E cycles for that usage/application are higher than their consumer counterparts.

Here's a prominent example; you could try the same calculation here. In this specification sheet for 2013 Samsung enterprise drives, which includes a model with TLC NAND, it's dead obvious that the rated P/E cycle limit of the consumer drives (unofficially, 1000 cycles) doesn't apply to them, even though for the low-end models they're most certainly using the same memory. You never really see a fixed P/E cycle limit for enterprise drives, as in the end it's the TBW figure that counts, and the shorter data retention requirement helps boost that figure even though there might actually not be any hardware difference at all.
http://www.samsung.com/global/business/semiconduct...
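One way to run that kind of sanity check is to compute the combined WLF/WAF that a given rating implies at an assumed P/E cycle count. A minimal Python sketch follows; the capacity and TBW values in the example are placeholders for illustration, not any particular drive's spec-sheet numbers.

```python
# Combined wear-leveling / write-amplification factor implied by an endurance rating:
# factor = (rated P/E cycles * raw NAND capacity) / rated total bytes written.
# The capacity and TBW below are illustrative placeholders, not a real drive's spec.

GIB = 1024**3
TIB = 1024**4

def implied_combined_factor(pe_cycles, raw_capacity_bytes, rated_tbw_bytes):
    """NAND write budget divided by the host writes the rating allows."""
    nand_write_budget = pe_cycles * raw_capacity_bytes
    return nand_write_budget / rated_tbw_bytes

# Hypothetical example: 512 GiB of raw NAND, 3,000 P/E cycles, 1,230 TiB rated endurance.
print(implied_combined_factor(3000, 512 * GIB, 1230 * TIB))  # ~1.22
```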
apudapus - Tuesday, April 22, 2014 - link
The specs you linked definitely show 1000 P/E cycles for all the NAND on all the drives, TLC and MLC. I used this formula: Total Bytes Written Allowed = NAND P/E cycles * Total Bytes Written per Day

Enterprise drives have lower data retention requirements because in the enterprise space, drives will be read and written to more frequently and will not be powered off for extended periods of time. Consumer drives, on the other hand, can have a lot of down time.
Solid State Brain - Tuesday, April 22, 2014 - link
PM843, TLC NAND rated 1000 P/E cycles on the consumer version. Let's take the 120GB model as an example.

Endurance with 100% sequential workloads: 207 TB
1000 P/E cycles (NAND life @ 1 year of data retention, on the consumer version) * 128 GiB (physical NAND capacity) = 128,000 GiB = 125 TiB. This drive doesn't make use of data compression, so with sequential workloads the best-case write amplification would be 1.0x. To reach the claimed 207 TB of total writes endurance, the NAND memory on this drive would need to endure at the very least 1000/125*207 = 1656 P/E cycles, again assuming the best-case write amplification factor. One can expect the write amplification to be at least around 1.15-1.20x under real-world scenarios, which would bring this figure to about 1900-2000 P/E cycles.
SM843, the enterprise version of the 840 Pro, with MLC NAND rated at 3000 P/E cycles. Again, let's take the 120GB version for reference.
Stated endurance with 100% sequential workloads: 1 PB
128 GiB physical capacity * 3000 P/E cycles = 375 TiB
Actual P/E cycles needed for 1 PB (taken as 1024 TiB) at 1.0x write amplification: 3000 * 1024/375 = 8192
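The same calculation in a few lines of Python, reproducing the figures above (with units treated loosely, exactly as in the arithmetic quoted from the spec sheets):

```python
# How many P/E cycles the NAND must actually survive to deliver the rated sequential
# endurance, assuming the host writes are spread evenly across the raw NAND capacity.

GIB = 1024**3
TIB = 1024**4

def cycles_needed(rated_endurance_bytes, raw_nand_bytes, write_amplification=1.0):
    """P/E cycles per cell required to absorb the rated host writes."""
    return rated_endurance_bytes * write_amplification / raw_nand_bytes

# PM843 120GB: 128 GiB raw NAND, 207 TB rated sequential endurance (treated as TiB, as above).
print(cycles_needed(207 * TIB, 128 * GIB))        # ~1656
# SM843 120GB: 128 GiB raw NAND, 1 PB (taken as 1024 TiB) rated sequential endurance.
print(cycles_needed(1024 * TIB, 128 * GIB))       # 8192
# PM843 with a more realistic ~1.18x write amplification:
print(cycles_needed(207 * TIB, 128 * GIB, 1.18))  # ~1954
```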
Kristian Vättö - Wednesday, April 23, 2014 - link
Like I said, ultimately it's impossible to figure out where exactly the endurance is coming from. It's likely that the NAND could be rated higher thanks to the looser retention requirements (3 months vs 1 year) in the enterprise space but then again, figuring out the exact P/E cycle count isn't easy because we don't know the write amplification.

Solid State Brain - Wednesday, April 23, 2014 - link
If you have spare time and still have the drives, you could try applying a standard sustained 4kB random load for an extended period of time to figure out what the write amplification for these drives is with that usage. Marvell-based SSDs usually provide, in one way or another, both NAND writes and host writes among their monitoring attributes, and with these data it's pretty straightforward to calculate it. Given the large OP area, I predict it will end up being somewhere around 2.5x.

Kristian Vättö - Wednesday, April 23, 2014 - link
I still have the drives but there are other products in the review queue. I'll see what I can do -- the process is rather simple as you outlined I've done some similar testing in the past too.

Kristian Vättö - Wednesday, April 23, 2014 - link
*and I've done similar testing in the past too. (Yes, we need an edit button.)
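For reference, a minimal sketch of the measurement proposed above: sample the drive's host-writes and NAND-writes counters before and after a sustained 4KB random workload and take the ratio of the deltas. Attribute IDs and units vary between drives, so the counter values below are purely illustrative.

```python
# Write amplification factor over a test interval:
# WAF = (NAND writes performed) / (host writes received), both measured in the same unit.
# SMART attribute IDs and units differ between drives, so these numbers are made up.

def write_amplification(host_before, host_after, nand_before, nand_after):
    """Ratio of NAND-write delta to host-write delta over the same interval."""
    host_delta = host_after - host_before
    nand_delta = nand_after - nand_before
    if host_delta <= 0:
        raise ValueError("no host writes recorded over the interval")
    return nand_delta / host_delta

# Example with illustrative counter deltas:
print(write_amplification(host_before=1_000_000, host_after=1_500_000,
                          nand_before=2_000_000, nand_after=3_250_000))  # 2.5
```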
apudapus - Wednesday, April 23, 2014 - link
OIC. My best guess is that the voltage thresholding (their ARM/OR) extends the life of the NAND.

apudapus - Tuesday, April 22, 2014 - link
I don't quite understand your statement in the first part: data retention decreases with NAND wear -> consumer drives have higher endurance.

Regarding the last sentence, SSD endurance is measured in number of writes, like TBW. NAND endurance is measured in P/E cycles. The endurance of an SSD should not be measured in P/E cycles because erasing is handled internally by the SSD (there is no "erase" command to send to an SSD, and trim does not directly yield an erase), and because write amplification (which decreases endurance) and overprovisioning (which increases endurance) must be taken into account and are not controlled by the user. Total writes is all that is needed when discussing SSD endurance. With that said, please explain your reasoning for the drive having a higher endurance than 3000 "P/E cycles".
Solid State Brain - Tuesday, April 22, 2014 - link
The more P/E cycles your NAND memory goes through, the shorter its data retention time gets. Therefore, the shorter the data retention requirement for the intended usage is, the more P/E cycles you can put your memory through (or in other words: the more data you can write). Actually it's a bit more complex than that (for example the uncorrectable bit error rate also goes up with wear), but that's pretty much it.
apudapus - Wednesday, April 23, 2014 - link
I see. So the assumption is that NAND with shorter data retention requires more refreshing (a.k.a. wasted programs). I believe this to be true for enterprise drives but I would be surprised to see this being done on consumer drives (maybe for TLC, though).

valnar - Tuesday, April 22, 2014 - link
I wish they would find a way to lower the cost of SLC. Look at those endurance numbers.

hojnikb - Tuesday, April 22, 2014 - link
Why would you want SLC anyway? If you need endurance, HE-MLC is plenty enough.

Unless you write like crazy; then buying SLC probably shouldn't pose a problem :)
valnar - Tuesday, April 22, 2014 - link
Because 20nm TLC and crap like that barely holds a "charge", so to speak, when not powered up. That's just way too volatile for my liking. I'm not always running all my PCs every day.

bji - Tuesday, April 22, 2014 - link
What difference does it make if the drive is powered up or not? These are static cells, they are not "refreshed" like DRAM. They are only refreshed when they are rewritten, and if your drive is not doing continuous writes, it's not guaranteed to rewrite any particular cell within any specific timeframe.

apudapus - Tuesday, April 22, 2014 - link
NAND has limited data retention and should be refreshed like DRAM, albeit at a much larger timescale like 1 month (TLC) to a year (I believe 54nm SLC from years ago had this spec near the end of its life, ~100,000 P/E cycles). Good SSDs should be doing this.

Kristian Vättö - Wednesday, April 23, 2014 - link
ALL consumer drives have a minimum data retention of one year, regardless of the type of NAND (SLC, MLC or TLC). This is a standard set by JEDEC. For enterprise drives it's three months.

apudapus - Wednesday, April 23, 2014 - link
That may be the requirement for drives but not for NAND. Drives can do several things to increase data retention: refresh stale data after time, provide strong ECC, do voltage thresholding, etc. I think JEDEC specifies hundreds of hours for NAND retention.

abufrejoval - Monday, April 28, 2014 - link
I see an opportunity here to clarify something that I've always wondered about: how exactly does this long-term retention work for flash?

In the old days, when you had an SSD, you weren't very likely to have it lying around after you had paid an arm and a leg for it.
These days, however, storing your most valuable data on an SSD almost seems logical, because one of my nightmares is dropping that very last backup magnetic drive, just when I'm trying to insert it after a complete loss of my primary active copy: SSD just seems so much more reliable!
And then there comes this retention figure...
So what happens when I re-insert an SSD that has been lying around for, say, 9 months with those most valuable baby pics of your grown-up children?
Does just powering it up mean all those flash cells with logical 1's in them will magically draw in charge like some sort of electron sponge?
Or will the drive have to go through a complete read-check/overwrite cycle depending on how near blocks have come to the electron depletion limit?
How would it know the time delta? How would I know it's finished the refresh and it's safe to put it away for another 9 months?
I have some older FusionIO 320GB MLC drives in the cupboard, that haven't been powered up for more than a year: Can I expect them to look blank?
P.S. Yes, you need an edit button and a resizable box for text entry!
Kristian Vättö - Tuesday, April 29, 2014 - link
The way NAND flash works is that electrons are injected into what is called a floating gate, which is insulated from the other parts of the transistor. As it is insulated, the electrons can't escape the floating gate and thus SSDs are able to hold the data. However, as the SSD is written to, the insulating layer will wear out, which decreases its ability to insulate the floating gate (i.e. make sure the electrons don't escape). That causes the decrease in data retention time.

Figuring out the exact data retention time isn't really possible. At the maximum endurance, it should be 1 year for client drives and 3 months for enterprise drives, but anything before and after is subject to several variables that the end-user doesn't have access to.
Solid State Brain - Tuesday, April 29, 2014 - link
Data retention depends mainly on NAND wear. It's highest at 0 P/E cycles (several years - I've read 10+ years even for TLC memory) and decreases with usage. By JEDEC specifications, consumer SSDs are to be considered at "end of life" when the minimum retention time drops below 1 year, and that's what you should expect when reaching the P/E "limit" (which is not actually a hard limit, just a threshold based on those JEDEC-spec requirements). For enterprise drives it's 3 months. Storage temperature will also affect retention: if you store your drives in a cool place when unpowered, their retention time will be longer. By JEDEC specifications, the 1-year figure for consumer drives is at 30C, while the 3-month figure for enterprise ones is at 40C. Tidbit: manufacturers bake NAND memory in low temperature ovens to simulate high-wear usage scenarios during tests.

To be refreshed, data has to be reprogrammed again. Just powering up an SSD is not going to reset the retention time for the existing data; it's only going to make the drive temporarily slow down.
When powered, the SSD's internal controller keeps track of when writes occurred and reprograms old blocks as needed to make sure that data retention is maintained and consistent across all data. This is part of the wear leveling process, which is usually pretty efficient at keeping block usage consistent. However, I speculate this can happen only to a certain extent/rate. A worn drive left unpowered for a long time should preferably have its data dumped somewhere and then cloned back, to be sure that all NAND blocks have been refreshed and that their retention time has been reset to what their wear status allows.
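On the temperature point above, the usual way to reason about it is an Arrhenius acceleration factor; a rough sketch follows. The 1.1 eV activation energy is a commonly assumed value for retention loss, not a figure from this discussion, and the bake temperature is hypothetical.

```python
# Arrhenius model for how much faster retention loss proceeds at an elevated temperature.
# Ea = 1.1 eV is a commonly assumed activation energy for retention, not a spec from here.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    """Relative speed-up of charge loss at t_stress_c compared to t_use_c."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# A hypothetical 85C bake vs. the JEDEC 30C client storage condition:
print(acceleration_factor(30, 85))  # on the order of several hundred times faster aging
# Why cool storage helps: retention loss at 40C runs several times faster than at 25C.
print(acceleration_factor(25, 40))
```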
hojnikb - Wednesday, April 23, 2014 - link
TLC is far from crap (well, quality TLC that is). And no, TLC does not have issues holding a "charge". JEDEC states a minimum of 1 year of data retention, so your statement is complete bullshit.

apudapus - Wednesday, April 23, 2014 - link
TLC does have issues but the issues can be mitigated. A drive made up of TLC NAND requires much stronger ECC compared to MLC and SLC.

Notmyusualid - Tuesday, April 22, 2014 - link
My SLC X25-E 64GB is still chugging along, with not so much as a hiccup.

It n e v e r slows down; it has 'felt' fast constantly, no matter what is going on.

In about that time I've had one failed OCZ 128GB disk (early Indilinx, I think), one failed Kingston V100, one failed Corsair 100GB too (model forgotten), a 160GB X25-M arrived DOA (but its replacement is still going strong in a workstation), and late last year a failed Patriot Wildfire 240GB.
The two 840 Evo 250GB disks I have (TLC) are absolute garbage. So bad I had to remove them from the RAID0, and run them individually. When you want to over-write all the free space - you'd better have some time on your hands.
SLC for the win.
Solid State Brain - Wednesday, April 23, 2014 - link
The X25-E 64 GB actually has 80 GiB of NAND memory on its PCB. Since only 64 GB (-> 59.6 GiB) of that is available to the user, about 25% of it is overprovisioning area. The drive is obviously going to excel in performance consistency (at least for its time).

On the other hand, the 840 EVO 250 GB has less OP than the previous 840 models with TLC memory, as you have to subtract 9 GiB, now used for the TurboWrite feature, from the 23.17 GiB of unavailable space (256 GiB of physically installed NAND - 250 GB -> 232.83 GiB of user space) that was previously fully used as overprovisioning area. This means that in trim-less or write-intensive environments with little or no free space, they're not going to be that great in performance consistency.
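Worked through as a quick sketch, using the capacities quoted above (the 9 GiB TurboWrite reservation is the figure given here):

```python
# Effective over-provisioning as a fraction of raw NAND, using decimal GB for user
# capacity and binary GiB for raw NAND, as in the figures above.

GIB = 1024**3
GB = 1000**3

def op_fraction(raw_nand_bytes, user_bytes, reserved_bytes=0):
    """Spare area left to the controller divided by total raw NAND."""
    spare = raw_nand_bytes - user_bytes - reserved_bytes
    return spare / raw_nand_bytes

# Intel X25-E 64GB: 80 GiB raw NAND, 64 GB user capacity -> roughly 25% OP.
print(op_fraction(80 * GIB, 64 * GB))
# Samsung 840 EVO 250GB: 256 GiB raw NAND, 250 GB user capacity, ~9 GiB for TurboWrite
# -> roughly 5-6% left as OP.
print(op_fraction(256 * GIB, 250 * GB, 9 * GIB))
```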
If you were going to use the Samsung 840 EVOs in a RAID-0 configuration, you really should have, at the very least, increased the OP area by setting up trimmed, unallocated space. So, it's not really that they are "absolute garbage" (as they obviously aren't) and it's really inherently due to the TLC memory. It's your fault in that you most likely didn't take the necessary steps to use them properly with your RAID configuration.
Solid State Brain - Wednesday, April 23, 2014 - link
I meant: *...and it's NOT really inherently due to the...
TheWrongChristian - Friday, April 25, 2014 - link
> When you want to over-write all the free space - you'd better have some time on your hands.

Why would you overwrite all the free space? Can't you TRIM the drives?

And why run them in RAID0? Can't you use them as JBOD, and combine volumes?

SLC versus TLC works out to about a factor of 4 cheaper just on a die-area basis. That's why drives are MLC and TLC based, with the extra storage being used to add extra spare area to make the drive more economical over the drive's useful life. Your SLC X25-E, on the other hand, will probably never reach its P/E limit before you discard it for a more useful, faster, bigger replacement drive. We'll probably have practical memristor-based drives before the X25-E uses all its P/E cycles.
zodiacsoulmate - Tuesday, April 22, 2014 - link
It makes me think about my OCZ Vector 256GB, which breaks every time there is a power loss, even a hard reset...

There are quite a lot of people reporting this problem online, and the Vector 256GB went refurbished-only before any other Vector drive....

I RMAed two of them, and OCZ replaced mine with a Vector 150, which seems fine now... Maybe we should add a power-loss test to SSD reviews...
Samus - Wednesday, April 23, 2014 - link
I think the price is ridiculous, nearly twice as expensive as the reliable Intel S3500 and almost as expensive as the uber-superior S3700. Makes no sense.

ZeDestructor - Wednesday, April 23, 2014 - link
Lots of lack of time in some sections...

Granted, new benchmarks, but IMO that should be split off to a separate article and the entire thing delayed for publishing until the tests are done. Otherwise, excellent reviewing as always.
okashira - Wednesday, April 23, 2014 - link
If you want a drive with good speed, low price and amazing endurance, just pick up a used Samsung 830 for cheap.

People have tested them to 25,000 cycles. That's 10+ PB for a 512GB drive, for just $300 or less. And I suspect their data retention is superior as well.
Solid State Brain - Wednesday, April 23, 2014 - link
Thing is, while older consumer drives with quality MLC NAND might appear to have an exceptional P/E rating until failure (which occurs when wear is so high that data retention gets so short and the uncorrectable bit error rate so extreme that the controller can't keep the drive in a working state anymore, not even when powered), there's no way their manufacturer will guarantee such usage.

On a related note, all consumer Samsung 840 drives (with TLC memory) I've seen pushed through stress endurance tests posted on the internet have reached at least ~3200-3500 P/E cycles until failure and didn't start showing any SMART errors before 2800-2900 cycles, which means that the approximate ~1800-2000 P/E rating (for the stated TBW endurance with sequential workloads) for Samsung's TLC-NAND datacenter/enterprise SSDs (at 3 months of data retention) makes a lot of sense. But again, there's no way Samsung will offer any guarantee for such usage with consumer or workstation drives; they will just tell you they are tested for consumer/light workloads.

Real endurance figures for NAND memory in the SSD market have to be one of the industry's best-kept secrets.
AnnonymousCoward - Friday, April 25, 2014 - link
Ever think of doing a real-world test, measuring "time"? Everyone should know synthetic benchmarks for hard drives are meaningless. Why don't you do a roundup of drives and compare program load time, file copy time, boot time, and encoding time? Am I a freakin genius to think of this?
Why does every single performance consistency graph say 4KB random write QD 32?markoshark - Sunday, April 27, 2014 - link
I'm wondering if any testing is done with a 30/70 read/write ratio - most I've seen is 70% read.

With enterprise drives, they are often rebadged and used in SANs - it would be interesting to see how they compare in write-intensive environments (VDI).