In the meantime, DiskFresh can read and write each sector once, which will temporarily fix performance issues. I used it on the first 40% of my drive and it's back to full speed. http://download.cnet.com/DiskFresh/3000-18512_4-75...
I've wondered about booting my MacBook Pro into single-user mode and doing a dd if=/dev/rdisk0 of=/dev/rdisk0 bs=1m. Seems like that should work, but it would put a lot of wear on the drive. If it's only about three weeks away, I think I might just wait for the firmware update.
There was one school of thought that speculated that the problem was fading charges in the TLC NAND, but given the complete lack of data loss, I personally find that unlikely.
Another theory was that the wear-leveling routines were moving data around in such a way that eliminated the parallelism between NAND modules, thus slowing down the drive. If this is the case, the re-arranging of data to speed things back up would be a one-time event, but would still have to occur.
But, it's all just speculation at this point, of course.
Well, that's a pretty bad theory. Even if data was moved around in such a manner, it would still be faster than 50MB/s (or even less in some cases). Toggle DDR is good for ~400MB/s, so even if data is on a single package for some reason, it still wouldn't hit performance this hard.
Putting aside controller strategies (read: ECC) that are probably at work on normally functioning drives to ensure that read operations remain reliable with wear and usage, if static data has the chance to become old under normal usage patterns, it likely means that the controller wasn't moving it around at all. Every time the SSD's wear leveling count increases by one, all NAND flash cells are supposed to have been written once on average, and this should happen at least once a month with just 4-5 GB of daily host writes on a 250GB drive, assuming a write amplification of 2, which would represent very light usage.
So it's not a matter of "aggressively moving data around", but to actually get it to move around in the first place, assuming the bug isn't affecting something else instead (like for example the metadata associated with the recorded data as it gets shuffled around).
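A quick sketch of the arithmetic above (all figures, i.e. 250GB capacity, 4-5 GB/day of host writes and a write amplification of 2, are the assumptions from this comment, not Samsung specifications):

```python
# Back-of-the-envelope: how often should the wear leveling count
# increase under the light-usage assumptions above?
capacity_gb = 250             # drive capacity
host_writes_gb_per_day = 4.5  # assumed daily host writes
write_amplification = 2       # assumed

nand_writes_per_day = host_writes_gb_per_day * write_amplification  # 9 GB/day
days_per_cycle = capacity_gb / nand_writes_per_day

print(f"~{days_per_cycle:.0f} days per average P/E cycle")  # ~28 days
```

So even under very light usage, every cell should be rewritten roughly monthly, which is why months-old untouched data is a red flag.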
Hopefully they will release a fix for the old 840 too... I have 4 of these at work and I was just about to make a big Storage Spaces pool with 2 tiers out of them. Not sure whether the bug will show up in this context, but if Storage Spaces moves some static, most-accessed files to the SSD tier, I assume it will.
I have a 840 500GB as my system and Steam drive here. Haven't noticed any slowdowns at any point during the 18 months I've had it. Benchmarks during that time always showed it operating a few tens of MB/s below peak performance for my SATA II (old P55 chipset) and now SATA III (Z87) connections.
A non-EVO 840 250GB in one of my systems is showing this issue on old static data, according to LBA scan read benchmarks. However the user who is actively using that PC didn't notice any significant slowdown.
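The kind of LBA scan read benchmark mentioned here can be approximated with a short script that times sequential reads; on affected drives, slow stretches tend to line up with old data. A rough sketch (the device path and chunk size are illustrative choices, and reading a raw device node typically requires administrator rights):

```python
import time

def scan_read_speed(path, chunk_size=8 * 1024 * 1024, max_chunks=None):
    """Read `path` sequentially and return the MB/s observed per chunk.
    On an affected 840/840 EVO, unusually slow chunks tend to correspond
    to regions holding old static data."""
    speeds = []
    with open(path, "rb", buffering=0) as f:
        while max_chunks is None or len(speeds) < max_chunks:
            start = time.perf_counter()
            data = f.read(chunk_size)
            if not data:
                break
            elapsed = time.perf_counter() - start
            speeds.append(len(data) / elapsed / 1e6)  # MB/s
    return speeds

# e.g. speeds = scan_read_speed("/dev/sda")  # hypothetical path, needs root
```

This only reads, so unlike DiskFresh or a dd rewrite it adds no write wear.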
The latest firmware by Samsung for their SSD was released on March '14 for 840 and 840 Pro models, so there's little reason to believe they won't release additional ones to fix this issue. The latest firmware for 840 EVO drives is still the original one issued on December '13.
So what does the new firmware do exactly? Simply move old data around much more often? That's going to kill TLC endurance (which is already less than 1/3rd that of everyone else's MLC drives in endurance tests).
I think Samsung have made a huge mistake with TLC if its data retention really is that bad. It's not as if the TLC drives are significantly cheaper than MLC, which apparently was supposed to be the prime advantage of TLC (lower cost)? Where I live, a 512GB Samsung Pro (high endurance MLC) is almost £240. A buggy 512GB Samsung 840 EVO (low endurance TLC) is around £150-155. But a perfectly working 512GB Crucial MX 100 (high endurance MLC) is around £145. Total no-brainer...
"I think Samsung have made a huge mistake with TLC if its data retention really is that bad." Is it, though? I haven't dived into the forum posts about this stuff, but on my main tech news site, no one said that data was lost because of this bug, only that older files were getting read very slowly. And I have only found that to be on the EVO drive, not the vanilla 840. At this point in time, it seems prudent to go with the MX100 (though not having public bugs at this stage does not mean it is bug-free, see 840 EVO :P). But drawing any other conclusions from this (TLC is bad, data retention is bad) seems premature and not based on the facts presented so far.
Not really a reply to anything I wrote. The TLC drive I bought (500GB 840) was the cheapest 500GB SSD at the time, so TLC saved me money. The performance was better than most MLC drives at the time, except ones much more expensive. And I specifically talked about the data retention aspect of TLC, as that was what the OP mentioned. So whatever "truth" you are referring to, it eludes me.
Well, others with "vanilla" 840s have indeed been complaining about the bug too in multiple forums, so it's definitely not limited to the EVO. I don't know for sure if it is TLC, but so far it only seems to affect the Samsung TLC drives (840 & 840 EVO). The MLC drives (840 Pro and the older 830 (of which I own one)) are completely unaffected. It might be the TLC itself losing charge causing heavy error correction to slow it down, or it might be a combination of TLC + the 19nm process, or it might be firmware specifics for TLC (from what I remember of Anand's review, Samsung dedicates a small portion of each TLC NAND die as an SLC write buffer, i.e. an extra layer of complexity over MLC drives): http://www.anandtech.com/show/7173/samsung-ssd-840...
Either way, if it's just a "firmware speed bug", then it's a pretty big bug to miss, but if "data loses charge far quicker than expected causing slowdown due to error correction" is an unforeseen side-effect of small-process TLC, then "move data around more" firmware is really more of a workaround than a "fix". In any case, I can't understand why the Samsung Pro MLC's are so much more expensive compared to everyone else's MLC drives?
Since rewriting the data (for example with a defrag) apparently temporarily solves this issue, everything points to 840/840 EVO-specific bugs in the wear leveling algorithms, which are already supposed to do that in the background during write operations. Every SSD, in order to minimize lifetime wear, is supposed to use all NAND cells more or less uniformly, and if TLC-equipped Samsung SSDs aren't doing that, they are actually wearing out faster, not slower as some people here are writing.
Not exactly correct. Wear leveling actually causes additional wear on the NAND because it rewrites data that has already been written. Samsung's wear leveling in the 840 and 840 EVO seems to be very passive to avoid extra NAND writes, which is why even heavy writing to the drives won't recover performance of old data as the drive doesn't seem to move static data around. At some point it definitely should, but it could very well be that the firmware is designed to wear out (~800-900 P/E cycles) the empty blocks first before it touches blocks with static data because that results in less overall wear than constant wear leveling would.
Even Wikipedia acknowledges that there are, in general, two wear leveling approaches: static and dynamic. They differ in the way you'd expect based on the names: static leveling rewrites all blocks over time, while dynamic only rewrites as needed by new writes. At one time, probably 3 or so years ago, wear leveling was generally understood to be static. When did the dynamic version come into use? That would be an AnandTech question.
Dynamic was used (and probably still is) by pretty much every cheap NAND controller in the past, because it's much simpler and works decently enough with sequential workloads (e.g. flash drives or memory cards).
Old data however also needs more ECC time for being reliably read, meaning lower performance. As user reactions show, this is clearly not acceptable in the consumer world.
To me it seems just weird that Samsung would deliberately defer static wear leveling to the point that old data has impaired read performance and possibly that dynamic wear leveling would operate on a reduced amount of NAND blocks, especially given that the possible benefits in write amplification by doing so don't seem that great compared to SSDs / SSD controllers that do not seem to employ such techniques.
I believe it's likely that this was a genuine bug that Samsung is going to fix rather than an intended behavior which the company would be attempting to cover up to save face.
BTW, I would also check for TRIM behavior after the updated firmware gets released. It could have been a side effect of this bug.
> In any case, I can't understand why the Samsung Pro MLC's are so much more expensive compared to everyone else's MLC drives?
Because they are faster. Samsung uses its best-quality MLC for its top drives and its best-quality TLC for its value line. Others just use cheaper MLC (be that of lower quality/bin, a bigger die, or a smaller lithography).
Thanks for your post! Will this affect this model of drive: Samsung 840 evo MZ-7TE120BW single unit version?? I'm pretty new to ssd's and this will be my primary ssd for the os and a few programs. I don't want to have to do a firmware fix and have it all messed up. Thx.
I don't think TLC endurance is a problem for most users. Most of us will have dumped our EVOs well before they are "worn out". Even if this fix halves the life of the EVO, it won't be much of an issue.
True for 240GB capacities or larger, but the 120GB TLC drives only barely last long enough for a modern upgrade cycle. Anything that cuts the life expectancy down for the 120GB drives will be kind of a problem.
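For a sense of scale, here is a hypothetical lifetime estimate; the ~1000 P/E cycle rating and the other figures are illustrative assumptions, not Samsung specifications:

```python
# Hypothetical endurance estimate for a 120GB TLC drive.
capacity_gb = 120
pe_cycles = 1000              # assumed TLC rating, not an official figure
write_amplification = 2       # assumed
host_writes_gb_per_day = 10   # fairly heavy consumer use

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_host_writes_gb / host_writes_gb_per_day / 365

print(f"~{total_host_writes_gb / 1000:.0f} TB of host writes, ~{years:.0f} years")
```

If a fix doubles the background shuffling, halve the result; the smaller the drive and the heavier the writes, the less margin is left.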
It's good that a date has been set for the firmware fix, but Samsung needs to answer a lot more:
1. Explain the details of the problem and how the fix works.
2. Does the new firmware affect the SSD's life? How much?
3. Is the speed fully restored, or is there going to be a certain read speed value (fixed by Samsung in the firmware) that triggers the data relocation? If so, then it will still be degradation, just not so severe.
All in all, Samsung should have done a better job with this drive.
To quote from their site: "End-to-end integration of in-house components (NAND/Controller/DRAM/Firmware)"
But this proved to be more of a weakness than an advantage.
NAND - 19nm TLC with low endurance and potentially poor data retention
Controller - overheats; they could have at least put some thermal pads on it
Firmware - buggy
If the issue is that old static data is not getting shuffled around, a new firmware correcting this issue will actually improve the SSD's life, as it will ensure that all memory cells wear out uniformly, not just the ones holding dynamic, new data.
Not necessarily. If you shuffle all the data around, this can actually increase write amplification, whereas giving static data a lower priority when doing wear leveling (i.e. doing more dynamic wear leveling than static) reduces it. Some cheap controllers (usually found in the cheapest flash drives/memory cards) don't do any static wear leveling at all.
The key is intelligent shuffling. Of course, no drive can predict the future, but if 1/2 of the drive is relatively static, and 1/4 is dynamic, with the rest unallocated, then without static wear leveling, only 1/2 of the drive will wear out. If the static wear leveling is done when the dynamic part is 1/4 through its lifetime, it adds an extra 4 writes out of 800, but allows one to get another 1600 1/4-drive writes of dynamic data. The hard part is that static data is hard to predict, as something which is thought to be static may be changed the next time Windows Update decides to patch that static file. But Windows only issues patches once a month, so at most, that data would be changed 60 times over a 5-year drive.
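A toy model of that trade-off (the 800 P/E cycles and half-static split are the figures from this comment; the relocation overhead model is a crude assumption of my own, not how any real controller accounts for it):

```python
# Toy model: dynamic data written before wear-out, with and without
# static wear leveling. All figures are illustrative.
pe_cycles = 800
static_fraction = 0.5   # half the drive holds static data

# Without static wear leveling, static blocks never move, so the other
# half of the cells must absorb every dynamic write.
writes_without = (1 - static_fraction) * pe_cycles   # in full-drive units

# With ideal static wear leveling, every cell wears evenly; relocating
# the static half costs roughly one extra program per cell per cycle
# (a crude write-amplification model).
relocation_overhead = static_fraction
writes_with = pe_cycles / (1 + relocation_overhead)

print(f"without static leveling: {writes_without:.0f} drive-writes")
print(f"with static leveling:    {writes_with:.0f} drive-writes")
```

Even with the relocation overhead charged against it, spreading wear across all cells comes out well ahead, which is the point of the comment above.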
People like you are actually making my blood boil over this.
1,2,3 - Once the firmware has been validated and released I am sure more information will be forthcoming. It would make no sense to release all of their findings to date on this issue only to find another issue during validation which may alter the information released, which is only going to annoy people further.
"All in all, Samsung should have done a better job with this drive.".
This drive has been consistently recommended from all reputable websites since it was released, well over a year ago. It has not had a widespread bug which has caused complete failure or data loss. This bug here causes an inconvenience, an inconvenience which has taken people 8-9 months to identify, so it was hardly easy for anybody to spot.
Maybe you also need reminding that since the introduction of SSDs, Samsung has had the best reliability record of all the big manufacturers to date. None of their SSDs have had a widespread bug which has caused their drives to brick. Unlike, to name a few:
Intel X25-M 8MB bug
Intel 320 series 8MB bug (same bug, two drives)
SandForce (many BSOD reports, including on some of Intel's drives)
Crucial m4 (5184 power-on hours failure)
OCZ - everything
From the 470 series which Anand never publicly reviewed but said it was always reliable (albeit slow), to today, Samsung have not had a major widespread drive failure bug. And despite the doomsday FUD coming from people like you, this is not it either.
You misunderstood. I meant the information should be revealed at the time the firmware fix is released, not now.
A silent firmware release without clearly specifying what was wrong and how the drive is affected will damage their credibility more. Customers have the right to know how the drive will behave in the future.
And yes, other manufacturers had problems too, that doesn't make it ok for Samsung.
Unless we get an official statement from Samsung, we can only guess what the problem is. But it seems people just like to draw conclusions too quickly. If the problem is data not being moved around uniformly, then a firmware fix will probably improve endurance rather than decrease it.
I think the fact that their drive is losing cell charge hasn't been established yet. So far it's just speculation. We will have to wait and see with the new firmware what changed and what was fixed, or we need an official statement.
This test suggests TLC does not have the endurance of MLC, but it also shows that all the SSDs tested, including the EVO drive, have performed very well over the long haul. No drive failed before 750 TB in writes. The story continues, with two drives still fully-functional after writing 1.5 PB.
Yeah, but it still shows the weaker points of TLC. While lots of MLC drives would happily go over ~700TB, they appear to have a hardcoded limit, so they locked up and stopped writing data. The TLC-based 840 started getting relocated sectors much sooner than the MLC drives...
It's still plenty of endurance for most users, though.
"It's still plenty of endurance for most users, though."
Unfortunately, here we have another problem. Because TLC has less endurance than MLC, people think it's junk and that they need MLC, without bothering to read up on the actual endurance levels. TLC is fine for the vast majority of people.
It's fine, that's entirely true. But paying the same amount of money for a drive with the same performance but less endurance? Yeah, I'm gonna pick an MLC drive, regardless of whether I can manage to wear it out or not. TLC drives are simply too "expensive" compared to MLC drives to actually make sense. I mean, if I can get an MX100, for example, for the same amount of money (or even less) than an Ultra II or EVO, you can bet which of those I'm gonna pick.
However, this is an industry standard and not specific to SLC, MLC or TLC. JEDEC standards state that for a consumer SSD, data retention is 12 months after being powered off. For enterprise drives it is only 3 months.
I hope that "October 15th" means they have isolated the problem, have a fix already, and it will take until then to thoroughly test it, rather than that the marketing department (or some-such) has decided that that is the date it *will* be released.
Why announce a specific date? Either it is ready or it's not. I understand saying you need about two weeks, but you cannot predict these things down to the hour unless it is already finished.
If most of the people who are complaining had never looked at the benchmarks, I wonder how many would have been crying foul?
I bought a 1TB 840 EVO msata and without looking at benchmarks never noticed the difference in speed.
One thing I noticed was heat. I have a fan over it and it runs very cool. There's a three year warranty and I back up my data regularly.
Unless it's a physical problem with the drive, just don't benchmark and wait for the firmware update. Chances are most people wouldn't even notice without some kind of program telling them that their data is moving too slowly.
It's ironic that a lot of the same (used-to-be) fanboys are now screaming that their precious drives are doomed and that Samsung had better do this or that, or the shizza will hit the fan. Most people's memories are short like microwave times, and they can't remember a week later why they were mad to begin with...
Samsung will keep making SSDs, sheeple will keep buying them based on all of the awesomeness of the drives, and this will be non-news in a short period of time...
I didn't buy my drive because of the reviews, I bought it because the price was right and I wanted more drive space.
Does anyone know if this "old age" issue is file based or block based?
I assume it is block based, so if I have, for example, a VHD file on it and regularly modify or append a bit to it (while the VM is used), the untouched blocks will still slow down over time.
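If it is block based, only the LBAs under the modified byte range get rewritten, which a small sketch makes concrete (the 512-byte sector size and the offsets are purely illustrative):

```python
def touched_sectors(offset, length, sector_size=512):
    """Return the range of sector indices rewritten when `length` bytes
    are modified starting at byte `offset` within a file."""
    first = offset // sector_size
    last = (offset + length - 1) // sector_size
    return range(first, last + 1)

# Modifying 4 KiB at a 1 GiB offset inside a large VHD rewrites only
# 8 sectors; every other block of the file stays untouched and keeps aging.
s = touched_sectors(1024**3, 4096)
print(len(s))  # 8
```

So under the block-based assumption, a mostly-idle VHD would indeed have its cold regions slow down even while the VM keeps writing to a few hot spots.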
Hi all. I ordered a new laptop last week and I had to choose between the 120GB Samsung 840 EVO and the 120GB Kingston HyperX 3K for a similar price. I chose the 840 EVO as it had better scores on several websites, but I learned about the 840 EVO issue this weekend. I still have the opportunity to change the order, but I need some advice: will the better performance of the EVO compensate for the risk of low life/performance, or should I get the Kingston straight away? Thanks
Given that choice, I'd take the Samsung drive every time. The Kingston drives are based on Sandforce controllers, which have had a whole host of data corrupting/losing bugs. They may be all fixed now, and you may never see a bug with a Kingston drive, but the firmware is inherently more complex and doesn't degrade nicely in the face of bugs or data corruption.
Kingston have also had a recent bait-and-switch scandal, where review drives are sent out and perform decently, then they switch the NAND for cheaper, lower-performance NAND (not this model AFAIK).
All in all, Samsung have a much better reputation in SSDs, even with this "end of the world" (not!) bug. If you must switch, the general consensus appears to be leaning towards the Crucial MX100, which has gained a good reputation for performance and value.
Stick with the EVO. This bug is an inconvenience that frankly, most people/end users would never even have noticed in day to day use if they hadn't read about it and/or ran drive benchmarks which really are just about irrelevant to anything other than, well, benchmarks. The whole TLC limited endurance/low life thing is a red herring for the vast, vast majority of users. If you aren't running a database server that's getting constantly hammered, you'll have long since upgraded or entirely replaced your entire system before wear becomes an issue. Samsung has a fix coming. Get the EVO and apply the new firmware once its been released after validation.
man - Sunday, September 28, 2014 - link
When the firmware update fix becomes available, will I need to reformat my drive and OS and then apply the fix? I tried doing an update a year ago with a Kingston HyperX SSD and it kept blue-screening.
I will be using this as a primary os drive or do you guys have any other options for a os ssd that won't need a fix at the moment?
It is not known yet whether the update will be destructive. However, if you are buying an SSD now and your usage is pretty average (no VMs or other IO-heavy activity), you should just go with the MX100 since it is cheaper too.
Are there any other SSDs that come in a 1TB mSATA form factor? I was about to hit the buy button on the 840 EVO mSATA 1TB but this has me worried. I suspend/resume VMs often and it makes me think that the longevity won't be very good if I'm buying for the long term.
Just to clarify, my worry is that whatever fix Samsung issues will be based on reshuffling data around a lot, hence eroding the already substandard life of the TLC setup.
Substandard is an overstatement. TLC doesn't have the endurance of MLC, but for most users -- as the vast, vast majority -- it has plenty and then some.
Yes, but I suspend/resume 3GB VMs on a daily basis. That doesn't fall within the "most users" category.
I'm not looking to restock some CEO's webbrowsing box. This is for getting work done.
Before, I could calculate whether TLC met my needs based on the total throughput expected. Now, with some stealth firmware silently wearing down the SSD in the background to prevent this bug, I can't.
And I don't get why the media are so RAH RAH RAH about TLC. I mean, put some SLC lipstick on a pig and sell it at the same price as MLC? What the? "Where's the outrage?"
It also doesn't have MLC's lower latency or power consumption. TLC is substandard in all respects except density and cost to manufacture, just as MLC is substandard in comparison with SLC in all respects except density and cost to manufacture.
I have several of these 840 EVO drives placed in new high-end laptops, including my own. I'd actually just begun the troubleshooting stage of trying to figure out why my i7/12gb RAM/SSD Thinkpad was so damn slow. I have the original benchmarks I'd run when I put this drive in originally and now, less than a year later, reads and writes at smaller block sizes were ~1/8 of what they were new.
I refreshed the disk with DiskFresh last night and it improved things, but I'm still at 1/2-2/3 of new. If Samsung's firmware update doesn't fix this I'm going to be attempting to have these returned under warranty. This is unacceptable.
Interesting that just last week when I needed a couple of new drives I chose drives from Crucial on a gut feeling...
I've had problems with Steam download speeds, and it seems totally fixed after running DiskFresh on my 840 EVO. Steam needs to verify data to update, so that's probably why it was affected by this.
I downloaded and ran the "restoration" software on my 250G 840 EVO (only a month old). It seemed to work fine, but then my SSD wasn't showing any signs of poor read performance. However, after the update, Samsung Magician completely misreports the usage on that disk -- says 150G used, where Windows Explorer reports a more reasonable 54G used.
Updated the first of my drives today. Was pretty straightforward.
It does seem to have improved the benchmarked performance somewhat, but it's not as good as the drive was after refreshing with DiskFresh just a month ago (although performance had already dropped precipitously after just one month), and it's just a fraction of what it benchmarked at when new, at least at the smaller block sizes.
I'm benchmarking with ATTO Bench32 as it's a tool I've used for a long time, so I have a history to compare against. Not impressed, Samsung.
I can confirm that after doing a FULL DISK BACKUP in "sector by sector" mode using Acronis True Image 2015 and putting in the same SSD I had previously, the PS4 unfortunately still detects that a partition change/adjustment has occurred and insists on re-initialising the PS4 from scratch, as if it's a new HDD. So you can't restore your data/saves/downloads unless you have them "up in the cloud", so to speak.
Oh well, I guess I'm formatting my PS4 for the afternoon. I do have some benchmarks of "before" the firmware update regardless, so I'll post the differences. (I'm using a Samsung 840 Evo 1TB in mine)
EDIT: Wrong. On first boot it detected the drive weirdly and wanted me to initialise the PS4 again. I put in the USB key to re-initialise (/ps4/update/ps4update.pup), but when I hit X it just shut down? Next boot - full boot of the PS4, with all apps/data etc., so the bit-for-bit copy worked.
The improvement was most noticeable on the most disk-intensive load: KZ Shadow Fall, continue campaign. It went from a 44-second load time to a 33-second load time; my save/data files were from launch, so around 11 months old. Good news I guess!
Yeah, I used the DOS version of the Samsung 840 EVO Performance Restoration ISO, as I am using an 840 EVO 1TB in my MacBook with only native Mac OS X. It took 15 minutes or so for the performance restoration + update to the firmware!!
Scunmonk, how exactly did you do it? I'm running EVOs in 2 MacBook Pros and couldn't work out how to upload the firmware to the drive, which is running OS X 10.9. How did you get it into a DOS setup to start with - I have downloaded the fixes to USB and a DVD and couldn't upload. Apologies for my lack of knowledge and thank you for any help!!!
dylan522p - Friday, September 26, 2014 - link
The firmware update is pretty much just gonna be aggressively moving data around so that it never gets old and the voltages in the cells do not drop.

boozed - Friday, September 26, 2014 - link
How sure of this are you?
chrysrobyn - Monday, September 29, 2014 - link
How about dd if=/dev/rdisk0 of=/dev/null bs=1m? DD will read all the sectors, but it won't put them anywhere, so no write wearing.

arrc - Monday, October 6, 2014 - link
What are your benchmark results? I'm also on a 840 500GB running SATA II, and I'm getting horrible benchmark results..

hojnikb - Saturday, September 27, 2014 - link
Well, TLC is inherently worse in pretty much every aspect compared to MLC, so there is some truth in that, even if the firmware proves to fix the issues.
Death666Angel - Saturday, September 27, 2014 - link
Not really a reply to anything I wrote. The TLC drive I bought (500GB 840) was the cheapest 500GB SSD at the time, so TLC saved me money. The performance was better than most MLC drives at the time, except ones much more expensive. And I specifically talked about the data retention aspect of TLC, as that was what the OP mentioned. So whatever "truth" you are referring to, it eludes me.
bsim500 - Saturday, September 27, 2014 - link
Well, others with "vanilla" 840s have indeed been complaining about the bug too in multiple forums, so it's definitely not limited to the EVO. I don't know for sure if it is TLC, but so far it only seems to affect the Samsung TLC drives (840 & 840 EVO). The MLC drives (the 840 Pro and the older 830, of which I own one) are completely unaffected. It might be the TLC itself losing charge, causing heavy error correction that slows it down; or it might be a combination of TLC + the 19nm process; or it might be firmware specifics for TLC (from what I remember of Anand's review, Samsung dedicates a small portion of each TLC NAND die as an SLC write buffer, i.e. an extra layer of complexity over MLC drives):
http://www.anandtech.com/show/7173/samsung-ssd-840...
Either way, if it's just a "firmware speed bug", then it's a pretty big bug to miss, but if "data loses charge far quicker than expected, causing slowdown due to error correction" is an unforeseen side effect of small-process TLC, then "move data around more" firmware is really more of a workaround than a "fix". In any case, I can't understand why the Samsung Pro MLCs are so much more expensive compared to everyone else's MLC drives?
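If heavy error correction on faded cells really is the cause, the arithmetic is simple: every extra read retry divides effective throughput. A purely illustrative model (the retry probability is a made-up number for illustration, not measured 840 EVO behavior):

```python
def effective_read_speed(raw_mb_s: float, success_prob: float) -> float:
    """Expected sequential read speed when each sector read succeeds only
    with probability `success_prob` per attempt and must be retried
    (e.g. re-read at a shifted voltage threshold) until it succeeds.

    Attempts per success follow a geometric distribution with mean
    1 / success_prob, so throughput is divided by that factor.
    """
    expected_attempts = 1.0 / success_prob
    return raw_mb_s / expected_attempts

# A drive capable of 500 MB/s that needs ~10 attempts per sector on old
# data would crawl at ~50 MB/s -- the ballpark users have been reporting.
print(effective_read_speed(500, 0.1))
```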
Solid State Brain - Saturday, September 27, 2014 - link
Since rewriting the data (for example with a defrag) apparently solves this issue temporarily, everything points to 840/840 EVO-specific bugs in the wear leveling algorithms, which are already supposed to do that in the background during write operations. Every SSD, in order to minimize lifetime wear, is supposed to use all NAND cells more or less uniformly, and if TLC-equipped Samsung SSDs aren't doing that, they are actually wearing out faster, not slower as some people here are writing.
Kristian Vättö - Saturday, September 27, 2014 - link
Not exactly correct. Wear leveling actually causes additional wear on the NAND because it rewrites data that has already been written. Samsung's wear leveling in the 840 and 840 EVO seems to be very passive to avoid extra NAND writes, which is why even heavy writing to the drives won't recover performance of old data, as the drive doesn't seem to move static data around. At some point it definitely should, but it could very well be that the firmware is designed to wear out (~800-900 P/E cycles) the empty blocks first before it touches blocks with static data, because that results in less overall wear than constant wear leveling would.
FunBunny2 - Saturday, September 27, 2014 - link
Even Wikipedia acknowledges that there are, in general, two wear leveling approaches: static and dynamic. They differ in the way you'd expect based on the names: static leveling does re-write all blocks over time, while dynamic only re-writes as needed by new writes. At one time, probably 3 or so years ago, wear leveling was understood to be static. When did the dynamic version come into use? That would be an AnandTech question.
hojnikb - Saturday, September 27, 2014 - link
Dynamic was used (and probably still is) by pretty much every cheap NAND controller in the past, because it's much simpler and works decently enough with sequential workloads (e.g. flash drives or memory cards).
Solid State Brain - Monday, September 29, 2014 - link
Old data, however, also needs more ECC time to be read reliably, meaning lower performance. As user reactions show, this is clearly not acceptable in the consumer world.
To me it seems just weird that Samsung would deliberately defer static wear leveling to the point that old data has impaired read performance, and possibly that dynamic wear leveling would operate on a reduced pool of NAND blocks, especially given that the possible write-amplification benefits of doing so don't seem that great compared to SSDs / SSD controllers that don't employ such techniques.
I believe it's likely that this was a genuine bug that Samsung is going to fix, rather than an intended behavior which the company would be attempting to cover up to save face.
BTW, I would also check TRIM behavior after the updated firmware gets released. It could have been a side effect of this bug.
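The static vs. dynamic wear-leveling behaviors debated above can be illustrated with a toy simulation. Everything here is hypothetical (block counts, the relocation policy, the interval); it is not Samsung's actual algorithm, just a sketch of why skipping static leveling leaves half the drive untouched while the other half soaks up all the wear:

```python
import random

def simulate_wear(blocks=100, static_blocks=50, writes=20000,
                  level_interval=0, seed=42):
    """Toy model of dynamic-only vs. static wear leveling.

    `static_blocks` blocks hold cold data and are never rewritten by the
    host; the rest absorb all dynamic writes. With `level_interval` > 0,
    every that-many writes the controller relocates the coldest static
    block's data onto the most-worn dynamic block and frees the cold
    block for dynamic use (one extra erase, but usage evens out).
    Returns (min, max) erase counts across all blocks.
    """
    rng = random.Random(seed)
    wear = [0] * blocks
    static = set(range(static_blocks))
    dynamic = [b for b in range(blocks) if b not in static]
    for i in range(1, writes + 1):
        wear[rng.choice(dynamic)] += 1          # host write lands on a dynamic block
        if level_interval and i % level_interval == 0:
            cold = min(static, key=wear.__getitem__)
            hot = max(dynamic, key=wear.__getitem__)
            wear[hot] += 1                      # static data rewritten into the hot block
            static.remove(cold); static.add(hot)
            dynamic.remove(hot); dynamic.append(cold)
    return min(wear), max(wear)

# Without static leveling, the cold half never wears at all (min stays 0)
# while the dynamic half takes every erase; with it, the spread narrows
# at the cost of a few extra erases.
```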
hojnikb - Saturday, September 27, 2014 - link
> In any case, I can't understand why the Samsung Pro MLC's are so much more expensive compared to everyone else's MLC drives?
Because they are faster. Samsung uses its best-quality MLC for its top drives and best-quality TLC for the value line.
Others just use cheaper MLC (be that of a lower quality/bin, a bigger die or a smaller lithography).
xwingman - Saturday, September 27, 2014 - link
Thanks for your post! Will this affect this model of drive: Samsung 840 EVO MZ-7TE120BW, single unit version? I'm pretty new to SSDs and this will be my primary SSD for the OS and a few programs. I don't want to have to do a firmware fix and have it all messed up. Thx.
Proffo - Sunday, September 28, 2014 - link
I don't think TLC endurance is a problem for most users. Most of us will have dumped our EVOs well before they are "worn out". Even if this fix halves the life of the EVO, it won't be much of an issue.
iLovefloss - Sunday, September 28, 2014 - link
True for 240GB capacities or larger, but the 120GB TLC drives only barely last long enough for a modern upgrade cycle. Anything that cuts the life expectancy down for the 120GB drives will be kind of a problem.
sweeper765 - Saturday, September 27, 2014 - link
It's good that a date has been set for the firmware fix, but Samsung needs to answer a lot more:
1. Explain the details of the problem and how the fix works.
2. Does the new firmware affect the SSD life? How much?
3. Is the speed fully restored, or is there going to be a certain read speed threshold (fixed by Samsung in the firmware) that triggers the data relocation? If so, there will still be degradation, just not as severe.
All in all, Samsung should have done a better job with this drive.
To quote from their site:
"End-to-end integration of in-house components (NAND/Controller/DRAM/Firmware)"
But this proved to be more of a weakness than an advantage.
NAND - 19nm TLC with low endurance and potentially poor data retention
Controller - overheats; they could have at least put some thermal pads on it
Firmware - buggy
Solid State Brain - Saturday, September 27, 2014 - link
If the issue is that old static data is not getting shuffled around, a new firmware correcting it will actually improve the SSD's life, as it will ensure that all memory cells wear out uniformly, not just the ones holding dynamic, new data.
hojnikb - Saturday, September 27, 2014 - link
Not necessarily. Shuffling all the data around can actually increase write amplification, versus giving static data a lower priority when doing wear leveling (i.e. doing more dynamic wear leveling than static). Some cheap controllers (usually found in the cheapest flash drives/memory cards) don't do any static wear leveling at all.
jhh - Monday, September 29, 2014 - link
The key is intelligent shuffling. Of course, no drive can predict the future, but if 1/2 of the drive is relatively static, 1/4 is dynamic, and the rest unallocated, then without static wear leveling only 1/2 of the drive will wear out. If the static wear leveling is done when the dynamic part is 1/4 through its lifetime, it adds an extra 4 writes out of 800, but allows one to get another 1600 quarter-drive writes of dynamic data. The hard part is that static data is hard to predict, as something thought to be static may be changed the next time Windows Update decides to patch that static file. But Windows only issues patches once a month, so at most that data would be changed 60 times over a 5-year drive.
Coup27 - Saturday, September 27, 2014 - link
People like you are actually making my blood boil over this.
1, 2, 3 - Once the firmware has been validated and released, I am sure more information will be forthcoming. It would make no sense to release all of their findings to date on this issue only to find another issue during validation which may alter the information released, which is only going to annoy people further.
"All in all, Samsung should have done a better job with this drive."
This drive has been consistently recommended by all reputable websites since it was released, well over a year ago. It has not had a widespread bug which has caused complete failure or data loss. This bug causes an inconvenience, an inconvenience which has taken people 8-9 months to identify, so it was hardly easy for anybody to spot.
Maybe you also need reminding that since the introduction of SSDs, Samsung has had the best reliability record of all the big manufacturers to date. None of their SSDs have had a widespread bug which has caused their drives to brick. Unlike, to name a few:
Intel X25-M 8MB bug
Intel 320 series 8MB bug (same bug, two drives)
Sandforce (many BSOD reports, including some Intel's)
Crucial m4 (5184 power on failure)
OCZ - everything
From the 470 series which Anand never publicly reviewed but said it was always reliable (albeit slow), to today, Samsung have not had a major widespread drive failure bug. And despite the doomsday FUD coming from people like you, this is not it either.
sweeper765 - Saturday, September 27, 2014 - link
You misunderstood. I meant the information should be revealed at the time the firmware fix is released, not now.
A silent firmware release without clearly specifying what was wrong and how the drive is affected will damage their credibility more. Customers have the right to know how the drive will behave in the future.
And yes, other manufacturers had problems too; that doesn't make it OK for Samsung.
Coup27 - Saturday, September 27, 2014 - link
"And yes, other manufacturers had problems too, that doesn't make it ok for Samsung."
I don't think they have done it on purpose.
saliti - Saturday, September 27, 2014 - link
Unless we get an official statement from Samsung, we can only guess what the problem is. But it seems people just like to draw conclusions too quickly. If the problem is data not being moved around uniformly, then a firmware fix will probably improve endurance rather than decrease it.
hojnikb - Saturday, September 27, 2014 - link
Do you really think Samsung would give an official statement that their flash was losing data retention/cell charge faster than normal?
saliti - Saturday, September 27, 2014 - link
I think the fact that their drive is losing cell charge hasn't been established yet. So far it's just speculation. We will have to wait and see with the new firmware what changed and what was fixed, or we need an official statement.
DPUser - Saturday, September 27, 2014 - link
This test suggests TLC does not have the endurance of MLC, but it also shows that all the SSDs tested, including the EVO drive, have performed very well over the long haul. No drive failed before 750 TB of writes. The story continues, with two drives still fully functional after writing 1.5 PB.
http://techreport.com/review/26523/the-ssd-enduran...
DPUser - Saturday, September 27, 2014 - link
Correction: One drive failed at 725 TB.
hojnikb - Saturday, September 27, 2014 - link
Yeah, but it still shows the weaker points of TLC. While lots of MLC drives would happily go over ~700TB, they appear to have a hardcoded limit, so they locked up and stopped writing data. The TLC-based 840 started getting relocated sectors much sooner than the MLC drives...
It's still plenty of endurance for most users though.
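For a sense of scale, the endurance being argued about works out roughly like this. The P/E rating and write amplification below are assumptions for illustration, not Samsung's published specs:

```python
def years_to_wearout(capacity_gb: float, pe_cycles: int,
                     host_gb_per_day: float, write_amp: float = 2.0) -> float:
    """Rough years until the NAND's rated P/E cycles are exhausted.

    Total NAND writes available = capacity * P/E cycles; host writes
    are multiplied by write amplification before they hit the NAND.
    """
    total_host_gb = capacity_gb * pe_cycles / write_amp
    return total_host_gb / host_gb_per_day / 365.0

# A 120GB TLC drive at an assumed 1000 P/E cycles, write amplification
# of 2, and a fairly heavy 20GB of host writes per day:
print(round(years_to_wearout(120, 1000, 20), 1))
```

Even with pessimistic numbers, a typical desktop workload takes years to exhaust a small TLC drive, which is the point being made above; the 120GB capacities are simply closer to the edge than the 250GB+ ones.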
Coup27 - Saturday, September 27, 2014 - link
"It's still plenty of endurance for most users though."
Unfortunately here we have another problem. Because TLC has less endurance than MLC, people think it's junk and that they need MLC, without bothering to read up on the actual endurance levels. TLC is fine for the vast majority of people.
hojnikb - Saturday, September 27, 2014 - link
It's fine, that's entirely true. But paying the same amount of money for a same-performing drive and getting less endurance? Yeah, I'm gonna pick an MLC drive, regardless of whether I could ever manage to wear it out or not. TLC drives are simply too "expensive" compared to MLC drives to actually make sense. I mean, if I can get an MX100, for example, for the same amount of money (or even less) than an Ultra II or EVO, you can bet which of those I'm gonna pick.
kgh00007 - Saturday, September 27, 2014 - link
Never mind write endurance, what happens if you leave the drive off for 6 months or a year? Are you going to lose data?
Coup27 - Saturday, September 27, 2014 - link
6 months, no. 12 months, possibly. Longer than 12 months, yes.
However, this is an industry standard and not specific to SLC, MLC or TLC. JEDEC standards state that for a consumer SSD, data retention is 12 months after being powered off. For enterprise it is only 3 months.
AndrewMorton - Saturday, September 27, 2014 - link
I hope that "October 15th" means they have isolated the problem, have a fix already, and it will take until then to thoroughly test it, rather than that the marketing department (or some-such) has decided that that is the date it *will* be released.
xwingman - Saturday, September 27, 2014 - link
Thanks for your post! Will this affect this model of drive: Samsung 840 EVO MZ-7TE120BW, single unit version? I'm pretty new to SSDs and this will be my primary SSD for the OS and a few programs. I don't want to have to do a firmware fix and have it all messed up. Thx.
hojnikb - Sunday, September 28, 2014 - link
Yes.
poohbear - Sunday, September 28, 2014 - link
Yesssss! Had this SSD for a month as my system drive and was bummed to find out about this problem! Looking forward to the fix!
Roland00Address - Sunday, September 28, 2014 - link
Why release it on a specific date? Either it is ready or it is not. I understand saying you need about 2 weeks, but you cannot predict these things down to the hour unless it is already finished.
Don66 - Sunday, September 28, 2014 - link
If most people who are complaining had never looked at the benchmarks, I wonder how many would have been crying foul? I bought a 1TB 840 EVO mSATA and, without looking at benchmarks, never noticed the difference in speed.
One thing I noticed was heat. I have a fan over it and it runs very cool. There's a three year warranty and I back up my data regularly.
Unless it's a physical problem with the drive, just don't benchmark and wait for the firmware update. Chances are most people wouldn't even notice without some kind of program telling them that their data is moving too slowly.
It's ironic that a lot of the same (used to be) fanboys are now screaming that their precious drives are doomed and that Samsung had better do this or that, or the shizza will hit the fan. Most people's memories are short, like microwave times, and they can't remember a week later why they were mad to begin with...
Samsung will keep making SSDs, sheeple will keep buying them based on all the awesomeness of the drives, and this will be non-news in a short period of time...
I didn't buy my drive because of the reviews, I bought it because the price was right and I wanted more drive space.
valinor89 - Sunday, September 28, 2014 - link
I did notice it being slower on startup. I simply thought that the culprit was Windows. Still much better than my old HDD.
Mugur - Monday, September 29, 2014 - link
Does anyone know if this "old age" issue is file based or block based?
I assume it is block based, so if I have, for example, a vhd file on it and regularly modify or add a bit to it (while the VM is used), the untouched blocks will still slow down over time.
ssdengr - Monday, September 29, 2014 - link
Sounds like they aren't doing read level calibration like Intel. Not surprising, since Intel patented it. http://www.google.com/patents/US8510636
Pabl8 - Monday, September 29, 2014 - link
Hi all. I ordered a new laptop last week and had to choose between the 120GB Samsung 840 EVO and the 120GB Kingston HyperX 3K for a similar price. I chose the 840 EVO as it had better scores on several websites, but I learned about the 840 EVO issue this weekend. I still have the opportunity to change the order but I need some advice: will the better performance of the EVO compensate for the risk of low life/performance, or should I get the Kingston straight away? Thanks
TheWrongChristian - Monday, September 29, 2014 - link
Given that choice, I'd take the Samsung drive every time. The Kingston drives are based on SandForce controllers, which have had a whole host of data-corrupting/losing bugs. They may all be fixed now, and you may never see a bug with a Kingston drive, but the firmware is inherently more complex and doesn't degrade nicely in the face of bugs or data corruption.
Kingston have also had a recent scandal for bait-and-switch practices, where review drives are sent out and perform decently, then they switch out the NAND in use for cheaper, lower-performance NAND (not this model, AFAIK).
All in all, Samsung have a much better reputation in SSDs, even with this "end of the world" (not!) bug. If you must switch, the general consensus appears to be leaning towards the Crucial MX100, which has gained a good performance and value reputation.
Pabl8 - Monday, September 29, 2014 - link
I have been looking into Kingston's problems with their SSDs and I think I'll stay with Samsung. Thanks for the advice!
Romberry - Tuesday, September 30, 2014 - link
Stick with the EVO. This bug is an inconvenience that, frankly, most people/end users would never even have noticed in day-to-day use if they hadn't read about it and/or run drive benchmarks, which really are just about irrelevant to anything other than, well, benchmarks. The whole TLC limited-endurance/low-life thing is a red herring for the vast, vast majority of users. If you aren't running a database server that's getting constantly hammered, you'll have long since upgraded or entirely replaced your whole system before wear becomes an issue. Samsung has a fix coming. Get the EVO and apply the new firmware once it's been released after validation.
ThisWasATriumph - Tuesday, September 30, 2014 - link
My old Kingston HyperX 3K had all sorts of issues with data corruption and read errors. Total piece of junk with zero firmware updates.
Pabl8 - Wednesday, October 1, 2014 - link
Definitely I'll go for the EVO. Thanks all!!AbRASiON - Monday, September 29, 2014 - link
Gotta open up the PS4 :(
xwingman - Monday, September 29, 2014 - link
man - Sunday, September 28, 2014 - link
When the firmware update fix becomes available, will I need to reformat my drive and OS and then apply the fix? I tried doing an update a year ago with a Kingston HyperX SSD and it kept blue-screening.
I will be using this as a primary OS drive. Or do you guys have any other options for an OS SSD that won't need a fix at the moment?
Kristian Vättö - Monday, September 29, 2014 - link
It is not known yet whether the update will be destructive. However, if you are buying an SSD now and your usage is pretty average (no VMs or other IO-heavy activity), you should just go with the MX100 since it is cheaper too.
gravothermal - Monday, September 29, 2014 - link
Are there any other SSDs that come in a 1TB mSATA form factor? I was about to hit the buy button on the 840 EVO mSATA 1TB but this has me worried. I suspend/resume VMs often and it makes me think that the longevity won't be very good if I'm buying for the long term.
gravothermal - Monday, September 29, 2014 - link
Just to clarify, my worry is that whatever fix Samsung issues will be based on reshuffling data around a lot, hence eroding the already substandard life of the TLC setup.
Romberry - Tuesday, September 30, 2014 - link
Substandard is an overstatement. TLC doesn't have the endurance of MLC, but for most users -- the vast, vast majority -- it has plenty and then some.
gravothermal - Tuesday, September 30, 2014 - link
Yes, but I suspend/resume 3GB VMs on a daily basis. That doesn't fall within the "most users" category. I'm not looking to restock some CEO's web-browsing box. This is for getting work done.
Before, I could calculate whether TLC met my needs based on the total expected throughput. Now, with some stealth firmware silently wearing down the SSD in the background to prevent this bug, I can't.
And I don't get why the media are so RAH RAH RAH about TLC. I mean, put some SLC lipstick on a pig and sell it at the same price as MLC? What the? "Where's the outrage?"
Oxford Guy - Saturday, October 4, 2014 - link
It also doesn't have MLC's lower latency or power consumption. TLC is substandard in all respects except density and cost to manufacture, just as MLC is substandard in comparison with SLC in all respects except density and cost to manufacture.zhenya00 - Tuesday, September 30, 2014 - link
Thanks for reporting on this.
I have several of these 840 EVO drives in new high-end laptops, including my own. I'd actually just begun the troubleshooting stage of trying to figure out why my i7/12GB RAM/SSD ThinkPad was so damn slow. I have the original benchmarks I ran when I first put this drive in, and now, less than a year later, reads and writes at smaller block sizes were ~1/8 of what they were when new.
I refreshed the disk with DiskFresh last night and it improved things, but I'm still at 1/2-2/3 of new. If Samsung's firmware update doesn't fix this, I'm going to attempt to have these returned under warranty. This is unacceptable.
Interesting that just last week when I needed a couple of new drives I chose drives from Crucial on a gut feeling...
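The refresh that DiskFresh performs per sector can be approximated at the file level with a few lines of Python: read each block and write it back, which makes the SSD's flash translation layer reprogram those pages with fresh charge. This is a hypothetical sketch (the function name is made up), it only touches one file rather than the whole drive, and as with any rewrite-in-place tool you should back up first:

```python
import os

def refresh_file(path: str, block_size: int = 1024 * 1024) -> None:
    """Read each block of a file and write it back in place, so the SSD
    has to rewrite the underlying pages. A file-level toy analogue of
    what DiskFresh does per sector; it does not touch filesystem
    metadata or free space."""
    with open(path, "r+b") as f:
        while True:
            pos = f.tell()
            block = f.read(block_size)
            if not block:          # end of file
                break
            f.seek(pos)            # go back and rewrite the same block
            f.write(block)
        f.flush()
        os.fsync(f.fileno())       # make sure the rewrites reach the drive
```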
DesktopMan - Wednesday, October 1, 2014 - link
I've had problems with Steam download speeds, and it seems totally fixed after running DiskFresh on my 840 EVO. Steam needs to verify data to update, so that's probably why it was affected by this.
Oxford Guy - Saturday, October 4, 2014 - link
It shouldn't affect the 840 "vanilla", I assume, because it doesn't have that fake SLC caching tech. That's most likely the source of the issue.
Per Hansson - Monday, October 6, 2014 - link
It affects the "vanilla" 840 too.
Zer0-G - Wednesday, October 15, 2014 - link
The fix is out: http://www.samsung.com/global/business/semiconduct...
gireal - Friday, October 17, 2014 - link
This utility only works for the 840 EVO. It does not work for the 840, which also has this flaw and which Samsung appears to want to ignore completely.
kgh00007 - Wednesday, October 15, 2014 - link
No sign of anything yet!
kgh00007 - Wednesday, October 15, 2014 - link
Oops, I was checking through Samsung Magician!!
marcle - Wednesday, October 15, 2014 - link
I downloaded and ran the "restoration" software on my 250GB 840 EVO (only a month old). It seemed to work fine, but then my SSD wasn't showing any signs of poor read performance. However, after the update, Samsung Magician completely misreports the usage on that disk -- it says 150GB used, where Windows Explorer reports a more reasonable 54GB used.
zhenya00 - Wednesday, October 15, 2014 - link
Updated the first of my drives today. It was pretty straightforward.
It does seem to have improved the benchmarked performance somewhat, but it's not as good as the drive was after refreshing with DiskFresh just a month ago (although performance had already dropped precipitously after just one month), and just a fraction of what it benchmarked at when new, at least at the smaller block sizes.
I'm benchmarking with ATTO Bench32, as it's a tool I've used for a long time, so I have a history to compare against. Not impressed, Samsung.
AbRASiON - Wednesday, October 22, 2014 - link
Posting this in the hope others find this post.
I can confirm that after doing a FULL DISK BACKUP in "sector by sector" mode using Acronis True Image 2015, and using the same SSD I had in previously, the PS4 unfortunately still detects that a partition change/adjustment has occurred and insists on re-initialising the PS4 from scratch, as if it's a new HDD. So you can't restore your data/saves/downloads unless you have them "up in the cloud", so to speak.
Oh well, I guess I'm formatting my PS4 for the afternoon. I do have some benchmarks of "before" the firmware update regardless, so I'll post the differences.
(I'm using a Samsung 840 Evo 1TB in mine)
AbRASiON - Wednesday, October 22, 2014 - link
EDIT:
Wrong.
First boot, it detected something weird and wanted me to initialise the PS4 again. I put in the USB key to re-initialise (/ps4/update/ps4update.pup), but when I hit X it just shut down?
Next boot: full boot of the PS4, with all apps/data etc., so the bit-for-bit copy worked.
The improvement was most noticeable on the most disk-intensive load:
KZ Shadowfall, continue campaign.
Went from a 44-second load time to 33 seconds; my save/data files were from launch, so around 11 months old.
Good news I guess!
mercutio - Wednesday, October 22, 2014 - link
When is the fix coming for the non-EVO Samsung 840s?
Mastermix Audio Media - Tuesday, October 28, 2014 - link
Mac and Linux versions posted:
http://www.samsung.com/global/business/semiconduct...
scunmmonk - Wednesday, October 29, 2014 - link
Yeah, I used the DOS version of the Samsung EVO 840 Performance Restore ISO, as I am using an EVO 840 1TB in my MacBook running only native Mac OS X. It took 15 minutes or so for the performance restore plus the firmware update!!
Guyp16 - Tuesday, November 4, 2014 - link
Scunmmonk, how exactly did you do it? I'm running EVOs in 2 MacBook Pros and couldn't work out how to upload the firmware to the drive, which is running OS X 10.9. How did you get it into a DOS setup to start with? I have downloaded the fixes to USB and a DVD and couldn't upload. Apologies for my lack of knowledge and thank you for any help!!!