86 Comments
p1esk - Wednesday, May 13, 2015 - link
Are you saying my Intel X25-M SSD that has been sitting on a shelf in my apartment for the last couple of years has probably lost all the data? Why did no one say anything about this back in 2006? Data retention measured in weeks? This is just crazy!
davidedney123 - Wednesday, May 13, 2015 - link
Did you actually read the article you fool?
p1esk - Wednesday, May 13, 2015 - link
Yes, I did! According to the article, my drive has probably lost all its data by now.
Kristian Vättö - Wednesday, May 13, 2015 - link
"Remember that the figures presented here are for a drive that has already passed its endurance rating, so for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs."I.e. it depends on how much you abused the drive while it was in use. Then again, if you haven't touched it in years, the data in it is probably not very valuable...
Shadowmaster625 - Wednesday, May 13, 2015 - link
clearly you did not read the article, because the article clearly states "the figures presented here are for a drive that has already passed its endurance rating" ie the maximum data has been written.
Endda - Wednesday, May 13, 2015 - link
And you don't know whether or not his drive has passed the endurance rating.
mkozakewich - Wednesday, May 13, 2015 - link
If he's complaining about an SSD that's been on a shelf for ten years, we can be reasonably sure it has not.
close - Friday, May 15, 2015 - link
Don't try solving a problem before you actually read it and understand the facts: it's a 6-7 year old model that's been sitting on a shelf for 2 years. Assuming 4-5 years of operating time it might as well be way over the endurance rating. So 104 weeks of storage at 25 degrees (room temp) after working at 40 degrees (relatively normal working temp, especially if used in a laptop) is exactly the "expiry" date on his data. Also, sitting on a shelf might as well be in direct sunlight, which would halve that to 1 year.
No matter what, the man is right: it was the manufacturer's responsibility to take these into account and properly inform a buyer BEFORE the product is sold. Having your data disappear after 2 years when proper working and storage conditions are met isn't really boosting confidence in future products.
SuperVeloce - Tuesday, May 19, 2015 - link
I highly doubt it! Typical users will write about 2-3TB of data a year. Even if he used it for 4-5 years, that is not a lot of data written. And X25-M chips are on a 50nm process! That alone can tell you something about longer data retention.
santadog - Tuesday, May 19, 2015 - link
2-3TB a year?!? I do a weekly backup of around 15TB... I guess I'm fucked :P
SuperVeloce - Tuesday, May 19, 2015 - link
yeah, you could burn out your flash in a year or two (goes into read state). But it also means you have it under power more or less all the time and data retention is most likely not a problem for you.
WinterCharm - Thursday, May 21, 2015 - link
You'd have to fill up each drive multiple times... For example, in about a year, I've done 6.5 TB on my SSD, out of the maximum 1000 TB my drive can handle.
ProDigit - Friday, August 28, 2015 - link
You're right, the older SSDs also had longer data retention, because the die was much larger.
jospoortvliet - Sunday, March 13, 2016 - link
Still, if it hasn't been on power for years, I would guess a lot of its data is damaged or just plain gone. If you'd power it up once every 6 months the controller would be able to refresh it, but no power at all... Not good.
DCide - Wednesday, May 13, 2015 - link
Your SSD is probably fine, but some of those old CDs and DVDs you burned aren't.
Margalus - Wednesday, May 13, 2015 - link
cd's I burned almost 20 years ago are still fine...
piiman - Saturday, May 16, 2015 - link
He said "some"
melgross - Wednesday, May 13, 2015 - link
Nonsense. I've recorded both when they first came out, and they're fine. I'm willing to bet that the info about current technology ssd's will be modified though. Not upward either.
DCide - Thursday, May 14, 2015 - link
Well, some of mine didn't read - even just 5 years later.
There are many factors involved, so just because yours are OK doesn't mean everybody's are. Certain drives and disks (especially) don't seem to create archives with the "100 year shelf life" many of us imagined they should have.
Samus - Thursday, May 14, 2015 - link
Optical storage has the longest shelf-life. I read a study in the 90's showing Taiyo Yuden CD's lasting a simulated 70 years.
Most archive-grade DVD's and Blurays are all at least 50 years ONCE BURNED; unwritten shelf-life is much shorter since the laser burning process helps seal the media from UV and corrosion.
However, I suspect throwing a hard disk on a shelf for 20 years would yield pretty good data retention.
PrinceGaz - Friday, May 15, 2015 - link
I suspect throwing a hard disk on a shelf for any length of time would be quite detrimental to it :p
piiman - Saturday, May 16, 2015 - link
Read it again
parlinone - Saturday, May 16, 2015 - link
retention rate is also exponential to usage, so for a fresh drive used under normal circumstances retention is 1000+ years. I am surprised this wasn't mentioned in the article.
It's when a heavily used disk (10 yrs of near continuous writing IIUC) is stored at higher temperatures than when in use that you get into single-weeks territory.
CoreLogicCom - Wednesday, May 13, 2015 - link
Does anyone find it coincidental that a subcommittee headed by someone from a mechanical hard drive company, with a report written by someone who also works for a mechanical hard drive company, would come out with a report that causes panic among SSD adopters just about the time large, cheaper SSDs are poised to relegate most mechanical hard drives to second-class status?
e36Jeff - Wednesday, May 13, 2015 - link
If you actually read the report, it is only going to cause panic among those that lack reading comprehension. It basically says if you have exceeded the write endurance of the drive, it may only retain data for a year or so. The vast majority of drives are nowhere near their write endurance limits and will have no issues retaining data.
Kristian Vättö - Wednesday, May 13, 2015 - link
Seagate makes SSDs and owns SandForce, one of the largest SSD controller suppliers. HGST also makes SSDs and has been making a big push into the market in recent years. Also, the presentation is several years old; I've used the data in it in several articles -- someone just decided to dig it up for a story now. Lastly, the actual retention data was submitted by Intel, so I find it very unlikely to be biased/modified.
DCide - Wednesday, May 13, 2015 - link
Then if this was something nefarious, the seeds were planted a few years ago. This is how it works sometimes - it takes a while, but the seeds eventually sprout (whether for good or evil).
I hope this wasn't the case here. Possibly someone, somewhere along the way wanted this to happen. Everyone else unwittingly contributed.
mkozakewich - Wednesday, May 13, 2015 - link
But don't you agree it's an awful co-incidence that "Alvin Cox" rhymes with "cardboard box" and Frank Chu's HGST uses cardboard boxes to ship its products? This is obviously some kind of conspiracy, and not just outdated information being misread today on clickbait articles.
Alexvrb - Wednesday, May 13, 2015 - link
They're on to you! Lock your doors and stash your SSDs with the mehvidence in the freezer.
I for one will be moving my SSDs behind my PC so the hot exhaust keeps them warm during operation, and I'll keep them cool with a nearby portable A/C unit when not in use.
DCide - Wednesday, May 13, 2015 - link
I wouldn't rule out the possibility that there's a correlation here. But it's rather hard to know that this was planned - especially without being close to the situation.
This is why self-serving political leaders can get away with so much. They don't actually make disasters happen (in most cases), but they know something's bound to happen within the next couple of years, and when it does they spring into action to take full advantage of it - almost as if they'd planned it.
So yes, it's possible he created the presentation knowing there was a good chance the press would pick up on this and report on it with "panic" headlines. And that even after the situation was explained accurately, it would have created enough lingering doubt (and, most importantly, a negative emotion still attached to the memory) to create a small shift back to more hard drive sales.
But I have no idea of their motives. With a politician, if you're paying attention (and don't actually take perverse comfort in being deceived) you can usually discern the motives after a little while. But these men I don't know at all, so I have no idea whether they wanted this to happen.
wiz329 - Thursday, May 14, 2015 - link
wiz329 - Thursday, May 14, 2015 - link
I think you give politicians far too much credit and make them out to be far more intelligent than they actually are.
TheRealPD - Wednesday, May 13, 2015 - link
They didn't - I am 99.999% certain that the presentation slides are from October 2010.
haukionkannel - Wednesday, May 13, 2015 - link
So an external SSD is still a good idea for random backups? Even if you don't give it electricity more than a couple of times during the year...
DCide - Wednesday, May 13, 2015 - link
For sure, since you're not even writing 2TB/year to it, and its life is e.g. 100TB.
It's just a matter of whether you prefer to use a SSD or HDD, really.
Gigaplex - Wednesday, May 13, 2015 - link
I don't think simply providing electricity to the drive is sufficient for data retention. If the drive controller doesn't scrub old data to refresh it, you'll still run into issues.
Spirall - Wednesday, May 13, 2015 - link
So we'd better install our SSD(s) in the hottest (when running) place in the case (usually near the outlet air flow), opposite to the HDDs.
DCide - Wednesday, May 13, 2015 - link
Yes - don't you dare use them in a cool running, energy efficient computer.
Now I have to throw out my Mac Mini and replace it with a gaming tower.
joex4444 - Friday, May 22, 2015 - link
Yes, ideally you'd run the SSD in a hot environment and if you need to pull it out and store it for a while, you'd put it in the refrigerator.
TheRealPD - Wednesday, May 13, 2015 - link
Why the devil has this suddenly become an issue??? It was all common knowledge back in 2010/11, with the standard being announced in September 2010 from recollection...
...indeed, you've got some sites claiming that the presentation was a couple of months ago, whereas, ttbomk, the slide show that's being linked to is from October 2010 - with it having been presented on the 5th of October 2010 by JEDEC in San Jose.
instead, there was an article on KoreLogic (https://blog.korelogic.com/blog/2015/03/24#ssds-ev... which was solely talking about it in the context of digital evidence retention for court cases.
There's just some really lazy reporting on the part of other sites; trying to create a panic about something that's neither been hidden nor is what they're claiming it to be... Yeah, we all keep our drives in direct sunlight & saunas & whatnot, don't we.
Gigaplex - Wednesday, May 13, 2015 - link
Gigaplex - Wednesday, May 13, 2015 - link
Because people are stupid and don't pay attention to warnings before they trust their data with it, and then panic after the fact.
triarius - Wednesday, May 13, 2015 - link
So the one week figure comes from the case where what is effectively the room temperature is 55C and the running temperature is 25C. Someone must have a pretty powerful case cooling system to achieve that.
mkozakewich - Wednesday, May 13, 2015 - link
I know, I lol'd at the chart. Those areas are greyed out for a reason, but it's interesting they included those instead of stripping them out.
zmeul - Wednesday, May 13, 2015 - link
quote: "In a worst case scenario where the active temperature is only 25-30°C"
you obviously don't do summers
---
let's say you have an SSD with data and you need to ship it internationally, across the ocean(s) via plane/boat
will the data be readable and intact once it reaches its destination? Knowing what we know, and depending on how long the item gets stuck in customs in improper storage, data integrity will be affected
Murloc - Wednesday, May 13, 2015 - link
that's a stupid way to send data.
Still, airplane holds are cold and planes are fast so there is no issue.
zmeul - Wednesday, May 13, 2015 - link
except when your item gets stuck in customs for 1 month?!
mkozakewich - Wednesday, May 13, 2015 - link
That's only four weeks. If you're shipping an SSD, you probably shouldn't have written 100 TB of data to it.
melgross - Wednesday, May 13, 2015 - link
Tell that to Microsoft. In order to upload all of your data to Azure, if you can't do it online because of the amount of data and connection reliability, they tell you to send your drives to them, and they will do it, returning your drives afterwards.
Hopefully these drives will be sent by a very fast carrier, but many people in IT aren't the brightest. Saving money can be more important to them. Knowing this might help them make the right decision.
mkozakewich - Wednesday, May 13, 2015 - link
To be truthful, the only time it would matter is if these businesses are so focused on saving pennies that they're using SSDs past their endurance limits, in which case they very well might retain data only a week. They might want to ship them on a Monday with three-day service.
Gigaplex - Wednesday, May 13, 2015 - link
If you're saving money, you'd use an HDD, not an SSD.
TheRealPD - Wednesday, May 13, 2015 - link
I did this back in 2011, paying for an SSD in the US so that I could have some data shipped across to the UK (& save a bit of money on the drive) - & all of the data was intact...
Well, whilst temps 'could' be higher than normal room temps for a period (though they're more likely to be lower for much of the time), are you seriously suggesting that they'd store everything at stupid temps???
indeed, I had a couple of bars of chocolate thrown in as freebies with some CDs from Australia a couple of years ago & they turned up in exactly the same shape as manufactured - & the melting point of chocolate's typically 30-32C.
melgross - Wednesday, May 13, 2015 - link
I hate to say this, but yes! I've found that companies can do the darnedest things, to paraphrase Art Linkletter. I've worked with music companies trying to restore old master tapes from warehouses without heating, air conditioning, or leak-free roofs.
I've been called in by companies whose backup tape facilities, where they store older but valuable data, were flooded because they built them in a depressed area where runoff collected.
I could give you stories that would chill you to the bones.
TheRealPD - Thursday, May 14, 2015 - link
TheRealPD - Thursday, May 14, 2015 - link
i was commenting on customs warehouses - where one would imagine both that 'if' there was any risk of flooding then it would have been discovered years ago, & that they're manned (effectively) 24/7 by staff processing packages, so the temps & humidity & whatnot need to be reasonable (both for the workers & so they're not inundated with compensation claims for damaged goods)... ...& planes/boats/whatever, since the OP was talking about shipping a SSD with data on internationally.
Now obviously there can still be calamities, but if, say, the customs warehouse burnt down or was hit by a tidal wave or whatever, or a ship sank or was captured by pirates or ... ...destroying or claiming as booty all of the contents or cargo, this isn't a limitation of SSDs & their ability to retain data in transit, but instead an unpredictable event that couldn't be planned for.
mkozakewich - Wednesday, May 13, 2015 - link
Anyone who's seriously asking those questions doesn't know what they're talking about, so I wouldn't worry. A drive with only a few TB written to it would last a year with no problem at highish temperatures.
Kevin G - Wednesday, May 13, 2015 - link
For the curious, Tech Report did a data retention test as part of their SSD endurance experiment. The test was performed at 300 TB, well beyond their endurance ratings.
http://techreport.com/review/25681/the-ssd-enduran...
Spoiler: no data loss after a week.
SunLord - Wednesday, May 13, 2015 - link
SunLord - Wednesday, May 13, 2015 - link
Facts don't matter, only drama and clickbait articles
sheh - Wednesday, May 13, 2015 - link
But they didn't test long term retention.
Kevin G - Wednesday, May 13, 2015 - link
True, but they had written enough data to have run into any potential loss due to 300 TB being written at the time.
mkozakewich - Wednesday, May 13, 2015 - link
Well, duh. The 100TB limit is only to be safe, and then the 1 week rating (if the drive had been run at low temperatures for that 300 TB) is only the lowest possible period of retention to hold to the standard.
It's like people proclaiming that your milk will go bad after one week in the fridge. Like, sure, it can. It could last two weeks after that, too. The due date catches 99.95% of all cases, but that's all it's designed for.
sheh - Wednesday, May 13, 2015 - link
"for new drives the data retention is considerably higher, typically over ten years for MLC NAND based SSDs."Where's this 10 year figure from? I recall reading something similar in a Dell document from 2010 or so, but basic flash reliability is supposed to be worse nowadays with the smaller manufacturing processes.
melgross - Wednesday, May 13, 2015 - link
Accelerated aging testing. It's a well understood methodology.
sheh - Thursday, May 14, 2015 - link
Run by whom on what type of flash or drive?
toyotabedzrock - Wednesday, May 13, 2015 - link
It would be nice to know what the data retention is on partly worn drives, or drives using TLC and MLC at smaller node sizes. Also testing of 3D NAND. After the issues Samsung had, this all seems more important.
Also, do these standards mean drives will be readable when their NAND is worn out now? Most drives seem to suicide instead of going into read-only mode.
toyotabedzrock - Wednesday, May 13, 2015 - link
Also, the 8-hour-a-day on-time assumption needs to end. Consumers leave things on 24/7, and while it is not in constant use, most people leave things on.
mkozakewich - Wednesday, May 13, 2015 - link
But it's not being written to constantly in that case. Busy time is different from idle time. Unless they're constantly defragging, but then there's a problem there anyway.
JonnyTurtle - Wednesday, May 13, 2015 - link
Great article. Thank you!
Gc - Wednesday, May 13, 2015 - link
"For active use the temperature has the opposite effect. Because higher temperature makes the silicon more conductive, the flow of current is higher during program/erase operation ...."If the benefit is only for active erased/written cells, then all the other unwritten cells are inactive?
If so, then overall a hot computer environment is not beneficial unless the entire drive is rewritten periodically. So even a read-only drive will eventually wear out.
Do SSDs monitor temperature so they can tell how often they need to rewrite the inactive cells?
Too bad the Intel table doesn't go lower, it would be interesting to see how much data retention improves with cold temperature storage. When an SSD becomes obsolete, it might be convenient to use it for archival storage, in the freezer if necessary.
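For anyone wanting to eyeball how the chart's pattern might extend into cold storage, here is a rough extrapolation in Python. It assumes the roughly retention-doubles-per-5°C-drop trend visible in the Intel/JEDEC chart simply continues outside the plotted range (nothing in the spec guarantees that), uses the 104-week / 25°C power-off figure quoted earlier in the thread as the reference point, and ignores the active-temperature axis entirely.

# Illustrative extrapolation only; the reference point and the 2x-per-5C rule
# are read off the chart discussed above, and values outside it are guesses.
def retention_weeks(storage_temp_c, ref_weeks=104.0, ref_temp_c=25.0):
    return ref_weeks * 2 ** ((ref_temp_c - storage_temp_c) / 5.0)

for t in (55, 40, 30, 25, 10, 4, -18):  # 4 C ~ fridge, -18 C ~ freezer
    print(f"{t:>4} C storage: ~{retention_weeks(t):10.1f} weeks")

On that (unverified) trend, fridge or freezer storage would stretch retention enormously, but condensation and the lack of real test data at those temperatures are their own problems.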
mkozakewich - Wednesday, May 13, 2015 - link
It's basically impossible that you'll go 100 TB of writes without using and reusing every available block in the chips. The controller is always collecting garbage and clearing things out. Wear-levelling algorithms try to keep everything worn down similarly.
Also, once written, the data will remain in those cells for only a limited time, regardless of power (unless some drives scan for weak cells and refresh them). You're kind of relying on moving data around in order to avoid bit-rot over the course of ten years. If you remember where all your data was in 2005, though, it's likely you've moved it all several times.
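To put that first point in numbers, a back-of-the-envelope sketch of how evenly spread writes translate into P/E cycles per block. The capacity, cycle rating, and write-amplification figures are illustrative assumptions, not the specs of any particular drive.

# Rough endurance budget under ideal wear levelling; all figures are
# illustrative assumptions, not the ratings of a specific drive.
capacity_gb = 250          # drive capacity
pe_cycles = 3000           # assumed P/E cycle rating of the NAND
write_amp = 2.0            # assumed controller write amplification

endurance_tb = capacity_gb * pe_cycles / 1000 / write_amp
cycles_after_100tb = 100 * 1000 * write_amp / capacity_gb

print(f"Rough host-write budget: ~{endurance_tb:.0f} TB")
print(f"100 TB of host writes  : ~{cycles_after_100tb:.0f} of {pe_cycles} P/E cycles per block")

Under these assumptions, 100 TB of host writes uses well under a third of the cycle budget, which is why a drive at that point is nowhere near the worn-out state the retention spec describes.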
CountDown_0 - Wednesday, May 13, 2015 - link
Great article, Kristian. Thanks for clearing this up!
zap117 - Wednesday, May 13, 2015 - link
I'm a little bit confused by these results. Traditional floating gate structures use doped polysilicon (a conductor) as the storage layer. So I would expect the conductivity to go down as you increased the temperature (conductivity of conductors decreases with increasing temperature).
Also, it's worth pointing out that the underlying technology is not the same from one SSD to another. For instance, Samsung's 850 series SSDs use Charge Trap Flash, which replaces the polysi storage layer with silicon nitride, which is an insulator. So because it is an insulator, I would not expect the stored electrons to move around much, even if there was a short between the charge trapping layer and the channel. So these results may or may not be relevant at all depending on which particular SSD you own.
Kristian Vättö - Thursday, May 14, 2015 - link
You are correct that the conductivity of the floating gate actually decreases with temperature due to increased resistance. However, my understanding is that the conductivity of the silicon dioxide (i.e. tunnel oxide) increases with temperature, which results in increased charge trapping in the oxide that alters the cell voltage. The topic is certainly more complex than how it reads in the article.
pseudoid - Tuesday, May 26, 2015 - link
eh? I thought the resistance reduced as the temperature got lower.. to the point where you get near absolute zero [Look Ma, I am on Helium!] and poof [or qwaaaak!] resistance is down to nada, nothing, nunca, niente, zero, zip, zilch!
ALBundyHere - Thursday, May 14, 2015 - link
I'm very confused by the wording in this article. "the figures presented here are for a drive that has already passed its endurance rating" - does that mean that the drive is out of its warranty period, ran out of the last unallocated sector flash blocks, or the NAND's write tolerance had been exceeded, or something else? If the NAND's write tolerance had been exceeded, then this article makes no sense. The drive, as proven by the Tech Report through their SSD endurance experiment, would simply just stop working at once, so no data would even be readable, and in some cases the drive would brick itself when the power is cycled, thus not even showing up in the system POST.
sheh - Thursday, May 14, 2015 - link
I read it as: past the manufacturer specced TBW.
Kristian Vättö - Thursday, May 14, 2015 - link
That is correct, it's the TBW that counts.
HMK - Thursday, May 14, 2015 - link
What I find most interesting is that data retention in some cases is effectively halved by a temperature change of just 5 Celsius, and this occurs in any shown temperature area on the chart.
HighTech4US - Thursday, May 14, 2015 - link
Bitrot and complete data loss on SSDs have been known to be an issue for a long time, and the newer SSDs are making the problem worse because of the thinning layers.
If possible, keep the SSD powered on at all times so the SSD firmware can do background operations to keep the data valid. So on Windows, disable the "power off hard drives" option in the power profile and let the computer Sleep instead of Hibernate.
If you have a laptop with an SSD and it is mostly powered off, then at least once a month power it on and do a full backup, to both refresh the SSD cells and have a backup in case the SSD bites the big one.
HighTech4US - Friday, May 15, 2015 - link
A to Z of SSD: Data Retention
http://www.virtium.com/blog/z-ssd-data-retention
emvonline - Thursday, May 14, 2015 - link
Good analysis, Kristian. The current spin from this old presentation is almost mind-boggling in its ignorance. A classic example of "a little knowledge is a dangerous thing".
NAND physics does state that retention will have a lifetime.
A couple of items:
1) This is a spec that sets a limit on how to measure BER. It does not mean that the devices will fail anywhere near here... they don't. Before this spec, the limit was zero for endurance... max out the writes and all bets are off.
2) As mentioned, it is the retention after maxing out write lifetime. If your SSD is maxed out in write lifetime, which is rare (nearly unheard of for client drives), it is probably a good idea to back it up... 100% of enterprise people would do this.
3) The temp coefficient is what is used for models... it does not necessarily apply to real life or all ranges of temperature.
That said, if you are using an SSD that is maxed out to store data for 3-4 years without being powered on and without a backup, you probably should not be managing your own computer (or you like to throw money away).
galta - Thursday, May 21, 2015 - link
I have a 256GB SSD and 2TB of photos. Because I am not sure which photos I should keep for fast access, I have written/rewritten 95TB on the SSD, but still have not made up my mind.
Also, because I happen to live in a scientific base in the Arctic, working temperature is around 25C but, after work, I go to sleep with my notebook and the temperature under the blanket gets close to 35C.
In case my base is destroyed by some winter storm and my remains are only found 10 years from now, will my kids be able to retrieve the beautiful pictures I have taken of whales and polar bears?
Does it mean I should go back to HDDs, or better, 3 1/2" floppies? I still have thousands of new boxes of them, both 1.44MB and 720K!!!
Too bad I did not read this article before.
The comments here made me laugh like I haven't in quite a while.
Bachsau - Sunday, May 31, 2015 - link
Well, temperatures of 40°C and more for several days are not unusual in hot summers and just normal in some of the world's warmer countries. We should not even accept data retention times anywhere below five years at < 50°C. A mass storage device must hold its data. That is much more important than speed or anything else!
Those parts of the story are well understood. What I'd like to know is what you have to do to keep the drives refreshed: Do I have to actively overwrite the data? Is the drive somehow capable of recognizing that it should do an internal check and refresh cycle?
The most valuable data I store is typically photographs and videos from the family, and of course all kinds of documents and financial records and scans. Most of that stuff is meant to last 10 years minimum; photos etc. are for future generations to decide what to do with.
Basically I'm planning for a technology refresh every five years or so, but otherwise the data is just there to stay, never to be read unless two copies have failed.
The 1st copy is an active RAID6, with a standby RAID5 for the 2nd (really meant to be ZFS, but that's another story). Beyond that I use removable drives for a 3rd copy. Those started as 3.5" magneto optical drives, but they became impractical in terms of capacity; neither drives nor media are being sold any more, and SCSI is becoming hard to maintain.
Additional copies are given as 4th or 5th copy to members of the family to protect geographically.
But magnetic drives are, well, mechanical, and I don't trust mechanics all that much: These old klunkers are just likely to get dropped in transfer or fail once you really, really need them.
In a way I feel much safer using SSDs for storage of really valuable data, well, except that they wear out when you use them and they lose their charge when you don't.
Wear levelling code is very sophisticated these days, but somehow I'm pretty sure none of the firmware writers have gone to great lengths to test and validate long periods of disconnected SSD use. At the price these things cost originally, who'd ever think about putting them in storage for any extended period of time, right?
Except that this good old Postville drive, which still signaled 99% remaining life after some years of constant (but light) use, has long since been replaced by faster and bigger SATA 3 cousins but lives out its life as a baby-pics archive.
It's filled to the proper 80% and now I guess I should plug it into the server for a verify pass every couple of months, right?
How do I tell it to do that? How does it know that it should check for and compensate bit rot?
Does it or can it have any notion of how much time has passed since the last powerdown?
Shouldn't it be time-stamping at least erase blocks?
I guess the only way you'll ever get those flash cells stuffed with electrons again would be through overwriting? What about metadata? Chances are it doesn't get any better treatment than normal data (unless you have one of these modern TLC/SLC hybrids) and thus overwriting may not be nearly as good as a full erase and refill?
Somehow I shudder at erasing first what I want to protect so I guess I'll have to think up some kind of rotation scheme... and some nice new specialized archiving SSDs companies can now market: How would anyone even test them?
I got these big RAIDs to make things easier and now I believe I've just opened a huge can of worms called media management, something we used to do with tapes in huge libraries...
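Since the thread never really settles the refresh question above, here is one way to approach it in practice: a minimal sketch of a manual verify-and-rewrite pass over an archive volume. It assumes, and this is only an assumption, that reading a file back and writing the same bytes out again is enough to get its cells re-programmed, because nothing here establishes whether a given controller refreshes idle data on its own. The archive path and manifest name are illustrative.

import hashlib, json, os, sys

ARCHIVE_ROOT = "/mnt/archive"                      # illustrative path
MANIFEST = os.path.join(ARCHIVE_ROOT, ".manifest.json")

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def rewrite(path, bufsize=1 << 20):
    # Copy the file to a temp name and atomically swap it in, so the
    # controller has to program fresh blocks for the same contents.
    tmp = path + ".refresh_tmp"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        for chunk in iter(lambda: src.read(bufsize), b""):
            dst.write(chunk)
        dst.flush()
        os.fsync(dst.fileno())
    os.replace(tmp, path)

old = {}
if os.path.exists(MANIFEST):
    with open(MANIFEST) as f:
        old = json.load(f)

new = {}
for root, _, files in os.walk(ARCHIVE_ROOT):
    for name in files:
        p = os.path.join(root, name)
        if p == MANIFEST or p.endswith(".refresh_tmp"):
            continue
        digest = sha256(p)
        if old.get(p) not in (None, digest):
            # Silent corruption since the last pass: keep the old digest and
            # restore this file from one of the other copies.
            print("CHECKSUM MISMATCH:", p, file=sys.stderr)
            new[p] = old[p]
        else:
            rewrite(p)
            new[p] = digest

with open(MANIFEST, "w") as f:
    json.dump(new, f, indent=1)

Run against the mounted archive every few months; the checksum pass doubles as the verify step, and any mismatch is the signal to fall back to one of the other copies rather than trust the drive.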
ProDigit - Friday, August 28, 2015 - link
USB sticks use the same kind of chips.
One of the first no-name USB sticks I ever bought, in 2000, had a single chip with 128MB of memory on it.
Files on that stick still read, after 15 years!
Another drive, a 2GB Transcend I purchased in 2005, has failed and needed a reformat.
After quick-formatting the stick, Recuva was able to retrieve most of the data stored on the USB stick.
So I fear more for the controller and FAT/TOC table than the actual data, since one error in the FAT table can result in the drive being unreadable.
valnar - Friday, June 16, 2017 - link
I know this is an old article, but my questions weren't answered in several pages... plus, maybe new knowledge has come to light in the last 2 years?? :)
1) Say I have an SSD in an old XP (heck, even Win98) gaming box that only gets turned on once in a blue moon. What is required to keep the data "refreshed", so to speak?
2) Should I turn it on once a year so the bits don't go bad? And if so, then for HOW LONG should I leave it on to make that happen? What exactly gets refreshed by simply sending electricity through it?
3) And if #2 is correct, what does that say about static data, such as my \Windows installation where files don't get moved or changed. Do those get refreshed in the same way? Does turning it on even matter?
4) So that begs the question, what about the static data on our current rigs that we use today? If we aren't copying, changing and refreshing cells, will that static data go bad faster than the moving data?
I can't find the answers to these anywhere. All my PC's have SSD's now, both young and old.
ND4695 - Saturday, March 17, 2018 - link
You're right! A lot of questions still need to be answered, and it's 2018, so new data on SSD performance and reliability should be out by now.
I only use external hard drives for archiving, which only happens once or twice a year. So, I would like to know if upgrading my hard drives to SSDs would be beneficial, or should I still wait. The next best step in archiving would be to switch to M-Disc or LTO tape, but M-Discs still don't have sufficient space and LTO tape is cheap (but the drive is really expensive).
I hope they answer some of these questions soon.