eilersr - Monday, April 16, 2012 - link
Thanks for the comparison! How soon can we expect to see how this series compares to other SSDs in Bench?
Anand Lal Shimpi - Monday, April 16, 2012 - link
Give me a couple of weeks, I don't have the 330 in house yet and I'm unfortunately traveling at the moment :)

Take care,
Anand
kensiko - Monday, April 16, 2012 - link
But I still feel a low-cost SSD should be $1/GB today, and the 330 would fit well in this price range. We can get much cheaper SandForce-based SSDs than that! Ex: http://www.newegg.ca/Product/Product.aspx?Item=N82...
kensiko - Monday, April 16, 2012 - link
Sorry for US people, here is the link for newegg.com: http://www.newegg.com/Product/Product.aspx?Item=N8...

kensiko - Monday, April 16, 2012 - link
The Deluxe got Toggle NAND: http://www.newegg.com/Product/Product.aspx?Item=N8...

Wetworkz - Monday, April 16, 2012 - link
What I want to know is whether or not Intel finally got TRIM working properly with the SandForce drives. TRIM is an important feature in an SSD and I really do not want a drive that loses performance over time. Any chance we can see testing on this or get official word from Intel? I am also still curious about RST 11.5 bringing TRIM to RAID, and wonder if it will be on all Intel drives or only ones with certain controllers?

Sufo - Monday, April 16, 2012 - link
Never mind TRIM, have they even solved the SF-2k BSODs?

JarredWalton - Monday, April 16, 2012 - link
We already had an article on the BSODs: http://www.anandtech.com/show/5508/intel-ssd-520-r...

For TRIM, the issue with SandForce is that it doesn't fully recover, but at the same time the worst-case performance is generally still very good. Or are you talking more about TRIM with RAID?
dananski - Monday, April 16, 2012 - link
Isn't the lack of TRIM through RAID a limitation of some RAID controllers rather than the SSD?

Sufo - Tuesday, April 17, 2012 - link
Right, I saw this, but last I heard the 520s were still BSODing; is this not to do with the SF controller then?

bji - Tuesday, April 17, 2012 - link
Sources, please. Nobody in this discussion has been able to find any reliable indication that there are BSODs on the 520s. Unless you have reliable sources, you're just adding more hearsay to the noise.

Holly - Tuesday, April 17, 2012 - link
I haven't had a BSOD/freeze/restart since summer with my OCZ Vertex 3. Not sure about the firmware version atm, but a half-year run without a glitch sounds rock stable to me (the machine runs 24/7).

mustardman29 - Wednesday, May 9, 2012 - link
My OCZ Vertex 3 still blue screens once in a while. Just updated to the latest firmware, v2.22, so we'll see what happens. v2.15 definitely didn't fix the problem for me, although it happened a lot less than it used to.

I won't ever buy an OCZ SSD again. Not because I blame them for the problems, but because of the way they dealt with it. My next one will probably be one of these Intel 330s with the more reliable firmware.
bji - Monday, April 16, 2012 - link
AFAIK, SandForce performance only noticeably suffers when the drive is full or very near full. Not to say that the issue isn't important, but the likelihood that it will affect you seems pretty small unless you keep your drives at 99% full, which nobody does.

pc_void - Monday, April 16, 2012 - link
" which nobody does"I take it that's a joke... at these sizes lots of people would be going over 100% if it were actually possible.
Don't even try to imagine how many lazy people there are. It'll BLOW YOUR MIND.
nexox - Monday, April 16, 2012 - link
"""unless you keep your drives at 99% full, which nobody does."""Since drives don't understand file systems, TRIM is the only way for a disk to figure out that it's not full. If you were to temporarily use 95% of your disk, then delete files to free up space, without TRIM, the drive won't know that it's safe to erase the flash blocks which contained the deleted files - the disk will stay 95% allocated.
If, after deleting files, you write more, and they land on the previous 5% free space, the disk will be 100% allocated, and it will stay there. I don't believe that any operating systems preferentially over-write deleted data when writing new data, so if you don't use TRIM, the question is not whether you will fill the flash blocks, but when.
In summary: TRIM is pretty much essential for SSD use in a desktop, though I don't think that the Sandforce post-TRIM performance loss will be at all noticeable, and so there's no problem there.
The real issue is whether these have any power loss protection... which, at this price point, I suspect they do not.
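For the curious, here's roughly what TRIM looks like from the host side. This is a minimal sketch of my own (assuming Linux; /dev/sdX is a placeholder, and a discard destroys data in the range). Filesystems normally send these automatically (the discard mount option, or a periodic fstrim), but the primitive underneath is just "this LBA range no longer holds useful data":

```c
/* Minimal sketch of an OS-level discard (what TRIM support rides on).
 * BLKDISCARD tells the device that a byte range no longer holds useful
 * data. WARNING: destroys data in the range; device path is a placeholder. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKDISCARD */

int main(void)
{
    int fd = open("/dev/sdX", O_WRONLY);   /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    /* range[0] = byte offset, range[1] = byte length; both should be
     * aligned to the device's discard granularity. */
    uint64_t range[2] = { 0, 1024 * 1024 };

    if (ioctl(fd, BLKDISCARD, range) != 0)
        perror("BLKDISCARD");
    else
        printf("discarded %llu bytes\n", (unsigned long long)range[1]);

    close(fd);
    return 0;
}
```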
bji - Monday, April 16, 2012 - link
Drives keep spare area. TRIM just allows the drive to use area that would otherwise be considered user-space blocks as spare area, until they're written to. All that TRIM does is maximize spare area.

SandForce controllers (and all other SSD controllers that I know of) reserve enough spare area to guarantee a minimum level of write performance.

Therefore, TRIM is not essential to maintain good performance; that's what spare area is for. TRIM does help with retaining better-than-minimum performance though. Let's face it: the SandForce controller, even in its minimized write-performance state, is still 'fast enough' for most uses, and the difference in write speed probably won't even be detectable to most people.

The GP pointed out that many people keep their drives full. I doubt this is really true. Who keeps their drive in a state where every file you want to store requires that you remove a previously-written file? Only people rarely doing writes, I would imagine (e.g. keeping a drive full of stolen movies and then only deleting a movie when room is needed to write a newer one, and that probably only happens on the order of daily or weekly).

If you're doing lots of 'work' using your drive (e.g. manipulating media, compiling software, etc.), you're probably maintaining enough free space on your drive so as not to be constantly hassled with out-of-space issues while you do your work. In this case, you'll be keeping enough spare area to mitigate performance problems.

If you're not doing lots of 'work' using your drive (e.g. just plopping stolen media on it and then reading that media on occasion), then you won't be worried about SandForce write performance on full drives anyway.
Romberry - Monday, April 16, 2012 - link
Your understanding of spare space and what it does differs from my own, as does your claim that spare space is all that is needed to maintain drive performance ("...TRIM is not essential to maintain good performance; that's what spare area is for.").

Drives (and/or operating systems) that do not incorporate TRIM suffer serious performance degradation over time. The TRIM function serves to reverse (or at least ameliorate) that degradation. Spare space, at least as I understand it, is for wear leveling. (As cells approach wear limits in the main space and begin to become unreliable, those addresses are mapped to cells in the reserved spare space.)
I'm sure that my plain language attempt at a technical explanation is lacking or off in some way or another. Didn't mean it to be a rigorous exposition anyway. Just saying that my understanding of the subjects of TRIM and spare space seems to be directly at odds with yours.
kyuu - Monday, April 16, 2012 - link
All TRIM does is mark cells whose data is no longer in use (because it was "deleted") so that the cells holding that data can be properly reset. Without TRIM, the data is still there, and when a write comes in, that data has to be overwritten instead of the write going to cells that are already ready for new data.

Overwriting the data requires waiting for the cells to be "emptied", and *then* performing the write operation. That's much more time-consuming than just writing to cells that are already "empty", and that extra time is what's responsible for write-speed degradation without TRIM or some form of garbage collection.

Truly, nowadays most SSDs have good enough garbage collection that, given enough idle time (which most drives in non-enterprise settings have plenty of), they can accomplish much the same thing without TRIM anyway. TRIM is just a nicety, really.
bji - Monday, April 16, 2012 - link
It all depends on the controller. I have some SSDs that never implemented TRIM and it is not a problem. Their garbage collection and block management algorithms are good enough to maintain good performance even without TRIM. Are they as fast as solutions that make better use of spare area and TRIMmed space? No. But they don't suffer from the kind of catastrophic write-performance loss that I think some people are claiming is inevitable.

When you write data, the SSD wants to write to already-empty cells, because then there is no erase cycle required before the write. If the entire user space of the drive were already filled, then the next write would likely, rather than overwriting the existing block of data, write to a spare block and then mark the block that used to hold the data at that LBA as spare. Then, in the background, that no-longer-used block would be erased, making it available for a future write.

If the drive gets so far "ahead" of its ability to erase added-to-spare-area-but-not-yet-cleared blocks that it runs out of erased blocks, it would have to resort to erasing blocks before writing the next block, which would be a serious performance degradation.
Presumably, it would take a long time of sustained writes to 'use up' the already-zeroed spare area and go into a mode where every write requires an erase. Once the sustained stream of writes stopped, the drive would have an opportunity to 'catch up' on the erases that it deferred.
I suspect that real controllers actually balance erases against writes and 'slow down' some writes during periods of high activity in order to limit the rate at which already-cleared blocks are used up.
If you are using TRIM and have given blocks back to the SSD, then it has even more space to use to keep ahead of sustained writes.
I suspect that there are various performance levels that an SSD would achieve based on how much already-erased spare area it has, how much 'virtual' spare area in the form of TRIMmed blocks that the O/S has said that it doesn't need, and what the current write load is.
I think that in the worst case, intelligent controllers are very resilient to running out of zeroed blocks, even if the drive is entirely 'full' and the spare area is at its minimum.
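To make that write path concrete, here's a toy model (invented numbers and names, not any real controller's logic): writes consume pre-erased blocks, stale blocks pile up for background erasing, and if the erased pool ever runs dry a write has to pay for an erase inline - the degraded case being discussed:

```c
/* Toy model of an SSD write path: writes take pre-erased blocks, old
 * copies become stale, and erases are deferred to idle-time GC. If the
 * erased pool runs dry, a write must erase inline (the slow path). */
#include <stdio.h>

#define BLOCKS 8
enum state { ERASED, LIVE, STALE };     /* STALE = garbage awaiting erase */
static enum state blk[BLOCKS];
static int lba_to_blk[BLOCKS];          /* trivial LBA -> block map */

static int take_erased(void)
{
    for (int i = 0; i < BLOCKS; i++)
        if (blk[i] == ERASED) return i;
    for (int i = 0; i < BLOCKS; i++)    /* pool empty: erase inline */
        if (blk[i] == STALE) { printf("inline erase of block %d (slow!)\n", i); return i; }
    return -1;                          /* every block holds live data */
}

static void write_lba(int lba)
{
    int nb = take_erased();
    if (nb < 0) { printf("drive full\n"); return; }
    int old = lba_to_blk[lba];
    if (old >= 0) blk[old] = STALE;     /* old copy becomes garbage */
    blk[nb] = LIVE;
    lba_to_blk[lba] = nb;
}

static void background_gc(void)         /* runs during idle time */
{
    for (int i = 0; i < BLOCKS; i++)
        if (blk[i] == STALE) { blk[i] = ERASED; printf("background erase of block %d\n", i); }
}

int main(void)
{
    for (int i = 0; i < BLOCKS; i++) lba_to_blk[i] = -1;
    for (int pass = 0; pass < 3; pass++)
        for (int lba = 0; lba < 6; lba++)   /* keep the "drive" ~75% full */
            write_lba(lba);
    background_gc();                        /* idle time restores the pool */
    return 0;
}
```

TRIM, in this model, would simply mark blocks STALE when the filesystem deletes files, growing the pool the controller can stay ahead with.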
dananski - Monday, April 16, 2012 - link
Hehehe... Only today I spotted a colleague's Vertex 3 at nearly full capacity. My flatmate is just as bad, but he only has a 60GB drive... don't know how he copes.

bji - Monday, April 16, 2012 - link
Are they using a workload that would be write-heavy with those drives? If so, how are they coping with being almost out of space constantly? If not, then they won't care about SandForce worst-case performance because they're not doing significant enough writing to the drive anyway.

James5mith - Monday, April 16, 2012 - link
Check your prices.

Right now the Mushkin Chronos and Chronos Deluxe are the cheapest drives on the market, for both budget and performance categories. Want 34nm sync NAND in a 240GB drive? Newegg has the Chronos Deluxe for $239. That's sub-$1/GB for top-end performance.
Kristian Vättö - Monday, April 16, 2012 - link
I know I'm nitpicking, but the Mushkin Chronos Deluxe uses 32nm Toggle NAND, not ONFi 2.x NAND.

pc_void - Monday, April 16, 2012 - link
I'm also nitpicking - that's the $200+ category.

pc_void - Monday, April 16, 2012 - link
Well, I was thinking of the 120GB... so with the 180GB, sure, go Mushkin if you really want a 'better value'.

iwod - Monday, April 16, 2012 - link
I would never buy a SandForce-based SSD because of all its issues. I figure it has too much clever technology built in trying to do simple things, which we all know would be prone to error.

But with Intel's reliability track record and software design, I am now very confident to recommend this SandForce-based SSD to anyone.

And the prices listed for the Intel 330 are suggested retail prices! You are very likely to find it cheaper, therefore making it one of the cheapest SSDs on the market.
What is not to like?
pc_void - Monday, April 16, 2012 - link
You sound like an ad.

GreenEnergy - Monday, April 16, 2012 - link
Putting an Intel label on and adding custom firmware doesn't make the SandForce magically reliable.

Galcobar - Monday, April 16, 2012 - link
Nothing magical about a year's worth of validation work.

Xeno_ - Monday, April 16, 2012 - link
Will this drive in 60GB perform better/worse than the 313 in 20GB for Intel's Smart Response Technology (SRT)?

It's obviously MLC but has a larger capacity (the limit of SRT's capability, I think), so would this be a better choice than the 313 for that purpose? Also, how does the 520 60GB stack up against the above two?
shanssv - Monday, April 16, 2012 - link
I'd like to see 3 of these drives in RAID 0 against the RevoDrive 3 X2 240GB.

Taft12 - Monday, April 16, 2012 - link
Anand,

I've deployed many, many Intel 320 Series SSDs at my workplace with astounding results. Literally zero failures in over a year and a half on dozens of laptops and workstations.
I probably wouldn't consider this Sandforce-based offering, but are you aware of an EOL planned any time soon for the 320 series? It's been very, very good to us and I'd hate to see it go away.
bji - Monday, April 16, 2012 - link
What makes you think this drive will be any less reliable than the 320? Do you think that Intel is really going to release a drive that would wreck their SSD quality reputation? I don't think so ... which is why I bought a pair of 520s recently.

nexox - Monday, April 16, 2012 - link
"""What makes you think this drive will be any less reliable than the 320?"""No power fail protection (I assume, since it's cheap, and the 520 doesn't have any) means less reliability. Consumer Sandforce controllers have never been impressive on power loss, and I imagine many of the BSOD events that people see can be traced back to data loss from bad shutdowns, though it's hard for me to say for sure, since I've never see one personally.
I'm a fan of the 320 simply because it's the cheapest drive with power fail protection, and the performance is certainly good enough for my needs. Then again, it's been a while since I felt the need to run the highest-spec hardware available. I tend to buy with value for money in mind, and since I value my data, the 320 currently leads the SSD pack for me.
bji - Monday, April 16, 2012 - link
Can you expand on why you think the SandForce drives have 'no power fail protection'?

Are you saying that there are drives out there that 'lie' about the persisted state of data? If so, I'd really like to know about it.

Remember that it's OK for a drive to cache data and not commit it to permanent storage if either:
a) the drive has a means for ensuring that the data is written even in the event of power loss, OR
b) the operating system did not ask for a guarantee that the data was persisted
Are you saying that the SandForce drives disobey (b), i.e. tell the operating system that they have written data that they haven't?
If so, I really want to know about it. That seems like it could be class-action-lawsuit worthy.
jwilliams4200 - Tuesday, April 17, 2012 - link
He means that it probably has no capacitors to provide a short reserve of power in the case of an unexpected power loss (such capacitors allow the SSD to write its index to flash before shutting down).

All SSDs need to maintain an index which maps LBAs (sector addresses known to the host) to flash pages known only to the SSD. Many (most?) SSDs keep at least part of that index in volatile memory for at least some time before they write it to flash. If the index were updated in flash on every event (write or GC), performance would suffer, so most SSDs do not update the flash copy of the index immediately.

If the SSD loses power before it has updated the index, then you could lose some or all of your data - unless the SSD has capacitors to provide a short power reserve to write the index to flash before shutting down in the event of an unexpected power loss.
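A toy sketch of that bookkeeping (names invented, nothing vendor-specific): the authoritative LBA-to-page map lives in controller RAM and is only checkpointed to flash occasionally, so every mapping made since the last checkpoint depends on capacitors or a clean shutdown:

```c
/* Toy model: the LBA -> flash-page map is updated in volatile RAM on
 * every write and only lazily checkpointed to flash. Without backup
 * power, mappings newer than the checkpoint are lost on power-off. */
#include <stdio.h>

#define LBAS 16
static int map_ram[LBAS];     /* authoritative copy, volatile  */
static int map_flash[LBAS];   /* last checkpoint, persistent   */
static int dirty;             /* updates since last checkpoint */

static void host_write(int lba, int flash_page)
{
    map_ram[lba] = flash_page;   /* the data itself goes to flash... */
    if (++dirty >= 8) {          /* ...but the map is flushed lazily  */
        for (int i = 0; i < LBAS; i++) map_flash[i] = map_ram[i];
        dirty = 0;
    }
}

static void sudden_power_loss(void)
{
    /* No capacitor: RAM vanishes, only map_flash survives. Any LBA
     * remapped since the checkpoint now points at stale/garbage data. */
    for (int i = 0; i < LBAS; i++)
        if (map_ram[i] != map_flash[i])
            printf("LBA %d: mapping lost on power-off\n", i);
}

int main(void)
{
    for (int i = 0; i < 5; i++) host_write(i, 100 + i);
    sudden_power_loss();   /* LBAs 0-4 never made it into the flash copy */
    return 0;
}
```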
bji - Tuesday, April 17, 2012 - link
I just did a little reading on the ATA command set, and it looks to me like it has provisions for 'sync'ing the drive to ensure that data is persisted. For example, I'm seeing something called a "flush bit" that can be used in write commands to ensure that all data is written to persistent store (and thus impervious to power outage) before completion.

Are you saying that SSD controllers ignore such bits, lying to the operating system and returning before the data (and all metadata such as LBA maps required to access the data) is written to flash?
If so, then this is a major issue - class action lawsuit worthy if you ask me.
If what you're saying is that when the operating system does NOT set the 'flush bit', then the data could be lost on power outage - well that's a complete non-issue. Unless you request a guarantee that the data is written using the flush bit, you don't get any guarantee. No capacitor needed.
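For reference, this is how that guarantee gets requested from the application side on a POSIX system (a sketch; the filename is a placeholder): fsync() asks the OS to push the file's data, plus the metadata needed to reach it, to stable storage, which on Linux typically ends in a flush command to the drive. Whether the drive then honors it is exactly the question here:

```c
/* Sketch: requesting durability from userspace. Without the fsync(),
 * the data may sit in the OS page cache and the drive's volatile write
 * cache indefinitely - the "non-issue" case described above. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("important.dat", O_WRONLY | O_CREAT, 0644);  /* placeholder */
    if (fd < 0) { perror("open"); return 1; }

    const char buf[] = "must survive power loss";
    if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
        perror("write");

    if (fsync(fd) != 0)   /* ask for data + metadata to be persisted */
        perror("fsync");

    close(fd);
    return 0;
}
```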
jwilliams4200 - Tuesday, April 17, 2012 - link
I think you are confusing host data (which can be held in a write cache, or forced to flush to persistent storage, as you say) with SSD metadata (the index of LBAs to flash pages).

I don't think there is any ATA command to force metadata to be written to persistent storage.
bji - Tuesday, April 17, 2012 - link
That doesn't matter. Flushing the user data to the drive without flushing the metadata needed to locate it has the same effect as not flushing the user data at all, with regard to the purpose of the flush bit, which is to ensure that the data is persisted.

I consider persisted data to be data that can be read back after a power outage. If the data can't be read back because its metadata wasn't flushed, then it is not persisted.
If I were implementing a controller, I would flush the LBA index along with the user data if the flush bit were set.
Hard drives use the rotational inertia of the spinning platters to ensure that data is flushed on power outage, and some SSDs use capacitors. If a controller doesn't have some way to guarantee that enough energy will be available to flush cached written data, then it had *better* properly honor the flush bit, which includes flushing the LBA index and whatever other metadata is necessary to read the data that was written with the flush bit.
I find it very hard to believe that SSDs are not doing this.
jwilliams4200 - Tuesday, April 17, 2012 - link
I'm not sure why you find it hard to believe.

Don't you know that the Intel 320 SSD has power-loss-protection capacitors for just the reason I described?
Also, SandForce was touted from the early days as supporting a supercapacitor for just this reason. But none of the consumer SandForce SSDs actually use a supercapacitor.
bji - Tuesday, April 17, 2012 - link
To put it another way: "flush" is a simplified way of describing a complex process. "Flushed" means "as persisted as the device is capable of making the data". "Persisted" means "stored in such a manner as to be recoverable on a subsequent read".

Clearly, not writing metadata to flash means that the data is not persisted, because it is not recoverable on a subsequent read.
Therefore, if the SSD is flushing only the user data, but not the LBA index or whatever other metadata, then it is not persisting the data, and it is lying if it says that it is.
If this is the case, I'd like to sue Intel for using SandForce controllers, because they are inherently defective. There is NO DIFFERENCE between this kind of lying about persistence and intentionally writing garbage instead of the data that I had written. Or perhaps a better analogy would be advertising a 100 GB drive as being 200 GB and just dropping any writes to the second 100 GB on the floor. It's defective, plain and simple; it does not satisfy the requirements of the interface that it claims to support.
seapeople - Tuesday, April 17, 2012 - link
You are a very strange dude.

bji - Tuesday, April 17, 2012 - link
Well I was being facetious about suing Intel. But I am genuinely worried that SSD controller manufacturers may be playing games with data integrity for better performance numbers. Not sure what's strange about that.

jwilliams4200 - Tuesday, April 17, 2012 - link
It seems strange to me that you expect SSDs to behave in a certain way when it is clear that they do not actually behave in that way.

bji - Tuesday, April 17, 2012 - link
It's not actually clear. Nobody can say for sure whether SSD controllers are properly persisting data + metadata on flush commands without rigorous testing.

jwilliams4200 - Tuesday, April 17, 2012 - link
Ah, okay, in your world the 320 has power-loss capacitors, and SandForce supports supercapacitors, just for fun. No good reason. Right.

You are a very strange dude.
bji - Tuesday, April 17, 2012 - link
I'll give you one more chance before concluding you're a troll just trying to annoy me.

A device can use capacitors to increase performance by allowing writes to be cached in volatile storage even in the cases where the device has been instructed to flush the data to persistent storage, because volatile memory plus a capacitor with enough capacity to flush the volatile store to nonvolatile store is, for all intents and purposes, nonvolatile.
If a device DOES NOT have a capacitor or a spinning disk or whatever to provide enough energy to flush volatile storage, THEN it must implement other mechanisms to ensure that data is persisted.
IF such a drive DOES NOT have a capacitor, and DOES use write caching, and DOES NOT flush all data necessary to properly persist cached writes (including all metadata and the data itself), THEN it is lying when reporting that the data has been flushed to nonvolatile storage.
Just because some devices use capacitors doesn't mean that this is the ONLY WAY to guarantee persisted data.
Operating system filesystems cache write data too, you know. And they also supply a means for flushing cached writes out to permanent storage. But they don't *lie* for the purposes of making writes look faster - if you ask for the data to be flushed, the operating system will wait until it has been flushed.
The SSD should do the same thing. It should guarantee that the data is persisted when asked to do so - either by writing it to volatile RAM with enough stored energy to guarantee that it will be written out in the event of a power loss, OR by actually flushing it and all associated metadata necessary to recover the data.
The suggestion that some SSDs are simply lying when asked to flush write caches is worrisome; and nobody can say whether or not a particular controller is lying in this way without testing.
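In pseudocode-ish C, the contract being described looks like this (invented names, a sketch of the logic only, not any real firmware):

```c
/* Sketch of an honest flush handler: ack early only if stored energy
 * guarantees cached data and metadata will reach flash anyway. */
#include <stdbool.h>
#include <stdio.h>

struct ctrl {
    bool has_backup_power;   /* capacitor/battery present? */
    /* ... cached user data and LBA-map state would live here ... */
};

static void write_cached_data_to_flash(struct ctrl *c) { (void)c; puts("data -> flash"); }
static void write_lba_map_to_flash(struct ctrl *c)     { (void)c; puts("map  -> flash"); }

/* Handler for an ATA flush (or a write carrying the flush/FUA bit). */
static void on_flush(struct ctrl *c)
{
    if (c->has_backup_power) {
        /* Volatile cache + enough stored energy to drain it is, for all
         * intents and purposes, nonvolatile: safe to ack immediately. */
        return;
    }
    /* No stored energy: the only honest option is to persist the data
     * AND every piece of metadata needed to read it back, then ack. */
    write_cached_data_to_flash(c);
    write_lba_map_to_flash(c);
}

int main(void)
{
    struct ctrl cheap = { .has_backup_power = false };
    on_flush(&cheap);   /* must do the full, slower persist */
    return 0;
}
```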
jwilliams4200 - Tuesday, April 17, 2012 - link
I stopped reading at your second paragraph.

Why do you persist in imagining that SSDs work in some strange way other than they obviously do?
That is not what the capacitors are used for.
bji - Tuesday, April 17, 2012 - link
You haven't offered any evidence or theory to the contrary, and I don't think you have a clue.

jwilliams4200 - Tuesday, April 17, 2012 - link
All it takes is a little research to find that the Intel 320 does not cache any host data.

Someone here is clueless, but it isn't me.
bji - Wednesday, April 18, 2012 - link
The Intel 320 wasn't even the topic of conversation. The topic was SandForce-based drives. But I'm done with this pointless back and forth, as apparently you don't even know what was being discussed.

jwilliams4200 - Wednesday, April 18, 2012 - link
So you deny reality? Put your hands over your ears and hum? The 320 does not behave how I think SSDs should behave, so I will ignore it?

And you call me clueless?
GreenEnergy - Monday, April 16, 2012 - link
I would pick the 320 series over the 520 or 330 series any day, even with the premium. Intel's own controller works; SandForce's doesn't. Even with Intel's own firmware it BSODs, and the drives still suffer sudden death. I know Intel people internally hate it.

Biggest mistake ever to start using the worst SSD controller there is. Not to mention the huge hit and PR disaster to the otherwise well-established Intel reliability and quality brand in the SSD market.
Coup27 - Monday, April 16, 2012 - link
For all the people who think the 520 is immune from the BSODs, a quick Google search will turn up pages and pages of different websites and forums with users with problems.

For me, Samsung 830. Solid.
bji - Monday, April 16, 2012 - link
I did the search and read the results. I did not come away with the impression that there is a significant problem with BSODs on the 520. I think I saw 3 people reporting actual problems, and lots of references to "lots of reports of this problem", which leads me to believe that most of it is hearsay.

Ramon Zarat - Monday, April 16, 2012 - link
Have an M4 128GB here and not a single issue or BSOD in 9 months. Looking around at reviews and forums, it's clear that Intel, Samsung and Crucial are definitely the three top dogs as far as quality/reliability is concerned.

SGTGimpy - Monday, April 16, 2012 - link
I have an Intel 520 240GB and have beaten the crap out of the thing for 2 weeks straight on my EVGA X58 Classified, and I have not experienced a single BSOD, nor have I ever had an issue with random BSODs on any SSD I have used. I read through the BSOD issues found on Google for the 520 before I purchased it, and 80% are user-error related. So I have listed some helpful tips for installing an SSD in your system.

1. Update your motherboard BIOS to the latest version (unless there is a known issue with it and SSDs). Always do this first.

2. Do not use IDE or Compatibility mode on the SATA controller (this is the most common issue). Always use AHCI or RAID (even if you're not putting it in a RAID array). Also, some add-in SATA chipsets on motherboards don't play well with SSDs, so it's good to stay on the chipset's SATA ports.

3. DO A FRESH INSTALL!!!!!! Don't be lazy. Do not use a drive copy or image tool; only bad ju-ju comes from this. **Also, during the install, even if your OS supports the chipset controller (e.g. Intel ICHx), get the latest driver from the manufacturer (Intel, AMD or other, not the motherboard manufacturer) and load the controller driver during the OS install. (You would not believe how much this fixes BSOD issues.) The driver in the OS is a watered-down old driver just there to make it work.

4. Once the OS is installed, get the latest drivers for all your hardware from their manufacturers. This will also help stabilize your system. Some of the BSODs that people thought were from the SSD were actually caused by something else.

Hope this helps some of you out there pulling your hair out with SSDs.
SGTGimpy - Monday, April 16, 2012 - link
And I apologize for the bad grammar on the above. I was doing 10 things at once and didn't check it. :(

UltraTech79 - Monday, April 16, 2012 - link
The prices go down down down and the NANDs go higher!

Beenthere - Monday, April 16, 2012 - link
$30 diff and a 2 yr. shorter warranty should send up red flags all over the place.

ShieTar - Tuesday, April 17, 2012 - link
3 years of warranty still seems very reasonable. I don't think I ever kept the same system drive for more than 3 years.

Taft12 - Wednesday, April 18, 2012 - link
It's not whether you'd use the warranty in years 4 and 5 -- it's the message the vendor is sending with a longer-than-standard warranty.lilmoe - Tuesday, April 17, 2012 - link
While $20 - $50 (~20%) of savings is somewhat significant for some, I still think the 520 is a better and "safer" deal. You get two more years of warranty and estimated lifespan for a ~20% increase in price. I have more than 20GB of daily writes on my drive(s).

I'd recommend the 520 over the 330 any day, especially if one's running a non-RAID workstation (production) setup. The 330 should do pretty well for the mainstream consumer, though. But at this point, those interested in buying separate SSDs aren't exactly mainstream.
DesktopMan - Tuesday, April 17, 2012 - link
So the 180GB costs more per GB than the 120GB, which makes no sense. Both prices include the cost of the controller, board and casing, which is the same for both. Adding more flash compared to a lower model should result in a lower price per GB...

Kristian Vättö - Tuesday, April 17, 2012 - link
The 120GB model has sixteen single-die NAND packages whereas the 180GB model has twelve dual-die NAND packages (the controller runs in 6-channel mode). It's possible that dual-die NAND packages cost more per GB, hence the difference.
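Rough numbers, assuming the usual 8GB (64Gbit) 25nm MLC dies (my assumption, not stated above): 16 × 8GB = 128GB of raw NAND behind 120GB usable, and 12 × 2 × 8GB = 192GB of raw NAND behind 180GB usable. The raw-to-usable ratio works out the same (~1.07) for both models, so the per-GB price gap would indeed come down to the dual-die packages costing more, not to extra spare flash.

Shadowmaster625 - Tuesday, April 17, 2012 - link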
It is foolish for Intel to take the risk of tarnishing their reputation by using sandcrap. They must have found the bug that the others obviously missed. Hehe, maybe Intel is the one who made sure those bugs were in there in the first place? Naw... omg they would never do something like that!

solinear - Thursday, April 19, 2012 - link
The prices all seem a bit off on the non-Intel side, frequently 7-20% off, sometimes significantly more.

I also think that you should be showing more budget-oriented products. If the 520 is targeted at the Vertex 3/4, Crucial m4 and Samsung 830 market, which the price suggests, perhaps we should be showing the lower-end versions of products, such as the Agility line, in comparison. This ends up with a comparison of $120 (after rebates) for a 120GB drive, compared to the Intel drive costing $150 (a 25% cost premium for Intel's name).
I will say that I can find the products at those prices at Best Buy and other expensive retailers, but Amazon and even Newegg seem to have better prices for everything than you have listed, except the Intel drives, which are holding the line pretty tightly (as I would expect). If this were an article a week or two old I might expect some drift, but not just a couple of days.
kendenbowsr - Thursday, April 19, 2012 - link
I posted the following information on the Intel communities blogs but have not received any comments so far. I am still getting random BSODs, mostly while the computer is sitting idle.

Apr 19, 2012 12:32 PM in response to: skiman
Re: Intel 520 Series 120G SSD random BSOD's - please help
I had the same problem. Random BSODs and random poor performance.
I purchased two of the Intel 520 SSDs. One worked perfectly in one of my computers with a Gigabyte motherboard. The other had these problems.

That computer is a Dell OptiPlex 755 with a Dell/Intel motherboard and an Intel ICH9R SATA support chip.
I set BIOS to AHCI/RAID mode. I installed Windows 7 Home Premium.
Installed Intel Solid State Drive Toolbox version 3.0.2.
I took most of the published steps to fix this problem. I did secure erase after 1st install.
I upgraded BIOS from A19 to A21 (the latest) version. I reinstalled Windows two times.
I set power management to high performance. I could not get a good Windows 7 dump because the SSD interface was what was failing.
I finally got the main problem fixed, and I am now getting good performance, with BSODs about once every 2 days. I found and applied the Intel LPM fix.
The link power management (LPM) fix is to stop LPM from setting the SATA interface to a reduced power state.
I still get BSODs about once every 2 days, but performance is much better. Intel still needs to fix the 520 SSD firmware to work with their own SATA controllers.
Message was edited by: Kenneth Denbow. Now getting BSODs about once a day. Tried a different SATA port and a different SATA cable, but still the same problems. I reinstalled my old backup WD 500GB drive in AHCI mode and had no problems with it.

Message was edited by: Kenneth Denbow for the 2nd time. I cloned the Windows 7 install on the Intel 520 SSD to an old Samsung 1TB non-SSD SATA drive, model HD103SJ, then booted the computer with this drive. After Windows 7 loaded the MSAHCI driver and rebooted, I ran the Windows Experience Index assessment again. My hard disk subscore dropped from 7.8 to 5.9, but my computer runs without any BSODs. I guess I have lost $200 on the Intel 520 and over 20 hours of work trying to get it to work.

Message was edited by: Kenneth Denbow for the 3rd time. Reinstalled the SSD and I still get BSODs, mostly overnight while the computer sits idle. I noticed that I gave some wrong information earlier: my motherboard chipset has an Intel ICH9D0 SATA controller, not ICH9R. The motherboard is made by Intel, model GM819, in a Dell OptiPlex 755 computer. I installed a new 450 watt power supply just to make sure that power was not the problem. It still fails.
sameproblem - Monday, June 25, 2012 - link
I have been experiencing the same problem.

I get the BSOD mainly overnight. I have the Intel 520 SSD 120GB with a Dell OptiPlex 790 DT just purchased in April 2012. I generally get a BSOD once a day and it is random.

Dell support has told me to reinstall Windows and load the drivers in a certain order to fix the issue. My computer support guy thinks this will not work, and I am reluctant to try, as it means reloading my whole system.
I am not sure what to do.
gonks - Friday, April 20, 2012 - link
Do you plan to review the Mushkin Chronos SSD anytime soon?

dandjh - Friday, April 20, 2012 - link
Mushkin has been doing pretty well with SSDs. I personally have one, and I chose it after reading a lot about all of the brands.

I think it really should be considered on AnandTech in general.
Downs.86 - Saturday, April 28, 2012 - link
http://www.amazon.com/gp/product/B003WV5DLA/ref=oh...

Note: do not run a cloned drive with the original in a Windows system (Windows doesn't know how to respond and deletes a boot.ini file).
coolguy80 - Friday, May 11, 2012 - link
Amazon priced the 330 series 120GB at $128, and now for a week they have been listing $145! Can I expect a price correction (reduction) anytime soon?

mike2k12 - Sunday, June 3, 2012 - link
Just did a new build this weekend with the 180GB version and it BSODed with the PC at idle. Ran it with a Sabertooth Z77 and i7-3770. So far only one BSOD, but scandisk did find some issues and fixed them.