Original Link: https://www.anandtech.com/show/7864/crucial-m550-review-128gb-256gb-512gb-and-1tb-models-tested
Crucial M550 Review: 128GB, 256GB, 512GB and 1TB Models Tested
by Kristian Vättö on March 18, 2014 8:00 AM EST

This is shaping up to be the busiest month on the SSD front in ages. Intel released its new flagship SSD 730 series just a couple of weeks ago and there are at least two more releases coming in the next few weeks...but today it's Crucial's turn to up the ante.
Unlike many OEMs, Crucial has more or less had only one active series in its SSD portfolio at a time. A few years ago this approach made sense because the SSD market as a whole mainly focused on enthusiasts and there was no real benefit to a tiered lineup. As the market has matured and prices have dropped over time, we are now in a situation similar to other components: there is the high volume mainstream market where price is the key and the higher margin enthusiast/professional market where performance and features matter. Covering both of these markets with a single product is hard because in order to compete in price, it's usually necessary to use lower quality parts, which in turn affects performance and features.
With the M500, Crucial was mainly targeting the mainstream market. Performance was mostly better than in the m4 days but only mediocre compared to other SSDs on the market. The introduction of the likes of the SanDisk Extreme II, Seagate SSD 600, and OCZ Vector 150 has upped the ante even further in the enthusiast segment, and it has become clear that the M500 has no place there. To stay competitive in all product areas, Crucial is now launching the big brother to the M500: the M550.
EDIT: Just to clarify, the M500 will continue to be available and the M550 is merely a higher performing option at a slightly higher price.
With 64Gbit NAND, 240/256GB was usually the sweet spot in terms of price and performance. That combination offered enough NAND dies to saturate the SATA 6Gbps interface as well as the controller's/firmware's potential, but with the M500 this was no longer the case thanks to the use of 128Gbit NAND. With a die of twice the capacity, only half as many dies are needed to build a 240/256GB SSD. As NAND parallelism is a major source of SSD performance, this meant a decrease in performance at 240/256GB; you now had to go to 480/512GB to get the same level of performance that 240/256GB offered with 64Gbit NAND.
The use of 128Gbit NAND was one of the main reasons for the M500's poor performance, and with competitors sticking to 64Gbit NAND, the decision backfired on Crucial (more on this later). Since it's not possible to magically decrease program times or add parallelism, Crucial decided to bring back 64Gbit NAND in the lower-capacity M550s. Here's how the new and old models compare:
Crucial M550 vs. Crucial M500

| | Crucial M550 | Crucial M500 |
|---|---|---|
| Controller | Marvell 88SS9189 | Marvell 88SS9187 |
| NAND | Micron 64/128Gbit 20nm MLC | Micron 128Gbit 20nm MLC |
| Capacities | 128GB / 256GB / 512GB / 1TB | 120GB / 240GB / 480GB / 960GB |
| Sequential Read | 550MB/s (all capacities) | 500MB/s (all capacities) |
| Sequential Write | 350 / 500 / 500 / 500 MB/s | 130 / 250 / 400 / 400 MB/s |
| 4KB Random Read | 90K / 90K / 95K / 95K IOPS | 62K / 72K / 80K / 80K IOPS |
| 4KB Random Write | 75K / 80K / 85K / 85K IOPS | 35K / 60K / 80K / 80K IOPS |
| Endurance | 72TB (~66GB/day) | 72TB (~66GB/day) |
| Warranty | Three years | Three years |

(Per-capacity figures are listed in ascending capacity order.)
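As an aside, the ~66GB/day figure in the endurance row is simply the 72TB write rating spread evenly over the three-year warranty:

```python
# 72TB of writes spread over the three-year warranty period
tbw_gb = 72 * 1000          # 72TB expressed in GB
days = 3 * 365              # three-year warranty
print(f"{tbw_gb / days:.0f} GB/day")  # ~66 GB/day
```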
The 128GB and 256GB models are now equipped with 64Gbit per die NAND, while the 512GB and 1TB models use the same 128Gbit NAND as the M500. This makes the 128GB and 256GB models much more competitive in performance because their die count is twice that of the same-capacity M500 drives. You get roughly the same performance from both the 256GB and 512GB models (unlike the nearly 50% drop in write performance seen in the M500), and the 128GB model actually beats the 240GB M500 in all metrics. There is obviously some firmware tweaking involved as well, and the bigger capacities get a performance bump too, although it's much more moderate compared to the smaller capacities.
Another difference is the controller. Compared to the NAND, this isn't as substantial a change because the Marvell 9189 is more of an updated version of the 9187; the only major upgrades are support for LPDDR and better optimization for DevSleep, both of which reduce power consumption and can hence extend battery life.
Crucial M550 Specifications

| | 128GB | 256GB | 512GB | 1TB |
|---|---|---|---|---|
| Controller | Marvell 88SS9189 | Marvell 88SS9189 | Marvell 88SS9189 | Marvell 88SS9189 |
| NAND | Micron 64Gbit 20nm MLC | Micron 64Gbit 20nm MLC | Micron 128Gbit 20nm MLC | Micron 128Gbit 20nm MLC |
| Cache (LPDDR2-1066) | 512MB | 512MB | 512MB | 1GB |
| Sequential Read | 550MB/s | 550MB/s | 550MB/s | 550MB/s |
| Sequential Write | 350MB/s | 500MB/s | 500MB/s | 500MB/s |
| 4KB Random Read | 90K IOPS | 90K IOPS | 95K IOPS | 95K IOPS |
| 4KB Random Write | 75K IOPS | 80K IOPS | 85K IOPS | 85K IOPS |
Similar to the earlier drives, Crucial remains Micron's consumer brand, whereas OEM drives are sold under the Micron name. It's just a matter of branding: there are no differences between the retail and OEM drives other than an additional 64GB model for OEMs.
Crucial switches back to binary capacities in the M550, so with the 1TB model you actually get the full 1024GB of space (Crucial lists it as 1TB for marketing reasons, and there is still 1024GiB of actual NAND onboard). The reason behind this isn't a reduction in over-provisioning but merely a more efficient use of RAIN (Redundant Array of Independent NAND).

RAIN is similar to SandForce's RAISE: the idea is that you take some NAND space and dedicate it to parity. Almost every manufacturer does this at some level nowadays since NAND error and failure rates keep increasing as we move to smaller lithographies. When the M500 came out, the 128Gbit NAND was very new and Crucial/Micron wanted to play it safe, so they dedicated quite a bit of NAND to RAIN to make sure the brand new NAND wouldn't cause any reliability issues down the road. A lot happens in a year in terms of process maturity, and Crucial/Micron is now confident that it can offer the same level of endurance and reliability with less parity. The parity ratio is 127:1, meaning that for every 127 bits of data there is one parity bit. This roughly translates to 1GiB of NAND reserved for parity in the 128GB M550 and 2GiB, 4GiB, and 8GiB for the higher capacities.
Feature-wise the M550 adopts everything from the M500. There is TCG Opal 2.0 and IEEE-1667 support, which are the requirements for Microsoft's eDrive encryption. Along with that comes full power loss protection, thanks to capacitors that provide the necessary power to complete in-progress NAND writes in case of a sudden power loss.

Update: Micron just told us that in addition to the capacitors there is some NAND-level technology that makes the M550 even more robust against power losses. We don't have the details yet but you'll be the first to know once we get them.
NAND Configurations

| | 128GB | 256GB | 512GB | 1TB |
|---|---|---|---|---|
| Raw NAND Capacity | 128GiB | 256GiB | 512GiB | 1024GiB |
| RAIN Allocation | ~1GiB | ~2GiB | ~4GiB | ~8GiB |
| Over-Provisioning | 6.1% | 6.1% | 6.1% | 6.1% |
| Usable Capacity | 119.2GiB | 238.4GiB | 476.8GiB | 953.7GiB |
| # of NAND Packages | 16 | 16 | 16 | 16 |
| NAND Dies per Package (count × die size) | 1 × 8GiB | 2 × 8GiB | 2 × 16GiB | 4 × 16GiB |
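To make the accounting concrete, here is a quick sketch that reproduces the table's numbers from the 127:1 parity ratio and the advertised (decimal) capacities. This is my own reconstruction of the math, not Crucial's actual firmware accounting:

```python
GIB, GB = 1024**3, 1000**3

def m550_breakdown(raw_gib, advertised_gb):
    raw = raw_gib * GIB
    rain = raw / 128                 # 127:1 data-to-parity ratio -> 1/128 of raw NAND
    usable = advertised_gb * GB      # the user sees the decimal capacity
    spare = raw - rain - usable      # what's left over is over-provisioning
    print(f"{advertised_gb:>5}GB: RAIN ~{rain / GIB:.0f}GiB, "
          f"usable {usable / GIB:.1f}GiB, OP {spare / raw:.1%}")

for capacity in (128, 256, 512, 1024):
    m550_breakdown(capacity, capacity)

# e.g. 128GB: RAIN ~1GiB, usable 119.2GiB, OP 6.1% -- matching the table
```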
Test System

| Component | Details |
|---|---|
| CPU | Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled) |
| Motherboard | ASRock Z68 Pro3 |
| Chipset | Intel Z68 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24) |
| Video Card | Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective) |
| Video Drivers | NVIDIA GeForce 332.21 WHQL |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 7 x64 |
Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit
NAND Lesson: Why Die Capacity Matters
SSDs are basically just huge RAID arrays of NAND. A single NAND die isn't very fast but when you put a dozen or more of them in parallel, the performance adds up. Modern SSDs usually have between 8 and 64 NAND dies depending on the capacity and the rule of "the more, the better" applies here, at least to a certain degree. (Controllers are usually designed for a certain amount of NAND die, so too many dies can negatively impact performance because the controller has more pages/blocks to track and process.) But die parallelism is just a part of the big picture—it all starts inside the die.
Let's look inside the die itself. Each die is usually divided into two planes, which are further divided into blocks that are in turn divided into pages. In the early NAND days there were no planes, just blocks and pages, but as die capacities increased the manufacturers had to find a way to get more performance out of a single die. The solution was to divide the die into two planes, which can be read from or written to (nearly) simultaneously. Without planes you could only read or program one page per die at a time, but two-plane reading/programming allows two pages to be read or programmed at the same time.

The reason I said "nearly" is that programming NAND involves more than just the program time itself. There is latency from all the command, address, and data inputs. This is marginal compared to the program time, but with two-plane programming it doubles, because you still have to send all the necessary commands and addresses separately for both soon-to-be-programmed pages.

I did some rough calculations based on the data I have (though to be honest, it's probably not enough to make my calculations bulletproof) and it seems that the two-plane programming penalty is about 2% compared to two individual dies (i.e. it takes about 2% longer to program two pages with two-plane programming than with two separate dies). In other words, we can conclude that two-plane programming gives us roughly twice the throughput of one-plane programming.
"Okay," you're thinking, "that's fine and all, but what's the point of this? This isn't a new technology and has nothing to do with the M550!" Hold on, it'll make sense as you read further.
Case: M500
| | M550 128GB | M500 120GB |
|---|---|---|
| NAND Die Capacity | 64Gbit (8GB) | 128Gbit (16GB) |
| NAND Page Size | 16KB | 16KB |
| Sequential Write | 350MB/s | 130MB/s |
| 4KB Random Write | 75K IOPS | 35K IOPS |
The Crucial M500 was the first client SSD to utilize 128Gbit per die NAND. That allowed Crucial to go higher than 512GB without sacrificing performance but also meant a hit in performance at the smaller capacities. As mentioned many times before, the key to SSD performance is parallelism and when the die capacity doubles, the parallelism is cut in half. For the 120/128GB model this meant that instead of having sixteen dies like in the case of 64Gbit NAND, it only had eight 128Gbit dies.
It takes 1600µs to write 16KB (one page) to Micron's 128Gbit NAND. Convert that to throughput and you get 10MB/s per die. Well, that's the simple version and not exactly accurate. With eight dies, the total write throughput would be only 80MB/s, yet the 120GB M500 is rated at 130MB/s. The big picture is more than just the program time: in reality you have to take into account the interface latency as well as the gains from two-plane programming and cache mode (the command, address, and data latches are cached, so there is no need to wait for them between program operations).
[Diagram: example of cache programming]
As described above, two-plane programming gives us roughly twice the throughput of one-plane programming. Instead of writing one 16KB page in 1600µs, we are able to write two pages, 32KB of data in total, which doubles our throughput from 80MB/s to 160MB/s. There is some overhead from the commands, as the picture above shows, but thankfully today's interfaces are fast enough that it's only on the order of a few percent, so in the real world the usable throughput should be around 155MB/s. The 120GB M500 manages around 140MB/s in sequential writes, so 155MB/s of NAND write throughput sounds reasonable given that there is always some additional latency from channel and die switching. Program times are also averages that vary slightly from die to die, and it's possible that the set program times are actually slightly over 1600µs to make sure all dies meet the criteria.
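As a sanity check, here is the same chain of reasoning in a few lines of Python. The 2% two-plane overhead is my earlier rough estimate; actual program times and firmware behavior are not public:

```python
t_prog_ms = 1.6        # 1600 us to program one 16KB page (128Gbit die)
page_kb = 16
dies = 8               # 120GB M500: eight 128Gbit dies
plane_overhead = 1.02  # ~2% extra command/address latency for the second plane

one_plane = dies * page_kb / t_prog_ms                         # 80 MB/s
two_plane = dies * 2 * page_kb / (t_prog_ms * plane_overhead)  # ~157 MB/s

print(f"one-plane:  {one_plane:.0f} MB/s")
print(f"two-plane:  {two_plane:.0f} MB/s")   # vs. ~140 MB/s measured
```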
Case: M550
While the M500 used solely 128Gbit NAND, Crucial is bringing back the 64Gbit die for the 128GB and 256GB M550s. The switch means twice the number of dies and, as we've now learned, that means twice the performance. This is actually Micron's second generation 64Gbit 20nm NAND, with a 16KB page size similar to the 128Gbit part. The larger page size is required for write throughput (about a 60% increase over an 8KB page) but it adds complexity to garbage collection and, if not implemented efficiently, can increase write amplification (and hence lower endurance).

Micron wouldn't disclose the program time for this part but I suspect there is some improvement over the original 128Gbit part. As process nodes mature, you're usually able to squeeze a little more performance (and endurance) out of the same chip, and I believe that is what's happening here. To get ~370MB/s out of the 128GB M550, the program time would have to be around 1300-1400µs to be in line with the performance. It's certainly possible that there's something else going on (better channel switching management, for instance) but it's clear that Crucial/Micron has been able to better optimize the NAND in the M550.
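Inverting the same model gives the program time that the 128GB M550's sixteen 64Gbit dies would need to reach ~370MB/s; it lands right in that 1300-1400µs window (again, an estimate built on the assumptions above):

```python
target_mb_s = 370            # observed sequential write of the 128GB M550
dies, planes, page_kb = 16, 2, 16

# throughput = dies * planes * page_size / t_prog  ->  solve for t_prog
t_prog_us = dies * planes * page_kb / target_mb_s * 1000
print(f"implied program time: ~{t_prog_us:.0f} us")  # ~1384 us
```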
The point here was to give an idea of where the NAND performance comes from and why there is such a dramatic difference between the M550 and M500. Ultimately, the exact NAND performance characteristics are something the manufacturers won't disclose, so the figures here may not be accurate, but they should at least give a rough idea of what is happening at the low level.
Performance Consistency
Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.
To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
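To make the methodology concrete, here's a heavily simplified sketch of that workload in Python. The real runs use Iometer at QD32 against the secure-erased drive's full LBA span with caches out of the picture; this single-threaded stand-in writes to an ordinary 8GiB file (the filename is hypothetical) and only illustrates the shape of the test:

```python
import os
import random
import time

BLOCK = 4096                  # 4KB random writes
SPAN = 8 * 1024**3            # 8GiB file standing in for the drive's full LBA span
data = os.urandom(BLOCK)      # incompressible data, as in the real test

iops_log = []                 # one instantaneous IOPS sample per second
with open("testfile.bin", "w+b") as f:         # hypothetical scratch file
    f.truncate(SPAN)          # sparse on most filesystems
    ios, deadline = 0, time.time() + 1
    while len(iops_log) < 60:                  # run for 60 seconds
        f.seek(random.randrange(0, SPAN, BLOCK))   # 4KB-aligned random offset
        f.write(data)
        ios += 1
        if time.time() >= deadline:
            iops_log.append(ios)
            ios, deadline = 0, time.time() + 1

print(iops_log)
```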
Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
For more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.
[Interactive graph: IO consistency over the full test, default and 25% spare area; comparison drives: Crucial M550, Crucial M500, Intel SSD 730, SanDisk Extreme II, Samsung SSD 840 EVO]
I can't say I'm very pleased with the IO consistency of the M550. There is a moderate increase in steady-state performance (~4K IOPS vs ~2.5K in the M500) but other than that there isn't much good to say. All the other higher-end drives run circles around the M550. I should note that the M550 does have considerably less over-provisioning than the other drives, but even at 25% OP the results aren't pretty. There is huge variation in performance and the graphs with additional spare area certainly look abnormal: the IOPS is still mostly below 5,000, with peaks of over 50K. Personally, I would prefer a steady line (like the SSD 730's) over this constant up and down. In client workloads the variation in IOPS isn't as critical as in the enterprise (where predictable performance is a must), but there can be an impact on performance in IO intensive scenarios.
[Interactive graph: IO consistency at the onset of steady state, log scale; same drive and spare area options as above]

[Interactive graph: IO consistency at the onset of steady state, linear scale; same drive and spare area options as above]
TRIM Validation
To test TRIM, I took a secure erased drive and filled it with sequential data. Then I tortured the drive with 4KB random writes (QD32) for 30 minutes, followed by a TRIM command (a quick format in Windows). Finally, I measured sequential write performance with HD Tach.

And as you should expect, TRIM works.
AnandTech Storage Bench 2013
Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based: we record all IO requests made to a test system, play them back on the drive we're testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. As some of you have asked, I'm not including the full description of the test here for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.
AnandTech Storage Bench 2013 - The Destroyer

| Workload | Description | Applications Used |
|---|---|---|
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, back up system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |
We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time it was running the test workload, which can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account the response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weight latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now; with the client tests maturing, the time was right for a little convergence.
The relatively poor IO consistency shows up in our Storage Bench 2013 as well. We are looking at roughly Samsung SSD 840 EVO level performance here (when taking the difference in over-provisioning into account), which is certainly a step up from the M500, but in the end the M550 is just a mediocre performer.
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
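Since the results below are reported in MB/s while the spec tables quote IOPS, it helps to remember that with a fixed 4KB transfer size the two are interchangeable. A quick sketch of the conversion (using the M550's rated 90K random read as the example):

```python
# With a fixed transfer size, IOPS and MB/s are two views of the same
# number: throughput = IOPS x transfer size.
def iops_to_mb_s(iops, io_kb=4):
    return iops * io_kb / 1000       # decimal megabytes, as specs are quoted

print(iops_to_mb_s(90_000))          # rated 90K random read -> 360.0 MB/s
```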
Random performance is strong when dealing with an empty drive but, as the two previous pages show, the big picture isn't as pleasant. The difference between 64Gbit and 128Gbit NAND is very clear here, as the M550 is up to twice as fast as the M500 at the smaller capacities.
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
Sequential speeds are up quite a bit from the M500 as well but the read performance is still a bit lacking.
AS-SSD Incompressible Sequential Read/Write Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers. Again, the M550 shows a decent improvement over the M500, particularly at the lower capacities.
Performance vs. Transfer Size
ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. Both read and write performance are up from the M500 but the read isn't very good in general—and unlike the Intel SSD 730, you don't have the benefit of consistent performance to make up for this particular shortcoming.
AnandTech Storage Bench 2011
Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). We've included a large amount of email downloading, document creation and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test. The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 28% |
| 16KB | 10% |
| 32KB | 10% |
| 64KB | 4% |
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1. The full description of the test can be found here.
AnandTech Storage Bench 2011 - Heavy Workload
AnandTech Storage Bench 2011 - Light Workload
Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric). There's lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers.
AnandTech Storage Bench 2011 - Light Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 27% |
| 16KB | 8% |
| 32KB | 6% |
| 64KB | 5% |
Even with our older (generally less demanding) workloads, the M550, like the M500, doesn't really stack up all that well against the top performers. Provided the pricing is right we can overlook a lot of this, but if you're after top performance there are definitely better SSDs.
Power Consumption
The M550 has an excellent power profile. It comes in as the most efficient drive with low power states (HIPM+DIPM) enabled and is able to beat Samsung's SSDs, which have usually been the leaders on this front. It seems the switch to the Marvell 9189 controller and LPDDR has really paid off here, since power consumption in the low-power states is less than half of the M500's. Load power consumption is also decent: we are looking at the normal 3-4W range for both sequential and random writes.
Final Words
There are many reasons why the M550 could be one of the best SSDs on the market. It has best-in-class encryption support (along with the Samsung 840 EVO) and it's also one of the only consumer-grade drives with power loss protection. Heck, it even supports DevSleep to enable low-power states in mobile platforms. Basically, it has all the bells and whistles one could hope for in a client drive. But there is one big "but": the performance. The M550 is supposed to be Crucial's high performance offering, yet compared to other high-end drives on the market its performance is average at best. It's an upgrade over the M500, that's for sure, but that's not enough to make it to the medal podium.
The M550's biggest Achilles' heel is its performance consistency. Given that consistency has been the focus of other manufacturers for the last year or so, it seems odd that Crucial hasn't done much to improve in this area. It's again better than the M500's consistency, but compared to what SanDisk has been able to do with the same controller, the M550 doesn't impress. The potential saving grace would be pricing, so let's look there.
NewEgg Price Comparison (3/17/2014)

| | 120/128GB | 240/256GB | 480/512GB | 960GB/1TB |
|---|---|---|---|---|
| Crucial M550 (MSRP) | $100 | $169 | $337 | $531 |
| Crucial M500 | $75 | $120 | $275 | $440 |
| Intel SSD 730 | - | $240 | $450 | - |
| Intel SSD 530 | $115 | $180 | $399 | - |
| OCZ Vector 150 | $138 | $190 | $390 | - |
| OCZ Vertex 460 | $100 | $185 | $360 | - |
| Samsung SSD 840 EVO | $95 | $160 | $265 | $554 |
| Samsung SSD 840 Pro | $119 | $208 | $420 | - |
| SanDisk Extreme II | $121 | $250 | $500 | - |
| Seagate SSD 600 | $105 | $170 | $380 | - |
The positive side is that pricing is extremely competitive. The M500 already undercuts every other SSD in our comparison and the M550 comes in a close second, and we expect street pricing to be lower than the MSRPs we've listed. The 840 EVO and Seagate SSD 600 are the only drives that beat the M550 on price, and only at specific capacities (512GB) and thanks to the large sales currently going on. If Crucial is able to keep pricing this competitive, other OEMs will definitely have a hard time competing with the M500 and M550.
All in all, I'm not sure how I should feel about the M550. On the one hand it feels a bit redundant to release a "high performance" drive that in reality is only average, but on the other hand, does it really matter if the price is right? I think not, but my concern is whether the M550 is fast enough to justify the added cost over the M500.
If you're a light user and price is the key purchase factor, then the M500 suffices and saves you money. However, if you're a power user and want performance, then it's better to look for the SanDisk Extreme II or Seagate SSD 600, or grab the Samsung 840 EVO 500GB on sale. The M550 kind of falls in between the two user groups and I'm not sure if there's any significant market there. For people who are not entirely sure whether the M500 is fast enough for their needs, the M550 is certainly a good and safe choice but I would have liked to see something competitive with the SanDisk Extreme II instead, even if the result was higher cost. It's not fast enough to close the gap, so the result ends up being a rehash of what we've already seen.