Original Link: https://www.anandtech.com/show/8528/micron-m600-128gb-256gb-1tb-ssd-review-nda-placeholder



Those who have been following the SSD industry for a couple of years are likely aware that Micron does not sell retail drives under its own brand (unlike, for example, Samsung and Intel). Instead Micron has two subsidiaries, Crucial and Lexar, whose sole purpose is handling retail sales. The Crucial side handles RAM and SSD sales, whereas Lexar is focused on memory cards and USB flash drives. That leaves the Micron crew with business-to-business sales, which consist of OEM sales as well as direct sales to some large corporations.

In the past the difference between Micron and Crucial branded SSDs has merely been the label and packaging. The M500 and M550 were the same for both brands, and the Micron version of the Crucial m4 was simply called the C400. However, that strategy came to an end with the MX100, which was a retail (i.e. Crucial) only product. Micron decided to separate the product planning of Micron and Crucial drives, and the branding was also made clearer: the M lineup is now strictly Micron, while Crucial drives will use the MX branding that was introduced with the MX100.

The main reason for the separation was the distinctive needs of the two markets. Products that are aimed towards OEMs tend to require longer validation cycles because of stricter quality requirements (a bad drive in a laptop will hurt the laptop's brand, not just the drive's brand) and PC OEMs generally do their own validation as well, which further increases the overall validation time. The retail market, on the other hand, is more focused around being the first and providing the best value, which frankly does not play very well with the long OEM validation cycles.

Since the Crucial MX100 was strictly for retail, the Micron M600 will in turn be OEM-only. The two have a lot in common as both are based on Micron's latest 128Gbit 16nm NAND, but unlike previous releases the two are not identical. The M600 comes in a variety of form factors and also includes a 1TB model, whereas the MX100 is 2.5" only and tops out at 512GB. It makes sense to offer more form factors for the OEM market because the PC laptop industry is moving more and more towards M.2 for space savings, but on the other hand the retail SSD market is still mostly 2.5" because people are upgrading older laptops and desktops that do not have mSATA or M.2 slots.

Micron M600 Specifications
Capacity                      | 128GB       | 256GB       | 512GB       | 1TB
Controller                    | Marvell 88SS9189
NAND                          | Micron 128Gbit 16nm MLC
Form Factors                  | 2.5" 7mm, mSATA & M.2 2260/2280 (128GB-512GB); 2.5" 7mm only (1TB)
Sequential Read               | 560MB/s     | 560MB/s     | 560MB/s     | 560MB/s
Sequential Write              | 400MB/s     | 510MB/s     | 510MB/s     | 510MB/s
4KB Random Read               | 90K IOPS    | 100K IOPS   | 100K IOPS   | 100K IOPS
4KB Random Write              | 88K IOPS    | 88K IOPS    | 88K IOPS    | 88K IOPS
Idle Power (DevSleep/Slumber) | 2mW / 95mW  | 2mW / 100mW | 2mW / 100mW | 3mW / 100mW
Max Power                     | 3.6W        | 4.4W        | 4.7W        | 5.2W
Encryption                    | TCG Opal 2.0 & eDrive
Endurance                     | 100TB       | 200TB       | 300TB       | 400TB
Warranty                      | Three years

Aside from the form factors and capacities, the M600 also brings something new and concrete. As I mentioned in the launch article, the M600 introduces pseudo-SLC caching to Micron's client SSDs, which Micron calls Dynamic Write Acceleration. I will talk a bit more about it in just a second but as an overview, Dynamic Write Acceleration increases the write performance at smaller capacities and also allows for higher endurance. As a result even the 128GB model is capable of 400MB/s sequential write and 88K IOPS random write and the endurance sees a boost from 72TB to 100TB despite the use of smaller lithography and higher capacity 128Gbit 16nm MLC. The endurance also scales with capacity and although the scaling is not linear, the 1TB model is rated at 400TB of host writes under typical client workloads.
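To put those ratings in perspective, here is a quick back-of-the-envelope conversion of the TBW figures from the table above into drive writes per day over the three-year warranty. This is the standard formula, not Micron's own rating methodology:

```python
# Quick sanity check: convert the TBW endurance ratings above into drive
# writes per day (DWPD) over the three-year warranty period.
WARRANTY_DAYS = 3 * 365

endurance_tbw = {128: 100, 256: 200, 512: 300, 1024: 400}  # capacity (GB): rated TBW

for capacity_gb, tbw in endurance_tbw.items():
    dwpd = (tbw * 1000) / (capacity_gb * WARRANTY_DAYS)
    print(f"{capacity_gb:>4}GB: {tbw}TB rating = ~{dwpd:.2f} drive writes per day")
```

Even the 128GB model works out to roughly 0.7 drive writes per day, which is plenty for typical client usage.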

Hardware encryption in the form of TCG Opal 2.0 and eDrive support has been a standard feature in Micron/Crucial client SSDs for a while now and the M600 continues that trend. Currently the M600 has not been certified by any third party software vendors (e.g. Wave and WinMagic), but Micron told us that the M600 is in the validation process as we speak and an official certification will come soon. The M600 lineup also has SKUs without any encryption support for regions where encryption is controlled by the government.

Dynamic Write Acceleration

During the past year there has been an industry-wide trend to include a pseudo-SLC cache in client-grade SSDs. The driving force behind the trend has been the transition to TLC NAND as an SLC cache is used to compensate for the slower performance and lower endurance of TLC NAND. Despite sticking with MLC NAND, the M600 has also added a pseudo-SLC feature.

Micron's implementation differs from the others in the sense that the size of the SLC cache is dynamic. While Samsung's and SanDisk's SLC caches have a fixed capacity, the M600's SLC cache size is determined by how full the drive is. With an empty drive, almost all blocks will be run in SLC mode and as the drive is filled the cache size decreases. Micron claims that even when the drive is 90% full, Dynamic Write Acceleration offers higher acceleration capacity than competing technologies, although unfortunately Micron did not share any actual numbers with us.
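Micron has not disclosed the exact sizing algorithm, but the general idea can be sketched with a toy model in which blocks holding no user data are run in SLC mode (one bit per cell, so half the MLC capacity) and the cache shrinks as user data piles up. The 45% starting point and the linear scaling below are my own assumptions for illustration only:

```python
def slc_cache_size_gb(capacity_gb, used_gb, slc_fraction_when_empty=0.45):
    """Toy model of a dynamic SLC cache: blocks without user data can run in
    SLC mode, storing one bit per cell (half the MLC density), so the cache
    shrinks as the drive fills up. The parameters are illustrative guesses."""
    free_fraction = max(0.0, 1.0 - used_gb / capacity_gb)
    return capacity_gb * slc_fraction_when_empty * free_fraction / 2

for used_gb in (0, 64, 128, 192, 230):
    cache = slc_cache_size_gb(256, used_gb)
    print(f"256GB drive with {used_gb}GB of user data: ~{cache:.1f}GB SLC cache")
```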

At the wafer level, the 128Gbit 16nm die is the same as in the MX100, but some proprietary processes applied to the die before packaging enable the use of Dynamic Write Acceleration. The SLC cache is not fixed to any specific location, as the firmware and the special NAND allow any block to be run in either SLC or MLC mode; this is different from SanDisk's nCache 2.0, where every NAND die has a portion of its blocks set in SLC mode. As a result Dynamic Write Acceleration will move data from die to die when transferring data from SLC to MLC, so there is a bit more controller and NAND interface overhead compared to SanDisk's implementation. There is no predefined threshold for when data starts to be moved from SLC to MLC as that depends on a variety of factors, but Micron did say that idle time will trigger the migration (Micron's definition of idle time is anything over 50ms).
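The behavior Micron describes (absorb host writes in SLC-mode blocks, then fold the data into MLC blocks, possibly on a different die, once the host has been idle for more than roughly 50ms) can be outlined with a simple model. This is purely conceptual; the real logic obviously lives in the controller firmware:

```python
import time

IDLE_TRIGGER_S = 0.050   # Micron's stated idle threshold: anything over ~50ms

class DwaModel:
    """Conceptual model of Dynamic Write Acceleration's fold-on-idle behavior."""
    def __init__(self):
        self.slc_data = []               # data sitting in SLC-mode blocks
        self.mlc_data = []               # data folded into MLC-mode blocks
        self.last_host_io = time.monotonic()

    def host_write(self, chunk):
        self.slc_data.append(chunk)      # fast path: absorb the write in SLC
        self.last_host_io = time.monotonic()

    def background_tick(self):
        # Idle long enough? Migrate SLC data to MLC. On the M600 this can
        # land on a different die, which is the source of the extra
        # controller and NAND-interface overhead mentioned above.
        if time.monotonic() - self.last_host_io > IDLE_TRIGGER_S:
            while self.slc_data:
                self.mlc_data.append(self.slc_data.pop(0))
```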


I ran HD Tach to see how performance is affected when writing sequentially across all LBAs. There are clearly three stages. At first, writes go to SLC and performance holds steady at 400MB/s; it seems that an empty drive runs about 45% of its NAND in SLC mode. The first drop indicates that the drive has switched to writing MLC NAND directly, hence the big difference between the 128GB and 256GB models. The second drop means the drive is now moving data from SLC to MLC at the same time as it is taking in host writes, and the overhead from those internal IOs pushes host write speeds below 100MB/s.

Note that Dynamic Write Acceleration is only enabled at the 128GB and 256GB capacities (the mSATA and M.2 versions have it enabled at 512GB as well), so on the 512GB and 1TB 2.5" models write performance will be the same across all LBAs. The larger capacities have enough NAND for high parallelism, so DWA would not really bring any performance improvement with SATA 6Gbps being the bottleneck.

Obviously, in real world client scenarios it is unlikely that one would write to the drive continuously like HD Tach does. Typical write bursts are no more than a few hundred megabytes at most, so all writes should hit the SLC portion at full speed. We will see how Dynamic Write Acceleration does in more realistic scenarios in a moment.

The Truth About Micron's Power-Loss Protection

I want to begin by saying that I do not like calling out companies' marketing. I believe marketing should always be taken with a grain of salt, and it is glorified marketing that creates a niche for sites like ours; if companies were truly honest and thorough in their marketing materials, you would not really need us because you could compare products by looking at the results the manufacturers publish. Hence I do not usually spend much time pointing out that this or that feature is just a buzzword, because I think that is fine as long as nothing is clearly misleading.

But there is a limit. If a company manages to "fool" me with their marketing, then I think there is something seriously wrong. The case in point is Micron's/Crucial's client SSDs and their power-loss protection. Back when the M500 was launched a bit over a year ago, Micron introduced an array of capacitors to its client SSD and included power-loss protection as a feature in the marketing material.

I obviously assumed that the power-loss protection would provide the same level of protection as in the enterprise SSDs (i.e. all data that has entered the drive would be protected in case of a sudden power-loss) as Micron did not make any distinction between power-loss protection in client drives and power-loss protection in enterprise drives.

M500DC on the left, MX100 in the middle & M600 on the right

I got the first hint when I reviewed the M500DC. It had much larger and more expensive tantalum capacitors, as the photo above shows, whereas the client drives had tiny ceramic capacitors. I figured that there must be some difference, but Micron did not really go into the details when I asked during the M500DC briefing, so I continued to believe that the client drives had power-loss protection as well, just not as robust as in the enterprise drives.

In the MX100 review, I was still under the impression that there was full power-loss protection in the drive, but my impression was wrong. The client-level implementation only guarantees that data-at-rest is protected, meaning that any in-flight data will be lost, including the user data in the DRAM buffer. In other words the M500, M550 and MX100 do not have power-loss protection -- what they have is circuitry that protects against corruption of existing data in the case of a power-loss.

So, what are the capacitors there for then? To understand the technical details, I need to introduce some new terminology. Lower and upper pages stand for the two bits that are programmed to each cell in MLC NAND. Basically, lower page is the first bit and upper page is the second -- and with TLC there would be a middle page as well. Do not confuse lower and upper page terms with the typical 'page' term, though, which is used to refer to the smallest amount of data that must be written at once (usually 16KB with the currently available NAND).

MLC NAND programming is done in two steps by programming the lower and upper pages separately, where lower page programming is essentially like SLC NAND program. If the bit of the lower page is '1', then the cell will remain empty (i.e. no charge stored). But if it is '0', then the threshold voltage is increased until it reaches the '0' state.

Once the lower page has been programmed, the upper page, i.e. the second bit, can be programmed to the cell. This is done by fine tuning the cell voltage to the required state because the bit of the lower page already limits the possible bit outputs to two. In other words, if the lower bit is '0', then the possible bit outputs can be either '01' or '00'. Given that the cell is already charged to the '0' state during lower page programming, only a minor change is needed to tune the cell voltage to either the '01' or '00' state.

The bit of the lower page being '1' does not really change anything either. If the upper page is '1' too, then the cell will remain in erased state and the bit output will be '11'. If the upper page is '0', then the cell will be charged to '10' state. Both the '1' and '0' states are more like intermediate states because the upper page will define the final cell voltage and I suspect that pseudo-SLC works by only programming the lower pages.
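To make the two-step sequence more concrete, here is a small sketch that mirrors the description above. The voltage levels are abstract (0 = erased, 3 = highest) and the exact state coding in Micron's 16nm NAND may well differ:

```python
def program_lower_page(lower_bit):
    """Step 1: SLC-like lower page program. A '1' leaves the cell erased,
    a '0' charges it to the intermediate '0' state."""
    return 0 if lower_bit == 1 else 2

def program_upper_page(lower_bit, upper_bit):
    """Step 2: fine-tune the cell to one of the four final states."""
    final_level = {
        (1, 1): 0,   # '11' -> cell stays erased
        (1, 0): 1,   # '10' -> charged up from the erased state
        (0, 1): 2,   # '01' -> only a minor change from the intermediate state
        (0, 0): 3,   # '00'
    }
    return final_level[(lower_bit, upper_bit)]

for lower in (1, 0):
    intermediate = program_lower_page(lower)
    for upper in (1, 0):
        final = program_upper_page(lower, upper)
        print(f"lower={lower}, upper={upper}: level {intermediate} -> level {final}")
```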

The reason why MLC programming is done in two steps is to reduce floating gate coupling i.e. cell-to-cell interference. I covered cell-to-cell interference in more detail in our Samsung SSD 850 Pro review, but in a nutshell the neighboring cells introduce capacitive coupling to each other and the strength of this coupling depends on the charge of the cell. Going from empty to '00' state would present such a large change in the coupling that the state of the neighboring cells might change, which in turn would corrupt the bit value.

The graph above should give you a general idea of the programming algorithm. Simply put, the lower pages of neighbor cells are programmed first because lower pages have larger voltage distributions as shown above, which means that when the upper page of a neighbor cell is programmed the change in coupling is not significant enough to alter the state of the lower page.

It is time to connect this with Micron's power-loss protection. I hope everyone is still following me as that was a long and technical explanation, but I wanted to be thorough to ensure that what I say next makes sense. If power is lost during upper page program, the bit associated with the lower page of that cell may be lost. Because lower and upper page programs are not necessarily sequential, the data in the lower page might have been written earlier and would be considered data-at-rest. Thus a sudden power-loss might corrupt old data and the function of the capacitors is to ensure that the lower page data is safe. Any ongoing upper page program will not be completed and obviously all data in the DRAM buffer will be lost, but old data will be safe.
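A minimal sketch of that failure mode: the lower page was written (and acknowledged to the host) long before the upper page program that gets interrupted, which is exactly why it counts as data-at-rest. The model below is conceptual, not an implementation of Micron's firmware:

```python
class MlcCell:
    """Conceptual model of lower page corruption on sudden power loss."""
    def __init__(self):
        self.lower = None    # becomes data-at-rest once programmed
        self.upper = None

    def program_lower(self, bit):
        self.lower = bit     # written and acknowledged some time ago

    def program_upper(self, bit, power_loss, capacitor_backed):
        if not power_loss:
            self.upper = bit
            return
        # Power lost mid-program: the new upper page data (and anything in
        # the DRAM buffer) is gone regardless. Without hold-up capacitors the
        # interrupted program can also leave the old lower page unreadable.
        self.upper = None
        if not capacitor_backed:
            self.lower = None

cell = MlcCell()
cell.program_lower(0)        # old data, written well before the power cut
cell.program_upper(1, power_loss=True, capacitor_backed=True)
print("lower page after power loss:", cell.lower)   # 0 -> data-at-rest survived
```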

I want to apologize for spreading misinformation. I know some readers have opted for the MX100 and other Micron/Crucial drives because we said the drives feature full power-loss protection, but I hope this is not a deal-breaker for anyone. It was not Micron's or our intention to "fool" anyone into believing that the client drives have full power-loss protection, and after talking to Micron we reached an agreement that the marketing material needs to be revised to make it clear that only data-at-rest protection is guaranteed.

On the positive side, what Micron/Crucial is doing is still something that the others are not. I have not seen any other client SSDs with capacitors to protect against lower page corruption, although there may be alternative methods to work around the issue (e.g. ECC). Anyway, I did not want this to come out as too negative because the capacitors still provide vital protection against data corruption -- there was just a gap between our and Micron's comprehension that led to some misunderstandings, but that gap no longer exists.

Test Systems

For AnandTech Storage Benches, performance consistency, random and sequential performance, performance vs transfer size and load power consumption we use the following system:

CPU                | Intel Core i5-2500K running at 3.3GHz (Turbo & EIST enabled)
Motherboard        | ASRock Z68 Pro3
Chipset            | Intel Z68
Chipset Drivers    | Intel 9.1.1.1015 + Intel RST 10.2
Memory             | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card         | Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers      | NVIDIA GeForce 332.21 WHQL
Desktop Resolution | 1920 x 1080
OS                 | Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

For slumber power testing we used a different system:

CPU                | Intel Core i7-4770K running at 3.3GHz (Turbo & EIST enabled, C-states disabled)
Motherboard        | ASUS Z87 Deluxe (BIOS 1707)
Chipset            | Intel Z87
Chipset Drivers    | Intel 9.4.0.1026 + Intel RST 12.9
Memory             | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics           | Intel HD Graphics 4600
Graphics Drivers   | 15.33.8.64.3345
Desktop Resolution | 1920 x 1080
OS                 | Windows 7 x64


Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
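As a rough illustration of the kind of post-processing involved, the sketch below turns a per-second IOPS log into an overall average plus the average, minimum, and variability over the steady-state window (t=1400s onwards, matching the graphs below). It is a simplified example, not our exact analysis:

```python
import statistics

def summarize_consistency(iops_per_second, steady_state_start=1400):
    """Summarize a per-second IOPS log from a sustained 4KB QD32 random write run."""
    steady = iops_per_second[steady_state_start:]
    return {
        "overall_avg_iops": statistics.mean(iops_per_second),
        "steady_avg_iops": statistics.mean(steady),
        "steady_min_iops": min(steady),
        "steady_stdev_iops": statistics.pstdev(steady),
    }

# Hypothetical log: a high burst phase followed by a lower, noisier steady state.
log = [80000] * 600 + [15000, 9000, 14000, 10000] * 350
print(summarize_consistency(log))
```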

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale for better visualization of the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[IO consistency graph #1: full test duration, log scale -- Micron M600 256GB, default / 25% over-provisioning selectable]

The 1TB M600 actually performs significantly worse than the 256GB model, which is most likely due to the tracking overhead that the increased capacity causes (more pages to track). Overall IO consistency has not really changed from the MX100, as Dynamic Write Acceleration only affects burst performance; I suspect the firmware architectures for sustained performance are similar between the MX100 and M600, although with added over-provisioning the M600 is a bit more consistent.

[IO consistency graph #2: steady state from t=1400s, log scale -- Micron M600 256GB, default / 25% over-provisioning selectable]

[IO consistency graph #3: steady state from t=1400s, linear scale -- Micron M600 256GB, default / 25% over-provisioning selectable]

TRIM Validation

To test TRIM, I filled the 128GB M600 with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

It appears that TRIM does not fully recover the SLC cache as the acceleration capacity seems to be only ~7GB. I suspect that giving the drive some idle time would do the trick because it might take a couple of minutes (or more) for the internal garbage collection to finish after issuing a TRIM command.



AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based: we record all IO requests made to a test system, play them back on the drive we are testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. I am not including the full description of the test here for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.

AnandTech Storage Bench 2013 - The Destroyer
Workload                | Description                                                                                                                   | Applications Used
Photo Sync/Editing      | Import images, edit, export                                                                                                   | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming                  | Download/install games, play games                                                                                            | Steam, Deus Ex, Skyrim, Starcraft 2, Bioshock Infinite
Virtualization          | Run/manage VM, use general apps inside VM                                                                                     | VirtualBox
General Productivity    | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan    | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware
Video Playback          | Copy and watch movies                                                                                                         | Windows 8
Application Development | Compile projects, check out code, download code samples                                                                       | Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
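Both metrics fall straight out of the per-IO records generated during the trace playback. Here is a minimal sketch with hypothetical record fields, not our actual analysis code:

```python
def destroyer_metrics(io_records, elapsed_time_s):
    """Compute average data rate (MB/s) and average service time (us) from
    per-IO playback records. io_records holds (bytes_transferred,
    service_time_us) tuples; the field layout is hypothetical, but the math
    matches the two metrics described above."""
    total_bytes = sum(size for size, _ in io_records)
    avg_data_rate_mbs = (total_bytes / 1e6) / elapsed_time_s
    avg_service_time_us = sum(t for _, t in io_records) / len(io_records)
    return avg_data_rate_mbs, avg_service_time_us

# Tiny made-up example: two 128KB reads and one 4KB write over half a second.
records = [(131072, 400), (131072, 450), (4096, 90)]
print(destroyer_metrics(records, elapsed_time_s=0.5))
```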

Storage Bench 2013 - The Destroyer (Data Rate)

Wow, this actually looks pretty bad. The 256GB M600 is slower than the 256GB MX100, likely because under sustained workloads the M600 has to transfer data from SLC to MLC at the same time as it takes in host IOs, so performance drops due to the internal IO overhead. The 1TB drive does better thanks to higher parallelism, but even then the M550 and 840 EVO are faster.

Storage Bench 2013 - The Destroyer (Service Time)



AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 – Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

Dynamic Write Acceleration starts to show its strength in our 2011 Benches. Since the Heavy and Light suites are run on an empty drive, the M600 can take full advantage of its dynamic SLC cache and as a result the M600 is faster than the MX100, although not substantially so. It seems that Samsung's TurboWrite is better optimized for light client workloads, though, as in our Light suite the EVO is faster even at the smallest capacities. That said, the difference is quite negligible, but it is interesting nevertheless.

Light Workload 2011 - Average Data Rate



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
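Since the charts report MB/s rather than IOPS, converting between the two for a fixed 4KB transfer size is a single multiplication:

```python
def iops_to_mbs(iops, block_bytes=4096):
    """Convert an IOPS figure into MB/s for a fixed transfer size (4KB here)."""
    return iops * block_bytes / 1e6

# e.g. the M600's rated 88K random write IOPS works out to roughly 360MB/s
print(iops_to_mbs(88_000))
```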

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

Random performance remains more or less unchanged from the MX100 and M550. Micron has always done well in random performance as long as the IOs are of a bursty nature, but Micron's performance consistency under sustained workloads has never been top notch.

Sequential Read/Write Speed

To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Sequential write performance sees a minor increase at smaller capacities thanks to Dynamic Write Acceleration, but aside from that there is nothing surprising in sequential performance.

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers, but most other controllers are unaffected.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance




Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench and I have included separate graphs for each capacity to make the graphs more readable. Write performance sees a boost across all transfer sizes thanks to Dynamic Write Acceleration, but read performance at smaller IO sizes leaves a lot to be desired as there has not been any improvement since the M550/MX100.



Power Consumption

The M600 brings a fairly significant reduction in slumber power consumption compared to the MX100. I believe OEMs are more demanding when it comes to power because the majority of SSDs are shipped in mobile systems. While the SSD only accounts for a small share of total system power consumption, it is still important to save power wherever possible to maximize battery life.

SSD Slumber Power (HIPM+DIPM) - 5V Rail

Load power consumption is also good, and it seems that Dynamic Write Acceleration helps a bit here. Writing to pseudo-SLC NAND is more efficient because fewer program iterations and verification cycles are needed compared to MLC or TLC, which brings power consumption down during write operations.
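One way to quantify that efficiency is energy per gigabyte written, i.e. power draw divided by write throughput. The quick estimate below uses the specification-sheet numbers from the first page as inputs; the measured figures in the charts will of course differ:

```python
def joules_per_gb_written(power_w, write_mb_s):
    """Energy needed to write 1GB at a given power draw and throughput."""
    seconds_per_gb = 1000 / write_mb_s    # 1GB = 1000MB
    return power_w * seconds_per_gb

# Spec-sheet inputs for the 256GB M600: 4.4W max power, 510MB/s sequential write.
print(f"~{joules_per_gb_written(4.4, 510):.1f}J per GB written")
```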

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

My thoughts about the M600 are mixed. On the one hand I am happy to see that Micron is showing its commitment to the client market by investing in features like Dynamic Write Acceleration because, to be honest, Micron has not really introduced anything new to its client SSDs since the M500. Innovation in the client segment is difficult because the market is so cost driven, and even though a pseudo-SLC cache is nothing new, Micron's way of implementing it is.

On the other hand, I am a bit disappointed by the performance of the M600 and especially Dynamic Write Acceleration. In theory Dynamic Write Acceleration sounds great because it should provide the maximum acceleration capacity under every circumstance and thus maximize performance, but the truth is that the speed improvements over the MX100 are minimal. Add to that the fact that the M600 is actually outperformed by the 840 EVO, which utilizes TLC NAND with smaller SLC caches. It is not like the M600 is a slow or bad drive, not at all; it is just that I expected a bit more given the combination of MLC NAND and dynamic SLC cache.

The positive side of Dynamic Write Acceleration is the increased endurance. While 72TB was without a doubt enough for average client workloads, it is never a bad thing to have more. OEMs in particular tend to appreciate higher endurance because it is associated with higher reliability, and it also opens up a wider market for the M600, as the drive can be used in workstation setups without having to worry about wearing out. Of course, I would pick a faster drive like the 850 Pro for workstation use, but for OEMs cost tends to be more important.

NewEgg Price Comparison (9/28/2014)
                    | 120/128GB | 240/256GB | 480/512GB | 960GB/1TB
Micron M600         | $80       | $140      | $260      | $450
Crucial MX100       | $75       | $112      | $213      | -
Crucial M550        | $90       | $150      | $272      | $480
SanDisk Ultra II    | $80       | $110      | $200      | $433
SanDisk Extreme Pro | -         | $190      | $370      | $590
Samsung SSD 850 Pro | $120      | $200      | $380      | $700
Samsung SSD 840 EVO | $82       | $140      | $236      | $500
OCZ ARC 100         | $75       | $120      | $240      | -
Plextor M6S         | $75       | $135      | $280      | -
Intel SSD 530       | $84       | $140      | $250      | -

Since the M600 is an OEM-only product, it will not be available through the usual retail channels. Pricing will depend highly on the quantity ordered, so the prices in the table above are just approximate single-unit prices that Micron provided us. The M600 enjoys a price premium over the MX100, but I suspect that in high volumes M600 pricing should drop close to MX100 levels, perhaps even lower.
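Normalizing the launch prices to cost per gigabyte makes the comparison a little easier. A quick calculation for the 240/256GB class, using the prices from the table above (the non-Micron capacities are the usual class sizes for those drives):

```python
# Price per gigabyte for the 240/256GB class, using the prices listed above.
prices_256gb_class = {
    "Micron M600":         (140, 256),
    "Crucial MX100":       (112, 256),
    "SanDisk Ultra II":    (110, 240),
    "Samsung SSD 840 EVO": (140, 250),
}

for drive, (price_usd, capacity_gb) in prices_256gb_class.items():
    print(f"{drive}: ${price_usd / capacity_gb:.2f}/GB")
```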

All in all, I would have liked to see Micron going after Samsung's 850 Pro and SanDisk's Extreme Pro with the M600, but I do see the logic behind sticking to the high volume mainstream market. In terms of performance, features, and price, the M600 is a solid product and I am certain that PC OEMs will see the appeal in MLC NAND and high endurance over competitors' TLC offerings, especially in the more professional-oriented PC segments.

It will nevertheless be interesting to see how the separation of retail and OEM product teams plays out for Micron. I am eager to see whether Micron can optimize Dynamic Write Acceleration for heavier workloads and finally provide competition in the high-end SSD market as well. For now, this is a good first step, but it might take a revision or two before Dynamic Write Acceleration can reach its full potential.
