Original Link: https://www.anandtech.com/show/7947/micron-m500-dc-480gb-800gb-review



While the client SSD space has become rather uninteresting lately, the same cannot be said of the enterprise segment. The problem in the client market is that most modern SSDs are already fast enough for the vast majority of users, and hence price has become the key, if not the only, factor when buying an SSD. There is still a higher-end niche for enthusiasts and professionals where features and performance matter more, but the mainstream segment keeps taking a larger and larger share of the market.

The enterprise market, on the other hand, is totally different. Unlike in the client world, there is no general "Facebook-email-Office" workload that can easily be traced and optimized for. Moreover, enterprises, especially larger ones, are usually well aware of their IO workloads, but those workloads are nearly always unique in one way or another. Hence the enterprise SSD market is heavily segmented, as one drive rarely fits all workloads: one workload may require a drive that sustains 100K IOPS of 4KB random writes with endurance measured in dozens of petabytes, while another may be fine with a drive that simply provides enough 4KB random read performance to replace several hard drives. Case in point, this is what Micron's enterprise SSD lineup looks like:

Comparison of Micron's Enterprise SSDs
  | M500DC | P400m | P400e | P320h | P420m
Form Factor | 2.5" SATA | 1.8"/2.5" SATA | 2.5" SATA, mSATA | 2.5" PCIe, HHHL | 2.5" PCIe, HHHL
Capacities (GB) | 120, 240, 480, 800 | 100, 200, 400 | 50, 100, 200, 400 | 175, 350, 700 | 350, 700, 1400
Controller | Marvell 9187 | Marvell 9187 | Marvell 9174 | IDT 32-channel PCIe 2.0 x8 | IDT 32-channel PCIe 2.0 x8
NAND | 20nm MLC | 25nm MLC | 25nm MLC | 34nm SLC | 25nm MLC
Sequential Read (MB/s) | 425 | 380 | 440 / 430 | 3200 | 3300
Sequential Write (MB/s) | 200 / 330 / 375 | 200 / 310 | 100 / 160 / 240 | 1900 | 600 / 630
4KB Random Read (IOPS) | 63K / 65K | 52K / 54K / 60K | 59K / 44K / 47K | 785K | 750K
4KB Random Write (IOPS) | 23K / 33K / 35K / 24K | 21K / 26K | 7.3K / 8.9K / 9.5K / 11.3K | 205K | 50K / 95K
Endurance (PB) | 0.5 / 1.0 / 1.9 | 1.75 / 3.5 / 7.0 | 0.0875 / 0.175 | 25 / 50 | 5 / 10

In order to fit the table on this page, I even had to leave out a few models, specifically the P300, P410m, and P322h. With today's release of the M500DC, Micron has a total of eight different active SSDs in its enterprise portfolio while its client portfolio only has two.

Micron's enterprise lineup has always been two-headed: there are the entry to mid-level SATA/SAS products, which are followed by the high-end PCIe drives. The M500DC represents Micron's new entry-level SATA drive and, as the naming suggests, it's derived from the client M500. The M500 and M500DC share the same controller (Marvell 9187) and NAND (128Gbit 20nm MLC), but otherwise the M500DC has been designed from the ground up to meet enterprise requirements.

Micron M500DC Specifications (Full, Steady-State)
Capacity 120GB 240GB 480GB 800GB
Controller Marvell 88SS9187
NAND Micron 128Gbit 20nm MLC
DRAM 256MB 512MB 1GB 1GB
Sequential Read 425MB/s 425MB/s 425MB/s 425MB/s
Sequential Write 200MB/s 330MB/s 375MB/s 375MB/s
4KB Random Read 63K IOPS 63K IOPS 63K IOPS 65K IOPS
4KB Random Write 23K IOPS 33K IOPS 35K IOPS 24K IOPS
Endurance (TBW) 0.5PB 1.0PB 1.9PB 1.9PB

The M500DC is aimed at data centers that require affordable solid-state storage, such as content streaming, cloud storage, and big data analytics. These are typically hyperscale enterprises, and due to their exponentially growing storage needs, the storage has to be relatively cheap; otherwise the company may not have the capital to keep up with the growth. In addition, most of these data centers are read heavy (think of Netflix, for instance) and hence there is no need for high-end PCIe drives with endurance on the order of dozens of petabytes.

In terms of NAND, the M500DC features the same 128Gbit 20nm MLC as its client counterpart. This isn't even a high-endurance or enterprise-specific part -- it's the same 3,000 P/E cycle part you find inside the normal M500. Micron did say that the parts going into the M500DC are more carefully picked to meet the requirements, but at a high level we are dealing with consumer-grade MLC (or cMLC).

To get away with cMLC in the enterprise space, Micron sets aside an enormous portion of the NAND for over-provisioning. The 480GB model features a total of six NAND packages, each consisting of eight 128Gbit dies for a total NAND capacity of 768GiB. In other words, only 58% of the NAND ends up being user-accessible. Of course not all of that is over-provisioning as Micron's NAND redundancy technology, RAIN, dedicates a portion of the NAND for parity data, but the M500DC still has more over-provisioning than a standard enterprise drive. The only exception is the 800GB model which has 1024GiB of NAND onboard with 73% of that being accessible by the user.

User Capacity 120GB 240GB 480GB 800GB
Total NAND Capacity 192GiB 384GiB 768GiB 1024GiB
RAIN Stripe Ratio 11:1 11:1 11:1 15:1
Effective Over-Provisioning 33.5% 33.5% 33.5% 21.0%

A quick explanation for the numbers above. To calculate the effective over-provisioning, the space taken by RAIN must be taken into account first because RAIN operates at the page/block/die level (i.e. parity is not only generated for the user data but all data in the drive). A stripe ratio of 11:1 basically means that every twelfth bit is a parity bit and thus there are eleven data bits in every twelve bits. In other words, out of 192GiB of raw NAND only 176GiB is usable by the controller to store data. Out of that 120GB (~112GiB) is accessible by the user, which leaves 64GiB for over-provisioning. Divide that by the total NAND capacity (192GiB) and you should get the same 33.5% figure for effective over-provisioning as I did.
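
If you want to run the numbers yourself, the arithmetic can be expressed as a short Python sketch. This is my own back-of-the-envelope math, not Micron's; it simply assumes GB means 10^9 bytes for the user capacity, GiB means 2^30 bytes for the raw NAND, and that the RAIN stripe ratio applies uniformly across all of the raw NAND:

# Effective over-provisioning for the M500DC, following the arithmetic above.
# Assumes GB = 10^9 bytes (user capacity), GiB = 2^30 bytes (raw NAND), and
# that the RAIN stripe ratio applies uniformly across the raw NAND.

def effective_op(user_gb, raw_gib, stripe_data, stripe_parity=1):
    raw_bytes = raw_gib * 2**30
    # Space left for data after RAIN parity is set aside
    usable_bytes = raw_bytes * stripe_data / (stripe_data + stripe_parity)
    user_bytes = user_gb * 10**9
    return (usable_bytes - user_bytes) / raw_bytes

for user_gb, raw_gib, stripe in [(120, 192, 11), (240, 384, 11), (480, 768, 11), (800, 1024, 15)]:
    print(f"{user_gb}GB: {effective_op(user_gb, raw_gib, stripe):.1%}")
# Prints roughly 33.5%, 33.5%, 33.5% and 21.0%, matching the table above.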

Test Setup
CPU Intel Core i7-4770K at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard ASUS Z87 Deluxe (BIOS 1707)
Chipset Intel Z87
Chipset Drivers 9.4.0.1026
Storage Drivers Intel RST 12.9.0.1001
Memory Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics Intel HD Graphics 4600
Graphics Drivers 15.33.8.64.3345
Power Supply Corsair RM750
OS Windows 7 Ultimate 64-bit

Before we get into the actual tests, we would like to thank the following companies for helping us with our 2014 SSD testbed.



The Features

Micron calls its enterprise feature set eXtended Performance and Enhanced Reliability Technology, or just XPERT. It's a combination of hardware and software-level features that help to extend endurance and ensure data integrity at all times. Some elements of XPERT are present in the client drives as well, while others are limited to Micron's enterprise SSDs.

ARM/OR

ARM/OR, short for Advanced Read Management/Optimized Read, is Micron's adaptive DSP technology (i.e. aDSP). The idea behind ARM/OR and aDSP in general is to monitor the NAND voltages in real time and make changes if necessary.

As NAND wears out, electrons get trapped inside the silicon oxide and floating gate. This changes the voltage required to read/program the cell (as the graph above shows) and eventually the cell would reach a point where it cannot be read/programmed using the original voltage. With ARM/OR the controller can adapt to the changes in read/program voltages and can continue to operate even after significant wear.

RAIN

We've talked about RAIN (Redundant Array of Independent NAND) before as it is a feature Micron utilizes in all of its SSDs but I'll go through it briefly here. Essentially RAIN is a RAID 5 like structure that uses parity to protect against data loss. In the case of the M500DC, the stripe ratio is 11:1 for the 120GB, 240GB and 480GB models and 15:1 for the 800GB one, meaning that one page/block is reserved for parity for every 11 or 15 pages/blocks. The parity can then be used to recover data in case there is a block failure or the data gets corrupted.
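
To make the idea concrete, here is a tiny RAID-5-style parity sketch in Python. This is purely illustrative and not Micron's implementation; it just shows how, in an 11:1 stripe, a single XOR parity page can rebuild any one lost data page:

# Minimal RAID-5-style parity sketch to illustrate the idea behind RAIN.
# Not Micron's firmware -- it only shows how one parity page can rebuild a
# single lost page in an 11:1 stripe.
from functools import reduce

PAGE_SIZE = 16  # toy page size in bytes; real NAND pages are far larger

def xor_pages(pages):
    # XOR the pages together byte by byte
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*pages))

data_pages = [bytes([i] * PAGE_SIZE) for i in range(11)]   # 11 data pages
parity_page = xor_pages(data_pages)                        # 1 parity page -> 11:1 stripe

# Simulate losing page 4 (e.g. a failed block) and rebuilding it from the rest.
survivors = data_pages[:4] + data_pages[5:] + [parity_page]
rebuilt = xor_pages(survivors)
assert rebuilt == data_pages[4]
print("page 4 rebuilt from parity:", rebuilt == data_pages[4])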

DataSAFE

DataSAFE ensures that all data, read or written, is intact. When writing data, Micron not only writes the data from the host to the NAND but also the metadata associated with the data (i.e. its logical block address/LBA). When the host then requests the data to be read from the drive, the LBA of the data is first compared against the value in the L2P table (logical to physical -- Micron's name for the NAND mapping table) before sending the data to the host. This ensures that the data that is being read from the NAND is in fact the same data that the host requested. If the LBA values of the NAND and L2P table are different, then the drive will have to rely on ECC or RAIN to recover the correct data.
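
Conceptually, the read check works something like the following sketch. This is a toy model with Python dicts standing in for the NAND and the L2P table, not actual firmware behavior:

# Conceptual sketch of the LBA-embedding check described above -- not Micron
# firmware, just the idea: store the LBA next to the data in NAND and verify
# it against the requested LBA on every read.

nand = {}       # physical page address -> (embedded_lba, data)
l2p = {}        # logical block address -> physical page address
next_page = 0

def write(lba, data):
    global next_page
    nand[next_page] = (lba, data)   # metadata (the LBA) is written with the data
    l2p[lba] = next_page
    next_page += 1

def read(lba):
    phys = l2p[lba]
    embedded_lba, data = nand[phys]
    if embedded_lba != lba:
        # Mismatch: the page does not belong to this LBA; fall back to ECC/RAIN recovery.
        raise IOError(f"LBA mismatch at page {phys}: expected {lba}, found {embedded_lba}")
    return data

write(42, b"hello")
print(read(42))  # b'hello'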

In addition to LBA embedding and read checking, DataSAFE provides data path protection from the SATA transport all the way to the NAND controller. While client drives only generate CRC and BCH error correction codes to detect errors, the enterprise drives add an additional memory correction ECC (MPECC) layer. MPECC is a 12-byte error-correcting code that follows the user data along its whole path. The difference between the client and enterprise solutions is that the client drives rely on the host to do the actual error correction, whereas the MPECC layer can correct a single-bit error within the data path (i.e. before the data even reaches the host).

Power Loss Protection

While both Micron's client and enterprise SSDs feature power loss protection, there is a difference between them. The enterprise drives feature more rigid tantalum capacitors that provide higher capacitance than the standard capacitors used in the client drives. The higher capacitance ensures that absolutely no data is lost during a power loss, whereas there is still a small risk of data loss in client drives. I believe the difference is that the capacitors in the client drives only provide enough capacitance to flush the NAND mapping table (or L2P table, as Micron calls it) to the NAND, while the enterprise solution guarantees that, in addition to the NAND mapping table, all write requests in flight will also be completed.



Endurance Ratings: How They Are Calculated

One of the questions I quite often face is about the manufacturers' endurance ratings. Go back two or three years and nobody had any endurance limits on their client SSDs, yet every SSD released in the past year or so has an endurance limitation associated with it. Why did that happen? Let's unpack the situation a bit.

A few years ago, many enterprises would simply buy regular consumer SSDs and use them in their servers. Generally there is nothing wrong with that, because there are scenarios where enterprises can get by with client-grade hardware, but the problem was that some of those enterprises knew the drives weren't durable enough for their needs. However, they also knew that if they wore out the drive before the warranty ran out, the manufacturer would have to replace it.

Obviously that wasn't very good business for the manufacturers, because for each drive sold more than one had to be given away for free. At the same time, fewer customers were buying the more expensive, higher-profit enterprise drives. Rather than disrupting the client market by increasing prices or reducing quality, the manufacturers decided to start including a maximum endurance rating, which invalidates the warranty if exceeded.

The equation for endurance is rather simple. All you need to take into account is the capacity of the drive, the P/E cycles of the NAND and the wear leveling and write amplification factors. When all that is put into an equation, it looks like this:
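
TBW = (Capacity × P/E Cycles) / (Wear Leveling Factor × Write Amplification Factor)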

Notice that the correct term for TBW is TeraBytes Written, not TotalBytes Written, although both are fairly widely used. The hardest part in calculating the TBW is figuring out the wear leveling and write amplification factors because these are workload dependent. Hence manufacturers often use a worst-case 4KB random write scenario to come up with the TBW figure, as this ensures that the end-user cannot have a more demanding workload with higher write amplification.

For the uninitiated, the wear leveling factor (WLF) in this context means the maximum stress that the wear leveling method puts on the most heavily cycled block compared to the average number of cycles. A factor of two would mean that the most heavily cycled block sees twice the number of cycles compared to the average. The write amplification factor (WAF), on the other hand, refers to the ratio of NAND writes to host writes. A factor of two would in this case mean that for every megabyte the host writes, two megabytes are written to the NAND. These two factors go hand in hand in the sense that a small WLF results in a higher WAF, because the drive has to do more internal reorganization to cycle all blocks equally, which consumes NAND writes.

The interesting part about TBW ratings is that they actually give us a way to estimate the combined wear leveling and write amplification factor of a drive. In the case of the 120GB M500DC, that works out to a surprising 0.72x. Obviously you can't go lower than 1x without using some form of compression, but the 120GB M500DC actually has 192GiB of NAND onboard that extends the endurance. If we used that figure to calculate the combined WLF and WAF, it would be 1.24x, which is much more reasonable. For some reason the JEDEC spec defines the capacity as the usable capacity even for endurance calculations, but in the end it doesn't matter which figure you change as they are all related to each other (e.g. with 120GB used as the capacity, the P/E cycles could be higher than 3,000 because the over-provisioned NAND adds cycles).
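
To make that arithmetic concrete, here is the back-of-the-envelope check behind those two numbers, assuming 3,000 P/E cycles and the 0.5PB rating of the 120GB model:

# Back-of-the-envelope check of the combined WLF*WAF figures quoted above,
# assuming 3,000 P/E cycles and the 120GB model's 0.5PB (500TB) TBW rating.
PE_CYCLES = 3000
TBW_BYTES = 0.5e15             # 0.5PB rated endurance

user_capacity = 120e9          # 120GB of user-accessible space
raw_capacity  = 192 * 2**30    # 192GiB of NAND actually on board

combined_user = user_capacity * PE_CYCLES / TBW_BYTES
combined_raw  = raw_capacity * PE_CYCLES / TBW_BYTES
print(f"Using user capacity: {combined_user:.2f}x")  # ~0.72x
print(f"Using raw NAND:      {combined_raw:.2f}x")   # ~1.24x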

Ultimately none of the manufacturers are willing to disclose the exact details of how they calculate their endurance ratings, but at a high level this is how it's done according to JEDEC's standards. Furthermore, I wouldn't rule out the possibility that some OEMs artificially lower the ratings of their consumer drives just to make sure they are not used by enterprises. In the end, there isn't really a way for us to find out whether the TBW is accurate or not, since the efficiency factors are not easily measurable by third parties like us.



Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don’t have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
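
For those who want to replicate the workload, it looks roughly like the following when written as an fio job file. Note that our testing is done with Iometer, so this is only an approximate equivalent, and /dev/sdX is a placeholder for the drive under test:

# Approximate fio equivalent of the consistency test: 4KB random writes, QD32,
# incompressible data, across the whole drive, logging IOPS once per second.
# /dev/sdX is a placeholder -- this job overwrites the entire drive.
[consistency]
filename=/dev/sdX
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32
norandommap=1
randrepeat=0
refill_buffers=1
time_based=1
runtime=2000
log_avg_msec=1000
write_iops_log=m500dc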

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale for better visualization of the differences between drives.

For more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: 4KB random write IO consistency over the full test run, log scale -- Micron M500DC, Crucial M500, Micron P400e, Micron P400m, Intel SSD DC S3500]

As you would expect with such hefty over-provisioning, the IO consistency is excellent. The 480GB model manages around 40K IOPS at steady-state, which is certainly among the highest we have tested. (For the record, the Intel DC S3700 does about 33K, but unfortunately there wasn't room in the graphs to include it.) The 800GB version doesn't do as well given its smaller over-provisioning percentage, but it's relatively competitive with the 200GB P400m, which has 336GiB of NAND onboard (that's 44.6% total over-provisioning versus 27.2% in the 800GB M500DC). Compared with the P400m, the M500DC is a nice upgrade after all, as it improves IO consistency without resorting to even more over-provisioning.

[Graph: zoom into steady-state, log scale -- same drive selection]

[Graph: zoom into steady-state, linear scale -- same drive selection]


In addition to our regular performance consistency graphs, I decided to include two additional graphs that test the IO consistency over a longer period of time. For client drives, 2000 seconds is generally enough to show steady-state behavior but enterprise SSDs tend to have more over-provisioning and hence require a longer time to reach steady-state. Instead of running the test for 2000 seconds, I extended the run to 7200 seconds (i.e. two hours) while keeping everything else the same. I only have results for the M500DC drives right now, but let's see if anything changes with a longer test cycle:


As you can see, neither of the drives shows steady behavior right after 2000 seconds. In fact, it looks as though the initial round of data reordering drops performance, and then the M500DC drives recover somewhat around the 3000 second mark, though the different amounts of over-provisioning mean they don't look all that similar. Unfortunately I don't have any results for other drives at this point, but I'll be sure to test more drives before the next review to give you a better picture of the M500DC's position in the market.




Random & Sequential Performance

We are currently in the process of updating our enterprise SSD test suite, and the new random and sequential performance tests are the first fruits of that. In the past our tests were set to a certain queue depth (mostly 32 in the enterprise tests), which didn't give the full picture of performance. As enterprise workloads are almost always unique, the queue depths vary greatly, and the proper way to test performance is across all possible queue depths. In our new tests we are testing queue depth scaling from one all the way to 128. While it's unlikely for enterprise workloads to run at small queue depths, testing them gives us an important look into the architecture of the drive. Similarly, it's rare for even the most demanding enterprise workloads to exceed a queue depth of 64, but we are still including QD128 in case it matters to some of you.

Since we are testing an enterprise class drive, we cannot look at the performance in a secure erased state as that would be unrealistic. Enterprise workloads tend to stress the drive 24/7 and thus we need to simulate the worst case performance by preconditioning the drive into steady-state before running the actual tests. To do this, we first fill the drive with sequential 128KB data and proceed with 4KB random writes at a queue depth of 32. The length of the torture depends on the drive and its characteristics but in the case of the M500DC, I ran the 4KB random write workload for two hours. As the performance consistency graphs on the previous page show, two hours is enough for the M500DC to enter steady-state and ensure consistent results.

After the preconditioning, we tested performance across all queue depths over the full LBA range with Iometer. The test was run for three minutes at each queue depth, and the next test was started right after the previous one to make sure the drive was given no time to rest. The preconditioning process was repeated before every test (excluding read tests, which were run right after the write tests) to guarantee that the drive was always in steady-state when tested.

4KB Random Performance

The random write scaling graph shows pretty much the same picture as our consistency tests. After a queue depth of four the performance reaches its limit and no longer scales. Interestingly, the DC S3500 doesn't scale at all, although its performance is low to begin with when compared with the M500DC. (This is due to the difference in over-provisioning -- the S3500 only has 12% whereas the M500DC has 27/42%.)

Random read performance, on the other hand, behaves a bit differently. As steady-state doesn't really affect read performance, the performance scales all the way to 90K IOPS. The M500DC does well here and is able to beat the DC S3500 quite noticeably at typical enterprise queue depths. The S3500 does have a small advantage at smaller queue depths, but at QD16 and beyond, which is what matters for enterprise customers, the M500DC takes the lead.

4KB Random 70% Read - 30% Write Performance

Typically no workload is 100% reads or writes, so to give some perspective on a mixed workload we are now including a 4KB random test with 70% read and 30% write commands. The test still spans 100% of the LBA space and the IOs are fully random, which is also common for enterprise workloads.

Once again the M500DC beats the DC S3500, which is mostly due to its superior random write performance. This is also the only workload where performance keeps scaling up to a queue depth of 32.

128KB Sequential Performance

Due to lack of time, I unfortunately don't have results for the sequential performance of the DC S3500. However, the tests still provide a look into the M500DC even though the graphs lack a comparison point.

The 480GB M500DC is again significantly faster than the 800GB model thanks to the added over-provisioning. Bear in mind that these are steady-state figures, which is why the performance may seem a bit slow compared to what we usually see in benchmarks.

In terms of sequential read performance, on the other hand, the drives appear equal.



Final Words

Overall the M500DC is a sensible addition to Micron's enterprise SSD lineup. It fills the gap between the consumer M500 and the P400m by providing a solution that is affordable yet has the feature set and meets the performance requirements of enterprise customers. The performance is actually far better than I expected from an entry-level drive, although I must admit that I was surprised (and perhaps a little terrified too) when I noticed that Micron sets aside up to 42% (!) of the NAND for over-provisioning and RAIN. While this isn't anything new for Micron (45% of the NAND in the P400m is inaccessible to the user), it's certainly a lot given that most of the competitors are only setting aside 27% or 12% of the NAND.

I think this is also where Micron's strength lies. While using every possible bit of NAND is crucial for fabless competitors trying to cut costs, Micron can dedicate a bit more NAND to over-provisioning while remaining competitive on price because the NAND is so much cheaper for them. That also helps with R&D costs because, unlike Intel and many others, Micron isn't designing its own controller but relies on Marvell for the silicon.

  Micron M500DC Intel SSD DC S3500
120GB $220 $159
240GB $366 $275
480GB $609 $543
800GB $1006 $886

Pricing in the enterprise space behaves a bit differently than in the client world. As drives are generally purchased in bulk, Micron couldn't provide any specific MSRPs for the drives, so I had to rely on reseller pricing to give some idea of the typical cost (originally Arrow's pricing for the M500DC and Intel's listed bulk prices for the S3500). I'd like to emphasize that the prices here may not be very accurate, and potential buyers should consult their distributors before making any purchasing decisions.

Update: Micron just sent us a note that one of their other resellers, CDW, sells the M500DC at noticeably lower prices, so I've updated the pricing table with the new prices. CDW also carries the S3500 and I've included its retail price in the table as well. Still, customers buying straight from Micron should expect even lower pricing, as these are single-unit prices.

The M500DC carries a small premium over the S3500, but then it often performs substantially better as well. Most of the difference is due to the amount of NAND Micron sets aside for over-provisioning and RAIN, because that NAND is still part of the bill of materials. If we compare the price against the total amount of NAND onboard, the pricing of the M500DC doesn't look that bad ($0.79/GiB vs $1.06/GiB for the S3500 at 480GB). I'm still not convinced that setting aside that much NAND is the best solution, but it's understandable when seeking maximum performance and reliability for enterprise workloads. As NAND lithographies get smaller, increased over-provisioning is the trade-off that has to be made to avoid impacts on performance and endurance.
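
For reference, the per-GiB arithmetic at the 480GB capacity point works out as follows. The 768GiB figure comes from the over-provisioning table earlier; the 512GiB figure for the S3500 is my assumption about its raw NAND and is not something stated in this review:

# Price per GiB of raw NAND at the 480GB capacity point, using the prices above.
# 768GiB (M500DC) is from the over-provisioning table; 512GiB (DC S3500) is an
# assumption about its raw NAND, not a figure stated in this review.
m500dc = 609 / 768   # ~$0.79/GiB
s3500  = 543 / 512   # ~$1.06/GiB
print(f"M500DC 480GB: ${m500dc:.2f}/GiB, DC S3500 480GB: ${s3500:.2f}/GiB")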

Ultimately, there is no single drive that is best in all aspects, and the most suitable drive depends on one's workload. I believe the M500DC provides a well-balanced solution for hyperscale customers that require consistent performance but aren't looking for extreme endurance. The hyperscale market is growing quickly and will continue to do so, and more affordable enterprise SSDs with regular MLC will continue to aid that growth.
