Original Link: https://www.anandtech.com/show/9430/micron-m510dc-480gb-enterprise-sata-ssd-review
Micron M510DC (480GB) Enterprise SATA SSD Review
by Kristian Vättö on July 21, 2015 8:00 AM EST

The client SSD market is very price sensitive, and there is a craving for lower $/GB in the enterprise world as well. The lower cost per gigabyte has enabled the use of SSDs in scenarios that have traditionally been dominated by hard drives. Not only do SSDs provide tremendously better performance (especially in random IO), their power efficiency per IO is also considerably higher. Combine this with constantly increasing capacities, and SSDs are starting to offer higher density than hard drives.
Most of the workloads where SSDs are now replacing HDDs tend to be read-centric. For performance-sensitive and write-intensive workloads SSDs have been the choice for years now, at least as a buffer/cache in front of slower bulk storage. The beauty of NAND flash is that while it has finite write endurance, it can practically be read an unlimited number of times (read disturb only becomes an issue after ~100K read cycles on an unerased block, i.e. erasing and reprogramming the block allows for another 100K cycles). Increasing endurance and decreasing price at the same time is a difficult objective, but if write endurance isn't a major concern, it's easy to drive the price down by moving to a smaller lithography. Lithography shrinks are the biggest factor driving NAND prices down because a smaller lithography yields more gigabytes per wafer, reducing the overall cost per gigabyte. The downside of moving to smaller lithographies is reduced endurance due to the cells being less error tolerant (fewer electrons to play with and increased disturbance from neighboring cells), but if you are targeting a read-intensive segment of the market, that's a fair tradeoff.
That brings us to the M510DC. Last year Micron released the M500DC with an endurance rating of three drive writes per day (DWPD), which is typical for enterprise drives aimed at mixed workloads. While the M500DC is reasonably priced for a drive with 3 DWPD endurance, Micron realized that it was missing out on customers seeking low-cost drives for read-intensive applications. The M510DC is here to fill that gap: in short, it's a derivative of the M500DC that uses Micron's latest 16nm 128Gbit MLC NAND for higher cost efficiency and carries a lower endurance rating.
Micron's internal research suggests that two thirds of drives used in data centers experience less than one drive fill per day. That makes sense because ultimately the majority of data stored in data centers is static, so a large share of the data is accessed on a read-only basis with infrequent changes to the data itself (think of a Facebook status update or photo, for instance).
The M510DC is powered by Marvell's 88SS9187 controller, which is also found in the M500DC. The controller is a few years old by now, but it's still the muscle behind some of the best SATA SSDs on the market, and it's accompanied by Micron's in-house firmware.
Since the M510DC is built upon the M500DC platform, capacities only go up to 960GB. I was told that this isn't a limitation of the hardware design; rather, the core firmware remains unchanged from the M500DC, and a 1TB+ SKU would have required more significant changes and hence additional engineering resources. Micron also said that 480GB is currently its highest volume product and while the 800GB is gaining popularity, there isn't that much demand for higher capacities (yet). I found that a little surprising given that higher capacities yield much higher density per rack at about the same cost per gigabyte, but Micron explained that the server OEMs (which are Micron's biggest customers) are relatively slow to adopt anything new because there is always validation and optimization work involved (e.g. a new RAID card that needs to be qualified, and so on).
Micron M510DC Specifications
Capacity | 120GB | 240GB | 480GB | 960GB
Controller | Marvell 88SS9187
NAND | Micron 16nm 128Gbit MLC
Sequential Read | 420MB/s | 420MB/s | 420MB/s | 420MB/s
Sequential Write | 170MB/s | 290MB/s | 380MB/s | 380MB/s
4KB Random Read | 63K IOPS | 63K IOPS | 63K IOPS | 65K IOPS
4KB Random Write | 12K IOPS | 18K IOPS | 23K IOPS | 10.5K IOPS
Idle Power | 1.2W | 1.2W | 1.2W | 1.2W
Read Power | <4W | <5W | <6W | <6.3W
Write Power | 4W | 5W | 6W | 6.3W
Endurance (TBW) | 460TB | 920TB | 1,850TB | 1,140TB
Endurance (DWPD) | 2 | 2 | 2 | 1
Encryption | TCG Enterprise
Because the M510DC isn't a retail drive, Micron couldn't provide any pricing because it varies depending on volume. I was told, however, that the M510DC is priced between the M600 and the M500DC, so it should be relatively competitive.
As I explained above, the change to more cost-efficient 16nm NAND comes at the expense of endurance. The endurance drops from 3 DWPD to 2 (and to 1 for the highest capacity), which is still very good for a read-focused drive because several other manufacturers are offering drives rated at only ~0.3 DWPD. However, the 20nm to 16nm transition isn't the whole story, because the M510DC also has less over-provisioning than the M500DC, again making the M510DC more cost competitive.
Usable Capacity | 120GB | 240GB | 480GB | 960GB
Total NAND Capacity | 160GiB | 320GiB | 640GiB | 1024GiB
RAIN Stripe Ratio | 9:1 | 9:1 | 9:1 | 31:1
Effective Over-Provisioning | 20.2% | 20.2% | 20.2% | 9.6%
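For the curious, the effective over-provisioning figures can be roughly reproduced from the raw NAND capacity and the RAIN stripe ratio. The sketch below uses my own assumptions (one parity unit per N units of NAND, binary GiB for raw flash, decimal GB for user capacity) and lands within roughly a percentage point of Micron's figures; the exact accounting of firmware reserves is Micron's own.

```python
# Rough estimate of effective over-provisioning after RAIN parity.
# Assumptions (mine, not Micron's exact accounting): an N:1 stripe ratio means
# one parity unit per N units of raw NAND; raw capacity is binary GiB and the
# advertised user capacity is decimal GB.

def effective_op(user_gb: float, raw_gib: float, stripe: int) -> float:
    raw_bytes = raw_gib * 2**30                        # total NAND in bytes
    after_parity = raw_bytes * (stripe - 1) / stripe   # space left after RAIN parity
    return 1 - user_gb * 10**9 / after_parity          # share kept as spare area

for user, raw, stripe in [(120, 160, 9), (240, 320, 9), (480, 640, 9), (960, 1024, 31)]:
    print(f"{user}GB: ~{effective_op(user, raw, stripe):.1%} effective over-provisioning")
```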
The M510DC includes the full set of Micron eXtended Performance and Enhanced Reliability Technology (or just XPERT) features, including power loss protection for all user data and RAIN for protection against page, block and die level NAND failures. I covered the XPERT features in detail in our M500DC review, so I suggest you refer to that review for further details of the feature set.
There is one new feature in the M510DC, though, and that is TCG Enterprise encryption. In the enterprise space it's less likely that someone would have physical access to the server, as data centers tend to be well guarded, but once drives are recycled or repurposed they are vulnerable to theft and unauthorized access. This is especially true for financial and medical data, which can be highly damaging in the wrong hands, so employing encryption is crucial for ensuring data protection.
When a drive is encrypted using TCG-E, it's tied to a single host and an authentication key is required if the drive is accessed from a different host. The encryption key itself never leaves the drive (similar to the client-oriented TCG Opal spec), which guarantees that there is no way to obtain the key without authentication, whereas software-based encryption has loopholes that allow the encryption key to be obtained from the host's DRAM. TCG-E is also completely transparent to the host and requires no software - as long as a TCG-E compliant RAID card is used, TCG-E will be enabled automatically. The M510DC is also available without TCG-E, as some regions have tight restrictions when it comes to encrypted storage.
AnandTech 2015 Enterprise SSD Suite
It's been close to a year since our last enterprise SSD review and to be honest the last year has just been crazy busy. When Anand retired last year, SSDs became solely my responsibility. I was more or less already running the SSD show, but Anand still covered some of the substantial launches (like the Intel P3700) and most importantly he was always around to help in case there was a tight deadline on a launch or another obstacle. I also quickly realized that the second year at university wasn't going to be as laid-back as the first one was, so in order to graduate on time I decided to prioritize my studies and not let work take over my life just yet.
This all led to me making the executive decision to hold off on enterprise testing until I had enough time to perform both client and enterprise testing properly. I could have continued enterprise testing, but since I thought our enterprise test suite needed an overhaul and I knew extensive testing would have jeopardized our client coverage, I wanted to give 110% to our new 2015 Client SSD Suite and then get back to the enterprise drives when the time was right. While enterprise SSDs are certainly intriguing, especially all the PCIe/NVMe ones, I believe our core competence lies in the client space because of our deep understanding and experience in that field. The enterprise segment is far more complex and, testing-wise, it's simply impossible for me to do what I would ideally like to do, because gaining access to real world enterprise workloads is very difficult and I don't think our AnandTech server workloads are enough to give an accurate picture of all the different workloads out there.
That said, I think our new tests still do a good job of characterizing performance. I'm not going to overhype things and say that the way we test is somehow special, because it mostly isn't. All our new tests are based on custom Iometer 1.1.0 configurations and results, rather than the basic sequential and 4KB runs that many other sites use. I think where we distinguish ourselves from other sites is the way we present the test data as a result of that custom design. I find it important to present both easily understandable and comparable data as well as more in-depth graphs for those who have specific requirements, so in the new 2015 Enterprise Suite I'm trying to cover both as well as possible.
AnandTech 2015 SSD Test System
CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard | ASUS Z97 Deluxe (BIOS 2205)
Chipset | Intel Z97
Chipset Drivers | Intel 10.0.24
AHCI Driver | Windows Native
NVMe Driver | Vendor Specific
Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics | Intel HD Graphics 4600
Graphics Drivers | 15.33.8.64.3345
Desktop Resolution | 1920 x 1080
OS | Windows Server 2012 R2 x64
- Thanks to Intel for the Core i7-4770K CPU
- Thanks to ASUS for the Z97 Deluxe motherboard
- Thanks to Corsair for the Vengeance 16GB DDR3-1866 DRAM kit, RM750 power supply, Hydro H60 CPU cooler and Carbide 330R case
The test platform is essentially our client SSD testbed. I know some will argue that the system is not suitable for enterprise testing, but in my experience, as long as we are testing a single drive, the CPU won't become a bottleneck. If we were testing a multi-drive RAID array, then I would agree that a more powerful CPU or a dual-CPU setup is needed for maximum performance, but since we aren't, the i7-4770K delivers more than enough crunching power to max out a single SSD.
For SATA drive testing, I've decided to stick with the native Windows AHCI driver. The reason is that in a real server the drive will most likely be connected to a RAID card, meaning it won't be utilizing the normal Microsoft or Intel AHCI driver anyway. Since Intel RST drivers show some level of performance variation, I decided to just use the native driver to eliminate any driver anomalies. In the end, what's important is that all drives are tested using the same system and settings, because it's not really absolute performance that matters, but how the drives compare with each other.
For NVMe and other PCIe SSD testing, I will be using vendor-specific drivers because the native Windows NVMe driver lacks some crucial management features (such as secure erase) that are vital for accurate testing. For now I'm only testing SATA drives anyway because I still need to figure out PCIe power measurement, and to be honest it's not fair to compare SATA and PCIe drives given that they are aimed at totally different market segments.
4KB Random Write
Random write is ultimately the benchmark that separates the good from the bad. Read and sequential write operations are rather easy to manage, but a sustained random workload consisting of small IOs will bring any SSD to its knees. The reason lies in NAND architecture: NAND can be programmed at the page level, but erasing can only be done at the block level (usually a few hundred pages). When a drive is subjected to a sustained IO workload, there will inevitably be a point where the drive has to perform garbage collection (read-modify-write cycles) to free up blocks for new host writes, which leads to all SSDs having lower sustained (i.e. steady-state) performance.
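To put a number on that read-modify-write penalty, here's a trivial sketch showing how many NAND pages a drive must move just to update a few pages in a completely full block when no pre-erased blocks are available. The page and block geometry is illustrative, chosen by me for the example rather than taken from the M510DC's actual NAND.

```python
# Illustrative geometry only (not the M510DC's actual NAND): 16KB pages, 256 pages per block.
PAGE_KB = 16
PAGES_PER_BLOCK = 256

def rewrite_cost(updated_pages: int) -> tuple[float, int]:
    """Worst case: updating a few pages in a completely full block with no
    pre-erased blocks available forces a full read-modify-write of the block."""
    pages_written = PAGES_PER_BLOCK                  # every page gets rewritten
    amplification = pages_written / updated_pages    # NAND writes per host write
    data_moved_kb = pages_written * PAGE_KB
    return amplification, data_moved_kb

for n in (1, 8, 64):
    wa, moved = rewrite_cost(n)
    print(f"update {n:>2} page(s): ~{wa:.0f}x write amplification, {moved}KB rewritten")
```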
Unlike in the client space where workloads are often bursty by nature, enterprise workloads tend to stress the drive 24/7, meaning that the drive effectively operates in steady-state at all times. Hence it's critical to measure enterprise SSD performance only after the drive has reached its steady-state.
Our 4KB random write regime is as follows. To make sure all LBAs have data associated with them, I first run a two-hour 128KB sequential write pass, which accelerates the process of entering steady-state. The fill operation is then followed by a six-hour 4KB random write at a queue depth of 32, and all the data (IOPS, standard deviation and power consumption) in the bar graphs is based on the last 500 seconds of that six-hour run. The final step is queue depth scaling, which starts at QD1 and increases the queue depth exponentially, with each queue depth being tested for 10 minutes. The whole process is scripted, so there is absolutely no idle time between the tests, ensuring that the drive has no time to recover and that we are really measuring worst-case steady-state performance.
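Our own scripts drive Iometer 1.1.0 on Windows, but for readers who want to approximate the sequence, a rough equivalent built around fio on Linux would look something like the sketch below. The device path, job names and queue depth steps are placeholders of my own; this illustrates the methodology rather than reproducing our exact configuration.

```python
import subprocess

DEV = "/dev/sdX"  # placeholder for the SSD under test (all data on it will be destroyed)

def fio(name, **opts):
    """Run a single fio job against the raw device."""
    args = ["fio", f"--name={name}", f"--filename={DEV}", "--direct=1",
            "--ioengine=libaio", "--group_reporting"]
    args += [f"--{key}={value}" for key, value in opts.items()]
    subprocess.run(args, check=True)

# 1) Two-hour 128KB sequential fill so that every LBA holds data.
fio("precondition", rw="write", bs="128k", iodepth=32, runtime=7200, time_based=1)

# 2) Six hours of 4KB random writes at QD32; steady-state figures come from the last 500s
#    of the per-second IOPS log.
fio("steady_state", rw="randwrite", bs="4k", iodepth=32, runtime=21600, time_based=1,
    log_avg_msec=1000, write_iops_log="randwrite")

# 3) Queue depth scaling, 10 minutes per depth (doubling steps are my assumption).
for qd in (1, 2, 4, 8, 16, 32):
    fio(f"qd{qd}", rw="randwrite", bs="4k", iodepth=qd, runtime=600, time_based=1)
```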
While the M510DC is not particularly designed for write-intensive applications, its write performance is significantly better than what the competing read-focused drives offer (namely the 845DC EVO and CloudSpeed Eco).
Queue depth scaling doesn't present anything out of the ordinary. All SSDs in our test reach their maximum performance at or below QD4, which is below the intensity of most enterprise workloads.
The consistency metric is one that I've been reporting in our client SSD reviews for quite some time now, and I think it's a useful way to capture both performance and its variation in a simple manner. Despite the good average IOPS, the M510DC doesn't appear to be very consistent. That's a shame, because I would argue that in the enterprise space consistency is just as important as performance: designing a whole server infrastructure around an inconsistent drive is a difficult and inefficient task. It seems that Samsung is really the one dominating in consistency, regardless of the type of NAND.
I made power consumption a first-class citizen in our 2015 Client SSD Suite and I'm now doing the same for the enterprise suite. I find the industry as a whole is often too fixated on performance and forgets that performance is just one piece of the puzzle. Power consumption in the enterprise space has a slightly different importance because there is no battery life to worry about, but when there are thousands of drives in a data center, differences in power efficiency show up in the electricity bill, making power consumption a crucial element of the total cost of ownership.
Instead of reporting power as an absolute figure, I'm reporting IOPS per watt, which measures the efficiency of the drive. Power as an absolute number is fairly meaningless because a high-performance drive may draw more power while still being more efficient.
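Deriving these figures from the logs is straightforward; a minimal sketch, assuming a list of per-second IOPS samples from the last 500 seconds and an average power figure in watts:

```python
import statistics

def summarize(iops_samples: list[float], avg_power_w: float) -> dict:
    """Reduce per-second IOPS samples to the figures shown in the bar graphs:
    average IOPS, standard deviation and IOPS per watt."""
    avg_iops = statistics.mean(iops_samples)
    return {
        "avg_iops": avg_iops,
        "stdev_iops": statistics.pstdev(iops_samples),  # variation around the average
        "iops_per_watt": avg_iops / avg_power_w,         # efficiency, not absolute power
    }

# Example with made-up numbers: roughly 23K IOPS at about 6W.
print(summarize([22_500, 23_800, 21_900, 24_100, 22_700], 6.0))
```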
The M510DC is again better than the other entry-level enterprise drives by delivering up to twice the IOPS per watt compared to the CloudSpeed Eco.
The consistency really is quite bad. Even most client-grade drives have lower variation in performance, although the good news is that there is a fairly steady baseline at ~22K IOPS with the variations mostly being peaks rather than drops in performance. Still, I would like to see better consistency, even if it came at the expense of minor performance loss.
While our performance over time graphs do a good job of characterizing consistency and its variation, each data point is ultimately an average of all IOs occurring during one second. With tens of thousands of IOs being processed each second, the average can easily hide nasty drops in performance. Fortunately, Iometer also reports the number of IOs that fall within certain latency ranges, and to get a deeper look into the drive's behavior I'm reporting the latency distribution during the last 500 seconds of the six-hour run.
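Iometer tallies these latency ranges itself; conceptually it's nothing more than bucketing every completed IO by its completion time. A small sketch, with bin edges of my own choosing that loosely mirror the ranges in our graphs:

```python
from bisect import bisect_right
from collections import Counter

# Bucket edges in milliseconds (my own choice, loosely matching the graphs).
EDGES = [0.1, 0.5, 1, 5, 10]
LABELS = ["<0.1ms", "0.1-0.5ms", "0.5-1ms", "1-5ms", "5-10ms", ">10ms"]

def latency_distribution(latencies_ms):
    """Return the share of IOs that completed within each latency range."""
    counts = Counter(LABELS[bisect_right(EDGES, lat)] for lat in latencies_ms)
    total = len(latencies_ms)
    return {label: counts.get(label, 0) / total for label in LABELS}

# Example: mostly sub-millisecond IOs with a small tail of slow ones.
print(latency_distribution([0.05, 0.2, 0.7, 0.9, 0.3, 6.0, 12.5, 0.4, 0.08, 0.6]))
```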
The latency distribution shows a problem with the M510DC right away. The lion's share of the IOs have latency below 1ms, which is typically considered excellent, but 10% of IOs have >5ms latency. Latency above 1ms isn't a problem on its own, but jumping from 1ms to up to 10ms is a tenfold increase, and that's a hiccup significant enough to be noticeable in user performance. For instance, the 845DC EVO stays below 5ms, even though it only provides about half the IOPS the M510DC does.
Mixed 4KB Random
In the real world, workloads are rarely just pure reads or writes, hence it's important to test mixed performance as it better illustrates performance under an enterprise workload. The read/write distribution varies greatly depending on the workload, but 70% reads and 30% writes is often considered the benchmark for mixed performance. It's a little too write-heavy to mirror the most read-centric workloads (like media streaming or cloud storage), but it's fairly realistic for mimicking virtual desktop infrastructure (VDI) workloads, for example.
The test sequence is similar to the random write benchmark. I start off with a two-hour sequential write pass, which is followed by six hours of 4KB random IO at QD32 with 70% reads and 30% writes. I again record the results of the last 500 seconds to ensure that they represent steady-state performance. I also test queue depth scaling after the six-hour run, and as a final test I run a 4KB random IO test (QD32) at six different read/write distributions in order to determine performance under different workloads.
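In the same hypothetical fio terms used earlier, the read/write sweep is just a loop over the read share at QD32; the percentages below are illustrative rather than our exact test points.

```python
import subprocess

DEV = "/dev/sdX"  # placeholder target device, as in the earlier sketch

# Sweep the read share of a QD32 4KB random workload (read percentages are illustrative).
for read_pct in (100, 95, 90, 80, 70, 50):
    subprocess.run(["fio", f"--name=mix{read_pct}", f"--filename={DEV}",
                    "--direct=1", "--ioengine=libaio", "--rw=randrw",
                    f"--rwmixread={read_pct}", "--bs=4k", "--iodepth=32",
                    "--runtime=600", "--time_based", "--group_reporting"],
                   check=True)
```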
The 70R/30W mixed performance is a slight disappointment. While the M510DC delivers considerably higher random write performance than the 845DC EVO, it cannot match the EVO in mixed performance.
At low queue depths the M510DC actually provides better performance than the EVO, but after QD8 the performance no longer scales optimally, which appears to be inherent to the design, as the same scaling behavior is present in the M500DC as well.
The performance at different read/write distributions is overall good and on par with the CloudSpeed Eco and 240GB EVO. For performance-focused mixed workloads, the M500DC is a better option, although ultimately the S3700 easily takes the crown here.
Consistency again leaves something to be desired; compared to the EVOs, the M510DC simply isn't a very consistent drive.
In power efficiency, however, the M510DC is very competitive and considerably better than the CloudSpeed Eco.
The M510DC is consistently inconsistent. It's not as bad as the M500DC, but given that the 845DC EVO provides better performance at much higher consistency it's hard to recommend the M510DC for applications that require very consistent mixed IO performance.
The same is also visible in the latency distribution. While the M510DC has frequent IOs in the 100µs range, the consistency is again hurt by the 3% of IOs above 10ms. No other drive has IOs in that range, which is a bit alarming given that even the CloudSpeed Eco consistently keeps its IOs below 10ms.
4KB Random Read
With 4KB random writes and mixed workloads out of the way, that leaves only pure random reads. The test is a two-hour run and the results are an average of the last 500 seconds.
In random read performance, the difference between SATA drives is negligible. At best the difference is 10%, but between most drives we are looking at less than 5%.
At QD8 and QD16 the difference is a bit larger and not in favor of the M510DC. The EVO does deliver better performance in this case, but not substantially.
There aren't major differences in consistency either and only the S3700 has a notable lead in this area.
The M510DC isn't as power efficient as I would like it to be because it's outperformed by the EVOs again. It's not a significant margin, but with better performance and efficiency the EVO is turning out to be a better drive for read workloads.
128KB Sequential Write
While most enterprise IO patterns are random by nature, there are use cases such as media streaming where sequential performance has a significant role. The sequential write test is a two-hour run with performance being measured during the last 500 seconds for accurate steady-state performance.
The M510DC offers mediocre sequential write performance, but making a direct comparison is hard given the different capacities. Random IO performance isn't as heavily impacted by the amount of NAND, but in sequential write performance the capacity plays a significant role (take the two 845DC EVOs as an example).
Queue depth scaling doesn't present any surprises. Nearly all drives reach maximum data rate at QD2, which is well below the average queue depth of a typical enterprise workload.
Unfortunately my script had a minor error in it, and I don't have consistency data to report. I've since fixed the script, but in order to deliver this review in a timely manner I decided to leave the data out for now, because retesting all the drives would have taken an extra week or so.
The efficiency isn't admirable, but honestly only the 845DC PRO shines here thanks to the higher power efficiency of 3D NAND.
128KB Sequential Read
Given that the M510DC is supposed to be a read-centric drive, the sequential read performance is quite poor. The performance is in line with the spec, so it seems to be a limitation of the firmware design itself.
The scaling of Samsung drives is pleasant to watch, but the others not so much. I'm actually a little surprised by how poor the sequential read performance is, but it could simply be a matter of random IO optimization (but still, Samsung delivers in both random and sequential IO).
Consistency-wise the M510DC isn't very good either, especially compared to the EVOs with their outstanding consistency.
The same goes for power efficiency where the EVOs are again more efficient than the M510DC.
Final Words
I'll start with the positive sides of the M510DC. First off, the M510DC delivers higher endurance than most read-focused drives. Samsung's 845DC EVO and Intel's DC S3510 both come in at only 0.3 DWPD, with SanDisk's CloudSpeed Eco rated at 1 DWPD, so at 2 DWPD (1 for the 960GB SKU) the M510DC is more durable than its direct competitors. Whether the extra endurance is beneficial depends on the workload: I would argue that for the most read-intensive workloads (such as media streaming and cloud storage) 0.3 DWPD is sufficient because such a large share of the data is static, but the higher endurance obviously opens the door for usage in workloads with more write activity.
The second thing I like is the inclusion of TCG Enterprise encryption. Micron has always been at the forefront of encryption adoption and was the first in the industry to bring TCG Opal 2.0 to a client SSD with the M500. That continues with the M510DC as the company brings next-generation encryption to the enterprise space. The advantage of TCG-E is its transparency to the host and ease of deployment, because with a supported RAID card TCG-E doesn't require any additional installation. I don't expect TCG-E to be adopted very quickly on a large scale, but I do see the verticals (financial and medical institutions) having interest in the technology.
Unfortunately, the performance leaves room for improvement. In pure random write performance the M510DC is actually faster than its competitors, but when it comes to mixed and read performance it's outperformed by the 845DC EVO. Sequential read in particular is an area where the M510DC falls behind the EVO, and frankly that's quite an important metric for a drive designed for read workloads.
Ultimately, though, it's really the consistency that is the M510DC's Achilles' heel. On average the performance is decent, but digging deeper reveals that the average hides significant variations in performance, with the worst drops exceeding 10ms in latency. The importance of consistency depends on the application and some are more lenient than others, but in general it's critical for a drive to deliver performance that is not only good but also consistent, so that the end-user gets a consistent experience from whatever service the drive is powering.
All in all, it boils down to whether the pros outweigh the cons and whether the price reflects this. The M510DC is something of an in-between model, sitting between the drives for read and mixed workloads. For performance-sensitive read-centric applications, the 845DC EVO is a better pick because it delivers better read and mixed performance, and most importantly it's very consistent and power efficient. However, if write endurance is a bigger concern than performance and consistency, the M510DC is a viable alternative that is priced below the mixed-workload drives (such as the M500DC) while still delivering competitive endurance at the expense of some performance and consistency.