Original Link: https://www.anandtech.com/show/9258/crucial-mx200-250gb-500gb-1tb-ssd-review
Crucial MX200 (250GB, 500GB & 1TB) SSD Review
by Kristian Vättö on May 22, 2015 8:00 AM EST - Posted in
- Storage
- SSDs
- Crucial
- MX200
- Micron 16nm
Last year Micron launched the M600 SSD for the OEM market, but unlike in the past there was no simultaneous retail product release. We were told that the M600's firmware features would sooner or later find their way into a Crucial-branded product, which finally materialized at CES when Crucial unveiled the MX200.
The hardware is unchanged from the MX100: the MX200 sports the same Marvell 88SS9189 controller with Crucial-Micron's custom firmware and Micron's 16nm MLC NAND. The biggest change is the addition of Dynamic Write Acceleration (DWA), the company's SLC cache implementation. I covered DWA in detail in our Micron M600 review, so I suggest you give it a read if you are interested in a more thorough explanation, but in short, the size of the cache is dynamic and varies depending on the amount of data on the drive. Basically, all empty NAND runs in SLC mode, so the cache shrinks as data is written to the drive. I wasn't very satisfied with DWA in the M600 review because it didn't seem to yield any noticeable performance gains, so I hope the engineers have tweaked the caching algorithms for the MX200.
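To make the concept concrete, here is a minimal sketch of how a fully dynamic SLC cache of this kind shrinks as the drive fills up. This is a simplified model for illustration only, not Crucial's actual caching algorithm; the function name and the numbers are my own.

```python
def dynamic_slc_cache_gb(user_capacity_gb: float, data_stored_gb: float) -> float:
    """Toy model of a fully dynamic SLC cache (not Crucial's actual algorithm).

    Assumes every empty block can run in SLC mode, storing one bit per cell
    instead of two, so the cache can hold roughly half of the free space.
    """
    free_gb = max(user_capacity_gb - data_stored_gb, 0.0)
    return free_gb / 2  # SLC mode halves the capacity of the NAND it occupies


if __name__ == "__main__":
    # 250GB MX200: the cache shrinks from ~125GB when empty to nothing when full
    for used_gb in (0, 50, 125, 200, 250):
        cache_gb = dynamic_slc_cache_gb(250, used_gb)
        print(f"{used_gb:3d}GB stored -> ~{cache_gb:5.1f}GB of SLC cache left")
```

Under this model the cache is at its largest when the drive is nearly empty and vanishes as the drive fills up, which is worth keeping in mind when we look at the full-drive results later in the review.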
The MX200 is positioned above the BX100, which we recently reviewed and liked very much. The CES announcement signaled a change in product strategy: previously Crucial more or less had just one main product line, but with the BX100 and MX200 Crucial is trying to cater to a larger segment of the market by offering the BX100 to the mainstream public and the MX200 to the more demanding enthusiast/professional user group.
| Crucial MX200 Specifications | 250GB | 500GB | 1TB |
| --- | --- | --- | --- |
| Form Factors | 2.5" 7mm, mSATA, M.2 2260 & 2280 | 2.5" 7mm, mSATA, M.2 2260 & 2280 | 2.5" 7mm |
| Controller | Marvell 88SS9189 | Marvell 88SS9189 | Marvell 88SS9189 |
| NAND | Micron 16nm 128Gbit MLC | Micron 16nm 128Gbit MLC | Micron 16nm 128Gbit MLC |
| DRAM (DDR3-1600) | 512MB | 512MB | 1GB |
| Sequential Read | 555MB/s | 555MB/s | 555MB/s |
| Sequential Write | 500MB/s | 500MB/s | 500MB/s |
| 4KB Random Read | 100K IOPS | 100K IOPS | 100K IOPS |
| 4KB Random Write | 87K IOPS | 87K IOPS | 87K IOPS |
| Dynamic Write Acceleration | Yes | Yes (mSATA and M.2 models only) | No |
| DevSleep Power | 2mW | 2mW | 2mW |
| Slumber Power | 100mW | 100mW | 100mW |
| Max Power | 4.4W | 4.7W | 5.2W |
| Encryption | TCG Opal 2.0 & IEEE-1667 (eDrive) | TCG Opal 2.0 & IEEE-1667 (eDrive) | TCG Opal 2.0 & IEEE-1667 (eDrive) |
| Endurance | 80TB | 160TB | 320TB |
| Warranty | Three years | Three years | Three years |
| Price | $110 | $200 | $427 |
The MX200 is available in a variety of form factors, including the rarer double-sided M.2 2260 and single-sided M.2 2280 on top of the normal 2.5" and mSATA models. Due to PCB space restrictions in the mSATA and M.2 form factors, only the 2.5" lineup carries a 1TB SKU; the mSATA and M.2 models top out at 500GB. The 2.5" SKUs also include an Acronis True Image HD license and a 7mm to 9.5mm spacer to ensure laptop compatibility, whereas the mSATA and M.2 retail packages have only mounting screws in addition to the drive itself. All MX200 models are compatible with Crucial's Storage Executive software, which we covered thoroughly in the BX100 review.
| | 250GB | 500GB | 1TB |
| --- | --- | --- | --- |
| Raw NAND Capacity | 256GiB | 512GiB | 1024GiB |
| # of NAND Packages | 8 | 8 | 16 |
| # of Die per Package | 2 | 4 | 4 |
| Over-Provisioning | 9.1% | 9.1% | 9.1% |
The MX200 includes a bit more over-provisioning than its predecessor as Crucial has switched from power-of-two capacities to even tens/hundreds. Part of the over-provisioning is dedicated to Micron's NAND-level parity scheme called RAIN, but the rest is used as additional spare area to increase steady-state performance as well as overall endurance.
Endurance-wise the MX200 is one of the top drives on the market. Whether the extra endurance is needed is another question, though, because the 320TB rating on the 1TB model translates to 175GB of writes per day for five years, which is far more than what most power users write to a drive on a daily basis, let alone a typical client user. Since Dynamic Write Acceleration is only enabled on the 250GB 2.5" model (plus the 500GB mSATA and M.2 models), the high endurance comes purely from Crucial's longer validation, although obviously Crucial has access to the best NAND dies given that its parent company Micron manufactures the NAND. Oddly enough, the warranty is only three years; it would make sense to offer a longer warranty with such a high endurance rating, and nearly all high-end SSDs today carry either five- or ten-year warranties.
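For reference, the over-provisioning and write-per-day figures above are easy to verify with back-of-the-envelope math. A rough sketch, assuming the raw capacities are binary GiB and the user capacities decimal GB:

```python
# Rough verification of the over-provisioning and endurance figures above.
RAW_NAND_GIB = {"250GB": 256, "500GB": 512, "1TB": 1024}   # raw NAND per the table
USER_GB = {"250GB": 250, "500GB": 500, "1TB": 1000}        # user-visible capacity
ENDURANCE_TB = {"250GB": 80, "500GB": 160, "1TB": 320}     # rated write endurance

for sku in RAW_NAND_GIB:
    raw_bytes = RAW_NAND_GIB[sku] * 2**30      # binary gibibytes of raw NAND
    user_bytes = USER_GB[sku] * 10**9          # decimal gigabytes exposed to the host
    over_provisioning = 1 - user_bytes / raw_bytes
    gb_per_day = ENDURANCE_TB[sku] * 1000 / (5 * 365)   # spread over five years
    print(f"{sku}: OP = {over_provisioning:.1%}, "
          f"~{gb_per_day:.0f}GB of host writes per day for five years")
```

Running this gives the 9.1% effective over-provisioning listed in the table and roughly 44GB, 88GB, and 175GB of daily writes for the three capacities over a five-year span.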
Aside from higher endurance and supposedly better performance, the key differentiator from the BX100 is support for hardware-accelerated encryption in the form of the TCG Opal 2.0 and IEEE-1667 standards, which together enable the use of Microsoft eDrive. Crucial was an early supporter of the standards and first implemented them in the M500 two years ago, so they have become part of the common Crucial feature set, although the BX100 dropped the support for higher cost efficiency. It seems that Crucial has adopted a strategy of guiding corporations that require hardware encryption towards the more expensive MX200, which is what many SSD companies have been doing in the form of a separate "business SSD" lineup.
The ceramic capacitor array in the 1TB MX200
In the M600 review, I explained how Crucial's power loss protection in client SSDs is not the same as in enterprise drives; it is backup circuitry that merely protects existing data from corruption. The MX200 brings no changes to that and only offers data-at-rest protection, meaning that the NAND mapping table as well as any in-flight user data are still vulnerable to sudden power losses.
| AnandTech 2015 SSD Test System | |
| --- | --- |
| CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled) |
| Motherboard | ASUS Z97 Deluxe (BIOS 2205) |
| Chipset | Intel Z97 |
| Chipset Drivers | Intel 10.0.24 + Intel RST 13.2.4.1000 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Graphics Drivers | 15.33.8.64.3345 |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 8.1 x64 |
- Thanks to Intel for the Core i7-4770K CPU
- Thanks to ASUS for the Z97 Deluxe motherboard
- Thanks to Corsair for the Vengeance 16GB DDR3-1866 DRAM kit, RM750 power supply, Hydro H60 CPU cooler and Carbide 330R case
Performance Consistency
We've been looking at performance consistency since the Intel SSD DC S3700 review in late 2012 and it has become one of the cornerstones of our SSD reviews. Back then many SSD vendors focused only on high peak performance, which unfortunately came at the cost of sustained performance. In other words, the drives would push high IOPS in certain synthetic scenarios to provide nice marketing numbers, but as soon as you pushed the drive for more than a few minutes you could easily run into hiccups caused by poor performance consistency.
Once we started exploring IO consistency, nearly all SSD manufacturers made a move to improve it, and for the 2015 suite I haven't made any significant changes to the methodology we use to test it. The biggest change is the move from VDBench to Iometer 1.1.0 as the benchmarking software, and I've also extended the test from 2000 seconds to a full hour to ensure that all drives hit steady-state during the test.
For better readability, I now provide bar graphs: the first shows the average IOPS of the last 400 seconds and the second shows that average divided by the standard deviation over the same period. Average IOPS provides a quick look into overall performance, but it can easily hide bad consistency, so looking at standard deviation is necessary for a complete picture.
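For those who want to replicate the two bar graphs, this is essentially all the math involved. A minimal sketch assuming a per-second IOPS log exported from Iometer; the function and variable names are mine:

```python
import statistics


def consistency_metrics(iops_log: list[float], window_s: int = 400) -> tuple[float, float]:
    """Summarize steady-state behavior from a per-second IOPS log.

    Returns the average IOPS over the last `window_s` seconds and that average
    divided by the standard deviation of the same window; a higher ratio means
    more consistent performance.
    """
    window = iops_log[-window_s:]
    average = statistics.fmean(window)
    deviation = statistics.pstdev(window)
    return average, average / deviation if deviation else float("inf")


if __name__ == "__main__":
    # Toy example: one hour of a drive oscillating between 4,000 and 6,000 IOPS
    log = [4000.0, 6000.0] * 1800
    average, ratio = consistency_metrics(log)
    print(f"average IOPS: {average:.0f}, IOPS / stdev: {ratio:.2f}")
```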
I'm still providing the same scatter graphs too, of course. However, I decided to drop the logarithmic graphs and go linear-only since logarithmic graphs exaggerate small differences and can be hard to interpret for those who aren't familiar with them. I provide two graphs: one that covers the whole duration of the test and another that focuses on the last 400 seconds to give a better view of steady-state performance.
It looks like Crucial has finally taken steps to improve steady-state performance, although the additional over-provisioning is partially to thank for the increase. One criticism I always had about Crucial's SSDs was the relatively bad steady-state performance, but the MX200 finally brings the performance closer to other high-end drives.
The consistency is very good as well and far better than what the BX100 offers.
The IO consistency appears to behave differently from the MX100's: the graph resembles the 850 EVO and Pro quite a bit, dropping quickly in performance and then slowly increasing before evening out. The 1TB model is an exception, though, as it seems that the firmware can't properly handle such a large capacity, which results in worse performance and considerably higher variation. Unfortunately, the MX200 wouldn't respond to the hdparm command that I use for over-provisioning testing, so I don't have any results with added over-provisioning at this point.
AnandTech Storage Bench - The Destroyer
The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test to an average user, but for anyone with a heavier IO workload The Destroyer should do a good job at characterizing performance. For full details of this test, please refer to this article.
Despite the improved IO consistency, the MX200 doesn't have any advantage over the MX100 in our heaviest The Destroyer trace. The MX200 is clearly not crafted for intensive IO workloads because there are far better drives available, which is a shame because I've been waiting for Crucial to deliver a true high-end drive, but the MX200 clearly isn't that. What's alarming is the fact that the BX100 is actually faster than the MX200, which doesn't speak too highly of Crucial-Micron's custom firmware for the Marvell controller.
Average latency doesn't really change the story as the MX200 is still a relatively slow drive by today's high-end SSD standards. The performance of the 250GB model in particular is surprisingly bad, and it looks like Crucial's SLC cache implementation isn't optimal for intensive IO workloads.
The share of high latency IOs is also pretty bad, although fortunately even the 250GB model manages to keep the number of 100ms+ IOs within a reasonable limit.
The power consumption is quite average, but the BX100 is without a doubt far more power efficient even for high intensity IO workloads despite its position as a value drive.
AnandTech Storage Bench - Heavy
While The Destroyer focuses on sustained and worst-case performance by hammering the drive with nearly 1TB worth of writes, the Heavy trace provides a more typical enthusiast and power user workload. By writing less to the drive, the Heavy trace doesn't push the SSD into steady-state and thus gives us a good idea of peak performance combined with some basic garbage collection routines. For full details of the test, please refer to this article.
In our Heavy trace the MX200 is an average drive: it's not as fast as Samsung drives, but roughly on par with the BX100. The most notable data point is the 250GB MX200 in full state because the drop in performance is tremendous, which is due to Dynamic Write Acceleration that is only enabled on the 250GB model. Because DWA writes everything to the SLC cache first, the drive constantly needs to migrate data from SLC to MLC, adding a significant amount of overhead and reducing the performance of host IOs.
The latencies are also good, except for the full 250GB MX200.
Power consumption under load is decent, but not BX100 level. The advantage over Samsung drives is notable, though, so the MX200 appears to be a pretty good fit for a laptop.
AnandTech Storage Bench - Light
The Light trace is designed to be an accurate illustration of basic usage. It's basically a subset of the Heavy trace, but we've left out some workloads to reduce the writes and make it more read intensive in general. Please refer to this article for full details of the test.
Now the 250GB MX200 enjoys a small benefit over the larger capacities when the drive is empty as the SLC cache is in its most effective state (the Light trace writes so little that everything can be written to SLC). However, once the drive is full, the 250GB model again loses a substantial part of its performance and drops below any other SSD we have tested. Overall the BX100 is a better fit for light workloads, although the difference isn't huge (10-15%).
Power consumption remains good and competitive against other high-end drives, except for the full 250GB model that draws a lot more power as it needs to write all data twice.
Random Read Performance
For full details of how we conduct our Iometer tests, please refer to this article.
Random read performance is typical of Crucial's Marvell based drives and a bit better than the BX100's.
The power consumption is fairly average too, resulting in good efficiency.
Looking at the scaling with queue depth, the performance increases smoothly across all queue depths and capacities. It's not 850 Pro level, but I suspect the NAND plays a part in that too.
Random Write Performance
Random write performance, on the other hand, is top of the class and similar to the MX100. Crucial's Marvell based SSDs have always had excellent peak random write performance, and the MX200 finally carries that performance over to steady-state too.
Despite the high performance, the power efficiency is good. The SLC cache in the 250GB model shows its advantage here: the performance is nearly the same, whereas the power consumption is considerably lower. Writing to SLC NAND is more power efficient because each write operation requires fewer programming pulses to set the correct voltage state, although the downside is that the MX200 will basically rewrite all the data to MLC later, which defeats any power savings as we saw in our Storage Bench traces.
QD1 and QD2 performance in particular is great, and the throughput also scales well with queue depth. The 250GB model hits the wall of its SLC cache at QD4 (half of the drive is now filled with data, i.e. the whole SLC cache), so the performance takes a slight hit while the drive moves existing data from SLC to MLC and processes new write requests from the host.
Sequential Read Performance
For full details of how we conduct our Iometer tests, please refer to this article.
Sequential read performance hasn't been a strength of Crucial's Marvell based SSDs and the MX200 doesn't change that.
Power efficiency isn't too good either because the performance is low, yet the power draw is quite average.
The scaling graph reveals why: the performance at QD1 and QD2 is simply terrible compared to other drives. Most drives max out the SATA 6Gbps interface at QD2 by providing over 500MB/s, but the MX200 requires QD4 before it reaches its full potential.
Sequential Write Performance
Fortunately sequential write performance is much better. The 250GB model does leave a bit to be desired with its SLC cache, but compared to competing drives in the same capacity class it does well.
Power efficiency is also decent.
Again the full SLC cache shows its impact, but now the slowdown starts at QD2 already (sequential writes are faster than random, so the cache is filled quicker). The larger capacities scale well and reach their maximum throughput at QD2.
Mixed Random Read/Write Performance
For full details of how we conduct our Iometer tests, please refer to this article.
Mixed random performance is great and received a nice upgrade from the MX100. The 250GB version doesn't do that well, but note that our mixed testing is conducted on a full drive, so the SLC cache can't do its magic.
Power consumption is relatively low given the performance, resulting in good efficiency.
The performance of the 250GB SKU drops as more writes are thrown into the mix. That's not surprising given that the drive is full, so it needs to transfer data from SLC to MLC in-flight, which reduces overall performance. The higher capacities without the SLC cache perform very well, though, and the performance scales nicely as the portion of writes increases.
Mixed Sequential Read/Write Performance
Mixed sequential performance isn't as good, but is still pretty decent when excluding the 250GB MX200.
Power consumption is average too.
Again, as the drive is full, the 250GB model just slows down with more write IOs. The higher capacities actually have a fairly even curve instead of the full-fledged bathtub curve that many drives exhibit. It looks like Crucial has made an effort to improve mixed performance, which is always great news because it's an area where most drives are quite weak.
ATTO - Transfer Size vs Performance
ATTO is a handy tool for quickly measuring performance across various transfer sizes and it's also freeware that can easily be run by the end-user.
AS-SSD Incompressible Sequential Performance
Similar to ATTO, AS-SSD is freeware as well and uses incompressible data for all of its transfers, making it a valuable tool when testing drives with built-in compression engines (e.g. SandForce).
Idle Power Consumption
Since we truncate idle times to 25µs in our Storage Bench traces, they don't give a fully accurate picture of real world power consumption as idle power is not properly taken into account. Hence I'm still reporting idle power consumption as a separate benchmark because it's one of the most critical metrics when it comes to evaluating an SSD for mobile use.
Unfortunately I still don't have a way to test DevSleep power consumption due to lack of platform support, but my testbed supports HIPM+DIPM power commands (also referred to as Slumber power), so the results give a rather accurate picture of real-world idle power consumption.
The MX200 supports both DevSleep and slumber power states, so power efficiency during idle times is good. It's not Samsung level, but at ~60mW the MX200 enjoys a small benefit over the MX100 and is overall fairly average.
TRIM Validation
The move from Windows 7 to 8.1 introduced some problems with the methodology we previously used to test TRIM functionality, so I had to come up with a new way to test. I tried a couple of different methods, but ultimately I decided to go with the easiest one that can actually be used by anyone. The software is simply called trimcheck and it was made by a developer who goes by the name CyberShadow on GitHub.
Trimcheck tests TRIM by creating a small, unique file and then deleting it. Next the program will check whether the data is still accessible by reading the raw LBA locations. If the data that is returned by the drive is all zeros, it has received the TRIM command and TRIM is functional.
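For the curious, the method is simple enough to sketch. Below is a rough Linux analogue of the trimcheck approach; the actual tool targets Windows and works on raw volume handles, so treat this as an illustration of the idea with hypothetical device and mount paths, not as the tool itself.

```python
"""Rough Linux analogue of the trimcheck method (illustration only, run as root).

Write a file with a recognizable pattern, record where it lands on the disk,
delete it, TRIM the freed space, and then read the raw LBAs back: if the drive
returns zeros, TRIM is functional. Device and mount paths are hypothetical.
"""
import os
import re
import subprocess
import time

PARTITION = "/dev/sdb1"          # hypothetical: partition holding the test filesystem
MOUNTPOINT = "/mnt/testssd"      # hypothetical: where that filesystem is mounted
TESTFILE = os.path.join(MOUNTPOINT, "trimcheck.bin")
PATTERN = b"TRIMCHECK-UNIQUE-PATTERN-" * 4096   # easy to recognize in a raw read

# 1. Write the marker file and make sure it reaches the drive
with open(TESTFILE, "wb") as f:
    f.write(PATTERN)
    f.flush()
    os.fsync(f.fileno())

# 2. Locate the file's first extent with filefrag (block offset within the filesystem)
report = subprocess.run(["filefrag", "-v", TESTFILE],
                        capture_output=True, text=True, check=True).stdout
match = re.search(r"^\s*0:\s*\d+\.\.\s*\d+:\s*(\d+)\.\.", report, re.MULTILINE)
assert match, "could not parse filefrag output"
block_size = os.statvfs(MOUNTPOINT).f_bsize
byte_offset = int(match.group(1)) * block_size

# 3. Delete the file and ask the filesystem to TRIM the freed space
os.remove(TESTFILE)
subprocess.run(["fstrim", MOUNTPOINT], check=True)
time.sleep(5)                    # give the drive a moment to act on the TRIM

# 4. Read the raw LBAs back and see what the drive returns
with open(PARTITION, "rb") as dev:
    dev.seek(byte_offset)
    returned = dev.read(len(PATTERN))

print("TRIM works!" if returned.count(0) == len(returned) else "Old data still readable.")
```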
And TRIM works!
Final Words
By having two separate BX and MX lineups, it's clear that Crucial is trying to position the MX200 in the higher-end segment and aim the drive at enthusiast and professional users. On paper that works well because the MX200 does deliver considerably higher maximum performance than the BX100 and the feature set is more professional-oriented with hardware encryption support, but unfortunately the MX200 doesn't fulfill its promises in our real-world IO trace testing. In fact, it turns out that the BX100 performs better in typical low queue depth client workloads.
That speaks to an industry-wide problem. Most manufacturers only publish performance figures at high queue depths (typically 32, i.e. the maximum of AHCI), but as our IO traces show, only a fraction of real world client IOs happen at such a high queue depth. Even very intensive client IO workloads rarely go above QD2, so it's totally unrealistic to use QD32 figures as the basis of marketing and product positioning. Of course everyone likes big numbers, especially the marketing teams, but the truth is that focusing solely on those can result in erroneous product positioning, as in Crucial's case. I think the industry as a whole should move more towards low queue depth optimization because that yields better user performance, and at the end of the day it's the user experience that matters, not the number of IOs the drive can theoretically process.
Another thing I'm not very satisfied with is Dynamic Write Acceleration. I don't think an SLC cache is very useful in an MLC based drive because the performance benefits are marginal, at least with SATA 6Gbps. PCIe and NVMe open the door for potentially higher peak performance, but even then I think the design of DWA is inherently flawed. You don't really need more than a few gigabytes of SLC cache in a client drive because client workloads are bursty by nature, so running as much NAND as possible in SLC mode doesn't provide any substantial performance gain. In fact, DWA actually works against itself in more sustained workloads because everything is written to SLC first (basically all empty space is in SLC mode in the 250GB MX200), so if you write more data than the cache can absorb at the time, the drive has to move data from SLC to MLC in-flight, which hurts performance more than simply writing straight to MLC NAND as competing SLC cache designs do. As we saw in our tests, filling the 250GB MX200 with data results in a performance decrease that is far larger than we've encountered on other drives.
| Amazon Price Comparison (5/22/2015) | 240/250/256GB | 480/500/512GB | 960GB/1TB |
| --- | --- | --- | --- |
| Crucial MX200 | $110 | $200 | $427 |
| Crucial BX100 | $96 | $186 | $380 |
| Crucial MX100 | $109 | $210 | - |
| OCZ ARC 100 | $95 | $185 | - |
| Mushkin Reactor | - | - | $404 |
| Samsung 850 EVO | $98 | $198 | $350 |
| Samsung 850 Pro | $143 | $258 | $483 |
| SanDisk Ultra II | $90 | $170 | $330 |
| SanDisk Extreme Pro | $145 | $260 | $440 |
| Transcend SSD370 | $90 | $175 | $360 |
The pricing is obviously higher than the BX100's, but compared to other high-end SATA drives the MX200 is pretty reasonably priced. That said, it still doesn't provide enough value for the money: the only advantage the MX200 has over the BX100 is hardware encryption, and if that's something you need or want, the 850 EVO provides better bang for the buck given that it's cheaper, offers higher performance, and comes with a five-year warranty versus Crucial's three years.
I think Crucial seriously needs to reconsider its product positioning strategy. If Crucial can't deliver a true high performance drive, then I think it's better to focus all resources on one drive rather than have two overlapping products. I really liked the MX100 because it was such a simplified lineup, whereas the MX200 just adds unnecessary complexity without providing any real value. The BX100 is still a great drive and definitely at the top of my list of value drives, but as it stands today I honestly can't see a scenario where the MX200 would be a justifiable purchase because the performance just isn't anywhere near good enough to justify the higher price tag.