Original Link: https://www.anandtech.com/show/7896/plextor-m6s-m6m-256gb-review



Continuing our spring full of SSDs, the next drives under the scope are Plextor's M6S and M6M. Both drives were already showcased at CES this year but after a series of delays the release was pushed to April, which brings us to this review. Similar to the M5 series, the M6S is Plextor's mainstream offering with a focus on price, whereas the M6M is the mSATA variant of the M6S. The M5M was actually derived from the M5 Pro, so in this regard Plextor has slightly modified their strategy but I'm guessing this is to make the M6M more competitive in price. There will be an M6 Pro as well later this year and we'll probably know more after Computex (Plextor said June/July).

Photography by Juha Kokkonen

The biggest changes the M6S and M6M bring are Marvell's 88SS9188 controller and Toshiba's second generation 19nm MLC NAND. I'll cover the NAND in more detail in just a bit but the 9188 controller is essentially a "lite" version of the 9187 found in drives like the M5 Pro and Crucial M550. The number of channels has been cut from eight to four, which is typical for budget and low power designs. We saw a similar trend with Marvell's previous generation 917x controllers, where the 9174 was the full blown 8-channel design and the 9175 was the cut-down derivative. I believe the main market for the 9188 is mSATA drives because the spec only allows four NAND packages anyway, but Plextor has decided to use the 9188 controller in both the M6S and M6M.

Plextor M6S Specifications
Capacity 128GB 256GB 512GB
Controller Marvell 88SS9188
NAND Toshiba A19nm MLC
Cache 256MB 512MB 768MB
Sequential Read 520MB/s 520MB/s 520MB/s
Sequential Write 300MB/s 420MB/s 440MB/s
4KB Random Read 88K IOPS 90K IOPS 94K IOPS
4KB Random Write 75K IOPS 80K IOPS 80K IOPS
Endurance 72TB (~66GB/day)
Warranty Three years

In terms of performance, there isn't any noticeable difference between the M6S and M6M. Similar to Crucial, Plextor is using 64Gbit die in the smaller capacity drives (see the table below for details) and 128Gbit in the larger ones. I covered this in the M550 review but in short the usage of smaller die increases parallelism, which in turn increases performance. For small drives the 128Gbit die is too large in capacity and the limited parallelism would lead to slow write speeds as we saw with the M500.

Plextor M6M Specifications
Capacity 64GB 128GB 256GB 512GB
Controller Marvell 88SS9188
NAND Toshiba A19nm MLC
Cache 128MB 256MB 512MB 768MB
Sequential Read 520MB/s 520MB/s 520MB/s 520MB/s
Sequential Write 160MB/s 340MB/s 440MB/s 440MB/s
4KB Random Read 73K IOPS 90K IOPS 94K IOPS 94K IOPS
4KB Random Write 42K IOPS 76K IOPS 80K IOPS 80K IOPS
Endurance 72TB (~66GB/day)
Warranty Three years
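
For reference, the ~66GB/day figure in both spec tables is simply the 72TB endurance rating spread over the three-year warranty period. A quick back-of-the-envelope check (my arithmetic, not Plextor's published method):

```python
# 72TB of rated writes spread over the three-year warranty period
# works out to roughly 66GB of writes per day.
ENDURANCE_TB = 72
WARRANTY_YEARS = 3

writes_per_day_gb = ENDURANCE_TB * 1000 / (WARRANTY_YEARS * 365)
print(f"~{writes_per_day_gb:.0f} GB/day")  # ~66 GB/day
```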

Unfortunately, there is no hardware encryption support. This seems to be a feature that only a few manufacturers consider important for client drives, although I disagree because the value of the data we are carrying around is constantly increasing.

Drive | M6S 128GB | M6S 256GB | M6S 512GB | M6M 64GB | M6M 128GB | M6M 256GB | M6M 512GB
# of NAND Packages | 8 | 8 | 8 | 4 | 4 | 4 | 4
Die Config per Package | 2 x 8GB | 4 x 8GB | 4 x 16GB | 2 x 8GB | 4 x 8GB | 4 x 16GB | 8 x 16GB
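
As a quick sanity check of the table above (and of the die-count argument earlier on this page), total capacity is just packages x dies per package x die capacity, and the resulting die count is a rough first-order proxy for parallelism. A minimal sketch using the configurations listed:

```python
# Total capacity and die count for each configuration in the table above.
# Die count is a first-order proxy for parallelism, which is why the small
# capacities (fewer and/or smaller dies) carry lower write specs.
configs = {
    # drive: (packages, dies per package, die capacity in GB)
    "M6S 128GB": (8, 2, 8),
    "M6S 256GB": (8, 4, 8),
    "M6S 512GB": (8, 4, 16),
    "M6M 64GB":  (4, 2, 8),
    "M6M 128GB": (4, 4, 8),
    "M6M 256GB": (4, 4, 16),
    "M6M 512GB": (4, 8, 16),
}

for drive, (pkgs, dies, die_gb) in configs.items():
    print(f"{drive}: {pkgs * dies * die_gb}GB across {pkgs * dies} dies")
```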

There is no NAND-level redundancy either, even though that is also becoming a standard feature. The need for redundancy of course depends on the NAND and its reliability, but as we move to smaller lithographies it will certainly be important to have some level of protection against page/block level failures. Plextor does have fairly strict quality control, though, as every drive is tested for at least 48 continuous hours, including idle and power cycling tests (which are often what cause issues). Such rigorous testing makes NAND redundancy less critical, but I'd still like to see at least some redundancy just in case.

Test System

CPU Intel Core i5-2500K running at 3.3GHz
(Turbo and EIST enabled)
Motherboard ASRock Z68 Pro3
Chipset Intel Z68
Chipset Drivers Intel 9.1.1.1015 + Intel RST 10.2
Memory G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card Palit GeForce GTX 770 JetStream 2GB GDDR5
(1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers NVIDIA GeForce 332.21 WHQL
Desktop Resolution 1920 x 1080
OS Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit



The Math of Marketing: Not All 19nm NAND Is the Same

Almost a year ago, Toshiba/SanDisk announced their second generation 19nm NAND. It's typical for NAND manufacturers to use the same process node for more than one generation because they can cut the die size by increasing the page, block, or die capacity (or even all three at once), which leads to lower production costs. However, Toshiba/SanDisk had already upped the page size to 16KB and the die capacity remained at 64Gbit, so how did they manage to decrease the die size by 17%? We'll need to dig a little deeper into the die to find that out but first (no, I'm not gonna take a selfie) let's quickly recap how NAND works.

That is what a cross-section of a NAND cell looks like when it's turned into a nice colorful graph. This is what it looks like in practice:

Not as nice as the graph above, right? The reason we need the cross-section photo is because the graph is a bit too simplified and doesn't show one crucial thing: the control gate isn't something that just sits on top of the floating gate and the inter poly dielectric - it actually wraps around the whole floating gate. That's to keep the capacitance between the control and floating gates as high as possible, which in turn helps to maintain the charge in the floating gate and increase performance (the math behind this is actually simpler than you would expect but I'm not going to scare people off with a bunch of equations here). But that is just one cell. To truly understand the structure of NAND, we need to zoom out a bit.

That is what a bunch of NAND cells look like from above. The bitlines are made of silicon (they are the silicon in the cross-section photo above) and on top of them are the wordlines, which are also known as control gates. At every intersection of a bitline and a wordline, there is one cell capable of holding one (SLC), two (MLC) or three (TLC) bits of data.

Traditionally cells are symmetrical. The process node refers to the size of one cell, so in the case of 20nm NAND, the cell size would be 20nm x 20nm. However, there is no rule against making an asymmetrical cell and that is in fact what Toshiba/SanDisk did with their first generation 19nm NAND. Instead of being a symmetrical 19nm x 19nm design, the cell size was 19nm x 26nm. Compared to IMFT's symmetrical 20nm design, the actual cell size is quite a bit larger (494nm² vs 400nm²), yet in terms of marketing Toshiba's/SanDisk's "19nm" NAND was smaller and more advanced.

You could call that cheating but there is a good technical reason as to why building an asymmetrical design makes sense. As I mentioned earlier, the wordline (control gate) wraps around every floating gate and between them is an insulating inter poly dielectric (often referred to as ONO due to its oxide-nitride-oxide structure, or just IPD). Since the floating gate is where the electrons are stored, it needs to be insulated; otherwise the electrons could easily escape the floating gate and you would have a brick that can't reliably hold data.

The inter poly dielectric (IPD) is the tricky part here -- because it wraps around every floating gate, the minimum distance between two floating gates (and hence bitlines) must be at least twice the thickness of the IPD. Scaling the IPD is difficult: make it too thin and the cell becomes vulnerable to leakage because the IPD can no longer reliably insulate the floating gate. It's generally considered impossible to scale the IPD below 10nm, so 26nm is already pretty good and 20nm is hitting the limits.
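
To put some numbers on this, here is the arithmetic behind the cell sizes quoted above and the IPD-imposed floor on scaling (my own back-of-the-envelope figures, using only the dimensions already mentioned):

```python
# Cell footprint comparison, in nm^2 (bitline dimension x wordline dimension).
imft_20nm       = 20 * 20    # 400 nm^2, symmetrical
toshiba_19nm_g1 = 19 * 26    # 494 nm^2, asymmetrical first generation
toshiba_19nm_g2 = 19 * 19.5  # 370.5 nm^2, second generation "A19nm"

# The IPD wraps around each floating gate, so the spacing between adjacent
# floating gates (along the bitline) must be at least twice the IPD
# thickness. With ~10nm as the practical IPD floor, ~20nm is about as tight
# as that dimension can get -- hence 26nm was conservative and 20nm is
# pushing the limit.
MIN_IPD_NM = 10
min_spacing_nm = 2 * MIN_IPD_NM  # 20nm

print(imft_20nm, toshiba_19nm_g1, toshiba_19nm_g2, min_spacing_nm)
```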

IMFT's approach is different. It's a high-k metal gate design and the wordline no longer needs to be wrapped around the floating gate, but I'm not going to go into more detail about that here. The short summary is that it allows for a symmetrical 20nm design without sacrificing reliability.

Courtesy of EETimes.com

In addition, you will want to have some conducting poly-silicon (i.e. wordline/control gate) between every bitline as well to build up capacitance, so in reality it's much harder to scale the bitline length compared to the wordline. There is no wrapping issue with wordlines and the only thing you really have to take into account is wordline to wordline interference. To battle that, all NAND manufacturers are currently using tiny air gaps between wordlines to reduce the interference and still be able to scale down the NAND.

NAND Process | IMFT (Intel/Micron) 20nm 64Gbit | IMFT (Intel/Micron) 20nm 128Gbit | Toshiba/SanDisk 19nm 64Gbit (1st gen) | Toshiba/SanDisk A19nm 64Gbit (2nd gen)
Cell Size | 20nm x 20nm | 20nm x 20nm | 19nm x 26nm | 19nm x 19.5nm
Die Size | 118mm² | 202mm² | 113mm² | 94mm²
Gbit per mm² | 0.542 | 0.634 | 0.566 | 0.681

With the second generation 19nm NAND, Toshiba/SanDisk has been able to cut the bitline length from 26nm to 19.5nm. It's still 19nm per marketing standards but at the engineering level this is pretty significant. Unfortunately the NAND is so new that I don't know what Toshiba/SanDisk has done to achieve a 19nm x 19.5nm cell size. It's certainly possible that Toshiba/SanDisk has also transitioned to a high-k metal gate process but we'll know more when the chip is put under a microscope.

It's interesting that while IMFT's 20nm NAND has a smaller cell size than Toshiba's/SanDisk's first generation 19nm die, the density is still lower. I believe this has to do with IMFT's relatively poor memory area efficiency, which is only 52% for the 20nm 64Gbit die (i.e. 52% of the die is the actual memory array; the rest is peripheral circuitry such as interconnects and the like).
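
The Gbit per mm² row in the table is simply die capacity divided by die area; a short sketch that reproduces those figures, along with the ~17% die shrink mentioned at the start of this page:

```python
# Density is die capacity divided by die area (values from the table above).
dies = {
    "IMFT 20nm 64Gbit":             (64, 118),
    "IMFT 20nm 128Gbit":            (128, 202),
    "Toshiba/SanDisk 19nm 64Gbit":  (64, 113),
    "Toshiba/SanDisk A19nm 64Gbit": (64, 94),
}

for name, (gbit, area_mm2) in dies.items():
    print(f"{name}: {gbit / area_mm2:.3f} Gbit/mm^2")

# Die area shrink of the second generation part vs the first generation
shrink = (113 - 94) / 113
print(f"A19nm die shrink: {shrink:.0%}")  # ~17%
```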

The real question for most users will be what sort of performance you can get out of the new NAND, however, so let's move on to benchmarks with the Plextor M6S/M6M.



Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first one covers the whole duration of the test in log scale. The second and third ones zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second one uses log scale for easy comparison, whereas the third one uses linear scale for better visualization of the differences between drives. Click the buttons below each graph to switch the source data.
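
For illustration, this is roughly how a per-second IOPS log gets boiled down to the steady-state numbers discussed below. The log format (one IOPS value per line) and the file name are hypothetical stand-ins, not the exact output of our test tools:

```python
# Hypothetical helper: summarize a per-second IOPS log from the consistency
# test. One IOPS sample per line; steady state is assumed to start at t=1400s.
def summarize_iops_log(path, steady_state_start_s=1400):
    with open(path) as f:
        iops = [float(line) for line in f if line.strip()]

    steady = sorted(iops[steady_state_start_s:])
    return {
        "avg_iops": sum(steady) / len(steady),
        "min_iops": steady[0],
        # worst-case dips matter more for perceived responsiveness than the
        # average, which is why we plot every one-second sample
        "p1_iops": steady[len(steady) // 100],
    }

if __name__ == "__main__":
    print(summarize_iops_log("m6s_4k_qd32_iops.log"))
```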

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[IO consistency graph: full test duration, log scale. Selectable source data: Plextor M6S, Plextor M5M, Crucial M550, SanDisk Extreme II, Samsung SSD 840 EVO mSATA; default and 25% spare area.]

Ouch, this doesn't look too good. The IOPS constantly drops below 1,000, which is something I wasn't expecting to see anymore. Even the M550 can keep the IOPS at a minimum of ~4,000, so the M6S certainly doesn't do well here. With added over-provisioning (OP) the performance does look a lot better and the minimum IOPS jumps to ~10K, but it's still a downgrade from the M5M. Given the change to a lighter controller this is perhaps expected, but I'm still worried about the consistency without additional OP. I would make sure to leave ~10-15% of the drive empty with the M6S and M6M to avoid running into inconsistent performance.

[IO consistency graph: steady-state zoom (t=1400s onward), log scale. Same drive and spare area selections as above.]

[IO consistency graph: steady-state zoom (t=1400s onward), linear scale. Same drive and spare area selections as above.]

 



AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based—we record all IO requests made to a test system and play them back on the drive we're testing and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.

AnandTech Storage Bench 2013 - The Destroyer
Workload Description Applications Used
Photo Sync/Editing Import images, edit, export Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming Download/install games, play games Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization Run/manage VM, use general apps inside VM VirtualBox
General Productivity Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback Copy and watch movies Windows 8
Application Development Compile projects, check out code, download code samples Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
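
To make the two metrics concrete, here is a simplified illustration of how they can be derived from a list of completed IOs. The IO list, its format, and the accounting (idle time is ignored, IOs are treated as if they completed back to back) are all made up for the example; this is not the actual trace playback output:

```python
# Each entry: (bytes transferred, service time in microseconds). Values are
# invented purely for illustration.
ios = [
    (131072, 85.0),   # 128KB read
    (4096, 310.0),    # 4KB write
    (65536, 120.0),   # 64KB read
]

total_bytes = sum(size for size, _ in ios)
total_service_time_s = sum(t for _, t in ios) / 1e6

avg_data_rate_mbps = total_bytes / 1e6 / total_service_time_s
avg_service_time_us = sum(t for _, t in ios) / len(ios)

print(f"{avg_data_rate_mbps:.1f} MB/s, {avg_service_time_us:.1f} us")
```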

Storage Bench 2013 - The Destroyer (Data Rate)

The IO consistency results already hinted that the M6S wouldn't perform very well in The Destroyer and that is indeed the case. The performance is slightly down from the M5 Pro Xtreme and is overall similar to the Crucial M550. Both of these drives have only 7% over-provisioning, which definitely hurts the benchmark scores, as there is unfortunately no way to test with added over-provisioning. It's hard to say how much the scores would improve with 12% over-provisioning, but Toshiba (the Strontium Hawk is a rebranded Toshiba drive with a Marvell controller) is able to achieve more than twice the performance without any added over-provisioning, so there should certainly be room for improvement even without touching the over-provisioning.

Storage Bench 2013 - The Destroyer (Service Time)



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
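
Since the spec tables on the first page quote random performance in IOPS while the Iometer results here are reported in MB/s, the conversion for a fixed 4KB transfer size is worth spelling out (a simple helper of my own, not part of the test suite):

```python
# For a fixed transfer size, MB/s and IOPS are directly convertible.
def iops_to_mbps(iops, block_size_bytes=4096):
    return iops * block_size_bytes / 1e6

def mbps_to_iops(mbps, block_size_bytes=4096):
    return mbps * 1e6 / block_size_bytes

# e.g. the 256GB M6S's rated 80K random write IOPS corresponds to roughly:
print(f"{iops_to_mbps(80_000):.0f} MB/s")  # ~328 MB/s
```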

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

While random read speed remains unchanged from the M5 days, random write performance is nicely up. We are still not seeing very high random write performance at low queue depths but some improvement is always better than nothing.

Sequential Read/Write Speed

To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Both sequential read and write performance are up slightly compared to the M5S and M5M. The improvements are rather marginal (4-13%), but at least performance is going up even though we are dealing with smaller lithography NAND and a more limited controller.

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers. In AS-SSD the sequential read performance actually takes a small hit, although it could be just normal test variation. In any case, we are bottlenecked by the SATA 6Gbps interface when it comes to read performance anyway. Write performance, on the other hand, is up by as much as 24%, which is rather significant.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance



Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. Both read and write performance across all IO sizes are up from the M5M and are now similar to the M5 Pro Xtreme. Write performance does have room for improvement as all Plextor drives are behind in this category, and overall the performance is quite average.







AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

In our old Storage Bench, the M6S and M6M do much better. They are still not best-in-class drives but are able to compete with the Samsung 840 EVO and other mainstream drives. The scenarios here might apply better to average client workloads, whereas the new 2013 Bench focuses on worst-case performance that is important for heavy users and professionals. Unless price is a significant factor, getting a better SSD is generally recommended for long-term use.

Light Workload 2011 - Average Data Rate



Power Consumption

In the ADATA SP920 review, I mentioned that I was finally collecting slumber power numbers for all the drives I have tested, but I hadn't tested enough drives to meet the strict deadline for the SP920 review. I now have the numbers to share. I'm only including the most relevant drives here but you can find all the results in our Bench.

Note that these results differ from the ones Anand has published earlier because we are using totally different testbeds. Anand is using a modified laptop and only measures the current on the 3.3V rail, whereas I'm using our 2014 SSD testbed, which is a desktop system. I was surprised that the system even supported HIPM+DIPM as traditionally those are mobile-only features, but with aggressive link power management enabled in the BIOS, I was able to get them to work on a desktop as well. Unfortunately my multimeter only supports current measurement at an accuracy of one milliamp, so the results go in multiples of five milliwatts (0.001A * 5V = 5mW, which is the smallest increment).
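
In other words, the 5mW granularity follows directly from P = V x I on the 5V rail with a 1mA current resolution:

```python
# The multimeter resolves current in 1mA steps on the 5V rail, so measured
# power is quantized to 5mW increments (P = V * I).
RAIL_VOLTAGE_V = 5.0
CURRENT_RESOLUTION_A = 0.001  # 1 mA

power_step_mw = RAIL_VOLTAGE_V * CURRENT_RESOLUTION_A * 1000
print(f"smallest measurable increment: {power_step_mw:.0f} mW")  # 5 mW
```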

The M6S and M6M do pretty well here. Out of the drives that support low power states, the M6S and M6M are mediocre, but having the support is more important than the actual numbers. Whether the drive consumes 80mW or 40mW at idle won't make a huge difference to the battery life but 80mW vs 800mW certainly does. Both drives also support DevSleep and Plextor claims power consumption as low as 2mW but unfortunately we don't have the tools to test that yet (it is in the works, though).

Power consumption under load is also very good. The days of super power hungry SSDs are fortunately over and most drives stay easily below 4W, but keeping below 3W in all cases is still quite rare.

SSD Slumber Power (HIPM+DIPM) - 5V Rail

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

The M6S and M6M are mostly marginal updates to their M5 series counterparts. I believe Plextor's number one goal with the M6S and M6M was to cut costs by utilizing a cheaper controller and smaller lithography NAND and that's why the performance improvements are only minor. It seems that most, if not all, SSD OEMs are preparing for the PCIe era and all the updates we've seen lately have been rather modest with no firmware overhauls to significantly improve performance.

The IO consistency is the biggest issue I have with the M6S and M6M. While I understand that the lighter controller introduces some obstacles in all performance segments, the inconsistency seems to be rooted deep in Plextor's firmware because the behavior has been the same with all Plextor drives I've tested (even the M6e). For a light user I don't believe that is a major issue as long as the user keeps ~10-15% of the drive empty, but truth be told there are other mainstream drives that have far more consistent performance (such as the Samsung SSD 840 EVO and Crucial M550).

Another thing the M6S and M6M lack is hardware encryption support, more specifically support for TCG Opal 2.0 and IEEE-1667. There is AES-256 support but only through an ATA password, which isn't the most secure approach and requires BIOS support to function. Especially for laptop users that can be a deal breaker since laptops are much more vulnerable to theft, and personally I wouldn't use a laptop for any work purposes without some form of encryption.

NewEgg Price Comparison (4/9/2014)
  120/128GB 240/256GB 480/512GB
Plextor M6S (MSRP) $105 $170 $400
Plextor M6M (MSRP) $110 $180 $420
Plextor M5 Pro Xtreme $200 $230 $452
Plextor M5S $110 $200 -
Plextor M5M $115 $220 -
ADATA Premier Pro SP920 (MSRPs) $90 $160 -
ADATA XPG SX910 $110 $320 $600
Crucial M550 $100 $169 $335
Crucial M500 $78 $120 $240
Intel SSD 730 - $230 $480
Intel SSD 530 $100 $220 -
OCZ Vector 150 $115 $190 $370
OCZ Vertex 460 $100 $266 $300
Samsung SSD 840 EVO $90 $150 $280
Samsung SSD 840 Pro $120 $208 $420
SanDisk Extreme II $120 $215 $420
Seagate SSD 600 $105 $136 $380

The MSRPs seem very, very high for a mainstream drive. Plextor's MSRPs have been quite over the top in the past as well and I would expect pricing to drop to M5S levels, but even then the question is: are the M6S and M6M price competitive enough? The 250GB 840 EVO can be had for $50 less than the 256GB M5S, which is substantial considering that the EVO has better performance and support for hardware encryption. Crucial's M500/M550 are alternatives as well with affordable pricing and especially the M500 is unbeatable for value oriented buyers.
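
For a rough dollars-per-gigabyte comparison of a few of the 240/256GB class drives, using the prices listed in the table above (nominal capacities, my own arithmetic):

```python
# Price per gigabyte for selected 240/256GB class drives (prices as listed
# in the table above, 4/9/2014; capacities are nominal).
drives_240_256 = {
    "Plextor M6S (MSRP)":  (170, 256),
    "Crucial M550":        (169, 256),
    "Samsung SSD 840 EVO": (150, 250),
    "Crucial M500":        (120, 240),
}

for name, (price_usd, capacity_gb) in drives_240_256.items():
    print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
```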

Update 4/17: Plextor just sent us updated MSRP pricing, which I've listed in the table above. The changes are rather significant as for example the 512GB M6S drops from $500 to $400, making the pricing much more sensible. The M6S is now close to the M550 and 840 EVO in terms of pricing, although if I had to choose, I'd still go with the M550 or 840 EVO due to better performance and feature set. 

All in all, I think it's currently very hard for the tier-two OEMs (i.e. ones without a NAND fab) to compete in the mainstream SSD market. Crucial and Samsung are dominating the market with very aggressive pricing and competitive feature sets. That said, the M6S/M6M can be a decent buy if the prices drop enough, but I find that unlikely to occur due to Crucial's and Samsung's NAND advantage. With better performance and an improved feature set the M6S and M6M could be more competitive, but as it stands there are better options in the market, namely the 840 EVO and M500/M550.
