Original Link: https://www.anandtech.com/show/6505/plextor-releases-firmware-102-for-the-m5-pro-promises-100k-iops



Earlier this week Plextor put out a press release about a new firmware for the M5 Pro SSD. The new 1.02 firmware is branded "Xtreme" and Plextor claims increases in both sequential write and random read performance. We originally reviewed the M5 Pro back in August and it did well in our tests, but the new firmware makes it worth revisiting. It's still the only consumer SSD based on Marvell's 88SS9187 controller, and it's one of the few that uses Toshiba's 19nm MLC NAND, as most manufacturers are sticking with Toshiba's 24nm MLC for now.

It's not unheard of for manufacturers to release performance-boosting firmware updates well after a product has made it to market. OCZ's Vertex 4 is among the best known examples: OCZ provided not just one but two firmware updates that increased performance by a healthy margin. The SSD space is no stranger to aggressive launch schedules that push products out before they're fully baked; fortunately, a lot can be fixed via firmware updates.

The new M5 Pro firmware is already available at Plextor's site and the update should not be destructive, although we still strongly suggest that you have an up-to-date backup before flashing the drive. I've compiled the differences between the new 1.02 firmware and older versions in the table below:

Plextor M5 Pro with Firmware 1.02 Specifications (changes shown as old -> new)

| Capacity | 128GB | 256GB | 512GB |
|---|---|---|---|
| Sequential Read | 540MB/s | 540MB/s | 540MB/s |
| Sequential Write | 330MB/s -> 340MB/s | 450MB/s -> 460MB/s | 450MB/s -> 470MB/s |
| 4KB Random Read | 91K IOPS -> 92K IOPS | 94K IOPS -> 100K IOPS | 94K IOPS -> 100K IOPS |
| 4KB Random Write | 82K IOPS | 86K IOPS | 86K IOPS -> 88K IOPS |

The 1.02 firmware doesn't bring any major performance increases and the most you'll be getting is a roughly 6% boost in random read speed. To see if there are any other changes, I decided to run the updated M5 Pro through our regular test suite. I'm only including the most relevant tests in the article but you can find all the results in our Bench. The test system and benchmark explanations can be found in any of our SSD reviews, such as the original M5 Pro review.
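As a quick sanity check on those numbers, here's a back-of-the-envelope calculation using the figures from the spec table above (a purely illustrative Python snippet, not part of our test suite):

```python
# Spec-sheet changes from the table above, as (old, new) pairs
claimed = {
    "256/512GB 4KB random read (IOPS)": (94_000, 100_000),
    "512GB 4KB random write (IOPS)":    (86_000, 88_000),
    "512GB sequential write (MB/s)":    (450, 470),
}

for name, (old, new) in claimed.items():
    print(f"{name}: {100 * (new - old) / old:+.1f}%")

# 256/512GB 4KB random read (IOPS): +6.4%
# 512GB 4KB random write (IOPS):    +2.3%
# 512GB sequential write (MB/s):    +4.4%
```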

AnandTech Storage Bench

Heavy Workload 2011 - Average Data Rate

In our storage suites, the 1.02 firmware isn't noticeably faster. In our Heavy suite the new firmware pulls 3.8MB/s (1.7%) higher throughput, but that falls within the range of normal run-to-run variance. The same applies to the Light suite, where the new firmware is actually slightly slower.

Light Workload 2011 - Average Data Rate

Random & Sequential Read/Write Speed

Desktop Iometer - 4KB Random Read (4K Aligned)

Random speeds are all up by 3-5%, though that's hardly going to impact real-world performance.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Sequential speeds are essentially unchanged and the M5 Pro is still a mid-range performer.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Desktop Iometer - 128KB Sequential Write (4K Aligned)



Performance Consistency

In our Intel SSD DC S3700 review we introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't get consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much worse worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests, but long enough to get a good look at drive behavior once all of the spare area fills up.
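If you want to approximate this workload yourself, the sketch below is a rough Python stand-in under a few stated assumptions: it targets a hypothetical /dev/sdX (everything on the drive is destroyed), uses 32 writer threads to stand in for a queue depth of 32, and requires Linux with root access. Our actual testing uses Iometer, so treat this only as an illustration of the idea:

```python
# Rough approximation of the 4KB random write workload described above.
# Assumptions: Linux, root access, hypothetical target /dev/sdX (contents destroyed).
import mmap
import os
import random
import threading
import time

DEV = "/dev/sdX"     # hypothetical device -- everything on it is overwritten
BLOCK = 4096         # 4KB transfers
QD = 32              # 32 writer threads approximate queue depth 32
RUNTIME = 2000       # seconds, matching the test period in the charts below

fd0 = os.open(DEV, os.O_RDONLY)
dev_size = os.lseek(fd0, 0, os.SEEK_END)
os.close(fd0)

ops = [0] * QD       # per-thread operation counters
stop = time.time() + RUNTIME

def writer(idx: int) -> None:
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)                 # page-aligned buffer, needed for O_DIRECT
    while time.time() < stop:
        buf[:] = os.urandom(BLOCK)             # incompressible data (slow but simple)
        offset = random.randrange(dev_size // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        ops[idx] += 1
    os.close(fd)

threads = [threading.Thread(target=writer, args=(i,)) for i in range(QD)]
for t in threads:
    t.start()

# Log instantaneous IOPS once per second (used for the scatter plots below)
last = 0
with open("iops_log.csv", "w") as log:
    for second in range(RUNTIME):
        time.sleep(1)
        total = sum(ops)
        log.write(f"{second},{total - last}\n")
        last = total

for t in threads:
    t.join()
```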

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
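As an illustration of how the per-second data turns into these scatter plots, here's a minimal matplotlib sketch. It assumes a hypothetical iops_log.csv with one second,IOPS pair per line (such as the one written by the workload sketch above):

```python
# Minimal sketch: plot per-second IOPS as scatter plots on log and linear scales
import csv
import matplotlib.pyplot as plt

seconds, iops = [], []
with open("iops_log.csv") as f:
    for row in csv.reader(f):
        seconds.append(int(row[0]))
        iops.append(int(row[1]))

fig, (full, zoomed) = plt.subplots(1, 2, figsize=(12, 4))

# Full 2000 second run on a log scale, as in the first set of graphs
full.scatter(seconds, iops, s=4)
full.set_yscale("log")
full.set_xlabel("Time (s)")
full.set_ylabel("IOPS")
full.set_title("4KB random write, QD32 - full run (log scale)")

# Steady state (t >= 1400s) on a linear scale capped at 40K IOPS,
# as in the third set of graphs
steady = [(t, v) for t, v in zip(seconds, iops) if t >= 1400]
zoomed.scatter([t for t, _ in steady], [v for _, v in steady], s=4)
zoomed.set_ylim(0, 40_000)
zoomed.set_xlabel("Time (s)")
zoomed.set_ylabel("IOPS")
zoomed.set_title("Steady state (linear scale)")

plt.tight_layout()
plt.show()
```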

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Scatter plots: 4KB random write IOPS vs. time, full 2000 second run, log scale]

Wow, that's bad. While we haven't run the IO consistency test on all the SSDs in our labs, the M5 Pro is definitely the worst one we have tested so far. In less than a minute the M5 Pro's performance drops below 100 IOPS, which at a 4KB transfer size equals just 0.4MB/s. What makes it worse is that the drops aren't sporadic: most of the IOs are in fact happening at around 100 IOPS. There are occasional peak transfers at 30-40K IOPS, but the drive consistently performs far worse.
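For context, the 0.4MB/s figure is simply IOPS multiplied by the transfer size (a trivial worked example):

```python
# Converting IOPS to throughput at a 4KB transfer size
iops = 100
transfer_size_kb = 4
print(iops * transfer_size_kb / 1000, "MB/s")   # 0.4 MB/s
```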

An even bigger issue is that adding more over-provisioning doesn't bring much relief. As we discovered in our performance consistency article, giving the controller more spare area usually makes performance much more consistent, but unfortunately that doesn't apply to the M5 Pro. It helps a bit, as it takes longer for the drive to enter steady-state and more IOs happen in the ~40K IOPS range, but most IOs are still stuck at around 100 IOPS.
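For readers unfamiliar with manual over-provisioning, the usual approach is to secure erase the drive and then leave part of its capacity unpartitioned. The sketch below uses illustrative, assumed capacities for a 256GB-class drive to show how that translates into total spare area:

```python
# Illustrative only: how effective spare area grows when part of the drive
# is left unpartitioned. Assumes 256GiB of raw NAND on a "256GB" drive.
raw_nand_gib = 256                    # physical NAND (assumed)
user_gib = 256 * 1000**3 / 1024**3    # advertised 256GB expressed in GiB (~238.4)

for extra_op in (0.00, 0.25):         # 0% and 25% of user space left unpartitioned
    usable = user_gib * (1 - extra_op)
    spare = raw_nand_gib - usable
    print(f"{extra_op:.0%} extra OP -> {spare / raw_nand_gib:.1%} total spare area")

# 0% extra OP -> 6.9% total spare area
# 25% extra OP -> 30.2% total spare area
```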

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we get better visibility into how each drive will perform over the long run.

[Scatter plots: 4KB random write IOPS vs. time, steady state (t=1400s onwards), log scale]

Concentrating on the final part of the test doesn't really reveal anything new: as we already saw in the first graph, the M5 Pro reaches steady-state very quickly and its performance stays about the same throughout the test. The peaks are actually high compared to other SSDs, but having the occasional IO complete at 3-5x the speed won't help when over 90% of the transfers are significantly slower.

[Scatter plots: 4KB random write IOPS vs. time, steady state, linear scale (0-40K IOPS)]



Final Words

It's always great to see manufacturers improving their existing products but the 1.02 firmware for the M5 Pro doesn't really change its ranking. It provides minor tweaks to random IO performance but when looking at the big picture, the changes are fairly insignificant. The M5 Pro is still noticeably behind Samsung's 840 Pro and OCZ's Vector, which are currently in their own class when it comes to performance. Plextor's pricing is, however, pretty competitive and depending on the capacity you can get the M5 Pro for as much as $100 cheaper than the 840 Pro or Vector. Plextor has turned out to be one of the more reliable SSD vendors, although admittedly their customer base isn't as large as some of the other players we cover.

Despite Plextor's reliability, I'm not very comfortable with the high IO latency in the M5 Pro. I would rather have slower peak performance if it translated to more consistent overall performance as the end user is not going to notice the peaks but is definitely going to notice the hiccups caused by frequent, high maximum latencies. 
