Original Link: https://www.anandtech.com/show/7006/sandisk-extreme-ii-review-480gb



A few weeks ago I mentioned on Twitter that I had a new favorite SSD. This is that SSD, and surprisingly enough, it’s made by SanDisk.

The SanDisk part is very unexpected, because until now SanDisk hadn’t really put out a very impressive drive. Much like Samsung in the early days of SSDs, SanDisk is best known for its OEM efforts. The U100 and U110 are quite common in Ultrabooks, and more recently even Apple adopted SanDisk as a source for its notebooks. Low power consumption, competitive pricing and solid validation kept SanDisk in the good graces of the OEMs. Unfortunately, SanDisk did little to push the envelope on performance, and definitely did nothing to prioritize IO consistency. Until now.

The previous generation SanDisk Extreme SSD used a SandForce controller with largely stock firmware. This new drive, however, moves to a much more favorable combination for companies with their own firmware development teams. Like Crucial’s M500, the Extreme II uses Marvell’s 88SS9187 (codename Monet) controller. SanDisk also rolls its own firmware, a combination we’ve seen in previous SanDisk SSDs (e.g. the SanDisk Ultra Plus). Rounding out the nearly vertical integration is the use of SanDisk’s 19nm eX2 ABL MLC NAND.

This is standard 2-bit-per-cell MLC NAND with a twist: a portion of each MLC NAND die is set to operate in SLC/pseudo-SLC mode. SanDisk calls this its nCache, and uses it as a lower latency/higher performance write buffer. In the Ultra Plus review I pointed out that there simply wasn’t much NAND allocated to the nCache, since it is pulled from the ~7% spare area on the drive. With the Extreme II SanDisk doubled the amount of spare area on the drive, which should also increase the amount of NAND available for the nCache.
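
As a rough illustration of what "doubling the spare area" means (this is only a sketch, assuming the 240GB model is built from 256GiB of raw NAND, which SanDisk hasn't confirmed), the over-provisioning math looks like this:

```python
# Rough over-provisioning math for a hypothetical 240GB-class drive.
# Assumption: 256GiB of raw NAND behind a 240GB (decimal) user capacity.
# SanDisk doesn't disclose raw capacity or the exact nCache allocation.
raw_nand_bytes = 256 * 1024**3   # 256 GiB of physical NAND
user_bytes     = 240 * 1000**3   # 240 GB advertised capacity

spare_bytes = raw_nand_bytes - user_bytes
spare_pct   = spare_bytes / user_bytes * 100
print(f"Spare area: {spare_bytes / 1e9:.1f} GB ({spare_pct:.1f}% of user capacity)")

# A 256GB-class drive built from the same NAND (like the Ultra Plus) keeps
# only ~7% spare, which is roughly half of the figure printed above.
```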

SanDisk Extreme II Specifications

                      120GB               240GB               480GB
Controller            Marvell 88SS9187
NAND                  SanDisk 19nm eX2 ABL MLC NAND
DRAM                  128MB DDR3-1600     256MB DDR3-1600     512MB DDR3-1600
Form Factor           2.5" 7mm
Sequential Read       550MB/s             550MB/s             545MB/s
Sequential Write      340MB/s             510MB/s             500MB/s
4KB Random Read       91K IOPS            95K IOPS            95K IOPS
4KB Random Write      74K IOPS            78K IOPS            75K IOPS
Drive Lifetime        80TB Written
Warranty              5 years
MSRP                  $129.99             $229.99             $439.99

Some small file writes are supposed to be buffered to the nCache, but that didn’t seem to improve performance in the case of the Ultra Plus, leading me to doubt its effectiveness. However, SanDisk mentioned the nCache can be used to improve data integrity as well. The indirection/page table is stored in nCache, which SanDisk believes gives it a better chance of maintaining the integrity of that table in the event of sudden power loss (since writes to nCache are quicker than to the MLC portion of the NAND). The Extreme II itself doesn’t have any capacitor based power loss data protection.

Don't be too put off by the 80TB drive writes rating. The larger drives should carry higher ratings (and they will last longer), but in order to claim a higher endurance SanDisk would have to actually validate to that higher endurance specification. For client drives, we often see SSD vendors provide a single endurance rating in order to keep validation costs low, despite the fact that larger drives will be able to sustain more writes over their lifetimes. SanDisk offers a 5 year warranty with the Extreme II.

Despite the controller’s capabilities (as we’ve seen with the M500), SanDisk’s Extreme II doesn’t enable any sort of AES encryption or eDrive support.

With the Extreme II, SanDisk moved to a much larger amount of DRAM per capacity point. Similar to Intel’s S3700, SanDisk now uses around 1MB of DRAM per 1GB of NAND capacity. With a flat indirection/page table structure, sufficient DRAM and an increase in spare area, it would appear that SanDisk is trying to improve IO consistency. Let’s find out if it has.
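
The ~1MB-of-DRAM-per-1GB-of-NAND figure falls straight out of the math for a flat indirection table. A quick sanity check (assuming 4-byte entries mapping 4KB logical pages, which is the typical arrangement rather than anything SanDisk has published):

```python
# Back-of-the-envelope size of a flat logical-to-physical mapping table.
# Assumption: one 4-byte entry per 4KB logical page; SanDisk hasn't
# published its actual table format, so treat this as an illustration.
drive_capacity_bytes = 480 * 1000**3   # 480GB model
page_size  = 4 * 1024                  # 4KB mapping granularity
entry_size = 4                         # 4 bytes per table entry

table_bytes = (drive_capacity_bytes // page_size) * entry_size
print(f"{table_bytes / 1e6:.0f} MB of mapping table for a 480GB drive")
# ~469 MB, which lines up with the 512MB of DDR3 on the 480GB Extreme II
# (and scales to ~234MB/117MB for the 240GB/120GB models).
```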



Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests run, but long enough to give me a good look at drive behavior once all spare area has been filled.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs uses the same scale for every drive. The first two sets use a log scale for easy comparison, while the last set uses a linear scale that tops out at 50K IOPS to better visualize the differences between drives.
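
If you want to run something similar yourself after a secure erase, a rough equivalent can be driven with fio. This is only a sketch of the methodology described above, not the tooling behind our charts, and the device path, runtime and log prefix are placeholders:

```python
# Sketch of a comparable IO consistency test using fio.
# WARNING: destructive - this writes to the raw device. /dev/sdX is a placeholder.
import subprocess

device = "/dev/sdX"  # target drive (placeholder - double check before running)

# Precondition: sequentially fill every LBA so all user space is mapped.
subprocess.run([
    "fio", "--name=precondition", f"--filename={device}",
    "--rw=write", "--bs=128k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1",
], check=True)

# Main run: 4KB random writes at QD32 across the full LBA span for ~2000s,
# logging average IOPS once per second for the scatter plots.
subprocess.run([
    "fio", "--name=consistency", f"--filename={device}",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1", "--randrepeat=0",
    "--time_based", "--runtime=2000",
    "--log_avg_msec=1000", "--write_iops_log=consistency",
], check=True)
# fio writes the per-second IOPS samples to a consistency_iops*.log file.
```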

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the drive would have been advertised at had the vendor shipped it with that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
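
For reference, working out how big to make that partition for a given simulated spare area is simple arithmetic; the sketch below uses the 25% point from the graphs as an example, and expresses spare area relative to the advertised capacity:

```python
# How large to make the partition when simulating additional spare area.
# Example numbers only; spare area here is expressed as a fraction of the
# drive's advertised capacity, on top of its factory over-provisioning.
user_capacity_gb = 480    # advertised capacity of the drive under test
target_spare_pct = 25     # extra spare area to simulate

partition_gb = user_capacity_gb * (1 - target_spare_pct / 100)
print(f"Create a ~{partition_gb:.0f} GB partition and leave the rest untouched")
# -> ~360 GB on a 480GB drive; the unpartitioned (and TRIMed) remainder acts
#    as additional spare area for the controller.
```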

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
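
That drop maps directly onto write amplification, i.e. the ratio of NAND writes to host writes. A simplified, purely illustrative view (real controllers add further overhead, and these are not measured figures for the Extreme II):

```python
# First-order view of why steady-state random write performance collapses.
# Write amplification (WA) = NAND bytes written / host bytes written.
# Both numbers below are hypothetical, for illustration only.
fresh_iops = 78_000   # spec-sheet burst 4KB random write IOPS (240GB model)
wa_steady  = 3.0      # assumed steady-state write amplification

# If every host write now costs roughly WA physical writes, sustainable
# IOPS scale down by about that factor (ignoring other controller overhead).
steady_iops = fresh_iops / wa_steady
print(f"~{steady_iops:,.0f} IOPS sustained vs {fresh_iops:,} IOPS burst")
```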

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: 4KB random write IOPS over the full run (log scale) for the Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB and Seagate 600 480GB, with Default and 25% Spare Area configurations selectable]

Um, hello, awesome? The SanDisk Extreme II is the first Marvell-based consumer SSD to actually prioritize performance consistency. The Extreme II does significantly better than pretty much every other drive here with the exception of Corsair's Neutron. Note that increasing the amount of spare area on the drive actually reduces IO consistency, at least during the short duration of this test, as SanDisk's firmware aggressively attempts to improve the overall performance of the drive. Either way, this is the first SSD from a big OEM supplier that actually delivers consistent performance in the worst case scenario.

[Interactive graph: steady state zoom from t=1400s (log scale) for the same five drives, with Default and 25% Spare Area configurations selectable]

[Interactive graph: steady state zoom from t=1400s (linear scale, 50K IOPS max) for the same five drives, with Default and 25% Spare Area configurations selectable]



AnandTech Storage Bench 2013

When I built the AnandTech Heavy and Light Storage Bench suites in 2011 I did so because we didn't have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.

There were a couple of issues with our 2011 tests that I've been wanting to rectify however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, I've shifted focus from simply triggering GC routines to really looking at worst case scenario performance after prolonged random IO. For years I'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up I didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans, not exactly a real world client usage model. The aspects of SSD architecture that those tests stress, however, are very important, and none of our existing tests were doing a good job of quantifying that.

I needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. I think I have that test. The new benchmark doesn't even have a name, I've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test - I record all IO requests made to a test system, then play them back on the drive I'm measuring and run statistical analysis on the drive's responses.

Imitating most modern benchmarks I crafted the Destroyer out of a series of scenarios. For this benchmark I focused heavily on Photo editing, Gaming, Virtualization, General Productivity, Video Playback and Application Development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer

Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what I've been calling this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs

                      The Destroyer (2013)                       Heavy 2011
Reads                 38.83 million                              2.17 million
Writes                10.98 million                              1.78 million
Total IO Operations   49.8 million                               3.99 million
Total GB Read         1583.02 GB                                 48.63 GB
Total GB Written      875.62 GB                                  106.32 GB
Average Queue Depth   ~5.5                                       ~4.6
Focus                 Worst case multitasking, IO consistency    Peak IO, basic GC routines

SSDs have grown in their performance abilities over the years, so I wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When I first introduced the Heavy 2011 test, some drives would take multiple hours to complete it - today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest I've seen it go is 10 hours. Most high performance drives I've tested seem to need around 12 - 13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 I just needed something that had a ton of writes so I could start separating the good from the bad. Now that the drives have matured, I felt a test that was a bit more balanced would be a better idea.

Despite the balance recalibration, there's just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. Now that the days of begging for better random IO performance and basic GC intelligence are over, I wanted a test that would give me a bit more of what I'm interested in these days. As I mentioned in the S3700 review - having good worst case IO performance and consistency matters just as much to client users as it does to enterprise users.

Given the sheer amount of time it takes to run through the Destroyer, and the fact that the test was only completed a little over a week ago, I don't have many results to share. I'll be populating this database over the coming weeks/months. I'm still hunting for any issues/weirdness with the test so I'm not ready to remove the "Preview" label from it just yet. But the results thus far are very telling.

I'm reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric I've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
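
To make those two metrics concrete, here's how they could be derived from a per-IO log of the trace playback. The log format and field names are invented for illustration; this is not the actual analysis tooling:

```python
# Sketch: deriving the two Destroyer metrics from a per-IO playback log.
# Assumed record format: (bytes transferred, service time in microseconds).
def destroyer_metrics(io_log, wall_clock_seconds):
    total_bytes = sum(size for size, _ in io_log)
    total_service_us = sum(svc for _, svc in io_log)

    avg_data_rate_mbps = total_bytes / 1e6 / wall_clock_seconds  # MB/s over the run
    avg_service_time_us = total_service_us / len(io_log)         # mean per-IO latency
    return avg_data_rate_mbps, avg_service_time_us

# Tiny fake example: three IOs of (bytes, service time in us)
sample_log = [(4096, 80), (131072, 400), (4096, 95)]
print(destroyer_metrics(sample_log, wall_clock_seconds=1.0))
```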

AnandTech Storage Bench 2013 - The Destroyer

As you'd expect, the combination of great performance consistency and competitive peak performance drives the Extreme II to the top of our Destroyer charts. I didn't expect to see anyone put out an SSD faster than the Seagate 600 so soon but it looks like SanDisk did it.

AnandTech Storage Bench 2013 - The Destroyer

 



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
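
Since the chart reports MB/s rather than IOPS, converting between the two for a fixed 4KB transfer size is straightforward (using the usual 1MB = 10^6 bytes convention for throughput):

```python
# Convert between 4KB IOPS and the MB/s figures reported in the charts.
def iops_to_mbps(iops, io_size_bytes=4096):
    return iops * io_size_bytes / 1e6

def mbps_to_iops(mbps, io_size_bytes=4096):
    return mbps * 1e6 / io_size_bytes

# e.g. the 240GB Extreme II's rated 78K random write IOPS corresponds to:
print(f"{iops_to_mbps(78_000):.0f} MB/s")   # ~319 MB/s at 4KB
```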

If we look at the raw random read/write speed, SanDisk does fairly well but not quite to the level of Samsung's SSD 840 Pro.

Desktop Iometer - 4KB Random Read (4K Aligned)

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length. Sequential IO performance is very good on the Extreme II, effectively equalling the performance of Samsung's SSD 840 Pro.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Desktop Iometer - 128KB Sequential Write (4K Aligned)

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

As a client-focused drive, it's no surprise that the Extreme II does well in all of the sequential tests. I did notice consistently higher sequential read performance on the lower capacity Extreme II for some reason, but the gap isn't large enough to be significant. On the sequential write side, the 120GB drive is appreciably slower than the 240 and 480GB models simply because of the reduction in NAND die count.

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD



Performance vs. Transfer Size

ATTO is a useful tool for quickly measuring the impact of transfer size on performance. You can get the complete data set in Bench.

Once again, SanDisk seems to know what it's doing. The Extreme II pretty much equals the Samsung SSD 840 Pro in terms of small file performance, something that we rarely see. Only the 120GB model is off here, the rest of the capacities look excellent.



AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size % of Total
4KB 28%
16KB 10%
32KB 10%
64KB 4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

The Extreme II's peak performance isn't as good as the 840 Pro or OCZ Vector, but it's definitely very quick.

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts just represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy for during this entire test. Note that disk busy time excludes any and all idles; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)



AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size % of Total
4KB 27%
16KB 8%
32KB 6%
64KB 5%

Light Workload 2011 - Average Data Rate

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)



Power Consumption

We're introducing a new part of our power consumption testing with this review: measurement of slumber power with host initiated power management (HIPM) and device initiated power management (DIPM) enabled. It turns out that on Intel desktop platforms, even with HIPM and DIPM enabled, SSDs will never go into their lowest power states. In order to get DIPM working, it seems that you need to be on a mobile chipset platform. I modified an ASUS Zenbook UX32VD to allow me to drive power to the drive bay from an external power supply/power measurement rig. I then made sure HIPM+DIPM were enabled, and measured average power with the drive in an idle state. The results are below:

SSD Slumber Power (HIPM+DIPM)

Samsung does amazingly well in this test, with only Intel's first generation X25-M SSD coming anywhere close. The SanDisk drives do alright here; although they're a bit more power hungry than some of the others, the differences aren't large enough to meaningfully impact most notebook usage. The important thing to note is just how bad power consumption can get if your drive doesn't properly support HIPM and DIPM. It's when you start getting into the 500mW - 1000mW range that you'll see real impacts to notebook battery life.

Our traditional idle power test is still useful as this is representative of power consumption in an active idle state. The lowest power states do take time to get into and out of, so if you're actively using your machine you may see some time spent in a non-slumber idle state, which is effectively the data you see below:

Drive Power Consumption - Idle

Once again, the Extreme II does alright here. Idle power consumption isn't high enough to be a problem for notebook users; it's just not low enough to be as good as Samsung.

Under load the story is a little different. Peak sequential IO power consumption is very Samsung-like, but power consumption with a random write workload is amazingly low. I suspect this is a side effect of whatever SanDisk is doing to keep IO consistency in check.

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

SanDisk's Extreme II is an amazingly consistent performer. Joining the ranks of Seagate's SSD 600 and Corsair's Neutron, the Extreme II offers a balance of good peak performance and performance consistency. The former is important with any high end product, while the latter is important for any SSD where the user wants to use as much of the drive's capacity as possible. SanDisk picked a very good balance of IO consistency and peak performance, resulting in the best scores we've ever seen in our new storage benchmark for 2013 (a test that happens to greatly value drives with good worst case scenario performance). As a flagship drive, SanDisk also ships the Extreme II with a nice 5 year warranty.

The Extreme II is an above average performer when it comes to power consumption. Samsung's SSD 840 Pro still holds the title as having the lowest HIPM+DIPM slumber power but the Extreme II isn't power hungry enough to be a problem for mobile users. Power consumption under load is fine as well.

The only complaint I really have about the Extreme II is the lack of encryption/eDrive support. If you don't care about running with encryption enabled however, there's really nothing wrong with SanDisk's Extreme II. It's honestly my favorite client SSD on the market today. What I'm particularly excited about is the potential for all of the work SanDisk has put into the Extreme II's firmware to spill over into its OEM drives as well.

Assuming there are no strange compatibility issues or firmware problems that develop, the Extreme II will likely become one of my most recommended SSDs. Far too often I have to supply the caveat of "make sure you don't fill the drive!" whenever I recommend an SSD. With great worst case performance and good IO consistency in that state, I can recommend SanDisk's Extreme II without any stipulations which I greatly appreciate.
