Original Link: https://www.anandtech.com/show/8520/sandisk-ultra-ii-240gb-ssd-review



Two years ago Samsung dropped the bomb by releasing the first TLC NAND based SSD. I still vividly remember Anand's reaction when I told him about the SSD 840. I was in Korea for the launch event, sitting in the press room where Samsung held the announcement. Samsung had only sampled us and everyone else with the 840 Pro, so the 840 and its internals had remained a mystery. As soon as Samsung lifted the curtain on the 840's specs, I shot Anand a message telling him that it was TLC NAND based. The reason I still remember this so clearly is that I had to tell Anand three times, "Yes, I am absolutely sure and am not kidding," before he took my word.

For nearly two years Samsung was the only manufacturer with an SSD that utilized TLC NAND. Most of the other manufacturers have talked about TLC SSDs in one way or another, but nobody had come up with anything retail-worthy... until now. A month ago SanDisk took the stage and unveiled the Ultra II, the company's first TLC SSD and the first TLC SSD not made by Samsung. Obviously there is a lot to discuss, but let's start with a quick overview of TLC and the market landscape.

There are a variety of reasons for Samsung's head start in the TLC game. Samsung is the only client SSD manufacturer with a fully integrated business model: everything inside Samsung SSDs is designed and manufactured by Samsung. That is unique in the industry because even though the likes of SanDisk and Micron/Crucial manufacture NAND and develop their own custom firmware, they rely on Marvell's controllers for their client drives. Third party silicon always creates some limitations because it is designed based on the needs of several customers, whereas in-house silicon can be designed for a specific application and firmware architecture.

Furthermore, aside from SK Hynix, Samsung is the only NAND manufacturer that is not part of a NAND joint venture. Without a partner, Samsung has full freedom to dedicate as much (or as little) of its resources and fab space to TLC development and production as necessary, while SanDisk must coordinate with Toshiba to ensure that both companies are satisfied with the development and production strategy.

From what I have heard, the two major problems with TLC have been a late production ramp and low volume. In other words, it has taken two or three additional quarters for TLC NAND to become mature enough for SSDs compared to MLC NAND at the same node, by which point a new MLC node is already sampling and will be available for SSD use within a couple of quarters. It has simply made sense to wait for the smaller and more cost efficient MLC node to become available instead of focusing development resources on a TLC SSD that would become obsolete in a matter of months.

SanDisk and Toshiba seem to have revised their strategy, though, because their second generation 19nm TLC is already SSD-grade and production of both MLC and TLC flavors of the 15nm node is ramping up as we speak. Maybe TLC is finally becoming a first class citizen in the fab world. Today's review will help tell us more about the state of TLC NAND outside of Samsung's world. I am not going to cover the technical aspects of TLC here because we have done that several times before, so take a look at the links in case you need a refresher on how TLC works and how it differs from SLC/MLC.

The Ultra II is available in four capacities: 120GB, 240GB, 480GB and 960GB. The 120GB and 240GB models are shipping already, while the larger 480GB and 960GB models will be available in about a month. All come in a 2.5" 7mm form factor with a 9.5mm spacer included. There are no mSATA or M.2 models available, and from what I was told there are none in the pipeline either (at least for retail). SanDisk has always been rather conservative with its retail lineup and has not been interested in the small niches that mSATA and M.2 currently serve, so it is a logical decision to stick with 2.5" for now.

SanDisk Ultra II Specifications
Capacity 120GB 240GB 480GB 960GB
Controller Marvell 88SS9190 (120GB & 240GB) Marvell 88SS9189 (480GB & 960GB)
NAND SanDisk 2nd Gen 128Gbit 19nm TLC
Sequential Read 550MB/s 550MB/s 550MB/s 550MB/s
Sequential Write 500MB/s 500MB/s 500MB/s 500MB/s
4KB Random Read 81K IOPS 91K IOPS 98K IOPS 99K IOPS
4KB Random Write 80K IOPS 83K IOPS 83K IOPS 83K IOPS
Idle Power (Slumber) 75mW 75mW 85mW 85mW
Max Power (Read/Write) 2.5W / 3.3W 2.7W / 4.5W 2.7W / 4.5W 2.9W / 4.6W
Encryption N/A
Warranty Three years
Retail Pricing $80 $112 $200 $500

There are two different controller configurations: the 120GB and 240GB models use the 4-channel 9190 "Renoir Lite" controller, whereas the higher capacity models use the full 8-channel 9189 "Renoir" silicon. To my knowledge there is no difference besides the channel count (and perhaps internal cache sizes), and the "Lite" version is cheaper. SanDisk has done this before, with the X300s for instance, so having two different controllers is nothing new, and it makes sense because the smaller capacities cannot take full advantage of all eight channels anyway. Note that the 9189/9190 is not Marvell's new TLC-optimized 1074 controller – it is the same controller that is used in Crucial's MX100, for example.

Similar to the rest of SanDisk's client SSD lineup, the Ultra II does not offer any encryption support. For now SanDisk is only offering encryption in the X300s, but in the future TCG Opal 2.0 and eDrive support will very likely make their way to the client drives as well. DevSleep is not supported either and SanDisk said the reason is that the niche for aftermarket DevSleep-enabled SSDs is practically non-existent. Nearly all platforms that support DevSleep (which is very few, actually) already come with SSDs, e.g. laptops, so DevSleep is not a feature that buyers find valuable.

nCache 2.0

The Ultra II offers rather impressive performance numbers for a TLC drive, as even the smallest 120GB model is capable of 550MB/s reads and 500MB/s writes. The secret behind the performance is the new nCache 2.0, which takes SanDisk's pseudo-SLC caching mode to the next level. While the original nCache was mainly designed for caching the NAND mapping table along with some small writes (<4KB), the 2.0 version caches all writes regardless of their size and type by increasing the capacity of the pseudo-SLC portion. nCache 1.0 had an SLC portion of less than 1GB (SanDisk never revealed the actual size), whereas the 2.0 version runs 5GB of the NAND in SLC mode for every 120GB of user space.

nCache 2.0 Overview
Capacity 120GB 240GB 480GB 960GB
SLC Cache Size 5GB 10GB 20GB 40GB

The interesting tidbit about SanDisk's implementation is the fact that each NAND die has a fixed number of blocks that run in SLC mode. The benefit is that when data has to be moved from the SLC to the TLC portion, the transfer can be done internally in the die, which is a feature SanDisk calls On Chip Copy. This is a proprietary design and uses a special die, so you will not see any competitive products using the same architecture. Normally the SLC to TLC transfer would be done like any other wear-leveling operation by using the NAND interface (Toggle or ONFI) and DRAM to move the data around internally from die to die, but the downside is that such a design may interrupt host IO processing since the internal operations occupy the NAND interface and DRAM.

OCZ's "Performance Mode" is a good example of a competing technology: once the fast buffer is full the write speed drops to half because in addition to the host IOs, the drive now has to move the data from SLC to MLC/TLC, which increases overheard since there is additional load on the controller, NAND interfaces, and in the NAND itself. Performance recovers once the copy/reorganize operations are complete.

SanDisk's approach introduces minimal overhead because everything is done within the die. Since an SLC block is exactly one third of a TLC block, three SLC blocks are simply folded into one TLC block. Obviously there is still some additional latency if you are trying to access a page in a block that is in the middle of the folding operation, but the impact of that is far smaller than what a die-to-die transfer would cause.
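To make the folding idea concrete, here is a minimal Python sketch of the concept (my own toy model with made-up block sizes, not SanDisk's firmware): host writes land in SLC blocks first, and once three of them are full they are folded into a single TLC block without the data ever leaving the die.

# Toy model of On Chip Copy style folding (illustration only, not SanDisk's firmware).
# Assumption: an SLC block holds one third of the data of a TLC block, so three full
# SLC blocks fold into one TLC block without the data ever leaving the die.

SLC_BLOCK_BYTES = 512 * 1024                 # hypothetical SLC block size
TLC_BLOCK_BYTES = 3 * SLC_BLOCK_BYTES        # the same block stores 3x the data in TLC mode

class Die:
    def __init__(self):
        self.slc_blocks = []                 # blocks currently caching host writes
        self.tlc_blocks = []                 # blocks holding folded (final) data

    def cache_write(self, data):
        """Host writes always land in the SLC portion first."""
        self.slc_blocks.append(data)
        if len(self.slc_blocks) == 3:        # three SLC blocks full -> fold them
            self.fold()

    def fold(self):
        """Fold three SLC blocks into one TLC block inside the die (On Chip Copy)."""
        folded = b"".join(self.slc_blocks)
        assert len(folded) <= TLC_BLOCK_BYTES
        self.tlc_blocks.append(folded)
        self.slc_blocks.clear()              # the SLC blocks can now be erased and reused

die = Die()
for _ in range(3):
    die.cache_write(bytes(SLC_BLOCK_BYTES))  # three SLC blocks' worth of host writes
print(len(die.tlc_blocks), len(die.slc_blocks))   # 1 0 -> one folded TLC block, cache freed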

The On Chip Copy has a predefined threshold that triggers the folding mechanism, although SanDisk said it is adaptive in the sense that it also looks at the data type and size to determine the best action. Idle time will trigger On Chip Copy as well, but from what I was told there is no set threshold for that.

In our 240GB sample the SLC cache size is 10GB and since sixteen 128Gbit (16GiB) NAND dies are needed for the raw NAND capacity of 256GiB, the cache per die works out to be 625MB. I am guessing that in reality there is 32GiB of TLC NAND running in SLC mode (i.e. 2GiB per die), which would mean 10.67GiB of SLC, but unfortunately SanDisk could not share the exact block sizes of TLC and MLC with us for competitive reasons.
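For what it's worth, the back-of-the-envelope math behind that guess looks like this (the 2GiB-per-die figure and the 3:1 TLC-to-SLC capacity ratio are my assumptions, not numbers confirmed by SanDisk):

# Back-of-the-envelope check of the per-die cache math (my assumptions, not SanDisk's figures).
GB = 1000**3
GiB = 1024**3

dies = 16                                   # sixteen 128Gbit (16GiB) dies in the 240GB model
slc_cache_bytes = 10 * GB                   # advertised 10GB SLC cache
print(slc_cache_bytes / dies / 1000**2)     # 625.0 -> ~625MB of cache per die

# Guess: 2GiB of TLC per die runs in SLC mode, storing one bit per cell instead of three
tlc_reserved_per_die = 2 * GiB
slc_capacity = tlc_reserved_per_die / 3 * dies
print(slc_capacity / GiB)                   # ~10.67 -> ~10.67GiB of SLC across the drive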

The performance benefits of the SLC mode are obvious. A TLC block requires multiple iterations to program because the distribution of the voltage states is much narrower, leaving less room for error, which calls for a longer and more complex programming process.

I ran HD Tach to see what the performance is across all LBAs. With sequential data the threshold for On Chip Copy seems to be about 8GB, because after that the performance drops from ~400MB/s to ~230MB/s. For average client workloads that is more than enough, because users do not usually write more than ~10GB per day, and with idle time nCache 2.0 will also move data from SLC to TLC to ensure that the SLC cache has enough space for all incoming writes.

The improved performance is not the only benefit of nCache 2.0. Because everything gets written to the SLC portion first, the data can then be written sequentially to TLC. That minimizes write amplification on the TLC part, which in turn increases endurance because there will be fewer redundant NAND writes. With sequential writes it is typically possible to achieve write amplification of very close to 1x (i.e. the minimum without compression) and in fact SanDisk claims write amplification of about 0.8x for typical client workloads (for the TLC portion, that is). That is because not all data makes it to the TLC in the first place – some data will be deleted while it is still in the SLC cache and thus will not cause any wear on the TLC. Remember, TLC is generally only good for about 500-1,000 P/E cycles, whereas SLC can easily surpass 30,000 cycles even at 19nm, so utilizing the SLC cache as much as possible is crucial for endurance with TLC at such small lithographies.
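To illustrate how the cache can push write amplification on the TLC portion below 1x, here is a toy calculation with made-up numbers (SanDisk has not disclosed how much data typically gets invalidated while still sitting in the cache):

# Illustrative model of sub-1x write amplification on the TLC portion.
# The 20% figure is an assumption for the example, not a measured SanDisk number.

host_writes_gib = 100.0        # data written by the host
died_in_cache = 0.20           # fraction deleted/overwritten while still in the SLC cache

tlc_writes_gib = host_writes_gib * (1 - died_in_cache)   # only the rest is folded into TLC
tlc_write_amplification = tlc_writes_gib / host_writes_gib
print(tlc_write_amplification)  # 0.8 -> in line with the ~0.8x SanDisk quotes for client workloads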

Like the previous nCache 1.0, the 2.0 version is also used to cache the NAND mapping table to prevent data corruption and loss. SanDisk does not employ any power loss protection circuitry (i.e. capacitors) in client drives, but instead the SLC cache is used to flush the mapping table from the DRAM more often, which is possible due to the higher endurance and lower latency of SLC. That obviously does not provide the same level of protection as capacitors do because all writes in progress will be lost during a power failure, but it ensures that the NAND mapping table will not become corrupt and turn the drive into a brick. SanDisk actually has an extensive whitepaper on power loss protection and the techniques that are used, so those who are interested in the topic should find it a good read.

Multi Page Recovery (M.P.R.)

Using parity as a form of error correction has become more and more popular in the industry lately. SandForce made the first move with RAISE several years ago and nearly every manufacturer has released their own implementation since then. However, SanDisk has been one of the few that have not had any proper parity-based error correction in client SSDs, but the Ultra II changes that.

SanDisk's implementation is called Multi Page Recovery and as the name suggests, it provides page-level redundancy. The idea is exactly the same as with SandForce's RAISE, Micron's RAIN, and any other RAID 5-like scheme: parity is created for data that comes in, which can then be used to recover the data in case the ECC engine is not able to do it.
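For those unfamiliar with parity-based recovery, the short Python sketch below shows the generic RAID 5-like principle: one XOR parity page computed over a group of data pages can rebuild any single page that the ECC engine cannot recover. This is only an illustration of the concept with hypothetical page counts and sizes, not SanDisk's proprietary Multi Page Recovery implementation.

# Generic XOR-parity sketch of page-level redundancy (RAID 5-like), with hypothetical
# page size and group size. This is not SanDisk's actual M.P.R. internals.
import os

PAGE_SIZE = 16384                      # hypothetical 16KB page
GROUP = 5                              # five data pages protected by one parity page

def make_parity(pages):
    """XOR all data pages together into a single parity page."""
    parity = bytearray(PAGE_SIZE)
    for page in pages:
        for i, byte in enumerate(page):
            parity[i] ^= byte
    return bytes(parity)

def recover(pages, parity, missing_index):
    """Rebuild the page at missing_index by XORing the parity with the surviving pages."""
    rebuilt = bytearray(parity)
    for idx, page in enumerate(pages):
        if idx == missing_index:       # this page is assumed unreadable
            continue
        for i, byte in enumerate(page):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

data_pages = [os.urandom(PAGE_SIZE) for _ in range(GROUP)]
parity_page = make_parity(data_pages)

# Simulate one page failing beyond what ECC can correct, then rebuild it from parity
restored = recover(data_pages, parity_page, missing_index=2)
print(restored == data_pages[2])       # True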

The parity ratio in the Ultra II is 5:1, meaning there is one parity page for every five pages of actual data. But here is the tricky part: with 256GiB of raw NAND and a 5:1 parity ratio, the usable capacity could not be more than 229GB (about 213GiB) because one sixth of the NAND is dedicated to parity.

The secret is that the NAND dies are not really 128Gbit – they are in fact much larger than that. SanDisk could not give us the exact size due to competitive reasons, but told us that the 128Gbit number should be treated as MLC for it to make sense. Since TLC stores three bits per cell instead of two, it can store 50% more data in the same area, so 128Gbit of MLC would become 192Gbit of TLC. That is in a perfect world where every die is equal and there are no bad blocks; in reality TLC provides about a 30-40% density increase over MLC because TLC inherently has more bad blocks (e.g. stricter voltage requirements because there is less room for errors due to narrower distribution of the voltage states).

In this example, let's assume that TLC provides a 35% increase over MLC once the bad blocks are taken away. That turns our 128Gbit MLC die into a 173Gbit TLC die. Now, with nCache 2.0, every die has about 5Gbit of SLC, which eats away ~15Gbit of TLC, and we end up with a die that has 158Gbit of usable capacity. Factor in the 5:1 parity ratio and the final user capacity is ~132Gbit per die. Sixteen of those would equal 264GiB of raw NAND, which is pretty close to the 256GiB we started with.
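Written out as arithmetic, the same example looks like this (again, these are illustrative numbers, not SanDisk's real die geometry):

# Worked version of the illustrative capacity math above (assumed numbers, not SanDisk's).
mlc_equivalent_gbit = 128                         # the "128Gbit" figure, treated as MLC
tlc_density_gain = 1.35                           # assumed 35% usable density gain over MLC

tlc_gbit = round(mlc_equivalent_gbit * tlc_density_gain)   # ~173 Gbit of TLC per die
slc_cache_gbit = 5                                         # ~5 Gbit of SLC per die...
tlc_after_cache = tlc_gbit - 3 * slc_cache_gbit            # ...eats ~15 Gbit of TLC -> 158 Gbit
usable_gbit = round(tlc_after_cache * 5 / 6)               # 5:1 parity ratio -> ~132 Gbit per die
drive_gib = usable_gbit * 16 // 8                          # sixteen dies -> 264 GiB

print(tlc_gbit, tlc_after_cache, usable_gbit, drive_gib)   # 173 158 132 264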

Note that the above is just an example to help you understand how 5:1 parity ratio is possible. Like I said, SanDisk would not disclose the actual numbers and in the real world the raw NAND capacity may vary a bit because the number of bad blocks will vary from die to die. What matters, though, is that the Ultra II has the same 12.7% over-provisioning as the Extreme Pro, and that is after nCache 2.0 and Multi Page Recovery have been taken into account (i.e. 12.7% is dedicated to garbage collection, wear-leveling and the usual NAND management schemes).

Furthermore, all NAND dies have what are called spare bytes, which are additional bytes meant for ECC. For instance, Micron's 20nm MLC NAND has an actual page size of 17,600 bytes (16,384 bytes of user space + 1,216 spare bytes), so in reality a 128Gbit die is never truly 128Gbit – there is always a bit more for ECC and bad block management. The number of spare bytes has grown as the industry has moved to smaller process nodes because the need for ECC has increased and so has the number of bad blocks. TLC is just one level worse because it is less reliable by design, hence more spare bytes are needed to make it usable in SSDs.

Testing Endurance

SanDisk does not provide any specific endurance rating for the Ultra II, which is similar to what Samsung is doing with the SSD 840 EVO. The reason is that both are only validated for client usage, meaning that if you were to employ either of them in an enterprise environment, the warranty would be void anyway. I can see the reasoning behind not including a strict endurance rating for an entry-level client drive because consumers are not very good at understanding their endurance needs and having a rating (which would obviously be lower for a TLC drive) would just lead to confusion. However, the fact that SanDisk has not set any rating does not mean that I am not going to test endurance.

To do it, I turned to our standard endurance testing methodology. Basically, I wrote sequential 128KB data (QD1) to the drive and monitored the SMART values 230 and 241, i.e. Media Wear Out Indicator (MWI) and Total GB Written. In the case of the Ultra II, the MWI works in the opposite direction from what we are used to: it starts at 0% and increases as the drive wears out. When the MWI reaches 100%, the drive has come to the end of its rated lifespan – it will likely continue to work because client SSD endurance ratings are with one-year data retention, but I would not recommend using it for any critical data after that point.

SanDisk Ultra II Endurance Test
Change in Media Wear Out Indicator 7.8%
Change in Total GB Written 9,232GiB
Observed Total Endurance 118,359GiB
Observed P/E Cycles ~530

The table above summarizes the results of my test. The duration of the test was 12 hours and I took a few data points during the run to ensure that the results are valid. From the data, I extrapolated the total endurance and used it as the basis to calculate the P/E cycles with the following formula:
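Observed P/E Cycles = (Observed Total Endurance × Wear Leveling Factor × Write Amplification) / User Capacity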

I made the assumption that the combined Wear Leveling and Write Amplification factor is 1x because that is plausible with pure sequential writes and it makes the calculation much simpler. For capacity I used the user capacity (240GB i.e. 223.5GiB), so the observed P/E cycles is simply total endurance divided by the capacity (both in GiB).

The number I came up with is 530 P/E cycles. There are a number of factors that make it practically impossible to figure out the exact NAND endurance, because part of the NAND operates in SLC mode with far greater endurance and there is a hefty amount of spare bytes for parity, but I think it is safe to say that the TLC NAND portion is rated at around 500 P/E cycles.

SanDisk Ultra II Estimated Endurance
Capacity 120GB 240GB 480GB 960GB
Total Estimated Endurance 54.6TiB 109.1TiB 218.3TiB 436.6TiB
Writes per Day 20GiB
Write Amplification 1.2x
Total Estimated Lifespan 6.4 years 12.8 years 25.5 years 51.0 years

Because the P/E cycle count alone is easy to misunderstand, I put it into a context that is easier to understand, i.e. the lifespan of the drive. All I did was multiply the user capacity by the P/E cycle count to get the total endurance, which I then used to calculate the estimated lifespan. I selected 20GiB of writes per day because even though SanDisk did not provide an endurance rating for the Ultra II, their internal design goal was 20GiB per day, which is a fairly common standard for client drives. I set the write amplification to 1.2x to leave some headroom; the TLC blocks are written sequentially thanks to nCache 2.0, so real-world write amplification should be very close to 1x.
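For reference, here is the same estimate as a short Python sketch, using the ~500 P/E cycles observed above, 20GiB of host writes per day, and the assumed 1.2x write amplification:

# Reproduces the estimated-lifespan table from ~500 P/E cycles, 20GiB of host writes
# per day, and an assumed 1.2x write amplification.

PE_CYCLES = 500
HOST_WRITES_PER_DAY_GIB = 20
WRITE_AMPLIFICATION = 1.2

for capacity_gb in (120, 240, 480, 960):
    capacity_gib = capacity_gb * 1000**3 / 1024**3               # e.g. 240GB ~ 223.5GiB
    endurance_tib = capacity_gib * PE_CYCLES / 1024              # ~109.1TiB for the 240GB model
    days = endurance_tib * 1024 / (HOST_WRITES_PER_DAY_GIB * WRITE_AMPLIFICATION)
    print(capacity_gb, round(endurance_tib, 1), round(days / 365, 1))
# -> 120GB: 54.6TiB and ~6.4 years, 240GB: 109.1TiB and ~12.8 years, and so on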

500 P/E cycles certainly does not sound like much, but put into context it is more than enough. At 20GiB a day, even the 120GB Ultra II will easily outlive the rest of the components. nCache 2.0 plays a huge role in making the Ultra II as durable as it is because it keeps the write amplification close to the ideal 1x. Without nCache 2.0, 500 P/E cycles would be a major problem, but as it stands I do not see endurance being an issue. Of course, if you write more than 20GiB per day and your workload is IO intensive in general, it is better to look at drives that are meant for heavier usage, such as the Extreme Pro and SSD 850 Pro.

A Look at SanDisk's Updated SSD Dashboard

Along with the Ultra II, SanDisk is bringing an updated version of its SSD Dashboard, labeled as 1.1.1.

The start view has not changed and provides the same overview of the drive as before.

The most important new features are under the "Tools" tab and are provided by third parties. In the past SanDisk has not offered any cloning software with their drives, but the new version brings the option to use Apricorn's EZ GIG IV software to migrate from an old drive. Clicking the link in the Dashboard will lead to Apricorn's website where the user can download the software. Note that the Dashboard includes a special version of EZ GIG IV that can only be used to clone a drive once.

In addition to EZ GIG IV, the 1.1.1 version adds Trend Micro's Titanium Antivirus+ software. While most people are already running antivirus software of some sort, there are (too) many people who do not necessarily have up-to-date antivirus software with definitions for the latest malware, so the idea behind including the antivirus software is to ensure that all users have a free and easy way to check their system for malware before migrating to a new drive.

The new version also adds support for "sanitation". It is basically an enhanced version of secure erase and works the same way as 0-fill erase does: instead of just erasing the data, sanitation writes zeros to all LBAs to guarantee that there is absolutely no way to recover the old data. Crypto erase is now a part of the Dashboard too, although currently that is only supported by the X300s since it is the only drive with hardware encryption support.

Live performance monitoring is also supported, but unfortunately there is no option to run a benchmark within the software (i.e. it just monitors the drive similar to what Windows Performance Monitor does). For OSes without TRIM, the Dashboard includes an option to run a scheduled TRIM to ensure maximum performance. If TRIM is supported and enabled (as in this case, since I was running Windows 8.1), the TRIM tab is grayed out because the OS will take care of sending the TRIM commands when necessary.

Support has also been improved in the new version as options for live chat and email support have been added. New languages have been added, too, and the Dashboard is now available in 17 different languages. During the install process, the installer will ask for the preferred language, but it is also possible to change that afterwards under the Settings tab.

Test Systems

For AnandTech Storage Benches, performance consistency, random and sequential performance, performance vs transfer size and load power consumption we use the following system:

CPU Intel Core i5-2500K running at 3.3GHz (Turbo & EIST enabled)
Motherboard AsRock Z68 Pro3
Chipset Intel Z68
Chipset Drivers Intel 9.1.1.1015 + Intel RST 10.2
Memory G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers NVIDIA GeForce 332.21 WHQL
Desktop Resolution 1920 x 1080
OS Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

For slumber power testing we used a different system:

CPU Intel Core i7-4770K running at 3.3GHz (Turbo & EIST enabled, C-states disabled)
Motherboard ASUS Z87 Deluxe (BIOS 1707)
Chipset Intel Z87
Chipset Drivers Intel 9.4.0.1026 + Intel RST 12.9
Memory Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics Intel HD Graphics 4600
Graphics Drivers 15.33.8.64.3345
Desktop Resolution 1920 x 1080
OS Windows 7 x64


Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first one covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale for better visualization of the differences between drives.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency over the full test duration, log scale – SanDisk Ultra II 240GB, default and 25% over-provisioning]

The IO consistency of the Ultra II is not that good: at steady-state it averages about 2,500 IOPS, whereas the MX100 and 840 EVO manage around 4,000-5,000 IOPS. On the positive side, it takes about 200 seconds before performance starts to drop, although that is mostly because the Ultra II does not push as many IOPS in the first place, so it takes longer to fill the drive.

Since we are dealing with a value client drive, I would not consider the IO consistency to be a big issue because it is very unlikely that the drive will be used in a workload that is even remotely comparable to our performance consistency benchmark, but nevertheless it is always interesting to dive into the architecture of the drive. While the Ultra II is not the fastest SSD, it is still relatively consistent, which is ultimately the key to a smooth user experience.

[Graph: IO consistency at the start of steady-state, log scale – SanDisk Ultra II 240GB, default and 25% over-provisioning]

[Graph: IO consistency at the start of steady-state, linear scale – SanDisk Ultra II 240GB, default and 25% over-provisioning]

TRIM Validation

To test TRIM, I filled the Ultra II with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as it should.



AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based – we record all IO requests made to a test system and play them back on the drive we are testing and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.

AnandTech Storage Bench 2013 - The Destroyer
Workload Description Applications Used
Photo Sync/Editing Import images, edit, export Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming Download/install games, play games Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization Run/manage VM, use general apps inside VM VirtualBox
General Productivity Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback Copy and watch movies Windows 8
Application Development Compile projects, check out code, download code samples Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.

Storage Bench 2013 - The Destroyer (Data Rate)

As the Ultra II is not geared towards heavy write workloads, its performance in our 2013 Storage Bench does not come as a surprise. However, it is one of the better value drives as it beats the MX100 by almost a 50% margin and is only slightly slower than the 500GB 840 EVO (unfortunately, I do not have the results for the 250GB EVO).

Storage Bench 2013 - The Destroyer (Service Time)



AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 – Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

The 2011 Heavy Storage Bench is a bit of a letdown. SanDisk has never really excelled in peak performance like Samsung has and even with nCache 2.0 the Ultra II is not as fast as e.g. the MX100 and 840 EVO. In the Light suite, which is more relevant for typical client users, the differences are far more marginal and practically negligible in the real world.

Light Workload 2011 - Average Data Rate



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

Random performance is decent but not overwhelming. I am surprised that the Ultra II is not faster considering the SLC cache that nCache 2.0 provides. Random write performance especially is a bit slow by today's standards and does not scale with queue depth, but for light client usage the Ultra II should still be fine.

Sequential Read/Write Speed

To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Fortunately sequential read performance is much better, although sequential write performance gets handicapped due to TLC, similar to the 840 EVO.

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers, but most other controllers are unaffected.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance



Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. Both read and write performance are actually on par with the Extreme Pro, which is impressive. That is explained by nCache 2.0: ATTO only writes 2GB of data, so the Ultra II has no problem caching all the writes in its 10GB SLC cache.




Power Consumption

As I mentioned on page one, the Ultra II does not support DevSleep, but it does support slumber power. SanDisk decided against DevSleep support based on the fact that there are only a handful of systems that support DevSleep and most of them are already equipped with M.2 SSDs. In other words, DevSleep is not a feature that adds value to the aftermarket buyer because it will not be supported by the device anyway.

Overall the Ultra II has great power characteristics. The slumber power is low (although not the lowest) and under load the Ultra II is very efficient and stays at about 2W.

SSD Slumber Power (HIPM+DIPM) - 5V Rail

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write

 



Final Words

Samsung set the bar for TLC SSDs extremely high with the SSD 840 and raised it further with the SSD 840 EVO. Since Samsung established the baseline of what to expect from TLC, every new TLC drive will be measured directly against Samsung's offerings, and what Samsung taught us is that a TLC SSD does not have to be inferior to an MLC drive. Coming up with something better than Samsung is a massive challenge because, thanks to vertical integration, Samsung has more control over what it does than anyone else.

If there is one company that has the resources to take on Samsung, it is SanDisk. Despite the pressure, the Ultra II meets the high expectations Samsung has set for TLC SSDs. Saying that the Ultra II is faster than the 840 EVO would not be accurate since the two trade blows in our benchmarks, but the truth is that the Ultra II is a tough competitor to the 840 EVO. The same goes for the MX100 – the Ultra II goes head to head with it, with some benchmarks in favor of the Ultra II while the MX100 excels in others.

There are only two minor shortcomings that I see in the Ultra II. The first one is peak performance, which is not on par with the MX100 and 840 EVO. For very light workloads (web browsing, email, Office, etc.) that is not a concern, but users with heavier workloads (though not heavy workloads, just something more than basic web browsing and email; e.g. gaming and photo editing) may get slightly better performance with the MX100 or 840 EVO.

The other is the lack of hardware encryption. Both the MX100 and 840 EVO support TCG Opal 2.0 and eDrive encryption, so the fact that the Ultra II does not have any form of encryption support cannot go without a mention. Whether that is valuable is totally up to you – eDrive has fairly strict software and hardware limitations and thus is not important for the majority of potential buyers, but if you plan on utilizing encryption now or sometime in the future it is better to go with a drive that has the proper hardware support.

NewEgg Price Comparison (9/15/2014)
  120/128GB 240/256GB 480/512GB 960GB/1TB
SanDisk Ultra II $80 $110 $220 $430
SanDisk Extreme Pro - $190 $370 $590
SanDisk Extreme II $75 $150 $480 -
Crucial MX100 $75 $112 $210 -
Crucial M550 $90 $155 $280 $470
Samsung SSD 850 Pro $130 $210 $400 $700
Samsung SSD 840 EVO $90 $150 $250 $460
OCZ ARC 100 $75 $120 $240 -
Plextor M6S $80 $130 $280 -
Intel SSD 530 $85 $140 $250 -

It is clear that SanDisk is going after the MX100 in pricing. The prices are within $10 of each other and due to normal price fluctuations the two will likely switch places on a regular basis. I am inclined to say that the MX100 is still a better buy because not only do you get hardware encryption, you also get higher usable capacities since the MX100 features less over-provisioning compared to the Ultra II (7% vs 13%), so technically the price per gigabyte is lower. Of course, even a small drop in the Ultra II's prices will render the difference negligible at which point it boils down to whether you value SanDisk's SSD Dashboard over the MX100's hardware encryption.

The 960GB model, however, is an easier call: the MX100 tops out at 512GB, so the Ultra II is the best available option at that capacity (unless you need hardware encryption, in which case it is worth spending a bit more on the 840 EVO).

All in all, it seems that SanDisk is finally becoming more aggressive on the retail front. SanDisk has always been a big name among the OEMs, but I have felt that their retail drives have been treated a bit like second-class citizens. The Ultra Plus and Extreme II were both good SSDs, but SanDisk never pushed them to their full potential in the market. I see that changing now.

The goal of the Extreme Pro was to be the fastest client SATA drive on the market, and it succeeded in that (before the 850 Pro came out, although the two are very close), and the pricing was fair. With the Ultra II, SanDisk finally has a value drive that is competitive in both price and performance. I am glad that SanDisk is showing more commitment to the retail space, because if there is one company that can challenge Samsung and Micron in all aspects, it is SanDisk.
