Original Link: https://www.anandtech.com/show/7594/samsung-ssd-840-evo-msata-120gb-250gb-500gb-1tb-review
Samsung SSD 840 EVO mSATA (120GB, 250GB, 500GB & 1TB) Review
by Kristian Vättö on January 9, 2014 1:35 PM EST

Samsung is in a unique position in the SSD market. It's the only company in the consumer SSD business with a fully vertically integrated business model and zero reliance on other companies: the controller, NAND, DRAM, firmware and software are all designed and produced by Samsung, whereas other companies focus on their core strengths and outsource the rest. Even the semiconductor giant Intel has switched to SandForce in its consumer lineup; only its enterprise drives get the complete in-house treatment, which makes a lot of sense given that the enterprise market is the one bringing the fat profits home.
Designing a platform (silicon, firmware & software) is expensive, and making the same platform suitable for both consumer and enterprise markets is difficult. A consumer platform needs to be affordable and low power, whereas enterprises value high performance and a rich feature set (remote management, data protection, etc.). That is a combination that does not mix well. If the design focus is on the enterprise market, the platform tends to be too pricey and power hungry to succeed in the client space (like Intel's DC S3500/S3700 platform), while a consumer focus results in a platform too limited by the price point it needs to meet. Ideally you would have two separate platforms, but that is not very cost efficient.
What makes SandForce and Marvell lucrative partners is the platform they offer, but it comes at a cost: you lose the ability to go custom. This is especially true with SandForce because it provides everything from the silicon to the firmware/software stack, and the OEM can only configure the firmware to a certain degree, which based on what we've seen is very limited. Marvell's business model, on the other hand, only includes the silicon; the development of the firmware and additional software is up to the OEM. Since the characteristics of an SSD are mostly defined by the firmware, Marvell's offering is very alluring for larger OEMs (like Micron/Crucial and SanDisk) because it saves them the development costs of the silicon while still allowing them to design the firmware from scratch.
Having control of everything from silicon to NAND and firmware production is most beneficial when transitioning to new technologies. Every time there's a change in NAND (be it a change in manufacturer or lithography), the firmware has to be tweaked due to differences in program/erase times and possibly page/block sizes as well. If there's a bigger change (like moving from MLC to TLC, or from planar NAND to 3D NAND), the silicon itself may have to be updated. Compared to a simple firmware update, new silicon is always a much longer process and can take several years if we're talking about a bigger overhaul. Of course, new silicon always needs updated firmware too.
When all the development happens under the same roof in a vertically integrated company, things tend to be smoother and quicker. All teams can work seamlessly together and there should not be information barriers (at least in theory). If you have to work with a partner (or even worse, multiple partners), a lot of time is spent evaluating what details can be shared and with whom. In the end, the likes of LSI, Intel and Micron are all competitors in one market or another, and sharing too much detail may give the opponent an unwanted advantage. There is also the tradeoff angle: when developing a product for multiple partners, it is impossible to build something that meets everyone's needs and wants.
Samsung's SSDs are a great example of how vertical integration can provide a significant advantage. Over a year later, Samsung is still the only OEM with a TLC NAND based SSD. When you control both silicon and NAND design and production, you can build whatever you want. In the case of TLC NAND, for example, the limited supply and hence high pricing has pushed other OEMs away from it. In theory, TLC NAND is 33% cheaper to produce than 2-bit-per-cell MLC, but due to the way the markets work the price delta is smaller because MLC is a much higher volume product. If you control production like Samsung does, all you care about is the production cost, which is where TLC NAND wins. Sure, Samsung isn't the only NAND manufacturer, but it is the only one with consumer-oriented controller IP (SK Hynix now owns LAMD, but that deal has yet to materialize in a product). While TLC does not require a special controller, the NAND type has to be taken into account when designing the silicon in order to build an efficient SSD (e.g. ECC needs are higher and endurance is significantly lower).
So why all this talk about Samsung's SSD business model and its benefits? Because their latest product is yet more proof of their strengths. Please meet the SSD 840 EVO mSATA.
In the past, Samsung's mSATA SSDs have been OEM only. When I asked why, Samsung told me the small market for retail mSATA SSDs had kept them from entering it. Unlike smaller OEMs, Samsung isn't interested in covering niche markets; their advantage lies in scale, which doesn't suit a niche. Thanks to the rise of Ultrabooks, mini-ITX systems and other small form factor computers, Samsung now sees the market for retail mSATA SSDs as finally big enough. However, Samsung didn't want to provide just another alternative -- they wanted to offer a product that gives consumers a reason to upgrade.
Samsung SSD 840 EVO mSATA Specifications

| Capacity | 120GB | 250GB | 500GB | 1TB |
|---|---|---|---|---|
| Controller | Samsung MEX (3x ARM Cortex R4 cores @ 400MHz) | | | |
| NAND | 19nm Samsung TLC | | | |
| DRAM Cache | 256MB | 512MB | 512MB | 1GB |
| Sequential Read | 540MB/s | 540MB/s | 540MB/s | 540MB/s |
| Sequential Write | 410MB/s | 520MB/s | 520MB/s | 520MB/s |
| 4KB Random Read | 98K IOPS | 98K IOPS | 98K IOPS | 98K IOPS |
| 4KB Random Write | 35K IOPS | 66K IOPS | 90K IOPS | 90K IOPS |
| Warranty | Three years | | | |
Hardware- and specification-wise, the EVO mSATA is a match for the 2.5" EVO, which shouldn't surprise anyone since we're dealing with identical hardware. All features, including RAPID, TurboWrite and hardware encryption (TCG Opal 2.0 & eDrive), are supported. I won't go into detail about any of these since we've covered them in the past, but be sure to check the links for a refresher.
What makes the EVO mSATA unique is its capacity. Like its 2.5" sibling, the EVO mSATA is offered in capacities of up to 1TB. Most Ultrabooks and similar systems still ship with only 128GB of internal storage, leaving a good market for bigger aftermarket drives.
To date we've only seen a couple of 480GB mSATA SSDs (Mushkin Atlas and Crucial M500), while most models have been limited to 256GB. The limiting factor has been the physical dimensions of mSATA, which only allow up to four NAND packages. Given that the highest density NAND available to OEMs is currently 64Gb (8GB) per die, and up to eight of those dies can be packed into a single package, the maximum capacity with four packages comes in at 256GB (4 x 8 x 8GB). Micron is supposed to start shipping its 128Gbit NAND (the one used in the M500) to OEMs during the next few months, which will double the capacity to 512GB -- though that is still only half of what the EVO mSATA offers.
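The capacity ceiling is easy to sanity check with a couple of lines of Python. This is just my own illustration of the arithmetic above; the package and die counts are the ones discussed in the text:

```python
def raw_capacity_gb(die_gbit, dies_per_package, packages=4):
    """Raw NAND capacity for an mSATA card with a fixed number of packages."""
    return packages * dies_per_package * die_gbit / 8  # 8 bits per byte

print(raw_capacity_gb(64, 8))    # 256 -- today's typical mSATA ceiling
print(raw_capacity_gb(128, 8))   # 512 -- 128Gbit dies, still 8-die stacks
print(raw_capacity_gb(128, 16))  # 1024 -- Samsung's 16-die packages (next page)
```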
Like the 2.5" EVO, the EVO mSATA uses Samsung's own 19nm 128Gb TLC (3-bit-per-cell) die. We've gone in-depth with TLC a handful of times already and we have also shown that its endurance is fine for consumer usage, so I am not going to touch those points here. However, there is something particular in the EVO mSATA and its NAND that allows a capacity of 1TB in mSATA form factor. Hop on to the next page to find out.
NewEgg Price Comparison (1/6/2014)

| | 120/128GB | 240/256GB | 480/500GB | 1TB |
|---|---|---|---|---|
| Samsung SSD 840 EVO mSATA | $150 | $260 | $490 | $860 |
| Samsung SSD 840 EVO | $101 | $180 | $325 | $599 |
| Mushkin Atlas | $109 | $195 | $468 | - |
| Crucial M500 | $113 | $176 | $320 | - |
| Plextor M5M | $112 | $200 | - | - |
| Intel SSD 525 | $146 | $290 | - | - |
| ADATA XPG SX300 | $110 | $200 | - | - |
The EVO mSATA will be available this month; the exact launch schedule depends on the region. There will only be a bare drive version -- no notebook and desktop upgrade kits like the 2.5" EVO offers.
I wasn't able to find the EVO mSATA on sale anywhere yet, hence the prices in the table are the MSRPs provided by Samsung. For the record, the MSRPs for the EVO mSATA are only $10 higher than the 2.5" EVO's, so I fully expect street prices to end up close to what the 2.5" EVO currently retails for.
Test System
| Component | Details |
|---|---|
| CPU | Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled) |
| Motherboard | ASRock Z68 Pro3 |
| Chipset | Intel Z68 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24) |
| Video Card | XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective) |
| Video Drivers | AMD Catalyst 10.1 |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 7 x64 |
Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit
The NAND: Going Vertical, Not 3D (Yet)
Since we are dealing with a fixed number of NAND packages (unless you go Mushkin's route and build a dual-PCB design), there are two ways to increase capacity. One is to increase the capacity per die, which is what has been done before. This is the logical route as lithographies get smaller (3Xnm was 32Gb, 2Xnm 64Gb and now 1X/2Ynm 128Gb): you can double the capacity per die while keeping the die area roughly the same as with the previous process node. However, it's not very efficient to increase the capacity per die without a die shrink, because the end result is a large die, which is generally bad for yields and in turn bad for financials. Since the industry is still moving to the 128Gb die (Samsung and IMFT have moved already; Toshiba/SanDisk and SK Hynix are still using 64Gb), moving to a 256Gb die this quickly was out of the question. I do not expect a 256Gb die until 2015/2016, and it may well be that manufacturers never go past 128Gb in planar NAND at all. We'll see a big push for 3D NAND during the next few years, and I am not sure planar NAND will get to a point where a 256Gb die becomes beneficial.
Samsung SSD 840 EVO mSATA NAND Configurations

| Capacity | 120GB | 250GB | 500GB | 1TB |
|---|---|---|---|---|
| Raw NAND Capacity | 128GiB | 256GiB | 512GiB | 1024GiB |
| # of NAND Packages | 2 | 2 | 4 | 4 |
| # of Dies per Package | 4 | 8 | 8 | 16 |
| Capacity per Die | 16GiB | 16GiB | 16GiB | 16GiB |
| Over-Provisioning | 12.7% | 9.1% | 9.1% | 9.1% |
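The over-provisioning row is just the gap between binary GiB of raw NAND and decimal GB of user capacity, expressed as a percentage. A quick sketch of that arithmetic (my own calculation, which reproduces the table's figures):

```python
def over_provisioning(raw_gib, user_gb):
    """Spare area as a fraction of raw NAND: raw in binary GiB, user in decimal GB."""
    raw_gb = raw_gib * 2**30 / 10**9   # 1 GiB = ~1.074 GB
    return (raw_gb - user_gb) / raw_gb

for raw, user in [(128, 120), (256, 250), (512, 500), (1024, 1000)]:
    print(f"{user}GB model: {over_provisioning(raw, user):.1%}")
# 120GB model: 12.7%; the three larger models all come out at ~9.1%
```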
If you cannot increase the capacity per die, the only way left is to increase the die count. So far the limit has been eight dies per package, and with traditional packaging methods there is already some performance loss beyond four dies per package. That is due to the limits of the interconnects that wire the dies to the PCB: as you add more dies, signal integrity degrades and latency climbs sharply.
Source: Micron
In order to achieve 1TB with only four NAND packages, Samsung had to squeeze sixteen NAND dies into one package. To my surprise, when I started researching Samsung's 16-die NAND, I found out that it's actually nothing new. Their NAND part number decoder, which dates back to August 2009, already lists a 16-die MLC configuration, and I managed to find TechInsights' report on the 512GB SSD used in the 2012 Retina MacBook Pro, complete with x-ray shots of a 16-die NAND package. That is an SSD 830 based drive, so I circled back to check the NAND used in the 512GB SSD 830, and it indeed has sixteen 32Gb dies per package too.
Courtesy of TechInsights
I also made a diagram based on the x-ray shot, since it's not exactly clear unless you know what you're looking for.
Unfortunately I couldn't find any good x-ray shots of other manufacturers' NAND to see whether Samsung's packaging method is different, which would explain their ability to ship a 16-die package with no significant performance loss. However, what I was able to find suggests that others use similar packaging (i.e. an inclined tower of dies with interconnects descending from both sides). Samsung is also very tight-lipped about their NAND and the technologies involved, so I've not been able to get any details out of them. Anand is meeting with their SSD folks at CES, and there is hope that he will be able to convince them to give us at least a brief overview.
I suspect this is not strictly a hardware story but a software one too. In the end, the problem is signal integrity and latency, both of which can be overcome with high quality engineering. The two are actually related: poor signal integrity means more errors, which in turn increases latency because it's up to the ECC engine to fix them, and the more errors there are, the longer that obviously takes. With an effective combination of DSP and ECC (and a bunch of other acronyms), it's possible to stack more dies without sacrificing performance. Samsung's control over the silicon is a huge help here -- ECC needs to be built into the hardware to be efficient, and since it's up to Samsung to decide how much die area to devote to ECC, they can make it happen.
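To make the error-rate/latency relationship concrete, here is a deliberately simplified model. All the numbers are hypothetical and of my own choosing; real controllers correct errors in hardware pipelines far more sophisticated than this:

```python
def avg_read_latency_us(base_us, raw_error_rate, ecc_penalty_us):
    """Expected latency when a fraction of reads incurs an ECC correction pass."""
    return base_us + raw_error_rate * ecc_penalty_us

# Hypothetical: taller die stacks degrade signal integrity and raise the
# raw error rate; fast dedicated ECC hardware keeps the per-error cost small.
for dies, err in [(4, 0.01), (8, 0.05), (16, 0.20)]:
    slow = avg_read_latency_us(50, err, 500)  # weak/slow error correction
    fast = avg_read_latency_us(50, err, 60)   # strong hardware ECC
    print(f"{dies:2d} dies: slow ECC {slow:.0f}us vs fast ECC {fast:.0f}us")
```

The point of the toy model: as the error rate climbs with stack height, the latency penalty is dominated by how quickly each error can be corrected, which is exactly where owning the silicon pays off.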
Performance Consistency
In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.
To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour -- nowhere near as long as our steady state tests, but enough to give a good look at drive behavior once all spare area fills up.
We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time and generate the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
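The plotting itself is straightforward. Here is a minimal sketch of how per-second IOPS samples turn into scatter plots like the ones below, assuming a simple two-column "time IOPS" log (the file name and log format are my own illustration, not our actual tooling):

```python
import matplotlib.pyplot as plt

# One "<elapsed_seconds> <IOPS>" sample per line, logged once per second.
times, iops = [], []
with open("evo_msata_1tb_4k_qd32.log") as log:  # hypothetical log file
    for line in log:
        t, n = line.split()
        times.append(float(t))
        iops.append(float(n))

plt.scatter(times, iops, s=2)  # one dot per second of the test
plt.yscale("log")              # log scale, as in the first two sets of graphs
plt.xlabel("Time (s)")
plt.ylabel("4KB Random Write IOPS (QD32)")
plt.show()
```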
The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers are guaranteed to behave the same way.
The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.
[Interactive graphs: IO consistency over the full test, default and 25% OP, for the Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M and Samsung SSD 840 EVO 1TB]
As expected, IO consistency is mostly similar to the regular EVO's. The only change appears in steady-state behavior: the 2.5" EVO exhibits more up-and-down behavior, whereas the EVO mSATA is more consistent. This might be due to the latest firmware update, which changed some TurboWrite algorithms -- it seems that TurboWrite kicks in on the 2.5" EVO every once in a while to boost performance (our EVO mSATA has the latest firmware, but the 2.5" EVO was tested with the original firmware).
Increasing the OP in the EVO mSATA results in noticeably better performance but also causes some odd behavior. After about 300 seconds, the IOPS repeatedly drops to 1,000 until it evens out after 800 seconds. I am not sure what exactly is happening here, but I have asked Samsung to check whether this is normal and whether they can provide an explanation. My educated guess would be TurboWrite (again), because the drive seems to be reorganizing blocks to bring performance back to its peak. If the controller focuses too heavily on reorganizing existing blocks of data, the latency of incoming writes increases (and IOPS drops).
[Interactive graphs: IO consistency zoomed to the beginning of steady state, default and 25% OP, for the Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, Plextor M5M and Samsung SSD 840 EVO 1TB]
[Interactive graphs: IO consistency at steady state on a linear scale, default and 25% OP, for the Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525, Plextor M5M and Samsung SSD 840 EVO 1TB]
TRIM Validation
To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA span, QD=32) for 60 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to verify that TRIM is functional.
Surprisingly, it's not. The write speed should be around 300MB/s for the 250GB model based on our Iometer tests, but here the performance is only 100-150MB/s for the earliest LBAs. Sequential writes do restore performance slowly, but even after a full drive's worth of writes the performance had not fully recovered.
Samsung SSD 840 EVO mSATA Resiliency - Iometer Sequential Write

| | Clean | Dirty (40min torture) | After TRIM |
|---|---|---|---|
| Samsung SSD 840 EVO mSATA 120GB | 180.4MB/s | 69.3MB/s | 126.2MB/s |
At first I thought this was an error in our testing, but I was able to reproduce the issue with our 120GB sample using Iometer (i.e. a 60-second sequential write run in Iometer instead of HD Tach). Unfortunately I ran out of time to investigate more thoroughly (e.g. whether a short period of idling helps), but I'll be sure to run more tests once I get back to my testbed.
AnandTech Storage Bench 2013
When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011 he did so because we did not have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.
There were a couple of issues with our 2011 tests that we've been wanting to rectify however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst-case scenario performance after prolonged random IO.
For years we'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans -- not exactly a real world client usage model. The aspects of SSD architecture that those tests stress are very important, however, and none of our existing tests were doing a good job of quantifying them.
We needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a name, we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).
Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test -- we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.
Like most modern benchmarks, the Destroyer is crafted out of a series of scenarios. For this benchmark Anand focused heavily on Photo editing, Gaming, Virtualization, General Productivity, Video Playback and Application Development. Rough descriptions of the various scenarios are in the table below:
AnandTech Storage Bench 2013 Preview - The Destroyer

| Workload | Description | Applications Used |
|---|---|---|
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |
While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what we've been calling this test internally:
AnandTech Storage Bench 2013 Preview - The Destroyer, Specs

| | The Destroyer (2013) | Heavy 2011 |
|---|---|---|
| Reads | 38.83 million | 2.17 million |
| Writes | 10.98 million | 1.78 million |
| Total IO Operations | 49.8 million | 3.99 million |
| Total GB Read | 1583.02 GB | 48.63 GB |
| Total GB Written | 875.62 GB | 106.32 GB |
| Average Queue Depth | ~5.5 | ~4.6 |
| Focus | Worst-case multitasking, IO consistency | Peak IO, basic GC routines |
SSDs have grown in their performance abilities over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives would take multiple hours to complete it; today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest we've seen it go is 10 hours. Most high performance SSDs we've tested seem to need around 12–13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something that had a ton of writes so we could start separating the good from the bad. Now that the drives have matured, we felt a test that was a bit more balanced would be a better idea.
Despite the balance recalibration, there is just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. As the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us a bit more of what we're interested in these days. As Anand mentioned in the S3700 review, having good worst-case IO performance and consistency matters just as much to client users as it does to enterprise users.
We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
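In code terms, the two metrics reduce to simple aggregates over the trace playback. This is just a sketch under the assumption that per-IO records of transfer size and service time are available; it is not our actual analysis tooling:

```python
def destroyer_metrics(ios, wall_clock_s):
    """ios: list of (bytes_transferred, service_time_us) tuples, one per IO.
    wall_clock_s: total time the trace playback took, in seconds."""
    total_bytes = sum(b for b, _ in ios)
    avg_data_rate_mb_s = total_bytes / wall_clock_s / 1e6   # throughput over the run
    avg_service_time_us = sum(t for _, t in ios) / len(ios) # weighs queued, bursty IO
    return avg_data_rate_mb_s, avg_service_time_us

# Made-up example: two 64KB reads, the second slow due to queueing.
print(destroyer_metrics([(65536, 120.0), (65536, 950.0)], wall_clock_s=0.001))
```

Note how the two metrics diverge by design: data rate is measured against wall-clock time (so overlapping queued IOs help it), while service time averages every IO's individual latency (so queueing hurts it).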
Update: It appears that something was off in the first run as the 1TB scored 261.52MB/s when I retested it.
I'm not sure if I'm comfortable with the score above. There are no other benchmarks that would indicate the EVO mSATA to be over 20% faster than the 2.5" EVO, so I'm thinking there has been some kind of an error in the test. Unfortunately I didn't have time to rerun the test because The Destroyer takes roughly 12 hours to run and another eight or so hours to be analyzed. However, I managed to run it on the 500GB EVO mSATA and as the graph above shows, its performance is on-par with the 2.5" EVO. I'll rerun the test on the 1TB sample once I get back and will update this based on its output.
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
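Since Iometer reports these results in MB/s while spec sheets quote IOPS, it's worth noting the conversion is just a block-size multiplication. A quick helper of my own, assuming the common 4,096-byte definition of "4KB":

```python
def iops_to_mb_s(iops, block_bytes=4096):
    """Convert an IOPS figure at a given block size to MB/s (decimal megabytes)."""
    return iops * block_bytes / 1e6

print(iops_to_mb_s(98_000))  # rated 4KB random read: ~401 MB/s
print(iops_to_mb_s(35_000))  # 120GB model's rated 4KB random write: ~143 MB/s
```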
As expected, random IO performance is similar to the original EVO. There is some slight variation of course but nothing that stands out.
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
For some reason, the 1TB EVO mSATA is a few percent faster in the 128KB sequential read test but falls short in the sequential write test. It's possible that the 16-die NAND has some effect on performance, which would explain the difference, but we're still dealing with rather small differences.
AS-SSD Incompressible Sequential Read/Write Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
Performance vs. Transfer Size
ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. The EVO mSATA is top of the class at all transfer sizes. Even though we're dealing with highly compressible data, which is the strength of SandForce, the EVO mSATA turns out to be slightly faster than the Intel SSD 525. Keep in mind that these tests are with an empty drive so TurboWrite plays a massive role and since the test only writes 2GB, all EVO mSATAs perform similarly regardless of the capacity.
AnandTech Storage Bench 2011
Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally the benchmarks were kept short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). We've included a large amount of email downloading, document creation and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 28% |
| 16KB | 10% |
| 32KB | 10% |
| 64KB | 4% |
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
AnandTech Storage Bench 2011 - Heavy Workload
The full data set including disk busy times and read/write separation can be found in our Bench.
AnandTech Storage Bench 2011 - Light Workload
Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric). There's lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming.
The I/O breakdown is similar to the heavy workload at small IOs; however, you'll notice that there are far fewer large IO transfers. Interestingly, the 480GB drive actually comes out ahead in this case, suggesting it's more capable at light workloads.
AnandTech Storage Bench 2011 - Light Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 27% |
| 16KB | 8% |
| 32KB | 6% |
| 64KB | 5% |
Power Consumption
One problem with testing mSATA SSDs is that they require an mSATA to 2.5" adapter with a voltage regulator, since 2.5" SATA is 5V whereas the mSATA spec calls for 3.3V. In a native mSATA slot the power supply would feed the drive 3.3V directly (the SATA spec has three voltage rails: 3.3V, 5V and 12V), but because we have to use an adapter, the current we measure is on the 5V rail (which the voltage regulator then converts to 3.3V). The voltage itself isn't the issue -- it's fairly safe to assume the adapter supplies 3.3V to the drive -- but figuring out the difference between the input and output currents of the voltage regulator is. With the help of one of our readers (thanks Evan!), we found that the delta between the input and output currents of the regulator used in our adapter is typically 8mA (i.e. 0.008A). Our measurements confirmed this, as the adapter drew 8mA with no drive connected, although under load the regulator may draw a little more (the datasheet claims a maximum of 20mA).
As we now know the difference between the input and output currents (the current on the 5V rail minus 8mA) as well as the voltage of the drive (3.3V), we can calculate the actual power consumed by the drive. I've updated all mSATA SSD results to reflect this change, so the numbers you're seeing here differ from the ones in earlier reviews.
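Expressed as code, the correction applied to every mSATA power figure below looks like this. It's a sketch of the calculation described above; the 8mA quiescent figure is the one from our adapter's regulator, and the example current is made up:

```python
REGULATOR_QUIESCENT_A = 0.008  # regulator's own draw with no drive attached (~8mA)
DRIVE_VOLTAGE_V = 3.3          # mSATA drives are fed from the 3.3V rail

def drive_power_w(current_on_5v_rail_a):
    """Drive power from the current measured on the adapter's 5V input:
    subtract the regulator's own draw, then multiply by the drive voltage."""
    return (current_on_5v_rail_a - REGULATOR_QUIESCENT_A) * DRIVE_VOLTAGE_V

print(drive_power_w(1.200))  # e.g. 1.2A measured on the 5V rail -> ~3.93W
```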
The EVO mSATA has great power characteristics, just like its 2.5" sibling. There's no significant difference between the two, and overall the EVO mSATA is one of the most power efficient mSATA drives. Unfortunately I don't have the necessary equipment to test slumber power consumption (HIPM+DIPM enabled), but I'd expect the EVO mSATA to post numbers similar to the 2.5" EVO's (see the chart here).
The numbers here are worst-case scenarios, and something I noticed during testing is that it takes a while for the EVO to reach its maximum power draw. With the 500GB and 1TB models, the power draw ranged between 2.2W and 2.5W for up to 30 seconds or so before jumping to over 4W. I'm guessing this is due to TurboWrite, because less NAND is in use while the SLC buffer is employed (you simply don't need as many dies to achieve high performance when program/erase times are much shorter). Once the buffer is full, the drive has to start writing to the TLC portion of the NAND, which increases the power draw as more NAND is in use. The downside is that once you stop writing, the drive keeps drawing high power for a while in order to move data from the SLC buffer to the TLC NAND; the 1TB model, for example, kept drawing ~3.5W for about a minute after I had stopped writing. I like Samsung's approach because the garbage collection is done immediately instead of waiting for long idle periods like some manufacturers do -- active garbage collection recovers performance quickly, whereas idle garbage collection can take hours to kick in.
Final Words
I would say the biggest market for mSATA drives right now is DIY upgrades to Ultrabooks and other laptops with existing mSATA SSDs (or, in some cases, an empty mSATA slot). One of the most common complaints I hear about Ultrabooks is their limited storage capacity, because many people are used to even $400 laptops shipping with 500GB of storage. Nowadays most SSD-only systems ship with a 128GB SSD, which is honestly a significant downgrade if you've gotten used to having at least half a terabyte in your laptop. The rise of cloud services has reduced the need for internal storage (most smartphones and tablets only have 8-32GB), but there are still plenty of scenarios where the cloud is out of the question. Take photographers as an example -- if you shoot large RAW photos, constantly uploading and downloading them doesn't sound like the best idea, especially if you happen to live in a region where unlimited Internet is only a dream.
In the past, if you wanted a bigger mSATA SSD, your options maxed out at 256GB (Mushkin's Atlas was the only exception, though you had to sacrifice performance for capacity). For hybrid systems (i.e. PCs with a small SSD for caching and a hard drive for storage), 256GB can cut it since the storage needs are fulfilled by the hard drive, but in an SSD-only system 256GB isn't much of an upgrade over the 128GB most systems ship with.
Four EVO mSATAs take up roughly the same space as one 2.5" drive
That is where the EVO mSATA excels. With capacities of up to 1TB and impressive performance, it is truly a no-compromise mSATA SSD. mSATA is no longer a tradeoff between capacity and size: the EVO mSATA provides everything the 2.5" EVO does at roughly a quarter of the footprint. Add to that the fact that the EVO mSATA is built on the same platform as the 2.5" EVO, which has been one of our highest recommendations since its release. What Anand said in his 840 EVO review's final words fits here perfectly as well: "To say that I really like the EVO is an understatement".
Samsung continually amazes me in the SSD space. The EVO mSATA is further proof that their engineering is state of the art and nearly a step ahead of everyone else. I cannot wait to see what else Samsung has up its sleeve for 2014 -- we will analyze it with a critical eye as always, but this is certainly a great start.