Original Link: https://www.anandtech.com/show/9023/the-samsung-ssd-850-evo-msata-m2-review



Four months ago Samsung introduced the world to TLC V-NAND in the form of the SSD 850 EVO. It did well in our tests and showed that 3D NAND technology essentially brings TLC NAND to the level where planar MLC NAND stands today. The initial launch covered only the most popular form factor, 2.5", and did not address the upgrade market, where mSATA and M.2 are constantly growing in popularity. With today's release, Samsung is expanding the 850 EVO lineup with M.2 and mSATA models.

The move isn't really surprising: Samsung released an mSATA version of the SSD 840 EVO a bit over a year ago, and when the 850 EVO was originally launched we were told that mSATA and M.2 models would follow later. The addition of M.2 is new to Samsung's retail lineup, but it makes a lot of sense given that many PC OEMs have switched from mSATA to M.2, and ultimately M.2 will replace mSATA entirely.

Architecturally the mSATA and M.2 models are no different from their 2.5" sibling. The heart of the drives is still Samsung's own MGX controller (excluding the 1TB model, which is powered by the older MEX controller) and the NAND is 32-layer 128Gbit TLC (3-bit per cell) V-NAND manufactured using 40nm lithography. TurboWrite (Samsung's pseudo-SLC caching) is also included and the cache sizes are unchanged from the 2.5" model. DevSleep and TCG Opal 2.0 (eDrive) are both supported too, and endurance comes in at a respectable 75TB for the 120GB/250GB models and 150TB for the 500GB/1TB models.

Given the similarity with the 2.5" model, I strongly suggest that you read our 850 EVO review for full coverage of TurboWrite, TLC V-NAND and other tidbits, as I won't be covering those in detail in this review.

Samsung SSD 850 EVO mSATA Specifications
| Capacity | 120GB | 250GB | 500GB | 1TB |
|----------|-------|-------|-------|-----|
| Form Factor | mSATA | mSATA | mSATA | mSATA |
| Controller | Samsung MGX | Samsung MGX | Samsung MGX | Samsung MEX |
| NAND | Samsung 40nm 128Gbit TLC V-NAND | Samsung 40nm 128Gbit TLC V-NAND | Samsung 40nm 128Gbit TLC V-NAND | Samsung 40nm 128Gbit TLC V-NAND |
| DRAM (LPDDR2) | 512MB | 512MB | 512MB | 1GB |
| Sequential Read | 540MB/s | 540MB/s | 540MB/s | 540MB/s |
| Sequential Write | 520MB/s | 520MB/s | 520MB/s | 520MB/s |
| 4KB Random Read (QD1) | 10K IOPS | 10K IOPS | 10K IOPS | 10K IOPS |
| 4KB Random Read (QD32) | 95K IOPS | 97K IOPS | 97K IOPS | 97K IOPS |
| 4KB Random Write (QD1) | 40K IOPS | 40K IOPS | 40K IOPS | 40K IOPS |
| 4KB Random Write (QD32) | 88K IOPS | 88K IOPS | 88K IOPS | 88K IOPS |
| Steady-State 4KB Random Write Performance | 3.1K IOPS | 4.9K IOPS | 6.8K IOPS | 9.7K IOPS |
| DevSleep Power Consumption | 2mW | 2mW | 2mW | 4mW |
| Slumber Power Consumption | 50mW | 50mW | 50mW | 50mW |
| Active Power Consumption (Read/Write) | Max 3.5W / 4.3W | Max 3.5W / 4.3W | Max 3.5W / 4.3W | Max 3.5W / 4.3W |
| Encryption | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) |
| Endurance | 75TB (41GB/day) | 75TB (41GB/day) | 150TB (82GB/day) | 150TB (82GB/day) |
| Warranty | Five years | Five years | Five years | Five years |

Like its predecessor, the 850 EVO mSATA offers capacities of up to 1TB, which still makes it the highest-capacity mSATA drive in the industry. Samsung has a substantial lead in NAND packaging technology: currently no one else is shipping 16-die packages in high volume, whereas Samsung has been doing so for quite some time now. I've heard Toshiba has some 16-die packages available, but the yields are very low and pricing comes in at about a dollar per gigabyte, whereas other packages are priced at ~30 cents per gigabyte. Micron also has 16-die packages on paper, but I've yet to see them used in any actual products.

Samsung SSD 850 EVO M.2 Specifications
| Capacity | 120GB | 250GB | 500GB |
|----------|-------|-------|-------|
| Form Factor | M.2 2280 (single-sided; SATA 6Gbps) | M.2 2280 (single-sided; SATA 6Gbps) | M.2 2280 (single-sided; SATA 6Gbps) |
| Controller | Samsung MGX | Samsung MGX | Samsung MGX |
| NAND | Samsung 40nm 128Gbit TLC V-NAND | Samsung 40nm 128Gbit TLC V-NAND | Samsung 40nm 128Gbit TLC V-NAND |
| DRAM (LPDDR2) | 512MB | 512MB | 512MB |
| Sequential Read | 540MB/s | 540MB/s | 540MB/s |
| Sequential Write | 500MB/s | 500MB/s | 500MB/s |
| 4KB Random Read (QD1) | 10K IOPS | 10K IOPS | 10K IOPS |
| 4KB Random Read (QD32) | 97K IOPS | 97K IOPS | 97K IOPS |
| 4KB Random Write (QD1) | 40K IOPS | 40K IOPS | 40K IOPS |
| 4KB Random Write (QD32) | 89K IOPS | 89K IOPS | 89K IOPS |
| Steady-State 4KB Random Write Performance | 2.8K IOPS | 4.1K IOPS | 5.8K IOPS |
| DevSleep Power Consumption | 2mW | 2mW | 2mW |
| Slumber Power Consumption | 50mW | 50mW | 50mW |
| Active Power Consumption (Read/Write) | Max 2.4W / 3.5W | Max 2.4W / 3.5W | Max 2.4W / 3.5W |
| Encryption | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) | AES-256, TCG Opal 2.0, IEEE-1667 (eDrive) |
| Endurance | 75TB (41GB/day) | 75TB (41GB/day) | 150TB (82GB/day) |
| Warranty | Five years | Five years | Five years |

Unfortunately the M.2 version tops out at 500GB. The reason is that the M.2 is a single-sided design, which only has room for two NAND packages. Quite a few laptops use the single-sided M.2 2280 form factor as it allows for thinner designs, but I still would have liked to see a 1TB double-sided version. It is worth noting that while the M.2 form factor supports both PCIe and SATA, Samsung is only releasing the 850 EVO M.2 in a SATA version at this time.

With a 128Gbit die and sixteen dies per package, the maximum capacity for each package comes in at 256GiB, yielding a raw NAND capacity of 512GiB, of which 500GB is usable in the 850 EVO.
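For those who want to double-check the math, it works out as follows. This is just a quick sketch of the arithmetic on the stated figures; the binary versus decimal accounting is my assumption about how the raw and marketed numbers relate:

```python
# Quick check of the package math above. NAND die capacities are binary
# (128Gbit = 2^37 bits), while the marketed capacity is decimal (500GB = 500e9 bytes).
GIB = 1024**3

die_bytes = 128 * GIB // 8        # one 128Gbit TLC V-NAND die = 16GiB
package_bytes = 16 * die_bytes    # 16 dies per package = 256GiB
raw_bytes = 2 * package_bytes     # two packages on a single-sided M.2 = 512GiB

usable_bytes = 500 * 10**9        # marketed 500GB
spare = (raw_bytes - usable_bytes) / raw_bytes

print(raw_bytes // GIB, "GiB raw NAND")                             # 512 GiB
print(f"{spare:.1%} reserved (spare area plus the GiB-vs-GB gap)")  # ~9.1%
```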

AnandTech 2015 SSD Test System
| CPU | Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled) |
|-----|------|
| Motherboard | ASUS Z97 Deluxe (BIOS 2205) |
| Chipset | Intel Z97 |
| Chipset Drivers | Intel 10.0.24 + Intel RST 13.2.4.1000 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Graphics Drivers | 15.33.8.64.3345 |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 8.1 x64 |


Performance Consistency

We've been looking at performance consistency since the Intel SSD DC S3700 review in late 2012, and it has become one of the cornerstones of our SSD reviews. Back then, many SSD vendors focused only on high peak performance, which unfortunately came at the cost of sustained performance. In other words, the drives would push high IOPS in certain synthetic scenarios to produce nice marketing numbers, but as soon as you pushed the drive for more than a few minutes you could easily run into hiccups caused by poor performance consistency.

Once we started exploring IO consistency, nearly all SSD manufacturers moved to improve it. For the 2015 suite, I haven't made any significant changes to the methodology we use to test IO consistency. The biggest change is the move from VDBench to Iometer 1.1.0 as the benchmarking software, and I've also extended the test from 2000 seconds to a full hour to ensure that all drives reach steady-state during the test.

For better readability, I now provide bar graphs, with the first one showing the average IOPS over the last 400 seconds and the second showing the standard deviation over the same period. Average IOPS provides a quick look into overall performance, but it can easily hide poor consistency, so looking at the standard deviation is necessary for a complete picture.
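To make it concrete, here's a minimal sketch of how those two numbers could be pulled from a per-second IOPS log. The file name and one-value-per-line format are hypothetical stand-ins, not Iometer's actual output format:

```python
# Minimal sketch: the two bar-graph metrics derived from a per-second IOPS log.
# "iops_log.txt" and its one-value-per-line format are hypothetical stand-ins.
import statistics

with open("iops_log.txt") as f:
    iops = [float(line) for line in f if line.strip()]

tail = iops[-400:]                    # the last 400 seconds of the hour-long run
print(f"average: {statistics.mean(tail):,.0f} IOPS")
print(f"stdev:   {statistics.pstdev(tail):,.0f} IOPS")
```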

I'm still providing the same scatter graphs too, of course. However, I've decided to drop the logarithmic graphs and go linear-only, since logarithmic graphs aren't as accurate and can be hard to interpret for those who aren't familiar with them. I provide two graphs: one that covers the whole duration of the test and another that focuses on the last 400 seconds for a closer look at steady-state performance.

Steady-State 4KB Random Write Performance

For a mainstream drive, the 850 EVO mSATA/M.2 does relatively well in IO consistency except for the highest capacity 1TB model. Strangely enough the 2.5" 1TB 850 EVO does just fine, so this issue seems to be limited to the mSATA version. 

Steady-State 4KB Random Write Consistency

Looking at the standard deviation reveals why: the IO consistency of the 850 EVO mSATA 1TB, even with overprovisioning, is horrible compared to the rest of the 850 EVO lineup. 

Samsung 850 EVO mSATA 250GB: Default / 25% Over-Provisioning

The issue with the 1TB mSATA is actually worse than I expected, because the drive frequently stalls for seconds at a time. The pauses can even exceed 50 seconds, so this isn't just normal garbage collection happening in the background. I find this very alarming because it may have a dramatic impact on user experience, and it's simply something that no modern SSD should do anymore. I let Samsung know about my findings before publishing this review, but I wasn't able to get any comment regarding the issue or whether Samsung has seen something similar in its internal tests. Adding over-provisioning helps, as the pauses become much less frequent, but for now I would still advise against buying the 1TB mSATA version until there's a fix for the IO consistency.

As for the other capacities, the 850 EVO has excellent consistency and steady-state performance for a mainstream drive. The capacity has some effect on performance, but even the 250GB model has roughly twice the performance of the 240GB Ultra II thanks to faster V-NAND.

Samsung 850 EVO mSATA 250GB: Default / 25% Over-Provisioning


AnandTech Storage Bench - The Destroyer

The Destroyer has been an essential part of our SSD test suite for nearly two years now. It was crafted to provide a benchmark for very IO intensive workloads, which is where you most often notice the difference between drives. It's not necessarily the most relevant test for the average user, but for anyone with a heavier IO workload The Destroyer should do a good job of characterizing performance.

AnandTech Storage Bench - The Destroyer
| Workload | Description | Applications Used |
|----------|-------------|-------------------|
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |

The table above describes the workloads of The Destroyer in a bit more detail. Most of the workloads are run independently in the trace, but obviously there are various operations (such as backups) in the background. 

AnandTech Storage Bench - The Destroyer - Specs
| Reads | 38.83 million |
|-------|---------------|
| Writes | 10.98 million |
| Total IO Operations | 49.8 million |
| Total GB Read | 1583.02 GB |
| Total GB Written | 875.62 GB |
| Average Queue Depth | ~5.5 |
| Focus | Worst case multitasking, IO consistency |

The name Destroyer comes from the sheer fact that the trace contains nearly 50 million IO operations. That's enough IO operations to effectively put the drive into steady-state and give an idea of the performance in worst case multitasking scenarios. About 67% of the IOs are sequential in nature with the rest ranging from pseudo-random to fully random. 

AnandTech Storage Bench - The Destroyer - IO Breakdown
| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---------|------|-----|-----|------|------|------|-------|
| % of Total | 6.0% | 26.2% | 3.1% | 2.4% | 1.7% | 38.4% | 18.0% |

The table above breaks down the IOs in the trace, accounting for 95.8% of total IOs. The leftovers are relatively rare in-between sizes that don't have a significant (>1%) share on their own. Over half of the transfers are large IOs, with one fourth being 4KB in size.

AnandTech Storage Bench - The Destroyer - QD Breakdown
| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|-------------|---|---|---|-----|------|-------|-------|-----|
| % of Total | 50.0% | 21.9% | 4.1% | 5.7% | 8.8% | 6.0% | 2.1% | 1.4% |

Despite the average queue depth of 5.5, half of the IOs happen at a queue depth of one, and scenarios where the queue depth is higher than 10 are rather infrequent.

The two key metrics I'm reporting haven't changed: I'll continue to report both data rate and latency because the two have slightly different focuses. Data rate measures the speed of the data transfer, so it emphasizes large IOs, which simply account for a much larger share of the total amount of data. Latency, on the other hand, ignores the IO size, so all IOs are given the same weight in the calculation. Both metrics are useful, although in terms of system responsiveness I think latency is more critical. As a result, I'm also reporting two new stats that provide good insight into high-latency IOs: the shares of >10ms and >100ms IOs as a percentage of the total.

I'm also reporting the total power consumed during the trace, which gives us good insight into the drive's power consumption under different workloads. It's better than average power consumption in the sense that it also takes performance into account, because a faster completion time results in fewer watt-hours consumed. Since the idle times of the trace have been truncated for faster playback, the number doesn't fully capture the impact of idle power consumption, but the metric is nevertheless valuable when it comes to active power consumption.
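To show how these metrics relate, here's a minimal sketch that computes all of them from hypothetical logs. The file names, formats and elapsed time below are stand-ins for illustration, not our actual trace-playback tooling:

```python
# Sketch of the reported metrics, computed from a hypothetical per-IO log
# ("size_bytes,latency_ms" per row) and a hypothetical per-second power log.
import csv

ELAPSED_S = 11000.0     # stand-in wall-clock playback time in seconds

total_bytes = lat_sum = n = over_10 = over_100 = 0
with open("trace_results.csv") as f:
    for size_s, lat_s in csv.reader(f):
        size, lat = int(size_s), float(lat_s)   # IO size in bytes, latency in ms
        n += 1
        total_bytes += size
        lat_sum += lat
        over_10 += lat > 10
        over_100 += lat > 100

print(f"data rate:   {total_bytes / ELAPSED_S / 1e6:.1f} MB/s")
print(f"avg latency: {lat_sum / n:.2f} ms")
print(f">10ms IOs:   {over_10 / n:.3%}")
print(f">100ms IOs:  {over_100 / n:.3%}")

# Energy over the trace: each one-second power sample contributes W*s,
# so dividing the sum by 3600 yields watt-hours.
with open("power_log.csv") as f:
    watt_seconds = sum(float(line) for line in f if line.strip())
print(f"energy:      {watt_seconds / 3600:.2f} Wh")
```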

AnandTech Storage Bench - The Destroyer (Data Rate)

The pausing issue of the 1TB 850 EVO mSATA carries straight over to The Destroyer trace. While higher capacities are usually faster than smaller ones in this test, the 1TB mSATA is about 15% slower than the 500GB M.2. Generally speaking the 850 EVO does very well at 500GB and above, but since Samsung's TLC V-NAND is 128Gbit in capacity and single-plane, the 250GB model can't keep up with drives that use smaller capacity NAND for higher parallelism.

AnandTech Storage Bench - The Destroyer (Latency)

The latency graph further illustrates the poor performance of the 1TB mSATA, but quite surprisingly the 500GB 850 EVO is actually faster than the 512GB 850 Pro. I suspect the additional over-provisioning helps because The Destroyer is a very intensive trace that practically puts the drive into steady-state.

AnandTech Storage Bench - The Destroyer (Latency)

The share of high-latency IOs is a bit high, but nothing to be concerned about. Only the 250GB model has a significant amount of >10ms IOs, and for IO intensive workloads I would strongly advise going with 500GB or higher.

AnandTech Storage Bench - The Destroyer (Latency)

Active power consumption seems to be quite high for the 850 EVO, although I'm not surprised since TLC generally consumes more power than MLC due to its inherent design (more program pulses needed to achieve the final voltage state). 

AnandTech Storage Bench - The Destroyer (Power)



AnandTech Storage Bench - Heavy

While The Destroyer focuses on sustained and worst-case performance by hammering the drive with nearly 1TB worth of writes, the Heavy trace provides a more typical enthusiast and power user workload. By writing less to the drive, the Heavy trace doesn't drive the SSD into steady-state and thus the trace gives us a good idea of peak performance combined with some basic garbage collection routines.

AnandTech Storage Bench - Heavy
| Workload | Description | Applications Used |
|----------|-------------|-------------------|
| Photo Editing | Import images, edit, export | Adobe Photoshop |
| Gaming | Play games, load levels | Starcraft II, World of Warcraft |
| Content Creation | HTML editing | Dreamweaver |
| General Productivity | Browse the web, manage local email, document creation, application install, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Application Development | Compile Chromium | Visual Studio 2008 |

The Heavy trace drops virtualization from the equation and goes a bit lighter on photo editing and gaming, making it more relevant to the majority of end-users.

AnandTech Storage Bench - Heavy - Specs
| Reads | 2.17 million |
|-------|--------------|
| Writes | 1.78 million |
| Total IO Operations | 3.99 million |
| Total GB Read | 48.63 GB |
| Total GB Written | 106.32 GB |
| Average Queue Depth | ~4.6 |
| Focus | Peak IO, basic GC routines |

The Heavy trace is actually more write-centric than The Destroyer. Part of that is explained by the lack of virtualization, because operating systems tend to be read-intensive, whether local or virtual. The total number of IOs is less than 10% of The Destroyer's, so the Heavy trace is much easier on the drive and doesn't even overwrite the drive once.

AnandTech Storage Bench - Heavy - IO Breakdown
| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---------|------|-----|-----|------|------|------|-------|
| % of Total | 7.8% | 29.2% | 3.5% | 10.3% | 10.8% | 4.1% | 21.7% |

The Heavy trace focuses more on 16KB and 32KB IO sizes, but more than half of the IOs are still either 4KB or 128KB. About 43% of the IOs are sequential, with the rest leaning slightly more towards fully random than pseudo-random.

AnandTech Storage Bench - Heavy - QD Breakdown
| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|-------------|---|---|---|-----|------|-------|-------|-----|
| % of Total | 63.5% | 10.4% | 5.1% | 5.0% | 6.4% | 6.0% | 3.2% | 0.3% |

In terms of queue depths the Heavy trace is even more focused on very low queue depths, with about three fourths of the IOs happening at a queue depth of one or two.

I'm reporting the same performance metrics as in The Destroyer benchmark, but I'm running the drive in both empty and full states. Some manufacturers tend to focus intensively on peak performance with an empty drive, but in reality the drive will always contain some data. Testing the drive in a full state gives us valuable information about whether it loses performance once it's filled with data.

AnandTech Storage Bench - Heavy (Data Rate)

In the Heavy trace the 850 EVO scores highly. As I've said before, it seems that only Samsung has found the secret recipe to boost performance under SATA 6Gbps because no other manufacturer comes close to it in this benchmark. 

AnandTech Storage Bench - Heavy (Latency)

Moving on to latency, the 850 EVO still keeps its lead over other manufacturers' drives. The difference is nowhere near as significant as in the throughput metric above, but the 850 EVO is still without a doubt one of the highest performing drives on the market. The smaller capacities are a bit of a disappointment, though, because the 250GB mSATA loses to the MX100 by a hefty margin, although it still beats the Ultra II for what it's worth.

AnandTech Storage Bench - Heavy (Latency)

The smaller capacities, especially the 120GB one, seem to have quite a few high latency IOs. I wouldn't say the situation for the 250GB model is critical, but I do think that individuals with heavier workloads should focus on the 500GB and higher capacities in order to avoid any storage performance issues. 

AnandTech Storage Bench - Heavy (Power)

In terms of power, though, the 850 EVO is very efficient at the smaller capacities. Given that the mSATA and M.2 form factors are mostly used in mobile applications, this is very good news.



AnandTech Storage Bench - Light

The Light trace is designed to be an accurate illustration of basic usage. It's basically a subset of the Heavy trace, but we've left out some workloads to reduce the writes and make it more read intensive in general. 

AnandTech Storage Bench - Light - Specs
| Reads | 372,630 |
|-------|---------|
| Writes | 459,709 |
| Total IO Operations | 832,339 |
| Total GB Read | 17.97 GB |
| Total GB Written | 23.25 GB |
| Average Queue Depth | ~4.6 |
| Focus | Basic, light IO usage |

The Light trace still has more writes than reads, but a very light workload would be even more read-centric (think web browsing, document editing, etc). It has about 23GB of writes, which would account for roughly two or three days of average usage (i.e. 7-11GB per day). 
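The day count is simple division over the assumed 7-11GB/day range:

```python
# Rough check of the "two or three days" estimate above.
writes_gb = 23.25                 # total writes in the Light trace
for per_day in (7, 11):           # assumed range of average daily writes
    print(f"{writes_gb / per_day:.1f} days at {per_day}GB/day")
# ~3.3 days at 7GB/day, ~2.1 days at 11GB/day
```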

AnandTech Storage Bench - Light - IO Breakdown
| IO Size | <4KB | 4KB | 8KB | 16KB | 32KB | 64KB | 128KB |
|---------|------|-----|-----|------|------|------|-------|
| % of Total | 6.2% | 27.6% | 2.4% | 8.0% | 6.5% | 4.8% | 26.4% |

The IO distribution of the Light trace is very similar to the Heavy trace with slightly more IOs being 128KB. About 70% of the IOs are sequential, though, so that is a major difference compared to the Heavy trace.

AnandTech Storage Bench - Light - QD Breakdown
| Queue Depth | 1 | 2 | 3 | 4-5 | 6-10 | 11-20 | 21-32 | >32 |
|-------------|---|---|---|-----|------|-------|-------|-----|
| % of Total | 73.4% | 16.8% | 2.6% | 2.3% | 3.1% | 1.5% | 0.2% | 0.2% |

Over 90% of the IOs have a queue depth of one or two, which further proves the importance of low queue depth performance. 

AnandTech Storage Bench - Light (Data Rate)

The 850 EVO also shines in our Light trace by being the fastest SATA drive we have tested along with the 850 Pro.

AnandTech Storage Bench - Light (Latency)

The latency is also great regardless of capacity, so I have no problem recommending the 850 EVO for basic workloads; it's only the heavier workloads that bring the smaller capacities to their knees.

AnandTech Storage Bench - Light (Latency)

Power is again excellent except for the 1TB model. I'm honestly a bit surprised that the 850 EVO is so much more power efficient than the 850 Pro, despite the fact that MLC NAND should be more power efficient by design.

AnandTech Storage Bench - Light (Power)



Random Read Performance

One of the major changes in our 2015 test suite is the synthetic Iometer tests we run. In the past we tested just one or two queue depths, but real world workloads always contain a mix of different queue depths, as shown by our Storage Bench traces. To get the full scope of performance, I'm now testing various queue depths, starting from one and going all the way up to 32. I'm not testing every single queue depth, but merely how the throughput scales with queue depth. I'm using exponential scaling, meaning that the tested queue depths increase in powers of two (i.e. 1, 2, 4, 8...).

Read tests are conducted on a full drive because that is the only way to ensure that the results are valid (testing with an empty drive can substantially inflate the results and in reality the data you are reading is always valid rather than full of zeros). Each queue depth is tested for three minutes and there is no idle time between the tests. 

I'm also reporting two metrics now. For the bar graph, I've taken the average of QD1, QD2 and QD4 data rates, which are the most relevant queue depths for client workloads. This allows for easy and quick comparison between drives. In addition to the bar graph, I'm including a line graph, which shows the performance scaling across all queue depths. To keep the line graphs readable, each drive has its own graph, which can be selected from the drop-down menu.

I'm also plotting power for SATA drives and will be doing the same for PCIe drives as soon as I have the system set up properly. Our datalogging multimeter logs power consumption every second, so I report the average for every queue depth to see how the power scales with the queue depth and performance.
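In pseudo-worked form, the sweep and the bar-graph condensation boil down to something like this. The per-QD throughput numbers are made-up placeholders, not measured results:

```python
# Exponential queue depth scaling: 1, 2, 4, 8, 16, 32.
queue_depths = [2**i for i in range(6)]

# Placeholder results for one drive: data rate (MB/s) at each queue depth,
# as would be measured over a three-minute run per depth.
mbps_by_qd = {1: 38.5, 2: 73.1, 4: 134.7, 8: 229.0, 16: 310.2, 32: 388.6}

# The bar graph condenses the sweep to the client-relevant depths.
bar_value = sum(mbps_by_qd[qd] for qd in (1, 2, 4)) / 3
print(f"bar graph (avg of QD1/QD2/QD4): {bar_value:.1f} MB/s")

# The line graph simply plots the full mapping, one curve per drive.
for qd in queue_depths:
    print(f"QD{qd}: {mbps_by_qd[qd]:.1f} MB/s")
```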

Iometer - 4KB Random Read

Random read performance has always been Samsung's strength and particularly the 500GB and smaller capacities do well thanks to the faster MGX controller.

Iometer - 4KB Random Read (Power)

Power consumption is also good, although the 1TB model draws quite a bit of power.

Samsung 850 EVO M.2 120GB

The performance scales nicely with the queue depth too.

Random Write Performance

Write performance is tested in the same way as read performance, except that the drive is in a secure erased state and the LBA span is limited to 16GB. We already test performance consistency separately, so a secure erased drive and limited LBA span ensures that the results here represent peak performance rather than sustained performance.

Iometer - 4KB Random Write

Random write performance is equally strong, which is mostly thanks to TurboWrite.

Iometer - 4KB Random Write (Power)

Power consumption is decent as well, and while the larger capacities are more power hungry the difference to competing drives isn't substantial.

Samsung 850 EVO M.2 120GB

Since the 120GB SKU has less parallelism due to having fewer NAND dies, its performance doesn't scale at all with queue depth (QD1 already saturates the available NAND bandwidth), but the other models scale pretty nicely. You do see a slight drop in performance after the TurboWrite buffer has been filled, but in client workloads it's unlikely that you would fill the buffer all at once like our tests do.



Sequential Read Performance

Our sequential tests are conducted in the same manner as our random IO tests. Each queue depth is tested for three minutes without any idle time in between, and the IOs are 4K aligned, similar to what you would experience in a typical desktop OS.

Iometer - 128KB Sequential Read

In sequential read performance the difference between drives is rather marginal, but in power consumption we start to see some differences. At 250GB and 120GB the 850 EVO is very efficient, but the 500GB and 1TB are again more power hungry.

Iometer - 128KB Sequential Read (Power)

Looking at performance across all queue depths doesn't reveal any surprises: at QD2 and higher all drives are practically saturating the SATA 6Gbps interface. What's notable, though, is that the 1TB degrades in performance as the queue depth increases. I wonder if this is a thermal issue (mSATA/M.2 don't have a chassis to use as a heatsink) or just poor firmware optimization. 

Samsung 850 EVO M.2 120GB

 

Sequential Write Performance

Sequential write testing differs from random testing in the sense that the LBA span is not limited. That's because sequential IOs don't fragment the drive, so the performance will be at its peak regardless. 

Iometer - 128KB Sequential Write

In sequential write speed, capacity plays a more significant role, as the 120GB and 250GB models are noticeably behind the 500GB and larger drives. The poor performance of the 1TB model is once again a surprise, though.

Iometer - 128KB Sequential Write (Power)

The full graph of all queue depths shows the reason for the 1TB's low performance: it starts high at ~430MB/s, but after that the performance decreases. 

Samsung 850 EVO M.2 120GB


Mixed Random Read/Write Performance

Mixed read/write tests are also a new addition to our test suite. In real world applications a significant portion of workloads are mixed, meaning that there are both read and write IOs. Our Storage Bench benchmarks already illustrate mixed workloads by being based on actual real world IO traces, but until now we haven't had a proper synthetic way to measure mixed performance. 

The benchmark is divided into two tests. The first one tests mixed performance with 4KB random IOs at six different read/write distributions starting at 100% reads and adding 20% of writes in each phase. Because we are dealing with a mixed workload that contains reads, the drive is first filled with 128KB sequential data to ensure valid results. Similarly, because the IO pattern is random, I've limited the LBA span to 16GB to ensure that the results aren't affected by IO consistency. The queue depth of the 4KB random test is three.

Again, for the sake of readability, I provide both an average based bar graph as well as a line graph with the full data on it. The bar graph represents an average of all six read/write distribution data rates for quick comparison, whereas the line graph includes a separate data point for each tested distribution. 
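For those curious about the access pattern itself, here's a rough sketch of the mixed 4KB random sweep. It's not the actual Iometer access spec: it runs at queue depth 1 rather than 3, uses buffered rather than direct IO, is POSIX-only, and "testfile.bin" is a stand-in for a preconditioned file on the drive under test:

```python
# Rough sketch of the mixed 4KB random workload: six read/write mixes from
# 100/0 to 0/100 in 20% steps, random 4KB-aligned offsets within a 16GB span.
import os, random, time

SPAN = 16 * 1024**3          # 16GB LBA span, as in the test
BLOCK = 4096                 # 4KB random IOs
RUNTIME = 5                  # seconds per mix; the real test runs three minutes
buf = os.urandom(BLOCK)

fd = os.open("testfile.bin", os.O_RDWR)   # must already be >= 16GB
for write_pct in (0, 20, 40, 60, 80, 100):
    ops = 0
    deadline = time.monotonic() + RUNTIME
    while time.monotonic() < deadline:
        offset = random.randrange(SPAN // BLOCK) * BLOCK   # 4KB-aligned
        if random.randrange(100) < write_pct:
            os.pwrite(fd, buf, offset)
        else:
            os.pread(fd, BLOCK, offset)
        ops += 1
    print(f"{100 - write_pct}% reads / {write_pct}% writes: {ops / RUNTIME:,.0f} IOPS")
os.close(fd)
```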

Iometer - Mixed 4KB Random Read/Write

Mixed random performance appears to be brilliant and power consumption is moderate too.

Iometer - Mixed 4KB Random Read/Write (Power)

The 850 EVO has a typical curve at 250GB and above where the performance more or less stays constant until hitting 100% writes where it jumps up considerably. Only the 850 Pro breaks this trend as its performance in fact decreases as the share of writes is increased.

Samsung 850 EVO M.2 120GB

 

Mixed Sequential Read/Write Performance

The sequential mixed workload tests are also tested with a full drive, but I've not limited the LBA range as that's not needed with sequential data patterns. The queue depth for the tests is one.

Iometer - Mixed 128KB Sequential Read/Write

In the mixed sequential workload the 850 EVO is good, but not overwhelming.

Iometer - Mixed 128KB Sequential Read/Write (Power)

The 850 EVO's "bathtub" curve is a bit different from others' in the sense that the drop in performance is smooth rather than being sudden right after adding reads/writes to the mix. 

Samsung 850 EVO M.2 120GB


ATTO - Transfer Size vs Performance

I'm keeping our ATTO test around because it's a tool that anyone can easily run and it provides a quick look into performance scaling across multiple transfer sizes. I'm providing the results in a slightly different format because the line graphs didn't work well with multiple drives, and creating them was rather painful since the results had to be manually inserted cell by cell, as ATTO doesn't provide a 'save as CSV' functionality.

Samsung 850 EVO M.2 120GB

 

AS-SSD Incompressible Sequential Performance

I'm also keeping AS-SSD around as it's freeware like ATTO and can be used by our readers to confirm that their drives operate properly. AS-SSD uses incompressible data for all of its transfers, so it's also a valuable tool when testing SandForce based drives that perform worse with incompressible data.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance



Idle Power Consumption

Since we truncate idle times to 25µs in our Storage Bench traces, they don't give a fully accurate picture of real world power consumption, as idle power is not properly taken into account. Hence I'm still reporting idle power consumption as a separate benchmark, because it's one of the most critical metrics when it comes to evaluating an SSD for mobile use.

Unfortunately I still don't have a way to test DevSleep power consumption due to lack of platform support, but my testbed supports HIPM+DIPM power commands (also referred to as Slumber power), so the results give a rather accurate picture of real-world idle power consumption. 

Idle Power Consumption (HIPM+DIPM)

All Samsung's recent SSDs have had excellent idle power consumption and the 850 EVO mSATA/M.2 is no exception. 

TRIM Validation

The move from Windows 7 to 8.1 introduced some problems with the methodology we previously used to test TRIM functionality, so I had to come up with a new way to test it. I tried a couple of different methods, but ultimately I decided to go with the easiest one, which can actually be used by anyone. The software is simply called trimcheck and was made by a developer who goes by the name CyberShadow on GitHub.

Trimcheck tests TRIM by creating a small, unique file and then deleting it. Next the program checks whether the data is still accessible by reading the raw LBA locations. If the data returned by the drive is all zeros, the drive has received the TRIM command and TRIM is functional.

And TRIM appears to be working fine.



Final Words

From a technological perspective, the new 850 EVO drives don't bring anything new to the table since they are essentially the 2.5" 850 EVO in smaller form factors, but what they do bring to the market is more selection in mSATA and M.2 form factors using the SATA protocol. There still aren't too many mSATA/M.2 retail drives available, so the 850 EVO adds a lot of value to that segment: it's by far the fastest mSATA/M.2 SSD and in general one of the highest performing SATA drives on the market.

With that said, I do have some concerns regarding the 1TB model and its performance. The IO consistency in particular, with pauses of up to 50 seconds, is worrying. While that won't have any major impact on very light workloads, anything that taxes the drive a bit more may run into the issue, which is essentially that the drive stops for up to dozens of seconds (i.e. your system freezes). Until Samsung fixes that, I would advise against buying the 1TB version unless you have a very light workload (web browsing, email, etc.). I suspect it's fixable through a firmware update, but I'll have to wait for Samsung's reply to be sure of that.

Amazon Price Comparison (3/29/2015)
| Drive | 120/128GB | 240/250/256GB | 480/500/512GB | 1TB |
|-------|-----------|---------------|----------------|-----|
| Samsung 850 EVO mSATA | $80 | $130 | $230 | $450 |
| Samsung 850 EVO M.2 | $80 | $130 | $230 | - |
| Samsung 840 EVO mSATA | $89 | $150 | $228 | $429 |
| Crucial M550 mSATA | $172 | $107 | $184 | - |
| Crucial M500 M.2 | $88 | $129 | $244 | - |
| Crucial MX200 mSATA | - | $120 | $213 | - |
| Crucial MX200 M.2 | - | $120 | $226 | - |
| Plextor M6M mSATA | $76 | $133 | $280 | - |
| Mushkin Atlas Deluxe mSATA | $65 | $108 | $183 | - |

The 850 EVO mSATA/M.2 is already available on Amazon and the pricing appears to be fairly competitive. It's not the cheapest mSATA/M.2 drive around, but the premium isn't that significant when taking the 850 EVO's feature set (five-year warranty, hardware encryption, etc.) into account.

All in all, the 850 EVO presents another option for users who are looking for an mSATA or M.2 SSD. It's equipped with the same extensive feature set as its 2.5" sibling, the performance is good and the pricing is fair. As long as Samsung fixes the 1TB mSATA in a timely manner, I have no reason not to recommend the 850 EVO. After all, it's still the only mSATA drive available at 1TB.
