Original Link: https://www.anandtech.com/show/8319/samsung-ssd-845dc-evopro-preview-exploring-worstcase-iops



Traditionally Samsung's enterprise SSDs have only been available to large server OEMs (e.g. Dell, EMC, and IBM). In other words, unless you were buying tens of thousands of drives, Samsung would not sell you any. This is a rather common strategy in the industry because it cuts the validation time as you only need to work with a handful of partners and systems, whereas channel products need to be validated for a variety of configurations and may require more aftermarket support as well.

However, back in June Samsung made a change in its strategy and released the 845DC EVO, Samsung's first enterprise SSD for the channel. The 845DC EVO was accompanied by the 845DC PRO a month later, which targets the more write-intensive workloads, whereas the EVO is more geared towards mixed and read-centric workloads. Samsung also sent us the PM853T, which is an OEM version of the 845DC EVO and we now have the chance to see if there is any difference between the channel and OEM versions.

Due to my hectic travel schedule over the past couple of months, I only have a performance preview today. I am in the process of testing the drives through our new enterprise suite, so a full review will follow soon, but I can already offer a sneak peek of what we will be doing in the future. To be clear, I do not have any new tests to share yet, but rather a new way of looking at our existing test data.

In the past our IO consistency test has only looked at the average IOPS in one second intervals, which is an industry standard and a fairly accurate way of describing performance. However, an average is still an average. Modern SSDs can easily process tens of thousands of IOs per second, so even within one second the variation in performance can be huge, which is something that the average does not show. 

What we are doing is reporting two additional metrics that give us an idea of what is happening within every second. These two are minimum IOPS and standard deviation. The former is simply the maximum response time translated into IOPS, and it gives us the worst IOPS the drive delivers at any given moment. The latter, on the other hand, gives us insight into the performance variation within every single second.

Explaining Throughput, IOPS, and Latency

Before we get to our new benchmarks, there is something I want to discuss regarding storage benchmarking in general that has been bugging me for a while now. Throughput, IOPS, and latency are the three main metrics of storage performance, yet the relation between the three is not that well understood. If you take a look at any enterprise SSD spec sheet, you will very likely see all three metrics used, but what many do not know is that it doesn't have to be that way. Everything could be reported in megabytes or IOs per second, or through latency, and I will soon explain how; meanwhile, manufacturers have chosen to use all three because it suits them better for marketing. To see the relation between the three metrics, we first need to understand what each metric really is.

Let's start with latency because it is ultimately the mother of all other metrics and the simplest to understand. Sometimes IO latency is referred to as response or service time, but ultimately all three terms measure the same thing: the time it takes for an IO to complete. Basically, the measurement starts when the OS sends a request to the drive and ends when the drive finishes processing the request. In the case of a read, the request is considered completed when the OS receives the data, and with writes completion is achieved when the drive informs the OS that it has received the data. Note that receiving the data does not mean that the drive has written the data to its final medium – the drive can send the completion command even though the data is sitting in a DRAM cache (which is why even hard drives can have very high burst speeds).

Now that we understand latency, we can have a look at IOPS, which is simply an abbreviation for IOs per second. IOPS describes how many IO operations the drive can handle per second and it is directly related to latency. For instance, if your drive has a constant latency of 1ms (0.001s), your drive can process 1,000 IOs per second. Pretty simple, right? But things get a bit more complicated when we incorporate queue depth into the equation.

Let's assume that it still takes 1ms for our drive to process one IO, but now there are two IOs in the queue. It still takes 1ms to process the first IO, but the latency of the second IO cannot be 1ms because it had to sit in the queue while the drive finished the first one. Assuming that both IOs were sent at the same time, the latency of the second IO would be 2ms.

Do you see what happened there? We placed two IOs in the queue (i.e. doubled the queue depth) and suddenly our latency doubled, but our drive was still doing 1,000 IOPS. Basically, we asked the drive to process more IOs by adding them to the queue, but in this case our hypothetical drive could only process 1,000 IOs per second. If we added even more IOs to the queue, the latency would keep increasing because there would be more IOs waiting in the queue and it would still take 1ms to process each one of them.
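To put some numbers on that, here is a small Python sketch of the hypothetical drive above: a fixed 1ms service time, no internal parallelism, and a queue that is kept full. The drive and the variable names are made up purely for illustration.

```python
# A hypothetical drive with a fixed 1 ms service time per IO and no internal
# parallelism: it can never do more than 1,000 IOPS, so keeping a deeper
# queue only makes each IO wait longer (latency = queue depth / IOPS).
SERVICE_TIME = 0.001  # seconds per IO

for queue_depth in (1, 2, 4, 32):
    iops = 1 / SERVICE_TIME               # capped at 1,000 IOPS regardless of queue depth
    latency = queue_depth * SERVICE_TIME  # time each IO spends queued plus being serviced
    print(f"QD{queue_depth:>2}: {iops:.0f} IOPS, {latency * 1000:.0f} ms per IO")
```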

Fortunately SSDs can take advantage of parallelism by writing to multiple dies and planes at the same time, so in the real world any decent SSD would not be limited to 1,000 IOPS across all queue depths, meaning that the latency would not scale up like crazy as in our example above.

One thing that IOPS does not take into account is the transfer size. Obviously it takes longer to transfer and process 128KB of data than it does 4KB, so IOPS can be misleading. In other words, 1,000 IOPS on its own does not tell you anything useful because that could be with a transfer size of 512 bytes; at the same throughput, the drive might only manage a mere 125 IOPS with 4KB transfers, which would be pretty awful for a modern drive.

We need throughput to bring the transfer size into the equation. Throughput measures how many bytes are transferred per second and commonly throughput is measured in mega- or gigabytes per second given the speed of today's storage. Mathematically throughput is simply IOPS times the transfer size as IOPS already tells us how many IOs are happening per second, so all we need is the size of those IOs to know how many bytes are transferred. 
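Since queue depth and transfer size are the only things connecting the three metrics, the conversions are simple arithmetic. Here is a rough Python sketch; the function names are mine and the numbers are just examples, not measurements of any drive in this review.

```python
def iops_from_latency(latency_s: float, queue_depth: int = 1) -> float:
    """IOPS sustained when 'queue_depth' IOs are kept in flight and each
    completes in 'latency_s' seconds on average."""
    return queue_depth / latency_s

def throughput_mbps(iops: float, transfer_size_bytes: int) -> float:
    """Throughput in MB/s is simply IOPS multiplied by the transfer size."""
    return iops * transfer_size_bytes / 1_000_000

# 4KB IOs completing in 1 ms at a queue depth of one:
iops = iops_from_latency(0.001, queue_depth=1)    # 1,000 IOPS
print(iops, throughput_mbps(iops, 4096), "MB/s")  # ~4.1 MB/s
```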

Putting The Secret Marketing Sauce Together

What we just learned is that all three specifications – latency, IOPS, and throughput – are ultimately the same. You can translate IOPS to MB/s and MB/s to latency as long as you know the queue depth and transfer size. As I mentioned in the beginning, the reason why we "need" all three is marketing.

Take 128KB sequential performance for instance. 500MB/s sounds much better than ~4,000 IOPS because the 4KB random write IOPS for the same SSD can be 90,000. On the other hand, 90,000 IOPS with 4KB transfer size is only ~370MB/s, so from a marketing perspective it is much better to say that the drive does 500MB/s 128KB sequential and 90,000 IOPS 4KB random. 

Latency is no different. It is just another best case scenario figure that the manufacturers want to report to make their product look better. 20µs latency sounds brilliant until you read the actual data sheet and realize that it is often with a sequential 4KB transfer at a queue depth of one. That is the best case scenario because the latency of a random 4KB transfer at queue depth of 32 can easily be several milliseconds, which is far from the stated 20µs latency. If the latency was truly 20µs for a random 4KB transfer at queue depth of 32, then the drive could do 1,600,000 IOPS!
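For the curious, here is the arithmetic behind the examples above in a few lines of Python. Note that I am assuming 1KB = 1,024 bytes; spec sheets are not always consistent about that, so the exact figures can shift by a couple of percent.

```python
KB = 1024  # assumption; marketing material sometimes uses 1,000 instead

# 128KB sequential at 500 MB/s is only about 4,000 IOPS...
print(500_000_000 / (128 * KB))     # ~3,815 IOPS

# ...while 90,000 random 4KB IOPS is only about 370 MB/s.
print(90_000 * 4 * KB / 1_000_000)  # ~369 MB/s

# A quoted 20 µs latency at a queue depth of 32 would imply 1.6 million IOPS.
print(32 / 20e-6)                   # 1,600,000 IOPS
```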

The reason I wanted to cover this is to help everyone understand what the storage metrics really mean and the relationship between them. I have found this to be fairly poorly understood and I want to fix that. We do not necessarily need all three metrics because they just allow for misleading marketing (like the latency example above) and I believe the best way to fix the marketing is to educate the buyers to take the marketing with a grain of salt (or at least read the fine print to see if the figure is meaningful at all).

Test Setup

CPU Intel Core i7-4770K running at 3.3GHz (Turbo & EIST enabled, C-states disabled)
Motherboard ASUS Z87 Deluxe (BIOS 1707)
Chipset Intel Z87
Chipset Drivers Intel 9.4.0.1026 + Intel RST 12.9
Memory Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics Intel HD Graphics 4600
Graphics Drivers 15.33.8.64.3345
Desktop Resolution 1920 x 1080
OS Windows 7 x64


Samsung SSD 845DC EVO

Capacity 240GB 480GB 960GB
Controller Samsung MEX
NAND Samsung 19nm 128Gbit TLC
Sequential Read 530MB/s 530MB/s 530MB/s
Sequential Write 270MB/s 410MB/s 410MB/s
4KB Random Read 87K IOPS 87K IOPS 87K IOPS
4KB Random Write 12K IOPS 14K IOPS 14K IOPS
Idle Power 1.2W 1.2W 1.2W
Load Power (Read/Write) 2.7W / 3.8W 2.7W / 3.8W 2.7W / 3.8W
Endurance (TBW) 150TB 300TB 600TB
Endurance (DWPD) 0.35 Drive Writes per Day
Warranty Five years

The 845DC EVO is based on the same MEX controller as the 840 EVO and 850 Pro, and it also uses the same 128Gbit 19nm TLC NAND as the 840 EVO. While the SSD 840 was the first client TLC drive, the 845DC EVO is the first enterprise drive to utilize TLC NAND. We have covered TLC in detail multiple times by now, but in a nutshell TLC provides a lower cost per gigabyte by storing three bits per cell instead of two like MLC does; the increased density comes with a tradeoff in performance and endurance.

Based on our endurance testing, the TLC NAND in the SSD 840 and 840 EVO is good for 1,000 P/E cycles, which is about one third of what typical MLC is good for. I have not had the time to test the endurance of the 845DC EVO yet, but based on tests run by others, the TLC NAND in the 845DC EVO is rated at 3,000 P/E cycles. I will confirm this in the full review, but assuming that the tests I've seen are accurate (they should be since the testing methodology is essentially the same as what we do), Samsung has taken the endurance of TLC NAND to the next level.

I never believed that we would see 19nm TLC NAND with 3,000 P/E cycles because that is what MLC is typically rated at, but given the maturity of Samsung's 19nm process, it is plausible. Unfortunately I do not know whether Samsung has done anything special to extend the endurance, but I would not be surprised if these were simply very highly binned dies. In the end, there are always better and worse dies on a wafer, and with most TLC dies ending up in applications like USB flash drives and SD cards, the best of the best can be saved for the 845DC EVO.

Ultimately the 845DC EVO is still aimed mostly towards read-intensive workloads because 0.35 drive writes per day is not enough for anything write heavy in the enterprise sector. Interestingly, despite the use of TLC NAND the endurance of the EVO is actually slightly higher than what Intel's SSD DC S3500 offers (150TB vs 140TB at 240GB). 
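As a side note, the TBW and DWPD figures in the spec table are just two views of the same rating. Assuming the endurance is spread evenly over the five-year warranty (the usual convention, though Samsung's exact math may differ slightly), the conversion looks like this:

```python
def dwpd(tbw_tb: float, capacity_gb: float, warranty_years: float = 5) -> float:
    """Drive writes per day implied by a total-bytes-written rating."""
    return tbw_tb * 1000 / (capacity_gb * warranty_years * 365)

print(round(dwpd(150, 240), 2))   # 845DC EVO 240GB: ~0.34 DWPD (spec says 0.35)
print(round(dwpd(7300, 400), 2))  # 845DC PRO 400GB: 10.0 DWPD
```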

Like most enterprise drives, the 845DC EVO features capacitors to protect the drive against power losses. For client drives it is enough to flush the NAND mapping table from DRAM to NAND frequently enough to prevent corruption, but in the enterprise there is no room for lost user data, so the capacitors hold enough charge to commit any in-flight data to NAND if power is suddenly cut.



Samsung SSD 845DC PRO

Capacity 400GB 800GB
Controller Samsung MDX
NAND Samsung 128Gbit 24-layer 40nm MLC V-NAND
Sequential Read 530MB/s 530MB/s
Sequential Write 460MB/s 460MB/s
4KB Random Read 92K IOPS 92K IOPS
4KB Random Write 50K IOPS 51K IOPS
Idle Power 1.0W 1.0W
Load Power (Read/Write) 1.7W / 3.1W 1.7W / 3.3W
Endurance (TBW) 7,300TB 14,600TB
Endurance (DWPD) 10 DWPD
Warranty Five years

Surprisingly the 845DC PRO goes for the older MDX controller that was used in the SSD 840 and 840 Pro. Architecturally the MDX and MEX are the same since both are based on a triple-core ARM Cortex-R4 design; the MEX just runs at a higher clock speed (400MHz vs 300MHz). I suspect the MEX controller does not offer a major benefit for MLC NAND based SSDs because there is much less NAND management to do, but with TLC the extra processing power is certainly useful given the amount of ECC and management TLC needs.

The 845DC PRO is only available in two capacities: 400GB and 800GB. I heard Samsung has plans to add a higher capacity version (1,600GB?) later on but for the time being the 845DC PRO is limited to just 800GB. I suspect that going above 1TiB of raw NAND requires a controller update, which would explain why higher capacities are not available yet. In the end, the 845DC PRO is using silicon that is now two years old, which adds some design limitations.

Similar to the 845DC EVO, the PRO has capacitors that offer data protection in case of a power loss.

The 845DC PRO uses Samsung's first generation V-NAND, which is a 24-layer design with a die capacity of 128Gbit. The part numbers of the first and second generation parts are almost identical and the only way to distinguish the two is to look at the third, fourth and fifth characters, which reveal the number of dies per package as well as the total capacity of the package. Our 400GB sample has four and our 800GB sample has eight 8-die packages on the PCB, so the raw NAND capacities work out to be 512GiB and 1024GiB respectively, with over-provisioning at 28%.
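The quoted over-provisioning follows directly from the raw versus user capacity. A quick sanity check in Python; note that I am treating the raw capacity as binary gibibytes and the advertised capacity as decimal gigabytes, and the exact convention Samsung uses may round things slightly differently:

```python
raw_bytes = 512 * 1024**3   # 512 GiB of raw NAND in the 400GB model
user_bytes = 400 * 1000**3  # 400 GB of user-addressable space

over_provisioning = 1 - user_bytes / raw_bytes
print(f"{over_provisioning:.1%}")  # ~27%, in line with the quoted 28%
```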

I am not going to cover V-NAND in detail here as I did that in the 850 Pro review and architecturally the first generation V-NAND is no different – it is just 24 layers instead of 32. The first generation is an older, more mature process and thus more suitable for enterprise SSDs. I measured the endurance of the first generation V-NAND to be 40,000 P/E cycles, whereas the second generation V-NAND in the 850 Pro is only rated at 6,000 P/E cycles. For the record, you would either need eMLC or SLC to get 40,000 P/E cycles with 2D NAND, but V-NAND does that while being normal MLC. The benefit over eMLC is performance as eMLC sacrifices program and erase latencies for higher endurance, and the eMLC manufacturing process is also more complicated than regular MLC (although I am pretty sure that V-NAND is still more complicated and hence more expensive).



Samsung PM853T

Capacity 240GB 480GB 960GB
Controller Samsung MEX
NAND Samsung 19nm 128Gbit TLC
Sequential Read Up to 530MB/s
Sequential Write Up to 410MB/s
4KB Random Read Up to 87K IOPS
4KB Random Write Up to 15K IOPS
Endurance (DWPD) 0.3 DWPD (4KB Random) / 1.6 DWPD (64KB Sequential)
Warranty Three years

The PM853T is the OEM version of the 845DC EVO and as you would expect, the two are very much alike. The difference between the 845DC EVO and the PM853T lies in the firmware: the PM853T is geared more towards sustained workloads, which results in slightly higher random write speed (15K IOPS vs 14K IOPS) for the highest capacity. Endurance is also a bit lower (0.3 DWPD vs 0.35 DWPD) and the warranty has dropped from five to three years, but otherwise the 845DC EVO and PM853T should be alike. Unfortunately I do not have the full data sheet, so all the specs are 'up to' figures, but I will update the table when I receive the full specs.



Performance Consistency - Average IOPS

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
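For reference, here is roughly how the per-second average IOPS we have always reported falls out of the raw data. The log format (a plain list of IO completion timestamps) is made up for illustration; our benchmark records this internally.

```python
from collections import Counter

def iops_per_second(completion_times_s):
    """Count how many IOs completed within each one-second interval."""
    buckets = Counter(int(t) for t in completion_times_s)
    return [buckets.get(sec, 0) for sec in range(int(max(completion_times_s)) + 1)]

# Example: timestamps (in seconds) of completed IOs
print(iops_per_second([0.1, 0.5, 0.9, 1.2, 1.3]))  # [3, 2]
```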

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first one covers the entire duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s), but on different scales: the second uses a logarithmic scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: Samsung 845DC PRO 400GB, average IOPS over the full test duration (log scale)]

The 845DC PRO is just amazing. While it only has 28% over-provisioning, the 845DC PRO is able to provide 50K IOPS, whereas Intel's SSD DC S3700, for instance, is only capable of 35K IOPS even though both have the same over-provisioning. Part of that comes from the controller and firmware design, but the lower program and erase latencies of V-NAND are clearly a major factor. It looks like Samsung's heavy investment in 3D NAND technology is really paying off.

The 845DC EVO does very well too despite the slower TLC NAND, as it is still able to achieve a steady-state performance of ~10K IOPS. The overhead of managing more NAND at higher capacities is evident, since the 240GB 845DC EVO performs better than the 960GB version, although I need to run a longer test for the full review to see whether the difference evens out.

The PM853T, on the other hand, is a weird case because it is considerably slower than the 845DC EVO. I asked Samsung about this and they told me that there are some differences in garbage collection and wear-leveling algorithms, which causes the performance to be bumpy at first but it should even out after about 3,000 seconds. I will confirm this in the full review.

[Graphs: Samsung 845DC PRO 400GB, average IOPS during steady-state (log and linear scales)]



Performance Consistency - Worst-Case IOPS

In the past we have only reported average IOPS in one second intervals, which gives us a good picture of overall performance. With modern SSDs, however, there are tens of thousands of IOs happening every second and the fundamental problem with averages is that they do not show how the performance varies and what the worst-case performance is.

Especially in enterprise environments, the worst-case performance can be more salient because the drives operate in an array, which scales the overall performance but also makes worst-case performance drops more significant. For example, if you have a 12-drive array of SSDs that each provide 50K IOPS on average, the theoretical maximum speed of that array would be 600K IOPS. However, if the worst-case performance of each drive is 5K IOPS, the worst-case performance of the array can suddenly drop to 60K IOPS, which can be significant enough to have an impact on user experience. 

We are now reporting the worst-case IOPS in addition to the average IOPS. The blue dots in the graphs stand for average IOPS just like before, but the red dots show the worst-case IOPS for every second. Since worst-case IOPS is not reported by any benchmark by default, we had to derive it from maximum response time, which is reported by all the major storage benchmarks. Technically reporting the average and maximum latency would be more appropriate because IOPS cannot be calculated based on a single IO (and that is what the maximum latency is), but for the sake of consistency and compatibility with our old graphs we will be using IOPS for the time being. The end result is the same because IOPS is just queue depth over latency, so the graph itself would not be any different aside from the fact that with latency lower scores would be better.
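To make the derivation concrete, here is a sketch of the conversion in Python. The queue depth of 32 matches our test; the function name is mine.

```python
QUEUE_DEPTH = 32

def worst_case_iops(max_response_time_s: float) -> float:
    """IOPS the drive would deliver if every IO in that second were as slow
    as the slowest one: queue depth divided by the maximum latency."""
    return QUEUE_DEPTH / max_response_time_s

print(worst_case_iops(0.0064))  # a 6.4 ms worst IO works out to 5,000 IOPS
```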

[Graph: Samsung 845DC PRO 400GB, average and worst-case IOPS over the full test duration]

It appears that both the 845DC PRO and EVO have a worst-case IOPS of around 5K, which translates to latency of 6.4ms. Compared to other enterprise SSDs we have tested, that is excellent since for Intel's S3500/S3700, the worst-case IOPS varies between 3K and 5K, and for Micron's M500DC it is about 2K.

[Graphs: Samsung 845DC PRO 400GB, average and worst-case IOPS during steady-state]



Performance Consistency - Standard Deviation

The second new data set is standard deviation. It gives us a more comprehensive idea of how the average IOPS really varies because, in the end, worst-case IOPS only shows the slowest IO of every second. That does not tell us how the rest of the IOs are doing, which is where standard deviation becomes useful.

Once again, the blue dots represent average IOPS, while the red dots are now standard deviation instead of worst-case IOPS. The graphs are dual-axis with IOPS on the left and standard deviation on the right with its own scale. For these graphs, the lower the standard deviation, the less variance there is in performance and the more consistent the performance is, which is better.
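For those wondering how a per-second standard deviation is computed in the first place, here is a simplified Python sketch that takes the individual IO latencies observed within each second and reports their spread. The exact quantity and units our benchmark reports may be defined slightly differently, so treat this as an illustration of the idea rather than a replica of our processing.

```python
import statistics

def per_second_stdev(latencies_by_second):
    """For each one-second interval, compute the standard deviation of the
    individual IO latencies observed in that interval."""
    return [statistics.pstdev(lat) for lat in latencies_by_second]

# Two seconds of made-up per-IO latencies in milliseconds: a steady second
# followed by one with a few slow outliers.
samples = [
    [1.0, 1.1, 0.9, 1.0, 1.0],
    [0.8, 0.9, 6.4, 1.0, 5.0],
]
print(per_second_stdev(samples))  # low value first, much higher value second
```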

[Graph: Samsung 845DC PRO 400GB, average IOPS and standard deviation over the full test duration]

The 845DC EVO and PRO are awesome again. The EVO sports a standard deviation of around one, which is slightly better than what the S3500 offers and significantly better than the M500DC's. Given that the EVO's IOPS is very comparable with the S3500's, the EVO presents some serious competition to Intel despite the fact that the EVO uses slower TLC NAND with the same 12.7% over-provisioning.

The PRO does even better and is without a doubt the most consistent enterprise SATA SSD we have tested. The S3700 comes close, but when you include the fact that the 845DC PRO provides around 15K more IOPS, there is no doubt that it is the better drive for write intensive workloads that require high consistency.

[Graphs: Samsung 845DC PRO 400GB, average IOPS and standard deviation during steady-state]



Initial Thoughts

Since we are dealing with two drives, it makes sense to split the conclusion into two and I will start with the 845DC PRO. While all we have today is a performance preview, the 845DC PRO is turning out to be one of the best enterprise SATA SSDs that we have tested. With only 28% over-provisioning, the PRO offers the most consistent 4KB random write performance that we have seen to date. When you add the fact that the PRO is also rated at ten drive writes per day, it is shaping up to be an excellent drive for write intensive workloads.

Price Comparison - MSRP
Capacity 400GB 800GB
Samsung 845DC PRO $960 ($2.40/GB) $1,830 ($2.29/GB)
Intel SSD DC S3700 $729 ($1.82/GB) $1,459 ($1.82/GB)

While the performance is great, pricing could be more competitive. Intel's DC S3700 is considerably cheaper at both capacities and offers the same 10 drive writes per day endurance. The 845DC PRO does provide higher 4KB random write performance (~50K IOPS vs ~35K IOPS) and is a bit more consistent, but ultimately the workload determines whether the extra performance is worth the extra cost. For workloads where absolute performance is more important than capacity, the 845DC PRO is a better pick as it provides slightly more IOPS per dollar, but the S3700 still offers lower $/GB if capacity is a concern. Of course, as enterprise SSDs are usually bought in bulk, the prices may vary depending on the volume and the MSRPs listed here may not be fully accurate. 

Price Comparison - MSRP
Capacity 240GB 480GB 800/960GB
Samsung 845DC EVO $250 ($1.04/GB) $490 ($1.02/GB) $969 ($1.01/GB)
Intel SSD DC S3500 $219 ($0.91/GB) $439 ($0.91/GB) $729 ($0.91/GB)

While the 845DC EVO is not crafted for write-intensive workloads, it still provides very consistent random write performance, although obviously at a lower level than the PRO. The EVO is very comparable to Intel's SSD DC S3500 as both have random write IOPS of around 15K and even their consistency is close to a match. Endurance-wise both are rated at about 0.35 drive writes per day despite the fact that Samsung is using TLC NAND instead of MLC, so it is clear that Samsung is going directly after Intel's S3500 with the EVO. It is too early to draw any final conclusions as the EVO is really designed for mixed and read-centric workloads, which are not included in our performance preview, but if the write performance consistency is any indication, the EVO will be a tough competitor for Intel's S3500.

Unfortunately I do not have an ETA for the full review yet. It will be a while, though, because testing an enterprise SSD takes a long time as the drive must be tested in steady-state to mimic a realistic scenario, and I need to test a bunch of older drives to have more data points. Moreover, there are some very interesting client drives coming in the next few weeks that will take priority, but the full review is coming along with our new enterprise SSD test suite. Today is a glimpse of some of the new things that we will be looking at, but the full suite will be way more extensive than what you have seen today. Stay tuned!
