Original Link: https://www.anandtech.com/show/14541/the-kingston-dc500-series-enterprise-sata-ssds-review
The Kingston DC500 Series Enterprise SATA SSDs Review: Making a Name In a Commodity Market
by Billy Tallis on June 25, 2019 8:00 AM EST
Kingston may not be a name that rolls off the tongue when you're talking about datacenter hardware vendors, but the company has built a major presence in datacenters through their DRAM modules. That's a lucrative and high-volume market on its own, so the company has unsurprisingly been attempting to parlay their success with datacenter DRAM into other datacenter products, but they've met with only limited success thus far. Their other product lines – in particular enterprise/datacenter SSDs – have been serviceable, but haven't been able to crack the market as a whole.
Still intent on carving out a larger portion of the datacenter SSD market, Kingston has decided to raise their profile by introducing SSDs built around the needs of their existing DRAM customers. That means the company's new DC500 family of SSDs is intended for second-tier cloud service providers and system integrators, rather than the top hyperscalers like Google, Microsoft, and Amazon. It also means the new drives are SATA SSDs, because this market segment – which relies more heavily on commodity components and platforms than on Open Compute Project-style deep customization – still has significant demand for SATA SSDs.
Using NVMe SSDs adds to platform costs in the form of expensive PCIe switches and backplanes, the drives themselves are each more expensive than a SATA drive of the same capacity, and power efficiency is often better for SATA than NVMe. PCIe SSDs make it possible to cram a lot of storage performance into a smaller number of drives and servers, but where the emphasis is more on capacity and cost effectiveness, SATA still has a place.
The SATA interface itself is stuck at 6Gbps, but the technology that goes into SATA SSDs continues to evolve with new generations of NAND flash memory and new SSD controllers. Kingston's new DC500 family of enterprise SATA SSDs is our first look at Phison's new S12 SSD controller (specifically, the S12DC variant), the replacement for the S10 that has been on the market for over five years. (The S11 is Phison's current DRAMless SATA controller.) While consumer SATA SSD controllers have mostly dropped down to just four NAND channels, the S12DC still has eight channels, more for the sake of supporting high capacities than for improving performance. The S12DC officially supports up to 8TB, but Kingston isn't pushing things that far yet. The controller is fabbed on a 28nm process and brings major improvements to error correction capabilities, including Phison's third-generation LDPC engine.
The DC500 family uses Intel's 64-layer TLC NAND flash memory, a break from Kingston's usual preference for Toshiba NAND. 96/92-layer TLC has started to show up in the client/consumer SSD market, but it's still a bit early to be seeing it in this part of the enterprise storage market.
The DC500 family includes two tiers: the DC500R for read-heavy workloads (endurance rating of 0.5 DWPD) and the DC500M for more mixed read/write workloads (endurance rating of 1.3 DWPD). Kingston says the Intel NAND they are using is rated for about 5000 program/erase cycles, so with the DC500R warrantied for a bit under 1000 total drive writes, they're clearly budgeting for quite a bit of write amplification.
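To make the endurance arithmetic concrete, here's a quick Python sketch (purely illustrative, using the figures quoted above) of how a DWPD rating translates into a TBW figure and an implied write amplification budget:

```python
def tbw(capacity_gb, dwpd, years=5):
    """Rated terabytes written over the warranty period."""
    return dwpd * capacity_gb * 365 * years / 1000

def wa_budget(pe_cycles, dwpd, years=5):
    """Write amplification headroom implied by the NAND's P/E rating."""
    total_drive_writes = dwpd * 365 * years       # ~912 drive writes at 0.5 DWPD
    return pe_cycles / total_drive_writes

print(tbw(3840, 0.5))                  # 3504.0 TB, matching the spec table below
print(round(wa_budget(5000, 0.5), 1))  # ~5.5x write amplification allowance
```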
NVMe SSDs have mostly killed off the market for very high endurance SATA drives, because applications that need to support several drive writes per day tend to need more performance than SATA can deliver (and as drive capacities increase, there's no longer enough time in a day to complete more than a few drive writes at ~0.5GB/s). Micron still offers a 5 DWPD SATA model (the 5200 MAX), but most other brands now top out around 3 DWPD for SATA drives. Those 3 DWPD and higher drives only account for about 20% of the market, so Kingston isn't missing out on too many sales by only going up to 1.3 DWPD with the DC500 family. Meanwhile, the introduction of QLC NAND has pushed the low end of this market down to around 0.1 DWPD, but Kingston doesn't have anything to offer at that level yet.
Kingston DC500 Series Specifications

| Capacity | 480 GB | 960 GB | 1920 GB | 3840 GB |
|---|---|---|---|---|
| Form Factor | 2.5" 7mm SATA (all capacities) | | | |
| Controller | Phison PS3112-S12DC (all capacities) | | | |
| NAND Flash | Intel 64-layer 3D TLC (all capacities) | | | |
| DRAM | Micron DDR4-2666 (all capacities) | | | |
| Sequential Read | 555 MB/s (all capacities) | | | |
| Sequential Write (DC500R) | 500 MB/s | 525 MB/s | 525 MB/s | 520 MB/s |
| Sequential Write (DC500M) | 520 MB/s | 520 MB/s | 520 MB/s | 520 MB/s |
| Random Read | 98k IOPS (all capacities) | | | |
| Random Write (DC500R) | 12k IOPS | 20k IOPS | 24k IOPS | 28k IOPS |
| Random Write (DC500M) | 58k IOPS | 70k IOPS | 75k IOPS | 75k IOPS |
| Power (Read) | 1.8 W (all capacities) | | | |
| Power (Write) | 4.86 W (all capacities) | | | |
| Power (Idle) | 1.56 W (all capacities) | | | |
| Warranty | 5 years (all capacities) | | | |
| Write Endurance (DC500R) | 438 TB (0.5 DWPD) | 876 TB (0.5 DWPD) | 1752 TB (0.5 DWPD) | 3504 TB (0.5 DWPD) |
| Write Endurance (DC500M) | 1139 TB (1.3 DWPD) | 2278 TB (1.3 DWPD) | 4555 TB (1.3 DWPD) | 9110 TB (1.3 DWPD) |
| Retail Price, CDW (DC500R) | $104.99 (22¢/GB) | $192.99 (20¢/GB) | $364.99 (19¢/GB) | $733.99 (19¢/GB) |
| Retail Price, CDW (DC500M) | $125.99 (26¢/GB) | $262.99 (27¢/GB) | $406.99 (21¢/GB) | $822.99 (21¢/GB) |
The DC500R and DC500M are available in the same set of usable capacities ranging from 480GB to 3840GB, but they differ in the amount of spare area included, which is what allows the -M to offer higher write endurance and higher sustained write performance. For sequential IO, the -R and -M versions are rated to deliver essentially the same performance, bottlenecked by the SATA link. The same is true for random reads, but steady-state random write performance is limited by the flash itself and varies with drive capacity and spare area. Every DC500M model has higher rated random write performance than any DC500R model.
Power consumption is rated at a modest 1.8 W for reads and a fairly typical 4.86 W for writes. Low-power idle states are usually not included on enterprise drives, so the DC500s are rated to idle at 1.56 W.
Left: DC500R 3.84 TB, Right: DC500M 3.84 TB
The DC500R and DC500M both use the same plain metal case, but the PCBs inside have some minor layout changes due to the differences in overprovisioning. Our 3.84TB samples feature raw capacities of 4096GB for the DC500R and 5120GB for the DC500M, so the -R versions have comparable overprovisioning to consumer SSDs while the -M versions have about three times as much spare area. The extra flash on the DC500M also requires it to have more DRAM: 6GB instead of the 4GB found on the DC500R 3.84TB.
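For reference, here's how those raw versus usable capacities translate into spare area, assuming (as is conventional) that the raw flash is measured in binary gibibytes while usable capacity is decimal gigabytes; this is an illustrative calculation, not Kingston's published math:

```python
def spare_area_ratio(raw_gib, usable_gb):
    """Spare area as a fraction of usable capacity."""
    raw_gb = raw_gib * 2**30 / 1e9        # binary GiB of flash -> decimal GB
    return (raw_gb - usable_gb) / usable_gb

r = spare_area_ratio(4096, 3840)          # DC500R: ~14.5% spare area
m = spare_area_ratio(5120, 3840)          # DC500M: ~43.2% spare area
print(round(m / r, 1))                    # ~3.0, i.e. about three times as much
```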
Physically, the memory is laid out differently between the two drives. The 3.84TB DC500R has a total of 16 packages with 256GB of NAND each, while the 3.84TB DC500M uses 10 packages of 512GB each rather than mixing packages of different capacities. In both cases this is Intel NAND packaged by Kingston. Since the -M has fewer NAND packages, it also gets away with fewer of the small TI multiplexer chips that sit next to the controller. The -M also has two fewer tantalum capacitors for power loss protection, despite having more total NAND and DRAM.
The Competition
There are plenty of competing enterprise SATA SSDs based on 64-layer 3D TLC, but many of them have been on the market for quite a while; Kingston's a bit late to market for this generation. Samsung's SATA SSDs launched last fall are the only current-generation drives we have to compare against the Kingston DC500s, and all of our older enterprise SATA SSDs are far too outdated to be relevant.
The Samsung 883 DCT falls somewhere in between the DC500R and DC500M, with a write endurance of 0.8 DWPD (compared to 0.5 and 1.3 for the Kingston drives). The Samsung 860 DCT is a bit of an oddball since it lacks one of the defining features of enterprise SSDs: power loss protection capacitors. It also has quite a low endurance rating of just 0.2 DWPD, which is almost in QLC territory. Despite these handicaps, it still uses Samsung's excellent controller and firmware, and is tuned to offer much better performance and QoS on server workloads than can be expected from the client and consumer SSDs it superficially resembles.
To give a sense of scale, we've also included results for Samsung's entry-level datacenter NVMe drive, the 983 DCT, specifically the 960GB M.2 model. Some relevant SATA competitors that we have not tested include the Intel D3-S4510 and Micron 5200 ECO, both using the same 64L TLC as the Kingston drives but with different controllers.
Test System
Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.
| Enterprise SSD Test System | |
|---|---|
| System Model | Intel Server R2208WFTZS |
| CPU | 2x Intel Xeon Gold 6154 (18C, 3.0 GHz) |
| Motherboard | Intel S2600WFT |
| Chipset | Intel C624 |
| Memory | 192GB total, Micron DDR4-2666 16GB modules |
| Software | Linux kernel 4.19.8, fio version 3.12 |

Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.
The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, and some Silverstone FQ141 case fans have been installed to help exhaust hot air from the top of the cabinet.
The test system is running a Linux kernel from the most recent long-term support branch. This brings in about a year's work on Meltdown/Spectre mitigations, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic benchmarks, with most of the IO workloads generated using FIO. Server workloads are too widely varied for it to be practical to implement a comprehensive suite of application-level benchmarks, so we instead try to analyze performance on a broad variety of IO patterns.
Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of a benchmark run doesn't have much effect on the score, provided the drive has been thoroughly preconditioned. Except where otherwise specified, for tests that include random writes the drives were prepared with at least two full drive writes of 4kB random writes. For all other tests, the drives were prepared with at least two full sequential write passes.
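As a rough illustration of what this preconditioning amounts to, the sketch below drives fio from Python; the device path and flag choices are placeholders rather than our actual test scripts:

```python
import subprocess

DEVICE = "/dev/sdX"  # hypothetical device node for the drive under test

def precondition(device, random_fill):
    """Fill the drive twice: 4kB random writes or 128kB sequential passes."""
    args = [
        "fio", "--name=precondition", f"--filename={device}",
        "--direct=1",        # bypass the page cache
        "--ioengine=libaio",
        "--loops=2",         # two full-drive write passes
    ]
    if random_fill:
        args += ["--rw=randwrite", "--bs=4k", "--iodepth=32"]
    else:
        args += ["--rw=write", "--bs=128k", "--iodepth=4"]
    subprocess.run(args, check=True)

precondition(DEVICE, random_fill=True)   # before random write tests
```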
Our drive power measurements are conducted with a Quarch HD Programmable Power Module. This device supplies power to drives and logs both current and voltage simultaneously. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high resolution view into drive power consumption. For most of our automated benchmarks, we are only interested in averages over time spans on the order of at least a minute, so we configure the power module to average together its measurements and only provide about eight samples per second, but internally it is still measuring at 4µs intervals so it doesn't miss out on short-term power spikes.
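The averaging itself is conceptually simple; here is a generic NumPy sketch (not the Quarch software's actual API) of collapsing 250kHz voltage/current samples into roughly eight power readings per second:

```python
import numpy as np

SAMPLE_RATE = 250_000           # one sample every 4 µs
CHUNK = SAMPLE_RATE // 8        # ~125 ms of raw samples per reported value

def average_power(volts, amps):
    """Reduce raw V/I samples to ~8 averaged power values (watts) per second."""
    power = volts * amps                    # instantaneous power
    n = len(power) // CHUNK * CHUNK         # drop any ragged tail
    return power[:n].reshape(-1, CHUNK).mean(axis=1)
```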
QD1 Random Read Performance
Drive throughput with a queue depth of one is usually not advertised, but almost every latency or consistency metric reported on a spec sheet is measured at QD1 and usually for 4kB transfers. When the drive only has one command to work on at a time, there's nothing to get in the way of it offering its best-case access latency. Performance at such light loads is absolutely not what most of these drives are made for, but they have to make it through the easy tests before we move on to the more realistic challenges.
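A QD1 random read measurement like this boils down to a single synchronous fio job, along these lines (an illustrative sketch; the JSON field names follow fio 3.x conventions and our actual automation differs in its details):

```python
import json, subprocess

def qd1_random_read(device):
    """Run a 60-second QD1 4kB random read test, return mean latency in µs."""
    result = subprocess.run(
        ["fio", "--name=qd1randread", f"--filename={device}",
         "--rw=randread", "--bs=4k", "--iodepth=1", "--direct=1",
         "--ioengine=psync", "--runtime=60", "--time_based",
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    read_stats = json.loads(result.stdout)["jobs"][0]["read"]
    return read_stats["lat_ns"]["mean"] / 1000
```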
[Charts: Power Efficiency in kIOPS/W | Average Power in W]
The Kingston DC500 SSDs offer similar QD1 random read throughput to Samsung's current SATA SSDs, but the Kingston drives require 35-45% more power. Samsung's most recent SATA controller platform has provided a remarkable improvement to power efficiency for both client and enterprise drives, while the new Phison S12DC controller leaves the Kingston drives with a much higher baseline for power consumption. However, the Samsung entry-level NVMe drive has even higher power draw, so despite its lower latency it is only as efficient as the Kingston drives at this light workload.
The Kingston DC500M has slightly better QoS than the DC500R for QD1 random reads, but both of the Samsung SATA drives are better still. The NVMe drive is better in all three latency metrics, but the 99th percentile latency has the most significant improvement over the SATA drives.
The Kingston drives offer 8-9k IOPS for QD1 random reads of 4kB or smaller blocks, but jumping up to 8kB blocks cuts IOPS in half, leaving bandwidth unchanged. After that, increasing block size does bring steady throughput improvements, but even at 1MB reading at QD1 isn't enough to saturate the SATA link.
QD1 Random Write Performance
The steady-state QD1 random write throughput of the DC500s is pretty good, especially for the DC500R, which is only rated for 28k IOPS regardless of queue depth. At higher queue depths the Samsung 883 DCT is supposed to reach the speeds the DC500s are providing here, but then the DC500M should also be much faster. The entry-level NVMe drive outpaces all the SATA drives, despite having only a quarter of the capacity.
[Charts: Power Efficiency in kIOPS/W | Average Power in W]
The good random write throughput of the Kingston DC500s comes with a proportional cost in power consumption, leaving them with efficiency comparable to the Samsung drives. The DC500M is fractionally slower than the DC500R but uses much less power, probably because the extra spare area gives the -M a much easier time with background garbage collection.
The latency statistics for the DC500R and DC500M only differ meaningfully in the 99.99th percentile score, where the -R is predictably worse off, though not much worse than the Samsung drives. Overall, the Kingston drives offer QoS competitive with the Samsung SATA drives on this test.
The Kingston DC500R has oddly poor random write performance with 1kB blocks, but otherwise both Kingston drives do quite well across the range of block sizes, with a clear IOPS advantage over the Samsung SATA drives at small block sizes and better throughput once the drives saturate somewhere in the 8-32kB range.
QD1 Sequential Read Performance
[Charts: Power Efficiency in MB/s/W | Average Power in W]
The Kingston DC500s clearly aren't saturating the SATA link when performing 128kB sequential reads at QD1, while the Samsung drives are fairly close. We've noticed in our consumer SSD reviews that Phison-based drives often require a moderately high queue depth (or block sizes above 128kB) in order to start delivering good sequential performance, and that seems to have carried over to the S12DC platform. This disappointing performance really hurts the power efficiency scores for this test, especially considering that the DC500s are drawing a bit more power than they're supposed to for reads.
Performing sequential reads with small block operations isn't particularly useful, but the Samsung drives are much better at it. They get close to saturating the SATA link with block sizes around 64kB, while the Kingston drives still haven't quite caught up by the time the block size reaches 1MB, showing again that they really need queue depths above 1 to deliver the expected sequential read performance.
QD1 Sequential Write Performance
[Charts: Power Efficiency in MB/s/W | Average Power in W]
QD1 sequential writes aren't a problem for the Kingston drives the way reads were: the DC500M is a hair faster than the Samsung SATA SSDs and the DC500R is less than 10% slower. However, Samsung again comes out way ahead on power efficiency, and the DC500R exceeds its specified power draw for writes by 30%.
The DC500M and the Samsung 883 DCT are fairly evenly matched for sequential write performance across the range of block sizes, except that the Kingston is clearly faster for 512-byte writes (which in practice are almost never sequential). The DC500R differs from the -M by hitting its throughput limit sooner than the rest of the drives, and at a slightly lower speed in this test than in the 128kB sequential write test above.
Peak Random Read Performance
For client/consumer SSDs we primarily focus on low queue depth performance for its relevance to interactive workloads. Server workloads are often intense enough to keep a pile of drives busy, so the maximum attainable throughput of enterprise SSDs is actually important. But it usually isn't a good idea to focus solely on throughput while ignoring latency, because somewhere down the line there's always an end user waiting for the server to respond.
In order to characterize the maximum throughput an SSD can reach, we need to test at a range of queue depths. Different drives will reach their full speed at different queue depths, and increasing the queue depth beyond that saturation point may be slightly detrimental to throughput, and will drastically and unnecessarily increase latency. Because of that, we are not going to compare drives at a single fixed queue depth. Instead, each drive was tested at a range of queue depths up to the excessively high QD 512. For each drive, the queue depth with the highest performance was identified. Rather than report that value, we're reporting the throughput, latency, and power efficiency for the lowest queue depth that provides at least 95% of the highest obtainable performance. This often yields much more reasonable latency numbers, and is representative of how a reasonable operating system's IO scheduler should behave. (Our tests have to be run with any such scheduler disabled, or we would not get the queue depths we ask for.)
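The selection rule itself is easy to express; here is a small illustrative sketch with made-up throughput numbers:

```python
def reported_queue_depth(results):
    """Lowest QD achieving at least 95% of the best throughput at any QD."""
    best = max(results.values())
    return min(qd for qd, tput in results.items() if tput >= 0.95 * best)

# Hypothetical drive that saturates around QD8: QD8 gets reported even
# though QD32 technically measured the highest throughput.
results = {1: 90, 2: 160, 4: 280, 8: 392, 16: 398, 32: 400, 64: 399}
print(reported_queue_depth(results))   # -> 8
```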
One extra complication is the choice of how to generate a specified queue depth in software. A single thread can issue multiple I/O requests using asynchronous APIs, but this runs into several problems: if each system call issues one read or write command, then context switch overhead becomes the bottleneck long before a high-end NVMe SSD's abilities are fully taxed. Alternatively, if many operations are batched together for each system call, then the real queue depth will vary significantly and it is harder to get an accurate picture of drive latency. Finally, the current Linux asynchronous IO APIs only work in a narrow range of scenarios. There is a new general-purpose async IO interface that will enable drastically lower overhead, but until it is adopted by applications other than our benchmarking tools, we're sticking with testing through the synchronous IO system calls that almost all Linux software uses. This means that we test at higher queue depths by using multiple threads, each issuing one read or write request at a time.
Using multiple threads to perform IO gets around the limits of single-core software overhead, and brings an extra advantage for NVMe SSDs: the use of multiple queues per drive. Enterprise NVMe drives typically support at least 32 separate IO queues, so we can have 32 threads on separate cores independently issuing IO without any need for synchronization or locking between threads.
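In fio terms, this amounts to scaling the thread count with the synchronous psync engine rather than raising iodepth; a simplified sketch (the device path is a placeholder):

```python
import subprocess

def run_at_queue_depth(device, qd):
    """Generate queue depth `qd` as `qd` threads each doing synchronous QD1 IO."""
    subprocess.run([
        "fio", "--name=randread", f"--filename={device}",
        "--rw=randread", "--bs=4k", "--direct=1",
        "--ioengine=psync",       # synchronous read()/write() system calls
        "--thread",               # threads rather than forked processes
        "--iodepth=1",            # one outstanding IO per thread
        f"--numjobs={qd}",        # total queue depth = thread count
        "--group_reporting",
        "--runtime=60", "--time_based"], check=True)
```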
[Charts: Power Efficiency in kIOPS/W | Average Power in W]
Now that we're looking at high queue depths, the SATA link becomes the bottleneck and performance equalizer. The Kingston DC500s and the Samsung SATA drives differ primarily in power efficiency, where Samsung again has a big advantage.
The Kingston DC500s have slightly worse QoS for random reads compared to the Samsung SATA drives. The Samsung entry-level NVMe drive has even higher tail latencies, but that's because it needs a queue depth four times higher than the SATA drives in order to reach its full speed, and that's getting close to hitting bottlenecks on the host CPU.
Peak Sequential Read Performance
Since this test consists of many threads each performing IO sequentially but without coordination between threads, there's more work for the SSD controller and less opportunity for prefetching than there would be with a single thread reading sequentially across the whole drive. As tested, the workload more closely resembles a file server streaming to several simultaneous users than it does the creation of a full-disk backup image.
[Charts: Power Efficiency in MB/s/W | Average Power in W]
For sequential reads, the story at high queue depths is the same as for random reads. The SATA link is the bottleneck, so the difference comes down to power efficiency. The Kingston drives both blow past their official rating of 1.8W for reads, and have substantially lower efficiency than the Samsung SATA drives. The SATA drives are all at or near full throughput with a queue depth of four, while the NVMe drive is shown at QD8.
Steady-State Random Write Performance
The hardest task for most enterprise SSDs is to cope with an unending stream of writes. Once all the spare area granted by the high overprovisioning ratios has been used up, the drive has to perform garbage collection while simultaneously continuing to service new write requests, and all while maintaining consistent performance. The next two tests show how the drives hold up after hours of non-stop writes to an already full drive.
The Kingston DC500s looked pretty good at random writes when we were only considering QD1 performance, and now that we're looking at higher queue depths they still exceed expectations and beat the Samsung drives. The DC500M's 81.2k IOPS is above its rated 75k IOPS, but not by as much as the DC500R's 58.8k IOPS beats the specification of 28k IOPS. When testing across a wide range of queue depths, the DC500R didn't always maintain this throughput, but it was always above spec.
[Charts: Power Efficiency in kIOPS/W | Average Power in W]
The Kingston DC500s are pretty power-hungry during the random write test, but they stay just under spec. The Samsung SATA SSDs draw much less power and match or exceed the efficiency of the Kingston drives even when performance is lower.
The DC500R's best performance across the various random write queue depths came when the queue depth was high enough to incur significant software overhead from juggling so many threads, so it posts pretty poor latency scores here. It managed about 17% lower throughput at a mere QD4 where QoS was much better, but this test is set up to report how the drive behaved at or near the highest observed throughput. It's a bit concerning that the DC500R's throughput is so variable, but since it's always faster than advertised, it's not a huge problem. The DC500M's great throughput was achieved even at fairly low queue depths, so its poor 99.99th percentile latency score is entirely the drive's fault rather than an artifact of the host system configuration. The Samsung 860 DCT has 99.99th percentile tail latency almost as bad as the DC500R's, but the 860 was only running at QD4 at the time, so that's another case where the drive is struggling, not the host system.
Steady-State Sequential Write Performance
Testing at higher queue depths didn't help the DC500R do any better on our sequential write test, but the other SATA drives do get a bit closer to the SATA limit. Since this test uses multiple threads each performing sequential writes at QD1, pushing the thread count too high hurts performance because the SSD has to juggle multiple write streams. As a result, these SATA drives peaked at just QD2 and weren't quite as close to the SATA limit as they could have been with a single stream running at moderate queue depths.
[Charts: Power Efficiency in MB/s/W | Average Power in W]
We already remarked on the Kingston DC500R's excessive power draw when this result showed up in the QD1 test on the previous page, and it's still the most power-hungry and least efficient result here. The DC500M draws a bit more power than at QD1 but stays within spec and more or less matches the efficiency of the NVMe drive; Samsung's SATA drives again turn in much better efficiency scores.
Mixed Random Performance
Real-world storage workloads usually aren't pure reads or writes but a mix of both. It is completely impractical to test and graph the full range of possible mixed I/O workloads: varying the proportion of reads vs writes, sequential vs random access, and block sizes leads to far too many configurations. Instead, we're going to focus on just a few scenarios that are most commonly referred to by vendors, when they provide a mixed I/O performance specification at all. We tested a range of 4kB random read/write mixes at queue depth 32, the maximum supported by SATA SSDs. This gives us a good picture of the maximum throughput these drives can sustain for mixed random I/O, but in many cases the queue depth will be far higher than necessary, so we can't draw meaningful conclusions about latency from this test. As with our tests of pure random reads or writes, we are using 32 threads, each issuing one read or write request at a time. This spreads the work over many CPU cores, and for NVMe drives it also spreads the I/O across the drive's several queues.
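Sweeping through the read/write mixes is then just a loop over fio's rwmixread parameter; a simplified sketch (placeholder device path and runtime, not our actual scripts):

```python
import subprocess

# 4kB random IO at an effective QD32: 32 synchronous threads at QD1 each,
# stepping the read percentage from 100% down to 0% in 10% increments.
for read_pct in range(100, -1, -10):
    subprocess.run([
        "fio", f"--name=mix{read_pct}", "--filename=/dev/sdX",
        "--rw=randrw", f"--rwmixread={read_pct}", "--bs=4k",
        "--ioengine=psync", "--thread", "--numjobs=32", "--iodepth=1",
        "--direct=1", "--runtime=180", "--time_based",
        "--group_reporting"], check=True)
```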
The full range of read/write mixes is graphed below, but we'll primarily focus on the 70% read, 30% write case that is a fairly common stand-in for moderately read-heavy mixed workloads.
[Charts: Power Efficiency in MB/s/W | Average Power in W]
The Kingston and Samsung SATA drives are rather evenly matched for performance with a 70% read/30% write random IO workload: the DC500M is tied with the 883 DCT, and the DC500R is a bit slower than the 860 DCT. The two Kingston drives use the same amount of power, so the slower DC500R has a much worse efficiency score. The two Samsung drives have roughly the same great efficiency score; the slower 860 DCT also uses less power.
The Kingston DC500M and Samsung 883 DCT perform similarly for read-heavy mixes, but once the workload is more than about 30% writes, the Samsung falls behind. Their power consumption is very different: the Samsung plateaus at just over 3W while the DC500M starts at 3W and steadily climbs to over 5W as more writes are added to the mix.
Between the DC500R and the 860 DCT, the Samsung drive has better performance until the workload has shifted to be much more write-heavy than either drive is intended for. The Samsung drive's power consumption also never gets as high as the lowest power draw recorded from the DC500R during this test.
Aerospike Certification Tool
Aerospike is a high-performance NoSQL database designed for use with solid state storage. The developers of Aerospike provide the Aerospike Certification Tool (ACT), a benchmark that emulates the typical storage workload generated by the Aerospike database. This workload consists of a mix of large-block 128kB reads and writes, and small 1.5kB reads. When the ACT was initially released back in the early days of SATA SSDs, the baseline workload was defined to consist of 2000 reads per second and 1000 writes per second. A drive is considered to pass the test if it meets the following latency criteria:
- fewer than 5% of transactions exceed 1ms
- fewer than 1% of transactions exceed 8ms
- fewer than 0.1% of transactions exceed 64ms
Drives can be scored based on the highest throughput they can sustain while satisfying the latency QoS requirements. Scores are normalized relative to the baseline 1x workload, so a score of 50 indicates 100,000 reads per second and 50,000 writes per second. Since this test uses fixed IO rates, the queue depths experienced by each drive will depend on their latency, and can fluctuate during the test run if the drive slows down temporarily for a garbage collection cycle. The test will give up early if it detects the queue depths growing excessively, or if the large block IO threads can't keep up with the random reads.
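The pass/fail criteria and score normalization are simple to state in code; here is an illustrative sketch of the rules above (not part of ACT itself):

```python
def act_passes(pct_over):
    """pct_over maps a latency threshold (ms) to the % of transactions exceeding it."""
    return pct_over[1] < 5.0 and pct_over[8] < 1.0 and pct_over[64] < 0.1

def act_score(reads_per_sec):
    """Normalize to the 1x baseline of 2000 reads/s (plus 1000 writes/s)."""
    return reads_per_sec / 2000

# A drive sustaining 100,000 reads/s and 50,000 writes/s within the
# latency limits scores 50x, as in the example above.
assert act_score(100_000) == 50
```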
We used the default settings for queue and thread counts and did not manually constrain the benchmark to a single NUMA node, so this test produced a total of 64 threads scheduled across all 72 virtual (36 physical) cores.
The usual runtime for ACT is 24 hours, which makes determining a drive's throughput limit a long process. For fast NVMe SSDs, this is far longer than necessary for drives to reach steady-state. In order to find the maximum rate at which a drive can pass the test, we start at an unsustainably high rate (at least 150x) and incrementally reduce the rate until the test can run for a full hour, then decrease the rate further if necessary to get the drive under the latency limits.
The Kingston drives don't handle the Aerospike test well at all. The DC500R can only pass the test at twice the throughput of a baseline standard that was set years ago, and the DC500M's score of 4x the base load is still much worse than even the Samsung 860 DCT. The Kingston drives can provide throughput comparable to the Samsung SATA drives (as seen in the 70/30 test above), but they don't pass the strict latency QoS requirements imposed by the Aerospike test. This test is more write-intensive than the 70/30 test and is definitely beyond what the DC500R is intended for, but the DC500M should be able to do better.
[Charts: Power Efficiency | Average Power in W]
The Kingston drives don't draw any more power than the Samsung drives for once, but since they are running at much lower throughput their efficiency scores are a fraction of what the Samsung drives earn.
Conclusion
The SATA SSD market is fairly mature, and in many respects our performance testing boils down to confirming that the bottleneck for a particular workload is the SATA link itself rather than the drive. For the most part, the Kingston DC500 SSDs check the necessary boxes, saturating the SATA interface for reads (random or sequential) and coming pretty close for sequential writes. Differentiating the DC500R and DC500M from other enterprise SATA SSDs requires digging a bit deeper.
The Kingston drives inherit a recurring problem we've noticed with Phison-based drives – sub-par sequential IO performance at QD1 – but they are competitive once queue depths climb a bit. Our tests of pure random writes turned out much better than expected for both Kingston drives, though the degree to which the DC500R beat its specifications was variable. Unfortunately, this good performance didn't carry over to the mixed read/write tests, where the Kingston drives merely matched the Samsung competition at best. The Aerospike Certification Tool showed that the Kingston drives both had much worse latency QoS than the Samsung SATA drives for a mixed workload that's a bit more write-heavy than the target market for these drives.
But these performance disparities are all dwarfed by the power efficiency gap between the Kingston DC500s and Samsung's SATA SSDs. Samsung's drives consistently use less power than the Kingston drives, and usually deliver equal or better performance. On some tests one or both of the Kingston drives match the efficiency of the Samsung entry-level NVMe SSD, but Samsung's SATA platform simply has unbeatable efficiency. It would take a fairly large pile of drives for this efficiency gap to seriously affect TCO including power and cooling costs, but it's something to watch out for, especially since the DC500 sometimes substantially exceeds its rated power draw when hit with a lot of writes.
The Phison S12DC controller used by the Kingston DC500s is a significant advance over Phison's older SATA controllers, and the move to 28nm undoubtedly helped keep power draw in check while adding the more robust error correction that modern SSDs need. But after hitting the SATA performance wall, Samsung pivoted to reducing power for their SATA drives, and at the moment they're well ahead of everyone else on that score.
It's often hard to do any meaningful price comparison of enterprise SSDs, because many models are only or primarily sold in volume directly from the manufacturer to major customers. Those drives often show up at grey market resellers with pricing that is not at all indicative of the volume prices; spotty availability and the occasional surplus of a specific SKU cause major price distortions. This doesn't apply to the Kingston DC500 series, since they actually are intended for retail sales in individual quantities. The Samsung SATA drives in this review are also sold through distributors, so we can make a reasonable comparison. In the table below, we're using current prices from CDW, and we've included a few other relevant competing models with similar endurance ratings. All of these drives are likely to have volume pricing that's lower than these prices, but this should be an accurate picture of their relative positioning.
Enterprise/Datacenter SATA SSD Price Comparison
(Unit quantity from CDW, June 24, 2019)

| Drive | Endurance (DWPD) | 480 GB | 960 GB | 1920 GB | 3840 GB |
|---|---|---|---|---|---|
| Kingston DC500R | 0.5 | $104.99 (22¢/GB) | $192.99 (20¢/GB) | $364.99 (19¢/GB) | $733.99 (19¢/GB) |
| Kingston DC500M | 1.3 | $125.99 (26¢/GB) | $262.99 (27¢/GB) | $406.99 (21¢/GB) | $822.99 (21¢/GB) |
| Samsung 860 DCT | 0.2 | | $174.99 (18¢/GB) | $349.99 (18¢/GB) | $699.99 (18¢/GB) |
| Samsung 883 DCT | 0.8 | $119.99 (25¢/GB) | $219.99 (23¢/GB) | $419.99 (22¢/GB) | $799.99 (21¢/GB) |
| Intel D3-S4510 | 1.4–2.1 | $131.99 (27¢/GB) | $215.99 (22¢/GB) | $427.99 (22¢/GB) | $824.99 (21¢/GB) |
| Micron 5200 PRO | 1.3–2.5 | | $196.99 (21¢/GB) | $373.99 (19¢/GB) | $1221.99 (32¢/GB) |
| Micron 5200 ECO | 0.6–1.1 | $101.99 (21¢/GB) | $187.99 (20¢/GB) | $355.99 (19¢/GB) | $651.99 (17¢/GB) |
| Seagate Nytro 1351 | 1.0 | $118.99 (25¢/GB) | $211.99 (22¢/GB) | $394.99 (21¢/GB) | $743.99 (19¢/GB) |
Kingston's pricing for the DC500 SSDs is generally in line with the rest of the market. At the higher capacities, the premium for the DC500M over the DC500R is much smaller than we would expect given how much extra flash and DRAM the -M version includes for the same usable capacity. Strictly comparing the two DC500s, the -M seems to be offering a pretty good deal for the improved write performance and endurance. However, the Samsung 883 DCT offers similar throughput and better QoS for slightly lower prices than the DC500M, with the caveat that the Samsung drive has 40% lower rated write endurance. And it's important to keep in mind that the more write-heavy workloads where the DC500M stands out from the DC500R and the Samsung drives are also the workloads least likely to be run on a SATA SSD of any brand—that's NVMe territory now.
While we have not personally evaluated its performance, the Micron 5200 ECO looks pretty well positioned to compete against the DC500R: it's rated for similar write performance, but aside from the 7.68TB model it has twice the write endurance, and it's cheaper across the board.
What's Next?
Kingston has shared some of their plans for enterprise and datacenter SSDs going forward. The DC500R and DC500M are just two of several product lines they are launching this year. They haven't released specifics about what other market segments they'll be going after in the near future, but at the very least we expect an NVMe drive, probably an entry-level M.2 model based around the Phison E12DC or one of their later datacenter controllers. Since datacenter drives take much longer to qualify than consumer products, we may not see a PCIe 4.0 datacenter drive before the end of the year, even though consumer drives using the E16 controller will be hitting the shelves very soon.
Meanwhile, for the successor to the DC500 family, Kingston is planning a move to Intel's 96L TLC, but that transition is probably at least a year away. To support this broadening of their enterprise/datacenter offerings, Kingston has significantly increased the staff dedicated to these product lines, so we can expect a more regular cadence of updates, though still at enterprise product cycle pacing, which is not as fast-moving as the consumer SSD market has been lately.