Original Link: https://www.anandtech.com/show/5734/kingston-hyperx-3k-240gb-ssd-review



With OCZ intent on moving as much volume as possible to its in-house Indilinx controllers, SandForce (now LSI) needed to expand beyond its biggest customer. OCZ exerts strong control over the channel, so SandForce turned to multiple partners to diversify its portfolio. One key win was Kingston. We saw the launch of the first Kingston SandForce (SF-2281) based drive last year under the HyperX brand. Today Kingston is announcing a lower cost version of that drive: the HyperX 3K.


Kingston's HyperX 3K (top) vs. Kingston's HyperX (bottom)

The 3K in this case refers to the number of program/erase cycles the NAND inside the SSD is rated for. As we've discussed on numerous occasions, NAND endurance is a finite thing. The process of programming a NAND cell is physically destructive to the cell itself and over time you'll end up with NAND that can no longer hold a charge (or your data).

Intel's 50nm MLC NAND was rated for 10,000 program/erase cycles. Smaller transistor geometries, although tempting from a cost/capacity standpoint, do come with a reduction in program endurance. At 34nm Intel saw its p/e count drop to 5,000 cycles, and at 25nm we saw a range from 3,000 - 5,000. Modern day SSD controllers include wear leveling logic to ensure that all cells are written to evenly, so even at the lower end of the Intel 25nm range there's more than enough lifespan for a typical client workload. Let's do some math on a hypothetical 100GB drive with four different types of NAND (3K, 5K, 10K and 30K P/E cycles):

SSD Endurance
                          3K P/E Cycles   5K P/E Cycles   10K P/E Cycles   30K P/E Cycles
NAND Capacity             100GB           100GB           100GB            100GB
Writes per Day            10GB            10GB            10GB             10GB
Write Amplification       10x             10x             10x              10x
P/E Cycles per Day        1               1               1                1
Total Estimated Lifespan  8.219 years     13.698 years    27.397 years     82.191 years

Assuming you write 10GB to your drive every day (on the high end for most client workloads), and your workload is such that the controller sees an effective write amplification of 10x (due to wear leveling and garbage collection, the controller has to write 10x the amount of data to NAND that you write to the host), you'll burn through one p/e cycle per day. For 25nm 3K p/e cycle NAND that works out to 8.219 years, at which point your data should remain intact (but presumably read-only) for another 12 months. Heavier workloads come with higher write amplification factors, but for client use this math works out quite well.
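The arithmetic above can be sketched in a few lines. This is just a restatement of the table's math; the function name and structure are mine, not anything Kingston or Intel publishes:

```python
# Hypothetical endurance math: lifespan in years for a drive given NAND
# capacity, daily host writes, effective write amplification, and the NAND's
# rated P/E cycle count. Inputs match the 100GB example in the table above.

def ssd_lifespan_years(capacity_gb, host_writes_gb_per_day,
                       write_amplification, pe_cycles):
    # NAND actually written per day = host writes x write amplification
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    # Wear leveling spreads writes evenly, so writing the drive's full
    # capacity once consumes one P/E cycle across all cells
    cycles_per_day = nand_writes_per_day / capacity_gb
    return pe_cycles / cycles_per_day / 365

for pe in (3000, 5000, 10000, 30000):
    print(f"{pe} P/E cycles: {ssd_lifespan_years(100, 10, 10, pe):.3f} years")
```

With 10GB of host writes and 10x write amplification, the 100GB drive sees exactly one P/E cycle per day, so the lifespan is simply the rated cycle count divided by 365.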

NAND is binned similar to CPUs, except instead of binning for clock speed and power NAND is binned for endurance. Intel offers both 3K and 5K rated 25nm NAND (among others, including ~30K p/e cycle eMLC NAND). The standard HyperX drive ships with 5K 25nm Intel MLC NAND, while the HyperX 3K ships with 3K 25nm Intel MLC NAND.


Kingston's HyperX 3K 240GB

There's no obvious difference in Intel's part numbers, so I'm not sure if there's a good way to tell whether you're looking at 3K or 5K rated parts from Intel.

The loss of endurance shouldn't matter for most client workloads as I mentioned above, but if you're deploying these drives in a write heavy enterprise environment I'd look elsewhere. Otherwise, the cost savings are worth it:

SSD Pricing Comparison
                     90GB      120GB/128GB   240GB/256GB   480GB/512GB
Crucial m4           -         $154.99       $299.99       $549.99
Intel SSD 520        -         $184.99       $344.99       $799.99
Kingston HyperX      -         $189.99       $329.99       $874.99
Kingston HyperX 3K   $139.99   $169.99       $319.99       $699.99
Samsung SSD 830      -         $174.99       $299.99       $779.99
OCZ Octane           -         $199.99       $339.99       $849.99
OCZ Vertex 3         -         $199.99       $339.99       $1199.99
OCZ Vertex 4         -         $179.99       $349.99       $699.99

HyperX 3K drives are already available via Newegg. The biggest cost savings appear to be at the 480GB capacity, although you'll save a good $10 - $20 for most capacities.
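For reference, cost per gigabyte follows directly from the street prices in the table (computed against advertised capacities, not formatted capacities):

```python
# HyperX 3K street prices from the pricing table above, keyed by advertised
# capacity in GB.
prices = {90: 139.99, 120: 169.99, 240: 319.99, 480: 699.99}

for capacity_gb, price in prices.items():
    print(f"{capacity_gb}GB: ${price / capacity_gb:.2f}/GB")
```

At these prices the 240GB model carries the lowest cost per gigabyte, at roughly $1.33/GB.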

Kingston also offers an optional upgrade bundle kit including a handy screwdriver, SATA cable, USB cable, Acronis cloning software, SATA to USB adapter and a 3.5" sled for mounting in a desktop case. It's a nice bundle although I really wish we'd see better 3.5" adapters in these kits. The bundle will set you back another $10 over the prices in the table above.

Like many SandForce partners, Kingston offers a toolbox to view SMART data; however, you cannot secure erase the drive from the utility. A separate utility handles firmware updates.

Performance & Final Words

With Kingston's latest, performance is, thankfully, unharmed. The HyperX 3K performs just as well as the 5K-rated HyperX drive:

Heavy Workload 2011 - Average Data Rate

Light Workload 2011 - Average Data Rate

Kingston's SF-2281 implementation has always been extremely fast and the HyperX 3K is no exception. You are looking at one of the fastest drives we've ever tested. I've included our usual benchmarks in the subsequent pages but you can also use Bench to compare drives directly.

The risk, as always, with SandForce based drives is that you'll run into a combination of drive + system that triggers either an incompatibility or the infamous BSOD issue. I haven't seen the bug crop up on the HyperX 3K but I haven't spent a lot of time looking for it either.

All I can offer is the same caution we tack onto most of our SSD reviews. If you know someone with a system similar (identical?) to yours who isn't running into problems then you'll probably be fine.

If you've had a good experience with the HyperX in a system, then the HyperX 3K is a no-brainer. You get similar, already great, performance at a lower cost.



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
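The compressible vs. incompressible distinction can be illustrated in a few lines. Note this is only an analogy: zlib stands in for SandForce's proprietary DuraWrite compression, and the exact ratios won't match the controller's behavior:

```python
# Why data compressibility matters for SandForce: the controller compresses
# data before committing it to NAND, so repetitive buffers are cheap to store
# while fully random buffers are not. zlib is used here purely as a stand-in
# for the controller's (proprietary) compression engine.
import os
import zlib

BLOCK = 4096  # 4KB, matching the random write test's transfer size

pseudo_random = (b"anandtech" * 500)[:BLOCK]  # repetitive, compresses well
fully_random = os.urandom(BLOCK)              # incompressible

for name, data in [("pseudo-random", pseudo_random),
                   ("fully random", fully_random)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```

The repetitive buffer shrinks to a tiny fraction of its original size, while the random buffer doesn't shrink at all; this is why SandForce drives post best-case numbers with compressible data and worst-case numbers with incompressible data.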

Desktop Iometer - 4KB Random Read (4K Aligned)

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Desktop Iometer - 128KB Sequential Write (4K Aligned)

AS-SSD Incompressible Sequential Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD



AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean simply writing 4GB is acceptable either.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size   % of Total
4KB       28%
16KB      10%
32KB      10%
64KB      4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
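The metrics described above can be sketched from first principles. This is my reconstruction of how such numbers fall out of a trace, not the actual AnandTech harness; the function and field names are hypothetical:

```python
# Given a list of completed IOs with size and service time, average MB/s is
# total bytes divided by total busy time, and disk busy time simply sums the
# service times, excluding any idle gaps between IOs.

def summarize(ios):
    """ios: list of (bytes_transferred, service_time_seconds, is_read)."""
    busy = sum(t for _, t, _ in ios)
    total = sum(b for b, _, _ in ios)
    reads = sum(b for b, _, r in ios if r)
    return {
        "avg_mbps": total / busy / 1e6,      # MB/s while the drive is busy
        "busy_seconds": busy,                # time shaved off by a faster drive
        "read_fraction": reads / total,      # for the read/write breakout
    }

# Tiny made-up trace: two 1MB reads at 10ms each, one 1MB write at 20ms
trace = [(1_000_000, 0.010, True), (1_000_000, 0.010, True),
         (1_000_000, 0.020, False)]
print(summarize(trace))
```

A faster drive finishes the same fixed amount of work in less busy time, which is why the disk busy charts and the average data rate charts are two views of the same data.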

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our heavy workload test:

Heavy Workload 2011 - Average Data Rate

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)



AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size   % of Total
4KB       27%
16KB      8%
32KB      6%
64KB      5%

Light Workload 2011 - Average Data Rate

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)

PCMark 7

PCMark 7 Secondary Storage Score



TRIM Performance

In practice, SandForce based drives running a mainstream client workload do very well and typically boast low average write amplification. However if subjected to a workload composed entirely of incompressible writes (e.g. tons of compressed images, videos and music) you can back the controller into a corner.

To simulate this I filled the drive with incompressible data, ran a 4KB (100% LBA space, QD32) random write test with incompressible data for an hour and a half, and then ran AS-SSD (another incompressible data test) to see how low performance could get:

Kingston HyperX 3K - Resiliency - AS SSD Sequential Write Speed - 6Gbps
                           Clean        After 1.5hr Torture   After TRIM
Kingston HyperX 3K 240GB   312.4 MB/s   103.9 MB/s            245.8 MB/s

I usually run this test for only 20 minutes, but after seeing unusually resilient performance from the 240GB drive I decided to extend the test period to a full 90 minutes. Performance does drop pretty far at that point, down to 103.9MB/s. TRIMing the drive restores some performance, but not all of it. If your workload involves a lot of incompressible data (e.g. JPGs, H.264 videos, software encrypted data, highly random datasets, etc.), then SandForce just isn't for you.

Power Consumption

SandForce boasts fairly low power consumption, particularly at idle. Even with incompressible data the HyperX 3K's power draw is competitive:

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write

 
