


Kingston's SSD lineup is split in two: the HyperX brand covers enthusiast drives, while the mainstream market is served by the SSDNow brand. The HyperX SSDs have been fairly popular from what I've seen, and we have reviewed both the regular HyperX and its 3K variant.

However, the SSDNow lineup has always been a big mess in my opinion. There's the V-Series, V+ Series, V100, V+100, V200 and V+200 -- but there is very little consistency between the products. The V100 and V200 are both JMicron based, but the V+100 is Toshiba and the V+200 is SandForce based, and the new V300 is SandForce based as well. Add in the fact that the HyperX SSDs are SandForce too and the lineup couldn't really get any more confusing.

Kingston's Current SSD Lineup

                 SSDNow V+200                 SSDNow V300                       HyperX 3K
Controller       SandForce SF-2281            SandForce SF-2281                 SandForce SF-2281
NAND             25nm Intel asynchronous MLC  19nm Toshiba (?) Toggle-Mode MLC  25nm Intel synchronous MLC
Capacities (GB)  60, 90, 120, 240, 480        60, 120, 240                      90, 120, 240, 480
Warranty         Three years                  Three years                       Three years

Currently Kingston offers three SSDs with very little to differentiate them. I certainly hope the V300 will simplify things and that there won't be a V+300. The V+200 is still available (the others are EOL) but I'm guessing it will be discontinued once Kingston has cleared its stock. I don't have a problem with the SSDNow brand as a whole, but I strongly dislike Kingston's naming system because the plus signs just don't mean anything. It would be fine if the plus sign stood for SandForce and the Vxxx was, for example, a budget Marvell based drive, but currently the plus sign has no definite meaning. Considering Kingston has the HyperX lineup as well, I think SSDNow should consist of only one model at a time to keep things neat and avoid product overlap; then the V+xxx wouldn't have to exist at all.

Product branding criticism aside, let's look at the actual V300:

Kingston SSDNow V300 Specifications

Capacity            60GB          120GB         240GB
Controller          SandForce SF-2281
NAND                19nm Toshiba (?) MLC NAND
Sequential Read     450MB/s       450MB/s       450MB/s
Sequential Write    450MB/s       450MB/s       450MB/s
4KB Random Read     85K IOPS      85K IOPS      85K IOPS
4KB Random Write    60K IOPS      55K IOPS      43K IOPS
Power Consumption   0.640W (idle) - 2.052W (load)
Warranty            Three years

There's a question mark after the NAND because Kingston would only tell us that it's 19nm MLC. However, the Toshiba-SanDisk joint venture is the only NAND manufacturer using a 19nm process (IMFT is at 20nm, Samsung at 21nm and Hynix still at 26nm as far as I know), so there really aren't any other options. Like many other OEMs, Kingston buys NAND in wafers and then does the validation and packaging on its own. The part numbers are also in-house and obviously there are no public datasheets, hence the info on the NAND is very limited.

Kingston-branded NAND in the 120GB V300

Buying NAND in wafer form seems to have become a trend among SSD OEMs lately. Go back a year and everyone was using pre-packaged NAND, but now at least OCZ, ADATA, Kingston and Transcend are buying NAND in wafers. I believe there's currently so much price competition in the consumer SSD industry (especially between SandForce OEMs) that costs need to be cut wherever possible. Buying NAND in wafers is cheaper because there are no binning or packaging costs involved, and you also get a ton of lower quality NAND. Only a rather small percentage of a NAND wafer is actually suitable for SSDs; the lower quality dies usually end up in devices where endurance isn't as critical (USB flash sticks, lower-end smartphones/tablets).

You might have noticed that most of the OEMs buying in wafers also make other NAND-based products, which gives those lower quality dies somewhere to go. On the flip side, you also get the highest quality NAND dies, which can be used in enterprise SSDs -- in the retail NAND market you would pay a hefty premium for those. My only concern is that SSD OEMs won't give us enough details about their NAND and its validation, which could let lower quality NAND slip into drives precisely because the specifications are not public (Intel, for example, has always been very open about the endurance of its NAND, while Kingston wouldn't even tell us the original manufacturer). The price competition is harsh and using lower quality NAND can be tempting, but I hope this is just idle pondering and we won't see it happen.

Kingston touted that they worked very closely with SandForce/LSI to customize the SF-2281 platform for the V300. To be clear, even though the chip has Kingston's logo on it, it's the same SF-2281 that's available to everyone else. SandForce allows the client to customize the firmware to a certain degree, but I don't know the exact level of customization that can be done (there is no direct access to the source code, though). I suspect the bigger the client, the more customization SandForce is willing to offer, because a bigger client can also put more resources into the work. I spoke with one of Kingston's SSD engineers and he said Kingston's firmware is not stock SandForce (as, for example, Corsair's is) but a custom one where they have tried to pick the best features from every firmware. What that really means in practice, I don't know, but let's see how it performs.

 

Test System

CPU                 Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard         ASRock Z68 Pro3
Chipset             Intel Z68
Chipset Drivers     Intel 9.1.1.1015 + Intel RST 10.2
Memory              G.Skill RipjawsX DDR3-1600 2 x 4GB (9-9-9-24)
Video Card          XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective)
Video Drivers       AMD Catalyst 10.1
Desktop Resolution  1920 x 1080
OS                  Windows 7 x64

 

 



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a typical desktop user would generate). I perform three concurrent IOs and run the test for 3 minutes. The results reported are the average MB/s over the entire run. We use both standard pseudo-randomly generated data and fully random data for each write to show you the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values you see in the graphs for each drive. For an understanding of why this matters, read our original SandForce article.
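Iometer is the tool we actually use, but for readers who want to see the shape of this workload in code, here is a minimal Python sketch of the same access pattern. Everything about it is illustrative: the file name is hypothetical, the writes go through the OS page cache (a real test would use direct IO), os.pwrite is POSIX-only, and it issues one IO at a time rather than three. Treat it as a picture of the workload, not a substitute for our numbers. The same harness also covers the sequential tests further down by swapping the random offset for a linearly advancing one.

```python
import os
import random
import time

def run_write_test(path, block, duration, span, random_offsets, data=None):
    """Write block-sized IOs over a span-byte region for duration seconds
    and return average throughput in MB/s (one outstanding IO)."""
    buf = data if data is not None else os.urandom(block)  # urandom = incompressible
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    written, offset, start = 0, 0, time.time()
    try:
        while time.time() - start < duration:
            if random_offsets:
                offset = random.randrange(span // block) * block  # 4K-aligned random
            else:
                offset = (offset + block) % span                  # sequential walk
            os.pwrite(fd, buf, offset)
            written += block
        os.fsync(fd)  # flush the page cache before reporting
    finally:
        os.close(fd)
    return written / (time.time() - start) / 1e6

# 4KB writes at random offsets over an 8GB span for 3 minutes:
print(run_write_test("testfile.bin", 4096, 180, 8 * 1024**3, True))
```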

 

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance is similar to other SF-2281 SSDs; only Intel has a small advantage here.

 

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Random write speed is also typical SandForce. The 120GB model does take a pretty big hit when using incompressible data because there's less parallelism due to fewer NAND die. 

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
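Reusing the hypothetical run_write_test sketch from the random IO section, a rough approximation of this test is a one-liner, with the caveat that the real test spans the entire drive rather than a fixed 8GB region:

```python
# 128KB sequential writes at queue depth 1 for 1 minute.
# Assumes run_write_test from the earlier sketch; span should really
# be the full capacity of the drive under test.
print(run_write_test("testfile.bin", 128 * 1024, 60, 8 * 1024**3, False))
```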

 

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Desktop Iometer - 128KB Sequential Write (4K Aligned)

No surprises in the sequential Iometer tests either. 

AS-SSD Incompressible Sequential Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
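The compressible/incompressible distinction is easy to demonstrate. SandForce compresses data in hardware before it hits the NAND, so the entropy of the source buffer determines how much physical writing actually happens. As a rough software analogy (zlib standing in for the controller's engine, which is purely an assumption of convenience), compare how the two kinds of buffers compress:

```python
import os
import zlib

size = 1024 * 1024                      # 1MB test buffers
compressible = b"\x00" * size           # repeating pattern: best case for SandForce
incompressible = os.urandom(size)       # random data: what AS-SSD writes

for name, buf in (("compressible", compressible), ("incompressible", incompressible)):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of original size")
```

The first buffer shrinks to a fraction of a percent while the second doesn't shrink at all, which is exactly the gap between a SandForce drive's best case and worst case write speeds.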

 

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD

 



Performance vs. Transfer Size

ATTO does a good job of showing us how sequential performance varies with transfer size. Most controllers optimize for commonly seen transfer sizes and neglect the rest. The optimization around 4KB, 8KB and 128KB transfers makes sense given that's what most workloads are bound by, but it's always important to understand how a drive performs across the entire gamut.

As ATTO uses compressible data, SandForce based drives have an advantage due to their real-time compression engine. There isn't really anything surprising here as the V300 is on par with the other SandForce based drives. Read performance at smaller transfer sizes has never been SandForce's biggest strength but the write performance is strong at all IO sizes thanks to compression. 
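A transfer size sweep in the spirit of ATTO can be approximated with the earlier hypothetical run_write_test harness. Since ATTO uses compressible data, a repeating pattern is passed in place of random bytes; the sizes and timings below are illustrative, not ATTO's exact configuration:

```python
# Sweep write transfer sizes the way ATTO does (compressible data).
# Assumes run_write_test from the earlier sketch.
for size_kb in (4, 8, 32, 128, 512, 2048):
    block = size_kb * 1024
    mbps = run_write_test("testfile.bin", block, 15, 8 * 1024**3, False,
                          data=b"\xAA" * block)
    print(f"{size_kb:>4}KB: {mbps:6.1f} MB/s")
```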




Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests but enough to give a good look at drive behavior once all the spare area has been filled.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
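If you log one IOPS sample per second yourself, producing plots like ours takes only a few lines of matplotlib. A minimal sketch, assuming a hypothetical iops_log.csv with one sample per row (however your IO generator exports it):

```python
import csv
import matplotlib.pyplot as plt

# One instantaneous IOPS sample per second, one value per row.
with open("iops_log.csv") as f:
    iops = [float(row[0]) for row in csv.reader(f)]

fig, ax = plt.subplots()
ax.scatter(range(len(iops)), iops, s=4)
ax.set_yscale("log")          # log scale, as in the first two sets of graphs
ax.set_xlim(0, 2000)          # the full 2000 second test period
ax.set_xlabel("Time (s)")
ax.set_ylabel("4KB Random Write IOPS")
fig.savefig("iops_scatter.png", dpi=150)
```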

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area.  If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers may behave the same way.
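The arithmetic behind those spare area figures is simple. Assuming 256GiB of raw NAND behind a 240GB drive (the usual configuration for an SF-2281 design, and an assumption for this sketch), effective over-provisioning is just the fraction of raw NAND the user can never touch:

```python
GiB = 1024**3   # binary gigabyte (raw NAND comes in powers of two)
GB = 1000**3    # decimal gigabyte (advertised capacities)
RAW = 256 * GiB # assumed raw NAND behind a 240GB SF-2281 drive

def spare_fraction(user_bytes, raw=RAW):
    """Fraction of raw NAND left over as effective spare area."""
    return 1 - user_bytes / raw

def partition_size(spare, raw=RAW):
    """User capacity to partition down to for a target spare area fraction."""
    return raw * (1 - spare)

print(f"Stock 240GB: {spare_fraction(240 * GB):.1%} spare area")
print(f"25% target:  partition down to {partition_size(0.25) / GB:.0f}GB")
```

That prints roughly 12.7% for the stock drive and a 206GB partition for the 25% case, which is why shrinking the usable partition is all it takes to simulate a drive that shipped with more over-provisioning.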

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

 

Impact of Spare Area (full 2000 second run, log scale)
Drives: Kingston SSDNow V300 240GB | Intel SSD DC S3700 200GB | Intel SSD 335 240GB | Corsair Neutron 240GB | OCZ Vector 256GB | Samsung SSD 840 Pro 256GB
Views: Default | 25% Spare Area (not available for all drives)

IO consistency has always been good in SandForce based SSDs. The V300 actually behaves a bit differently from the Intel SSD 335: it takes longer to enter steady-state (around 1400s versus 800s), but on the other hand its IOPS also drops further once it gets there. For consumer workloads, pushing steady-state back may not be a bad trade, because a consumer drive is unlikely to ever reach steady-state, so you get better performance in the state the SSD will actually be used in.

Impact of Spare Area (beginning of steady-state, log scale)
Drives and views as above.

The difference between the V300 and the SSD 335 is quite dramatic here. The V300's IOPS drops to near zero in the worst cases, whereas the SSD 335 stays above 7K at all times. What's surprising is that giving the V300 more over-provisioning doesn't actually help at all. I'm not sure why that is, but SandForce has always behaved oddly in steady-state due to the compression.

Impact of Spare Area (beginning of steady-state, linear scale)
Drives and views as above.

 



AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable either.

Originally we kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system. Later, however, we created what we refer to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. This represents the load you'd put on a drive after nearly two weeks of constant usage. And it takes a long time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011—Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading, and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II and WoW are both a part of the test), as well as general use stuff (application installing, virus scanning). We included a large amount of email downloading, document creation, and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011—Heavy Workload IO Breakdown
IO Size % of Total
4KB 28%
16KB 10%
32KB 10%
64KB 4%

Only 42% of all operations are sequential; the rest ranges from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result we're going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time we'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, we will also break out performance into reads, writes, and combined. The reason we do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. It has lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

We don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea. The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011—Heavy Workload

We'll start out by looking at average data rate throughout our heavy workload test:

 

Heavy Workload 2011 - Average Data Rate

Our Heavy suite also shows the V300 performing similarly to most SandForce based SSDs. The HyperX is slightly faster, but if you're looking for the fastest SSD, the Samsung SSD 840 Pro or OCZ Vector is your best choice.

 

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)

 

 



AnandTech Storage Bench 2011—Light Workload

Our light workload actually has more write operations than read operations: 372,630 reads versus 459,709 writes. The relatively even read/write split does a better job of mimicking a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011—Light Workload IO Breakdown
IO Size % of Total
4KB 27%
16KB 8%
32KB 6%
64KB 5%

 

Light Workload 2011 - Average Data Rate

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)



TRIM Performance

SandForce has always had trouble with TRIM. SandForce's approach of compressing data on the fly definitely complicates things and I believe it's at least partially the reason why TRIM behaves the way it does in SandForce based SSDs. Even though TRIM doesn't work perfectly in any SandForce SSD, there are differences between drives and some do better than others.

As usual, I took a 240GB V300, secure erased it, filled it with incompressible sequential data and then tortured the drive with incompressible 4KB random writes (QD=32, 100% LBA space) for 60 minutes. I then ran AS-SSD to get dirty-state performance. Finally I TRIM'ed the drive and reran AS-SSD.

Kingston SSDNow V300 Resiliency - AS-SSD

                             Clean       After Torture   After TRIM
Kingston SSDNow V300 240GB   278.2MB/s   204.7MB/s       257.6MB/s
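Putting those numbers in relative terms makes the shortfall concrete:

```python
clean, tortured, trimmed = 278.2, 204.7, 257.6   # MB/s, from the table above

print(f"After torture: {tortured / clean:.1%} of clean performance")
print(f"After TRIM:    {trimmed / clean:.1%} of clean performance")
```

That works out to 73.6% after torture and 92.6% after TRIM; a fully working TRIM implementation would return the drive to 100%.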

And the issue still persists. For most users this isn't a big deal because the majority of data on an OS drive is compressible (Windows itself, for instance, compresses very well), but if you know you'll be storing lots of incompressible data (H.264 video, MP3s or encrypted files), then going with something non-SandForce is a better option.

Power Consumption

In terms of power consumption the V300 does pretty well. It doesn't break any records, but Corsair's Force GS, for example, draws more power in all of our tests. The smaller process node NAND (19nm versus 24nm) surely has some impact, but I wouldn't be surprised if Kingston's firmware customization also contributes to the lower power draw.

 

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

I have one issue with the V300: there's nothing in particular to differentiate it from other SandForce based SSDs. While Kingston said they did lots of customization and worked closely with SandForce/LSI, at least in our testing there aren't any striking differences. It's possible that the tweaks Kingston applied are at such a low level that they don't really impact the end-user experience, but that raises the question of whether the tweaks are needed at all if stock firmware delivers the same performance. Kingston may of course be using slower or lower endurance NAND that required the customization, but as it stands, this is all just speculation. I'm still a bit disappointed that Kingston couldn't give us more details about their customizations, because it would be genuinely useful to know how much customization SandForce allows and what that customization can do.

Price Comparison (4/30/2013)

                         60/64GB   120/128GB   240/256GB
Kingston SSDNow V300     $70       $110        $200
Kingston HyperX 3K       N/A       $120        $230
Corsair Neutron GTX      N/A       $125        $220
Corsair Neutron          N/A       $120        $200
Corsair Force GS         N/A       $130        $210
Plextor M5 Pro           N/A       $130        $230
Plextor M5S              N/A       $110        $190
Crucial m4               $80       $130        $200
Intel SSD 520            $100      $145        $270
Intel SSD 330            $90       N/A         $220
Samsung SSD 840 Pro      N/A       $150        N/A
Samsung SSD 840          N/A       $110        $190
OCZ Vector               N/A       $140        $270
Mushkin Chronos Deluxe   N/A       $130        $185

Because the V300 doesn't bring anything special to the market, pricing is more important than ever. The V300 is affordable but not exceptionally cheap. Right now you can find the Plextor M5S and Samsung SSD 840 for the same price or slightly less, and I would rather pick one of those than the V300. However, I wouldn't put too much weight on the prices quoted in the table because pricing fluctuates and NewEgg is just one reseller. If you're buying an SSD, follow the prices for a few days and try to catch a good sale.

In general I think Kingston should differentiate their SSDs more, because the V300 and HyperX 3K are too similar. Giving the HyperX 3K a 5-year warranty, for example, would help justify its high-end positioning and higher price, but currently Kingston is selling two very similar products at nearly identical price points. That's not an ideal way to do business, especially when one product is supposed to be a budget mainstream drive while the other is aimed at enthusiasts.
