Original Link: https://www.anandtech.com/show/5272/ocz-octane-128gb-ssd-review
OCZ Octane 128GB SSD Review
by Anand Lal Shimpi on December 28, 2011 12:27 AM EST

An SSD State of the Union Update
I'm very pleased with the level of acceptance of SSDs today. When the X25-M first hit the market and even for the year that followed, any positive SSD recommendation was followed by a discussion of how many VelociRaptors you could RAID together for the same price as one SSD. Now we have an entire category of notebooks that come standard with some degree of solid state storage. End users are much more accepting of SSDs in general, aided by the fact that prices have finally dropped very close to the magical $1/GB marker (a boundary I expect us to finally cross by the end of 2012).
Today's SSDs are not only more prevalent than those we were reviewing just a couple of years ago, they're also a lot better. While the first SSDs still had difficulties competing with mechanical storage for sequential transfers, modern SSDs are several times faster even in the most HDD-friendly workloads. The implementation of technologies such as TRIM helps ensure the performance degradation issues of the very first drives are less likely to be encountered by anyone with a normal client workload.
Although it helped shape the client SSD business, Intel's drives are no longer the only option for performance and reliability. There are good, reliable SSDs from companies like Micron and Samsung that can easily hang with their Intel competitors. And if you're willing to live on the bleeding edge for the promise of the absolute best performance, there's always SandForce.
The client SSD space may not be mature, but I'm happy with the current state of things. With the exception of the ultra low price points, if you want an SSD, there's a solution on the market for you today.
Going forward, I'm not expecting a ton of change in the near term. Intel's Cherryville SSD, the long awaited SandForce based drive from Intel, has been delayed. I suspect the delay has to do with Intel working through bugs in SandForce's firmware, but its efforts should hopefully make the platform more robust (although it remains to be seen whether any of Intel's fixes are ported back into the general SF codebase).
In the first half of next year we'll see the first 20nm IMFT NAND shipping in client SSDs. The smaller transistor geometry will eventually pave the way for cheaper drives, although I wouldn't expect an immediate drop in prices. At the very least we'll see firmware updates enabling 20nm NAND support, although we may see the introduction of some new controllers as well. We're going to be pretty limited in terms of performance gains until ONFI 3.0 based controllers/NAND show up in early 2013. I expect the next 12 months to be more about driving enterprise SSDs and bringing down the cost of consumer drives.
The Topic At Hand: OCZ's Octane
Now for the reason we're all here today. Earlier this year OCZ acquired Indilinx, one of the first SSD controller makers to really make a splash in the enthusiast community. Ever since OCZ entered the SSD business it wanted to guarantee its independence by securing exclusive rights to a controller. OCZ initially did so by buying up all available inventory, first of Indilinx controllers, then of SandForce controllers. That strategy would only work for a (relatively) short period of time as the controller vendors sought to expand their market by selling chips to OCZ's competitors. A few slip ups on the roadmap and Indilinx was ripe for acquisition. OCZ stepped up to the plate and sealed the deal. Several months later, OCZ debuted its first drive based on an unreleased, exclusive Indilinx design: Octane.
Although Octane didn't set any performance records, it was competitive. Performance was definitely current gen, giving OCZ an in-house alternative to SandForce. There was just one issue: OCZ only sent out 512GB Octane review samples. SSDs get a good amount of their performance by executing reads/writes in parallel across multiple NAND devices. Higher capacities have more devices to read/write in parallel, and thus generally deliver the best performance. The greatest sales volume, however, comes from the lower capacity models - they're cheaper to own, and NAND prices are falling quickly enough that investing in a 512GB drive rarely makes financial sense.
| OCZ Octane Lineup | 1TB | 512GB | 256GB | 128GB |
|---|---|---|---|---|
| NAND Type | 25nm Intel Sync MLC | 25nm Intel Sync MLC | 25nm Intel Sync MLC | 25nm Intel Sync MLC |
| NAND | 1TB | 512GB | 256GB | 128GB |
| User Capacity | 953GiB | 476GiB | 238GiB | 119GiB |
| Random Read Performance | Up to 45K IOPS | Up to 37K IOPS | Up to 37K IOPS | Up to 37K IOPS |
| Random Write Performance | Up to 19.5K IOPS | Up to 16K IOPS | Up to 12K IOPS | Up to 7.7K IOPS |
| Sequential Read Performance | Up to 560 MB/s | Up to 535 MB/s | Up to 535 MB/s | Up to 535 MB/s |
| Sequential Write Performance | Up to 400 MB/s | Up to 400 MB/s | Up to 270 MB/s | Up to 170 MB/s |
| MSRP | TBD | $879.99 | $369.99 | $199.99 |
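The User Capacity row is worth a quick aside. Those figures follow directly from the decimal/binary split, assuming the standard SSD arrangement (OCZ doesn't spell this out here, so treat it as an educated guess): the raw NAND comes in binary capacities, the user is given that many decimal gigabytes, and the leftover ~7% becomes spare area. A minimal sketch:

```python
# Sketch of the User Capacity math under the assumption above: a drive
# with N GiB of raw NAND exposes N decimal gigabytes to the OS, and the
# binary/decimal gap becomes the controller's spare area.
GIB = 2**30

for nand_gib in (128, 256, 512, 1024):
    user_bytes = nand_gib * 10**9        # decimal GB exposed to the user
    user_gib = user_bytes // GIB         # what the OS reports, in GiB
    spare_pct = 100 * (1 - user_bytes / (nand_gib * GIB))
    print(f"{nand_gib}GB NAND -> {user_gib}GiB user, ~{spare_pct:.1f}% spare")
# 128GB NAND -> 119GiB user, ~6.9% spare ... matching the table above
```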
OCZ finally sent out a 128GB Octane, which I promptly put through our standard test suite.
The drive still uses sixteen IMFT synchronous NAND devices. In this case each package features 64Gb (8GB) of 25nm MLC NAND on a single die. You may remember from our original review of the 512GB Octane that the Indilinx Everest controller has eight channels, but it can pipeline read/write requests to multiple devices per channel.
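To make the parallelism argument concrete, here's a minimal sketch of one plausible striping scheme. The Everest's actual channel/die mapping isn't public, so the layout below is purely illustrative:

```python
# Illustrative only: stripe consecutive pages across the controller's
# eight channels first, then interleave across the die sharing each
# channel. More die per channel means more program operations can be
# overlapped, which is why write speed scales with capacity.
CHANNELS = 8

def placement(page: int, total_die: int) -> tuple[int, int]:
    """Map a logical page to a (channel, die-on-channel) pair."""
    channel = page % CHANNELS
    die_on_channel = (page // CHANNELS) % (total_die // CHANNELS)
    return channel, die_on_channel

# The 128GB drive has 16 die (2 per channel) to keep busy; the 512GB
# drive has 64 (8 per channel), so it can overlap four times the work.
for page in (0, 8, 16, 24):
    print("128GB:", placement(page, 16), " 512GB:", placement(page, 64))
```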
Gone from the Octane's PCB are the TI muxes that we found on the 512GB version. With the only difference between these drives being their capacity, it's likely that the muxes were used to switch between NAND die/packages. The 512GB version has four times as many NAND die as the 128GB version, and it's possible that the Everest controller is only capable of directly communicating with 16 or 32 die on its own. An external mux per channel would allow OCZ to scale capacities much further.
Other than the absent muxes and a change in PCB color, the 128GB Octane is no different than the original 512GB drive we reviewed. The drive does ship with a newer firmware revision:
The updated firmware doesn't do much for performance, although it does apparently fix a number of bugs that existed in the previous version. I haven't seen any mass reports of significant issues with the Octane, although it is still pretty new. Let's hope the trend continues.
The Test
| Component | Configuration |
|---|---|
| CPU | Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO |
| Motherboard | Intel DH67BL Motherboard |
| Chipset | Intel H67 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | Corsair Vengeance DDR3-1333 2 x 2GB (7-7-7-20) |
| Video Card | eVGA GeForce GTX 285 |
| Video Drivers | NVIDIA ForceWare 190.38 64-bit |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 7 x64 |
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
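For readers who want to poke at their own hardware, here's a heavily simplified stand-in for that Iometer pass. It issues one synchronous IO at a time rather than keeping three outstanding, targets a file rather than the raw device, and doesn't bypass the OS cache, so treat it strictly as a sketch of the methodology rather than a reproduction of our numbers:

```python
# Simplified 4KB random write test: random 4KB-aligned offsets inside an
# 8GB span, reported as average MB/s. A real benchmark needs unbuffered
# (direct) IO and multiple outstanding requests; this sketch has neither.
import os, random, time

SPAN = 8 * 2**30          # 8GB test span, as in the article
BLOCK = 4096              # 4KB IOs
RUNTIME = 180             # 3 minutes

def random_write_test(path: str) -> float:
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, SPAN)                 # (sparse) 8GB test file
    buf = os.urandom(BLOCK)                # fully random, incompressible data
    written, start = 0, time.time()
    while time.time() - start < RUNTIME:
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        written += os.pwrite(fd, buf, offset)
    os.close(fd)
    return written / (time.time() - start) / 1e6   # average MB/s

# Example: print(random_write_test("iometer_stand_in.bin"))
```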
Random read performance remains untouched with the move to 128GB, although random write performance is cut in half compared to the 512GB version:
Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
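The sequential counterpart, under the same caveats as the random write sketch above (synchronous, file-backed, cached):

```python
# Simplified 128KB sequential read test at queue depth 1: back-to-back
# reads for one minute, wrapping at the end of the file. Run it against
# the file created by the random write sketch, or any large file.
import os, time

BLOCK = 128 * 1024        # 128KB transfers
RUNTIME = 60              # 1 minute

def sequential_read_test(path: str) -> float:
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    done, offset, start = 0, 0, time.time()
    while time.time() - start < RUNTIME:
        done += len(os.pread(fd, BLOCK, offset))
        offset = (offset + BLOCK) % size    # wrap at end of file
    os.close(fd)
    return done / (time.time() - start) / 1e6   # average MB/s
```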
Sequential read performance is once again untouched compared to the larger capacity Octane, while sequential write performance is seriously impacted:
AS-SSD Incompressible Sequential Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
AnandTech Storage Bench 2011
Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 28% |
| 16KB | 10% |
| 32KB | 10% |
| 64KB | 4% |
Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
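The two presentations are really two views of the same measurement; with some made-up numbers purely for illustration:

```python
# Average data rate is just total bytes moved divided by disk busy time,
# so a faster drive shows up equivalently as less time spent busy.
# Both figures below are hypothetical, not measured results.
bytes_moved = 120e9                        # hypothetical total traffic
for busy_seconds in (900.0, 1800.0):       # a faster vs. a slower drive
    rate = bytes_moved / busy_seconds / 1e6
    print(f"{rate:.0f} MB/s -> {busy_seconds:.0f}s of disk busy time")
```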
There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our new heavy workload test:
For write-heavy workloads the 128GB Octane is faster than the previous generation 3Gbps drives, but still a bit slower than the 6Gbps Intel, Crucial and SandForce based offerings.
The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is simply how long the SSD was busy doing something:
AnandTech Storage Bench 2011 - Light Workload
Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).
The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
AnandTech Storage Bench 2011 - Light Workload IO Breakdown

| IO Size | % of Total |
|---|---|
| 4KB | 27% |
| 16KB | 8% |
| 32KB | 6% |
| 64KB | 5% |
Lighter/more conventional client workloads (thanks to the test being more read-biased than our heavy workload) do quite well on the 128GB Octane. Here the drive is even faster than the SF-2281 based Vertex 3, although the SF-2281 based Patriot Wildfire does pull ahead thanks to the fact that it has more available NAND die for increased parallelism.
Performance Over Time & TRIM
In our initial Octane review I mentioned that the drive exhibited very high write amplification under a workload composed of heavy random writes. I mentioned that this would mostly impact server workloads, although you could see issues if you were running on a system without TRIM enabled. We turn to our standard TRIM test to show how bad things could get.
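As a refresher, write amplification is the ratio of what the controller actually writes to NAND versus what the host asked it to write; the counters below are hypothetical, just to pin down the definition:

```python
# Write amplification = NAND writes / host writes. 1.0 is ideal; heavy
# random writes push it up because entire pages (and eventually blocks)
# get rewritten to service small, scattered updates.
host_writes_gb = 100.0     # hypothetical host traffic
nand_writes_gb = 350.0     # hypothetical flash traffic under random writes
print(f"WA = {nand_writes_gb / host_writes_gb:.1f}x")   # -> WA = 3.5x
```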
The easiest way to ensure real time garbage collection is working is to fill the drive with data and then write sequentially across the drive. All LBAs will have data in them and any additional writes will force the controller to allocate from the drive's pool of spare area. This path shouldn't have any bottlenecks in it; the process should be seamless. As we've already seen from our Iometer numbers, sequential write performance at low queue depths is around 160MB/s. A quick HD Tach pass of a completely full drive gives us the same result:
The Octane works as expected here, but now what happens if we subject the drive to a ton of 4KB random writes? Unfortunately this is where the Octane falls short. Our standard test involves a 20 minute, 4KB random write across all LBAs at a queue depth of 32. Look at the drive's performance after our torture test:
Average write speed is now less than a tenth of what it was when new. The good news is that any reasonable client workload won't put the drive in this state. The bad news is that OCZ is going to have its work cut out for it when it moves Everest into the enterprise space. With the drive in this state we can test the garbage collection path of the firmware. A quick format in Windows 7 TRIMs all user addressable LBAs, which should fully restore performance if TRIM is working:
Indeed it does. In reality, client workloads won't generate anywhere near this amount of random data and TRIM should help keep everything else in check. I would still like to see lower write amplification (it makes me sleep better at night) but I suspect we won't see that until we meet Everest's true successor.
Power Consumption
With a more reasonable sized drive in house, we're able to find out just how power efficient the Octane really is. At idle, the drive still uses more power than the competition but under load it's actually quite good - about on par with SandForce's SF-2281 (or better depending on the workload).
Final Words
I'm still trying to get my hands on smaller capacities of other newer SSDs to add them to our growing database of SSD performance data. For now it looks like the Octane is a good solution for typical desktop users, even at its 128GB capacity. Performance in our desktop-focused tests is once again competitive with SandForce drives. It's in write heavy workloads that the reduced number of available NAND die penalizes the Octane. The distinction is as simple as that: if you're running write heavy workloads, the higher capacity Octanes remain competitive where the 128GB falls off. For most desktop/notebook users, however, the 128GB drive should be among the best.
I still wouldn't recommend the Octane for Mac OS X use without TRIM. SandForce is still best suited for the TRIM-less environments, although I've been quite pleased with the Samsung SSD 830 under OS X for the past few months as well.
My recommendation continues to be wait and see. The Octane has only been publicly available for a month now; it'll be several more before we get a good idea of how well these drives are holding up across the myriad system configurations and usage models out there. So far, so good though.