


Although consumer SSDs are far from a mature technology, PCIe SSDs are even further behind on the growth curve. The upside is huge: as SandForce has already demonstrated, with the right dataset a single SSD can nearly saturate the 6Gbps SATA bus. Rather than force OEMs into putting yet another very high bandwidth bus on the motherboard, SSD vendors everywhere (Intel included) have turned their attention to PCIe as a solution to the problem.

The holy grail is a native PCIe solution. Recently Micron announced such a thing: the P320h. However, the estimated price tag on the P320h could be in the $5,000 - $10,000 range depending on capacity.

Manufacturers in the consumer SSD space are attracted to PCIe solutions simply because margins are higher. For the most part, client storage is a commodity, and if you don't make the NAND and controller going into an SSD you're not making a ton of money selling drives to end users. Sell to an enterprise customer and all of a sudden a couple thousand dollars per drive seems like a bargain.


The original RevoDrive

OCZ started making PCIe SSDs the simplest way possible: take a couple of SSDs, put them on a single PCB behind an on-board RAID controller and you're good to go. Single-card performance was decent, but of course there were issues. A single controller failure would take out the whole drive, and things like TRIM weren't supported either.


The RevoDrive X2

Recently OCZ has been trying very hard to be more than just a rebrander of components. The acquisition of Indilinx puts a wedge between OCZ and a lot of its former peers in the memory business, but it's still a far cry from Intel or Samsung. Its latest PCIe SSD is another step in the maturing of the company. This is the RevoDrive 3 X2:


The RevoDrive 3 X2

Like previous RevoDrives, the third edition places two SSD controllers (and their associated NAND) on a single PCB; the X2 suffix denotes the two PCBs in use on this particular design. Unlike previous RevoDrives, however, the RevoDrive 3 doesn't use a kludgy SATA-to-PCI-X-to-PCIe chain. Instead this drive has a single chip that acts as a 4-port 6Gbps SAS to PCIe x4 (gen2) bridge.

Right off the bat that gives the RevoDrive 3 a full 2GB/s of bandwidth to your system, assuming you've got an x4 (or wider) slot available. The RevoDrive 3, like many PCIe devices, will work in x8 and x16 slots as well.
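For those keeping score, the 2GB/s figure falls straight out of the PCIe 2.0 spec; here's the arithmetic as a quick Python sanity check:

```python
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so only 8 of
# every 10 bits on the wire are payload: 4 Gb/s, or 500 MB/s, per lane.
gt_per_s = 5e9              # transfers (bits) per second, per lane
payload_fraction = 8 / 10   # 8b/10b line code overhead
lanes = 4                   # the RevoDrive 3's x4 link

bytes_per_s = gt_per_s * payload_fraction / 8 * lanes
print(bytes_per_s / 1e9, "GB/s per direction")  # 2.0 GB/s per direction
```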

As you'd expect, the RevoDrive 3 uses SandForce's latest SF-2281 controllers. The drive will be available in dual and quad SF-2281 configurations, with capacities ranging from 120GB all the way up to 960GB. Prices and configurations are below:

OCZ RevoDrive 3 Lineup
Model                    | PCBs | Controllers | Price
OCZ RevoDrive 3 120GB    | 1    | 2           | $399.99
OCZ RevoDrive 3 240GB    | 1    | 2           | $599.99
OCZ RevoDrive 3 480GB    | 1    | 2           | $1499.99
OCZ RevoDrive 3 X2 480GB | 2    | 4           | $1699.99
OCZ RevoDrive 3 X2 960GB | 2    | 4           | $3199.99

OCZ isn't revealing the manufacturer of the SAS controller and even went as far as to silkscreen its own logo and part number on the chip. OCZ's efforts are understandable: it claims the manufacturer allowed OCZ to modify the controller's firmware and driver to customize it to the specific needs of the RevoDrive 3.

Even a look at the RevoDrive 3's driver INF file reveals no indication of the chip's OEM. (Update: As a number of you have pointed out, the vendor ID seems to indicate that OCZ is using a Marvell SAS controller as the base for the RevoDrive 3.)

OCZ calls this chip its SuperScale storage controller.

VCA 2.0

OCZ wanted to address key limitations of previous RevoDrives: the ability to TRIM and receive SMART data from the drive. The former helps keep performance high while the latter is important for users concerned about the health of their drive. Customizing the RevoDrive's controller was the only way to ensure these commands were properly passed. OCZ calls this its Virtualized Controller Architecture 2.0.

The RevoDrive 3's controller is a standard SAS RAID device; however, unlike typical RAID-0 arrays, data isn't striped across all controllers on the card. Instead, OCZ claims its driver accumulates IO queues and dispatches individual IO requests to available SSD controllers. In theory, OCZ's approach wouldn't require matching stripe size to workload - all you need is tons of parallel IO and the software/firmware should take care of the rest. OCZ tells me it has special allowances for corner cases to make sure that one controller isn't writing proportionally more data to its NAND than the others in the system.
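To make the contrast with striping concrete, here's a minimal Python sketch of the per-IO dispatch idea as OCZ describes it. This is purely my illustration of that description - not OCZ's actual driver or firmware - and every name in it is made up:

```python
import heapq

class IoDispatcher:
    """Hand whole IOs to the least-loaded controller (illustrative only).

    A striped RAID-0 array would instead split every IO into fixed-size
    chunks across all controllers; here each request stays intact. A real
    driver would also decrement a controller's load as IOs complete.
    """
    def __init__(self, num_controllers):
        # Min-heap of (outstanding_bytes, controller_id) pairs.
        self.heap = [(0, cid) for cid in range(num_controllers)]
        heapq.heapify(self.heap)

    def dispatch(self, io_size):
        load, cid = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + io_size, cid))
        return cid

dispatcher = IoDispatcher(num_controllers=4)
for size in (4096, 4096, 131072, 4096):
    print("controller", dispatcher.dispatch(size))
```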

The RevoDrive 3 currently has WHQL driver support for 32-bit Windows 7 and non-WHQL support for 64-bit Windows 7. OCZ is working on WHQL certification for its 64-bit driver but that wasn't available to us at the time of this review.

Unfortunately, without WHQL certification on the 64-bit driver, Windows 7 x64 won't boot off of the RevoDrive 3. The 32-bit version will, and OCZ expects both versions of Windows to be bootable by the end of July when the drive is available. This matters because OCZ is touting the drive's ability to function as a boot device as a key selling point.


RevoDrive 3 X2 - main PCB


RevoDrive 3 X2 - daughterboard

The other major problem with the RevoDrive 3, and unfortunately one that won't be solved in the near future, is that although the drive supports TRIM, Windows 7 won't pass the command to the drive. Apparently this is a current limitation that impacts all SCSI/SAS controllers and it's something that only Microsoft can fix. I asked OCZ what the likelihood was that Microsoft would fix this in the near term. OCZ believes it's a problem that will have to be addressed for the next version of Windows Server, however I didn't get a clear answer on whether we can expect anything between now and then.

Unlike with the RevoDrive 2, however, you can secure erase the RevoDrive 3 using OCZ's SandForce Toolbox. Unfortunately, any hopes for real time TRIM support are thrown out the window until Microsoft decides to update Windows.

OCZ did mention that the upcoming Z-Drive would have Linux support with fully functional TRIM. Under Windows, however, if you send a TRIM command to a SCSI/SAS controller, the command just gets thrown away before it ever hits the driver.

The Test

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
     Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard: Intel DX58SO (Intel X58)
             Intel H67 Motherboard
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
         Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
                 Intel 9.1.1.1015 + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
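If you want to approximate this test yourself, here's a rough single-threaded Python analogue of the random-write portion. Treat it as illustrative only: Iometer keeps three IOs outstanding while this issues them one at a time, and without O_DIRECT (on Linux) writes land in the page cache first. The file path is a placeholder:

```python
import os, random, time

PATH = "testfile.bin"      # placeholder; point at a dedicated test file
SPAN = 8 * 1024**3         # 8GB LBA space, matching the test above
IO_SIZE = 4096             # 4KB writes, 4K-aligned
DURATION = 180             # 3 minutes

buf = os.urandom(IO_SIZE)  # fully random (incompressible) data
fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)     # sparse file covering the whole span

written = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    offset = random.randrange(SPAN // IO_SIZE) * IO_SIZE
    os.pwrite(fd, buf, offset)  # one 4KB write at a random aligned offset
    written += IO_SIZE
os.close(fd)

print(f"{written / DURATION / 1e6:.1f} MB/s average")
```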

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Low queue depth operation isn't going to show any advantage on the RevoDrive 3 X2. This is to be expected. I include these results to point out that for the majority of desktop users, you won't see any benefit from a 4-drive RAID-0 or the RevoDrive 3 X2. I already talked about how most modern SSDs deliver similar real world performance in our last SSD article. The RevoDrive 3 is no exception.

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

What happens during periods of intense IO activity however? The RevoDrive 3 X2 excels. With incompressible data the RevoDrive 3 X2 is over 2x faster than the 240GB Vertex 3 and with compressible data we're almost at 800MB/s for a single PCIe card.

Note that OCZ's specs for the RevoDrive 3 X2 promise up to 200,000 IOPS, however we're only seeing around 180K IOPS in our QD32 test. What gives? In order to hit those sorts of numbers you actually need to run in a multithreaded/ultra high queue depth configuration (two threads + QD64 in each case). If you run in this configuration but hit 100% of the LBA space (a reasonable workload for a high traffic server), you'll get numbers similar to ours above (766MB/s vs. 756MB/s). If you limit the workload to an 8GB LBA space however, you'll hit the 200K that OCZ advertises:

OCZ RevoDrive 3 X2 (480GB) 4KB Random Write Performance
     | QD=3    | QD=32   | QD=64
IOPS | 52,131  | 184,649 | 202,661
MB/s | 213.5   | 756.3   | 830.1
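The MB/s row is simply the IOPS row multiplied by the 4KB transfer size, which makes for an easy sanity check:

```python
# 4KB random write throughput follows directly from IOPS.
for iops in (52_131, 184_649, 202_661):
    mbps = iops * 4096 / 1e6          # 4KiB per IO, decimal megabytes
    print(f"{iops} IOPS -> {mbps:.1f} MB/s")
# 52131 IOPS  -> 213.5 MB/s
# 184649 IOPS -> 756.3 MB/s
# 202661 IOPS -> 830.1 MB/s
```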

Desktop Iometer - 4KB Random Read (4K Aligned)

Low queue depth random read performance is nothing to be impressed by.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length. These results are going to be the best indicator of large file copy performance.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Desktop Iometer - 128KB Sequential Write (4K Aligned)

We're still going through the unimpressive tests here - we see some benefit for larger transfer sizes, but the real advantages come when you start loading up the RevoDrive 3...



AnandTech Storage Bench 2011

Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB     | 28%
16KB    | 10%
32KB    | 10%
64KB    | 4%

Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
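Conceptually, disk busy time falls out of the IO trace by merging overlapping service intervals and summing their lengths, so idle gaps are excluded and concurrent IOs aren't double counted. A minimal Python sketch of that idea (illustrative only, not our exact tooling):

```python
def busy_time(intervals):
    """Sum merged (start, end) IO service intervals, excluding idle gaps."""
    busy = 0.0
    cur_start = cur_end = None
    for start, end in sorted(intervals):
        if cur_end is None or start > cur_end:  # idle gap: close out the span
            if cur_end is not None:
                busy += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                   # overlapping IO: extend the span
            cur_end = max(cur_end, end)
    if cur_end is not None:
        busy += cur_end - cur_start
    return busy

# Two overlapping IOs merge into one 1.5s span; adding the later 0.5s IO
# gives 2.0s of busy time over a 3.5s window.
print(busy_time([(0.0, 1.0), (0.5, 1.5), (3.0, 3.5)]))  # 2.0
```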

There's also a new light workload for 2011. This is a far more reasonable, typical everyday use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

Average performance in our Heavy workload is improved over a single Vertex 3, but not dramatically. Where we really see a tremendous performance increase is when we look at the breakdown of reads/writes:

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

Write speed is much improved over a single 240GB Vertex 3. It's not quite the scaling you'd expect from a 4-controller configuration, but this is likely the sort of gain most workstation users would see.

The next three charts just represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy for during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)



AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB     | 27%
16KB    | 8%
32KB    | 6%
64KB    | 5%

Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.

Light Workload 2011 - Average Data Rate

Surprisingly enough, our light workload does better on the RevoDrive 3 than the heavy workload. Perhaps what we're looking at is a set of IOs that is more easily parallelized, which makes for a better match with the RevoDrive 3 X2's architecture.

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)



Sequential Performance vs. Transfer Size (ATTO)

I stopped putting these charts in our reviews (although I do include the data in Bench) because they are generally difficult to read. For this review, however, I just want to illustrate two curves: a single Vertex 3 (240GB) and the RevoDrive 3 X2.

Obviously we're looking at compressible read/write performance, but the numbers are staggering. The RevoDrive 3 X2 is able to deliver just under 1.5GB/s in sequential read performance and up to 1.25GB/s in sequential write performance. What's most surprising to me is that these numbers come at a relatively low queue depth of 4. The incredible scaling beyond 16KB transfers seems to imply that OCZ is actually breaking up transfers above a certain size and striping them like a traditional RAID-0 array, as sketched below. Either way we're able to reach some pretty unheard-of numbers here.
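If that inference is right, the splitting logic would look something like this minimal Python sketch (my speculation, with an illustrative stripe size and controller count, not a confirmed OCZ design):

```python
STRIPE_SIZE = 16 * 1024   # hypothetical split threshold suggested by the scaling
NUM_CONTROLLERS = 4

def split_io(offset, length):
    """Yield (controller, chunk_offset, chunk_length) for one large transfer."""
    for i, chunk_start in enumerate(range(0, length, STRIPE_SIZE)):
        chunk_len = min(STRIPE_SIZE, length - chunk_start)
        yield (i % NUM_CONTROLLERS, offset + chunk_start, chunk_len)

# A single 128KB transfer fans out into eight 16KB chunks, two per
# controller, keeping all four controllers busy even at low queue depth.
for target in split_io(offset=0, length=128 * 1024):
    print(target)
```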



AS-SSD Incompressible Sequential Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD

AS-SSD performance is pretty impressive as well. We see a huge advantage over a single Vertex 3.



Final Words

I believe there's a very specific niche that is interested in something like the RevoDrive 3: a customer that needs the performance but is too resource-limited to go after the more expensive solutions on the market. It's a lot like what we did in the early days of AnandTech. A decade ago we couldn't get or afford powerful server hardware, but we needed the performance. Our solution was to build a bunch of desktops (using hardware that we did have available to us) and deploy them as our servers. The solution was incredibly fast and cost effective, exactly what we needed at the time. We of course ran into reliability issues down the road when all of our desktop motherboards died at the same time (apparently we had a bad batch), but while they worked the systems served us well.

The RevoDrive 3 reminds me a lot of something we would've deployed back then: something that can deliver the performance but whose track record isn't proven. OCZ insists that the RevoDrive 3 isn't targeted at servers, although it fully expects users to deploy it in machines running server OSes.

The RevoDrive 3 X2's performance shouldn't be surprising. In fact, you should be able to get similar performance out of a 4-drive RAID-0 array of Vertex 3s. Unfortunately you wouldn't be able to do so on a 6-series Intel motherboard as you're limited to two 6Gbps SATA ports. You'd either need to invest in a 4-port 6Gbps SATA RAID card or look at AMD's 8/9-series chipsets, which does make the RevoDrive 3 X2 a little more attractive. Ultimately this has been one of my biggest issues with these multi-controller PCIe SSDs: they rarely offer a tangible benefit over a DIY RAID setup.

For the majority of users the RevoDrive 3 X2 is simply overkill. I even demonstrated in some of our IO-bound tests that you're bottlenecked by the workload before you're limited by the hardware. That being said, if you have the right workload, I've already shown that you can push nearly 1.5GB/s of data through the card and hit random IOPS numbers of over 180K (~756MB/s in our QD32 test). Even if you have the workload, however, I still have two major concerns: TRIM support and reliability.

While I'm glad you can finally secure erase the drive under Windows, missing TRIM is a tangible downside in my opinion. The only salvation is the fact that if you're running with mostly compressible data, SandForce's controllers tend to be very resilient and you probably won't miss TRIM. If your workload is predominantly incompressible (e.g. highly compressed videos, images or data of a highly random makeup) then perhaps something SandForce based isn't the best option for you to begin with.

The reliability issue is what will likely keep the RevoDrive 3 out of mission critical deployments. A single controller failure will kill your entire array, not to mention the recent unease about using anything SandForce SF-2281 based. OCZ tells me it hasn't seen a single BSOD issue on the RevoDrive 3 X2 thus far (it's currently running version 2.06 of the OCZ/SF firmware) however it'll be updated to the latest firmware before shipping in late July just in case.

As with anything else in the SSD space, I'd suggest waiting to see how the RevoDrive 3 X2 works deployed in an environment similar to your own before pulling the trigger.
