Original Link: https://www.anandtech.com/show/3949/oczs-fastest-ssd-the-ibis-and-hsdl-interface-reviewed
OCZ's Fastest SSD, The IBIS and HSDL Interface Reviewed
by Anand Lal Shimpi on September 29, 2010 12:01 AM EST
Take virtually any modern-day SSD and measure how long it takes to launch a single application. You’ll usually notice a big advantage over a hard drive, but you’ll rarely find a difference between two different SSDs. Present day desktop usage models aren’t able to stress the performance high-end SSDs are able to deliver. What differentiates one drive from another is really performance in heavy multitasking scenarios or short bursts of heavy IO. Eventually this will change as the SSD install base increases and developers can use the additional IO performance to enable new applications.
In the enterprise market however, the workload is already there. The faster the SSD, the more users you can throw at a single server or SAN. There are effectively no limits to the IO performance needed in the high end workstation and server markets.
These markets are used to throwing tens if not hundreds of physical disks at a problem. Even our upcoming server upgrade uses no fewer than fifty-two SSDs across our entire network, and we’re small beans in the grand scheme of things.
The appetite for performance is so great that many enterprise customers are finding the limits of SATA unacceptable. While we’re transitioning to 6Gbps SATA/SAS, for many enterprise workloads that’s not enough. Answering the call, many manufacturers have designed PCIe-based SSDs that do away with SATA as the final drive interface. The designs range from a bunch of SATA-based devices paired with a PCIe RAID controller on a single card all the way to native PCIe controllers.
The OCZ RevoDrive, two SF-1200 controllers in RAID on a PCIe card
OCZ has been toying in this market for a while. The zDrive took four Indilinx controllers and put them behind a RAID controller on a PCIe card. The more recent RevoDrive took two SandForce controllers and did the same. The RevoDrive 2 doubles the controller count to four.
Earlier this year OCZ announced its intention to bring a new high speed SSD interface to the market. Frustrated with the slow progress of SATA interface speeds, OCZ wanted to introduce an interface that would allow greater performance scaling today. Dubbed the High Speed Data Link (HSDL), OCZ’s new interface delivers 2 - 4GB/s (that’s right, gigabytes) of aggregate bandwidth to a single SSD. It’s an absolutely absurd amount of bandwidth, definitely more than a single controller can feed today - which is why the first SSD to support it will be a multi-controller device with internal RAID.
Instead of relying on a SATA controller on your motherboard, HSDL SSDs feature a 4-lane PCIe SATA controller on the drive itself. HSDL is essentially a PCIe cabling standard: it uses a standard SAS cable to carry four PCIe lanes between an SSD and your motherboard. On the system side you’ll just need a dumb card with some amount of logic to grab the cable and fan the signals out to a PCIe slot.
The first SSD to use HSDL is the OCZ IBIS. As the spiritual successor to the Colossus, the IBIS incorporates four SandForce SF-1200 controllers in a single 3.5” chassis. The four controllers sit behind an internal Silicon Image 3124 RAID controller, the same part used in the RevoDrive; it’s natively a PCI-X controller, picked to save cost. The 1GB/s of bandwidth you get from the PCI-X controller is routed to a Pericom PCIe x4 switch. The four PCIe lanes stemming from the switch are sent over the HSDL cable to the receiving card on the motherboard. The signal is then grabbed by a chip on the card and passed through to the PCIe bus. Minus the cable, this is basically a RevoDrive inside an aluminum housing. It's a not-very-elegant solution that works, but the real appeal would be controller manufacturers and vendors designing native PCIe-to-HSDL controllers.
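To put the design in perspective, here's a rough bandwidth budget for the chain described above. Treat it as a minimal sketch: the per-controller figure is SandForce's quoted SF-1200 sequential read rate (an assumption, not something measured in this review), while the rest follow from the bus specs. The point is that every hop tops out at roughly 1GB/s, so a single IBIS can't run four SF-1200s flat out no matter what.

```python
# Rough bandwidth budget for the IBIS signal chain, in MB/s. The per-controller
# figure is SandForce's quoted SF-1200 read rate (an assumption, not a
# measurement from this review); the other entries follow from the bus specs.
links = {
    "4 x SF-1200 (about 280MB/s each)":          4 * 280,
    "Silicon Image 3124 (PCI-X, 64-bit/133MHz)": 1066,
    "Pericom PCIe 1.1 x4 switch, per direction": 4 * 250,
    "HSDL cable (carries the same four lanes)":  4 * 250,
}

for name, mb_s in links.items():
    print(f"{name:45s} {mb_s:5d} MB/s")

# Every hop sits at roughly 1GB/s, so a single IBIS tops out around there
# regardless of how fast the four controllers could go individually.
print("Ceiling:", min(links.values()), "MB/s")
```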
OCZ is also bringing to market a 4-port HSDL card with a RAID controller on board ($69 MSRP). You’ll be able to RAID four IBIS drives together on a PCIe x16 card for an absolutely ridiculous amount of bandwidth. The attainable bandwidth ultimately boils down to the controller and design used on the 4-port card, however. I'm still trying to get my hands on one to find out for myself.
Meet the IBIS
OCZ sent us the basic IBIS kit. Every IBIS drive will come with a free 1-port PCIe card. Drive capacities range from 100GB all the way up to 960GB:
OCZ IBIS Lineup
Part Number | Capacity | MSRP
OCZ3HSD1IBS1-960G | 960GB | $2799
OCZ3HSD1IBS1-720G | 720GB | $2149
OCZ3HSD1IBS1-480G | 480GB | $1299
OCZ3HSD1IBS1-360G | 360GB | $1099
OCZ3HSD1IBS1-240G | 240GB | $739
OCZ3HSD1IBS1-160G | 160GB | $629
OCZ3HSD1IBS1-100G | 100GB | $529
Internally the IBIS is a pretty neat design. There are two PCBs, each with two SF-1200 controllers and associated NAND. They plug into a backplane with a RAID controller and a chip that muxes the four PCIe lanes that branch off the controller into the HSDL signal. It's all custom OCZ PCB-work, pretty impressive.
This is the sandwich of PCBs inside the IBIS chassis
Pull the layers apart and you get the on-drive RAID/HSDL board (left) and the actual SSD cards (right)
Four SF-1200 controllers in parallel, this thing is fast
There’s a standard SATA power connector and an internal mini-SAS connector. The pinout of the connector is proprietary, however; plugging it into a SAS card won’t work. OCZ chose the SAS connector to make part sourcing easier and keep launch costs to a minimum (designing a new connector doesn’t make things any easier).
The IBIS bundle includes an HSDL cable, which is a high-quality standard SAS cable. Apparently OCZ found signal problems with cheaper SAS cables. OCZ has validated HSDL cables at up to half a meter, which it believes should be enough for most applications today. There may obviously be some confusion caused by OCZ using the SAS connector for HSDL, but I suspect that if the standard ever catches on OCZ could easily switch to a proprietary connector.
The 1-port PCIe card only supports PCIe 1.1, while the optional 4-port card supports PCIe 1.1 and 2.0 and will auto-negotiate speed at POST.
The Vision
I spoke with OCZ’s CEO Ryan Petersen and he outlined his vision for me. He wants HSDL and associated controllers to be present on motherboards. Instead of using PCIe SSDs, you’ll have HSDL connectors that can give you the bandwidth of PCIe. Instead of being limited to 3Gbps or 6Gbps as is the case with SATA/SAS today, you get gobs of bandwidth. We’re talking 2GB/s of bandwidth per drive (1GB/s up and 1GB/s down) on a PCIe 2.0 motherboard. To feed that sort of bandwidth, all OCZ has to do is RAID more SSD controllers internal to each drive (or move to faster drive controllers). Eventually, if HSDL takes off, controller makers wouldn’t have to target SATA; they could simply build native PCIe controllers. It’d shave off some component cost and some latency.
You can even have a multi-port IBIS drive
The real win for HSDL appears to be the high-end workstation and server markets. The single-port HSDL/IBIS solution is interesting for those who want a lot of performance in a single drive, but honestly you could roll your own with a RAID controller and four SandForce drives for less money. The real potential emerges once you start designing systems with multiple IBIS drives. With four of these drives you should be able to push multiple gigabytes per second of data, which is just unheard of in something that’s still relatively attainable.
The Test
Note our AnandTech Storage Bench doesn't always play well with RAIDed drives and thus we weren't able to run it on the IBIS.
CPU: | Intel Core i7 975 running at 3.33GHz (Turbo & EIST Disabled)
Motherboard: | Intel DX58SO (Intel X58)
Chipset: | Intel X58 + Marvell SATA 6Gbps PCIe
Chipset Drivers: | Intel 9.1.1.1015 + Intel IMSM 8.9
Memory: | Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: | eVGA GeForce GTX 285
Video Drivers: | NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: | 1920 x 1200
OS: | Windows 7 x64
PCMark Vantage & SYSMark 2007 Performance
With four SF-1200 controllers in RAID, the IBIS performs well as a desktop drive but honestly most desktop workloads just can’t stress it. In our PCMark Vantage test for example, overall performance went up 10% while drive performance went up 30%. Respectable gains, but not what you’d expect from RAIDing four SF-1200s together.
The fact of the matter is that most desktop workloads never see a queue depth (the number of IO requests outstanding at once) higher than 5, with the average being in the 1 - 3 range even for heavy multitaskers. It’s part of the reason we focus on low queue depths in our desktop Iometer tests. The IBIS isn’t targeted at desktop users, however, so we need to provide something a bit more stressful.
High Queue Depth Sequential Performance, Just Amazing
Our default sequential write tests have a queue depth of 1, representative of what most desktop users would encounter. Our random write tests bump the queue depth up to 3, but that's not high enough to really stress a four-drive SF-1200 RAID setup. What we’ve done here is provide performance results for the IBIS drive in our standard Iometer tests as well as at a queue depth of 32. The latter is only useful in showing you how performance can scale with very IO-dependent workloads, while the former points out that this is probably overkill for a desktop user.
We’ll start with sequential performance. With our standard queue depth of 1, we can write to the IBIS at 323MB/s. That’s a 50% improvement over a standard 120GB SandForce based drive like the Vertex 2 or Corsair Force.
When we crank up the queue depth to 32, simulating a ton of sequential IO, the performance gap grows tremendously. While standard SSDs are already limited by SATA at this point, the IBIS jumps to 675MB/s. That’s from a single HSDL port. A 4-way RAID of IBIS drives would deliver absolutely staggering performance assuming linear scaling.
Colossus owners will obviously feel very sad as a single SandForce drive comes close to outperforming the Colossus 500GB drive in this test. The IBIS just puts it to shame.
Sequential read speed is even more impressive. With a queue depth of 1 the IBIS was able to pull 372MB/s, a 77% increase over a single SandForce drive. Crank the queue depth up to 32 and our IBIS sample managed 804MB/s. Again, this is from a single HSDL channel, four of these in RAID-0 should be amazing.
Making Random Performance Look Sequential
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our random tests write 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). Our random read test spans the entirety of the drive. I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
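If you're curious what a test like this actually exercises, the sketch below is a rough Python-only approximation of the random-write portion: 4KB writes at random, aligned offsets across an 8GB span, with a few worker threads standing in for the outstanding-IO count. This is not the Iometer configuration used for these results, and the target file name is hypothetical; Iometer also issues unbuffered IO against the drive, which this sketch does not, so treat the output as an illustration of the access pattern rather than a benchmark.

```python
# Rough approximation of the 4KB random-write test described above:
# 4KB writes at random, 4KB-aligned offsets within an 8GB span, with a
# few threads keeping roughly QUEUE_DEPTH IOs in flight. Illustrative only.
import os, random, threading, time

PATH        = "testfile.bin"   # hypothetical target (a real test hits the raw drive)
SPAN        = 8 * 1024**3      # 8GB test span
BLOCK       = 4 * 1024         # 4KB transfer size
QUEUE_DEPTH = 3                # concurrent IOs, matching the desktop test
RUNTIME     = 180              # 3 minutes, matching the test length

payload   = os.urandom(BLOCK)  # incompressible data (SandForce compresses repetitive data)
completed = 0
lock      = threading.Lock()
deadline  = time.time() + RUNTIME

def worker():
    global completed
    with open(PATH, "r+b", buffering=0) as f:
        while time.time() < deadline:
            f.seek(random.randrange(SPAN // BLOCK) * BLOCK)  # aligned random offset
            f.write(payload)
            with lock:
                completed += 1

with open(PATH, "wb") as f:    # pre-create the 8GB file
    f.truncate(SPAN)

threads = [threading.Thread(target=worker) for _ in range(QUEUE_DEPTH)]
for t in threads: t.start()
for t in threads: t.join()

print(f"{completed * BLOCK / RUNTIME / 1024**2:.1f} MB/s average over {RUNTIME}s")
```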
In case you didn't do the math in your head, 510MB/s of 4KB random writes translates to 130,000 IOPS. That's insane. The IBIS can deliver faster random writes than the RevoDrive can manage sequential writes. A fully taxed SandForce drive manages 200MB/s; the performance advantage here is huge. Again, I can't stress enough how fast four of these things must be.
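For anyone who wants to check that conversion, it's a one-liner; the quoted figure works out if a megabyte is treated as 1024KB:

```python
# 510MB/s of 4KB transfers expressed as IOPS (1MB treated as 1024KB)
throughput_mb_s = 510
block_kb = 4
print(f"{throughput_mb_s * 1024 / block_kb:,.0f} IOPS")  # -> 130,560, i.e. ~130K
```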
Random read performance across the drive's entire LBA space drops the peak performance a bit but we're still well beyond what 3Gbps SATA can deliver, although technically 6Gbps SATA would be enough here. You'll note that in a desktop workload (QD=3) there's no advantage to the IBIS drive. This thing really only makes sense for very I/O intensive workloads.
No TRIM, but Garbage Collection
The IBIS drive features a four-controller internal RAID, and there’s currently no way to pass TRIM along to drives in a RAID array, which means it’s very important to have a resilient controller. OCZ stuck with SandForce and the SF-1200, the most resilient controller on the market today. To make things better, however, the drive supports idle-time garbage collection. With an active NTFS partition on the drive, no IO activity and sufficient free space, the controllers will begin cleaning up the NAND. The effect is profound. Below we have a clean drive:
Now, after we've filled the drive and tortured it with random writes:
Note that peak low queue-depth read speed dropped from ~233MB/s down to 120MB/s. Now here’s performance after the drive has been left idle for half an hour:
Remember this is very low queue depth testing so the peak values aren't very high, but it's enough to show the idle garbage collection working.
Final Words
The OCZ IBIS is very fast, as you’d expect. Technically it’s the fastest SSD we’ve ever tested, but that’s because it’s actually four SF-1200 SSDs in a single 3.5” chassis. From a cost standpoint you’re better off grabbing four SandForce drives and rolling your own RAID, but if for whatever reason you don’t want to do that, a single IBIS will get the job done. It's faster than a Colossus by a huge margin and even faster than OCZ's recently introduced RevoDrive; you just need the workload to stress it.
I’m very comfortable with SandForce as a controller when it comes to not having TRIM. The IBIS’ garbage collection is aggressive enough to fix any significant fragmentation with a bit of idle time.
As I mentioned earlier, however, the real goal here is to RAID multiple IBIS drives together with the 4-port card. We measured maximum sequential throughput of 675MB/s for a single IBIS drive; assuming somewhat linear scaling, you can expect over 2.5GB/s out of a 4-drive IBIS array. This would be all off of a single PCIe x16 card. At this point the 4-port cards aren’t bootable; OCZ believes it is still several weeks away from having that support enabled.
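That projection is nothing more exotic than multiplication, with the obvious caveat that the 4-port card's own controller has to keep up:

```python
# 4-drive projection, assuming near-linear scaling and no new bottleneck
# introduced by the 4-port card's controller
single_drive_mb_s = 675  # measured sequential write at QD=32
print(f"{4 * single_drive_mb_s / 1024:.2f} GB/s")  # -> ~2.64 GB/s, i.e. "over 2.5GB/s"
```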
Introducing a new interface is a very bold move for a company that got its start as just another memory vendor. I asked Ryan if motherboard manufacturers were signed up to deliver boards with HSDL connectors; he said they were. Motherboard manufacturers often agree to do a lot that doesn’t end up as a launched, retail product, though. Motherboard makers aren’t the only ones who have taken notice, as apparently RAID card manufacturers also want a piece of the HSDL pie. All of this is completely up in the air at this point. It’s one thing to mention you have lots of interest; it’s another to point at a mature market with products actually launching. Getting any company, much less a motherboard or RAID card manufacturer, to commit resources to delivering features that support a market of zero is a tall order. I’m not saying it can’t be done, I’m saying that if Ryan Petersen can achieve it, he will have been the first to do it among a long list of memory companies who tried to be something more.
Ryan and OCZ are doing what they’re good at: finding a niche and trying their best to get there quicker than the big guys. There’s simply no PCIe SSD standard for the high end. For a single PCIe SSD there’s not much need, but if you want to RAID together multiple PCIe SSDs you’ll either run out of PCIe slots or have a very fast, very unbootable RAID.
The HSDL standard is completely open. OCZ tells me that there are no licensing fees and all companies are completely free to implement it. In fact, that’s what OCZ would like to see happen. Currently the only way to get the HSDL spec is to contact OCZ and request it, but the company is apparently working on setting up something a little more open. At the end of the day OCZ still wants to make money selling SSDs; HSDL’s success would simply let the company sell more expensive SSDs.
Drives will be available in two weeks, but let's hope some of these integrated motherboard designs pan out.
HSDL sounds like a simple solution to the problem of delivering more interface bandwidth to SSDs. The interface should scale well since it’s built on PCIe, it’s just a matter of whether or not companies will support it. I believe it’s at least worth a try.