If I understood this correctly, OCZ is just using PCIe signaling over a SAS cable (with accompanying card to demux and pass on to the PCIe lanes)? That's ingenious.
I'm pretty sure PCIe RAID controllers communicate with regular SATA SSDs through the SATA interface, whereas HSDL brings PCIe all the way to the drive. This would be why IBIS supports much higher IOPS than previous PCIe SSD solutions like the Z-Drive (which were limited by SATA RAID). Someone please correct me if I'm wrong about this.
675 MB/s in sequential write? I just feel sad all of a sudden. SSDs are so forbidden for me right now, but this is a true monster. Maybe they'll make a more "down-to-Earth" version next time. Good stuff, anyway.
You still burn DVDs? Hell, I stopped doing that a few months ago when I realized that 99% of the stuff I burned was really a 'watch-once and never again' thing and went out to get one of those 2.5" 1TB external hard drives. Haven't burned another DVD since.
Too many proprietary peripherals. If I am going to use a PCIe card, I might as well just stick with a RevoDrive. PCIe slots, proprietary slot connectors, proprietary cable, proprietary disk drive interface, blah. I'll just stick with their own RevoDrive for now and wait for SATA or SAS controllers to pick up speed.
If this innovation eventually makes its way down to personal computers, it could simplify board design by allowing us to do away with the ever-changing SATA standard. A PCIe bus for all drives, with so much bandwidth that any bus-level bottleneck would be a thing of the past. One bus to rule them all!
I want AT to show me how long it will take to install Windows 7 from a fast USB stick, then install all the biggest and most IO-hungry apps. Then I want to see how long it takes to start them all at the same time after putting them in the Startup folder. Then I want to see how well 5 virtual machines would work running synthetic benchmarks (all at the same time) under Windows 7.
Then I will have a clear view of how fast these SSD monsters are.
...or the past. Well, it's probably déjà vu; it sounds like it. This is exactly what the MCA (Micro Channel Architecture) bus did back in the day. My IBM PS/2 35 SX's hard drive connected directly to this bus, which was also the way the plug-in cards connected...
"Even our upcoming server upgrade uses no less than fifty two SSDs across our entire network, and we’re small beans in the grand scheme of things." That's why the prices of SSD stay so high. The demand on the server market is way to high. Manufacturers do not need to fight for the mainstream consumer.
I hate to feed trolls but this one is so easy to refute...
The uptake of SSDs in the enterprise ultimately makes them cheaper/faster for consumers. If demand increases, so does production. Also, enterprise users buy different drives; the tech from those fancy beasts typically ends up in consumer products.
The analogy is that big car manufacturers have race teams for bleeding edge tech. If Anand bought a bunch of track ready Ferraris it wouldn't make your Toyota Yaris more expensive.
Flash production will ramp up to meet demand. Econ 101.
Except that the supply of NAND flash is limited, so in this case the demand for the Ferraris does affect the demand for the Yarises. Further, advances in technology do not have anything to do with the shortage of flash. Still further, the supply of flash is very inelastic due to the high cost of entry and the possibly limited supply of raw materials (read: silicon). P.S. Do yourself a favor, do not teach me economics.
Sorry for the bump, but in my original message I simply expressed my surprise. I was not aware of the fact that SSDs are so widely available/used on the commercial side.
I have had a feeling that SSDs would quickly become limited by the SATA specification. Much of what OCZ is doing here isn't bad in principle, though I have plenty of questions about it.
Can the drive controller fall back to SAS when plugged into a motherboard's SAS port? (Reading the article, I suspect a very strong 'no' answer.) Can the HSDL spec and drive RAID controller be adapted to support this?
What booting limitations exist? Is a RAID driver needed for the controller inside the IBIS drive?
Are the individual SSD controllers connected to the drive's internal RAID controller seen by the operating system or BIOS? (I'd guess 'no' but not explicitly stated.)
Is the RAID controller on the drive seen as a PCI-E device?
Is the single port host card seen by the system's operating system, or is it entirely passive? Is the four-port HSDL card seen differently?
Does the SAS cabling handle PCI-E 3.0 signaling?
Is OCZ working on a native HSDL controller that'll convert PCI-E to ONFI? Would such a chip be seen as a regular old IDE device for easy OS installation and support for legacy systems? Would such a chip be able to support TRIM?
I've read Anandtech for years and I had to register and comment for the first time on what a poor article this is - the worst on an otherwise impressive site.
"Answering the call many manufacturers have designed PCIe based SSDs that do away with SATA as a final drive interface. The designs can be as simple as a bunch of SATA based devices paired with a PCIe RAID controller on a single card, to native PCIe controllers." -And IBIS is not one of them - if there is a bunch of NAND behind a SATA RAID controller, then the final drive interface is STILL SATA.
"Dubbed the High Speed Data Link (HSDL), OCZ’s new interface delivers 2 - 4GB/s (that’s right, gigabytes) of bi-directional bandwidth to a single SSD." -WRONG, HSDL pipes 4 PCIe 1.1 lanes to a PCIe to PCI-X bridge chip (as per the RevoDrive), connected to a SiI 3124 PCI-X RAID controller, out to 4 RAID'ed SF-1200 drives. And PCIe x4 is 1000MB/s bi-directional, or 2000MB/s aggregate - not that it matters, since the IBIS SSDs aren't going to see that much bandwidth anyway - only the regular old 300MB/s any other SATA300 drive would see. This is nothing new; it's a RevoDrive on a cable. We can pull this out of the article, but you're covering it up with as much OCZ marketing as possible.
Worse, it's all connected through a proprietary interface instead of the PCI Express External Cabling specification, published back in February 2007. By your own admission, you have a black box that is more expensive and slower than a native PCIe x4 2.0, 4-drive RAID-0. You can't upgrade the number of drives or the drive capacity, you can't part it out to sell, it's bigger than 4x2.5" drives, AND you don't get TRIM - the only advantage to using a single, monolithic drive. It's built around a proprietary interface that could (hopefully) be discontinued after this product.
This should have been a negative review from the start instead of a glorified OCZ press release. I hope you return to objectivity and fix this article.
Oh, and how exactly will the x16 card provide "an absolutely ridiculous amount of bandwidth" in any meaningful way if it's a PCIe x16 to 4x PCIe x4 switch? You'd have to RAID the 4 IBIS drives in software and you're still stuck with all the problems above.
I appreciate you taking the time to comment. I never want to bring my objectivity into question but I will be the first to admit that I'm not perfect. Perhaps I can shed more light on my perspective and see if that helps clear things up.
A standalone IBIS drive isn't really all that impressive. You're very right, it's basically a RevoDrive in a chassis (I should've mentioned this from the start, I will make it more clear in the article). But you have to separate IBIS from the idea behind HSDL.
As I wrote in the article the exciting thing isn't the IBIS itself (you're better off rolling your own with four SF-1200 SSDs and your own RAID controller). It's HSDL that I'm more curious about.
I believe the reason to propose HSDL vs. PCIe External Cabling has to do with the interface requirements for firmware RAID should you decide to put several HSDL drives on a single controller, however it's not something that I'm 100% on. From my perspective, if OCZ keeps the standard open (and free) it's a non-issue. If someone else comes along and promotes the same thing via PCIe that will work as well.
This thing only has room to grow if controller companies and/or motherboard manufacturers introduce support for it. If the 4-port card turns into a PCIe x16 to 4x PCIe x4 switch then the whole setup is pointless. As I mentioned in the article, there's little chance for success here but that doesn't mean I believe it's a good idea to discourage the company from pursuing it.
But why is HSDL interesting in the first place? Maybe I'm missing some use case, but what advantage does HSDL present over just skipping the cable and plugging the SSD directly into the PCIe slot?
I might see the concept of four drives plugged into a 16x controller, each getting 4x bandwidth, but then the question is "Why not just put four times the controllers on a single 16x card."
There's so much abstraction here... We've got flash -> SandForce controller -> SATA -> RAID controller -> PCI-X -> PCIe -> HSDL -> PCIe. That's a lot of steps! It's not like there aren't PCIe RAID controllers out there... If they don't have any choice but to stick with SATA due to SSD controller availability (and I'd point out that other manufacturers, like Fusion-io, decided to just go native PCIe), why not just use a PCIe RAID controller and shorten the path to flash -> SandForce controller -> RAID controller -> PCIe?
This just seems so convoluted, while other products like the ioDrive bypass all of this.
What doesn't make sense for this setup is how it would be beneficial over just using a hardware RAID card like the Areca 1880. Even with a 4-port HSDL card you would need to tie the drives together somehow (software RAID? drive pooling?). Then you have to think about redundancy. Doing RAID 5 with 4 IBIS drives you'd lose an entire drive. Take the cards out of the drives, plug them all into one Areca card, and you would have redundancy, RAID over all drives + hot-swap, and drive caching. (RAM on the Areca card would still be faster than 4 IBIS.)
Not seeing the selling point of this technology at all.
In our cabinets filled with blade servers and 'pizza box' servers we have limited space and a nightmare of cables. We cannot afford to use 3.5" boxes of chips and we don't want to manage an extra bundle of cables. Having a small single 'extension cord' allows us to put storage in another area. So having this kind of bandwidth-dense external cable does help.
Cabling is a HUGE problem for me... Having fewer cables is better, so building the controllers onto the drives themselves, where we can simply plug them into a port, is attractive. Our SANs do exactly this over copper using iSCSI. Unfortunately these drives aren't SANs with redundant controllers, power supplies, etc...
Software RAID systems forced to use the CPU for parity calcs, etc... yuck. Unless somehow they can be linked and RAID supported between the drives automagically?
For supercomputing systems that can deal with drive failures better, I can see this drive being the perfect fit.
Can the drive controller fall back to SAS when plugged into a motherboard's SAS port? (Reading the article, I suspect a very strong 'no' answer.) Can the HSDL spec and drive RAID controller be adapted to support this? -No, SAS is a completely different protocol and IBIS houses a SATA controller. -HSDL is just an interface to transport PCIe lanes externally, so you could, in theory, connect it to a SAS controller.
What booting limitations exist? Is a RAID driver needed for the controller inside the IBIS drive? -Yes, you need a RAID driver for the SiI 3124.
Are the individual SSD controllers connected to the drive's internal RAID controller seen by the operating system or BIOS? (I'd guess 'no' but not explicitly stated.) -Neither, they only see the SiI 3124 SATA RAID controller.
Is the RAID controller on the drive seen as a PCI-E device? -Probably.
Is the single port host card seen by the system's operating system or is entirely passive? -Passive, the card just pipes the PCIe lanes to the SATA controller.
Is the four-port HSDL card seen differently? -Vaporware at this point - I would guess it is a PCIe x16 to four PCIe x4 switch.
Does the SAS cabling handle PCI-E 3.0 signaling? -The current card and half-meter cable are PCIe 1.1 only at this point. The x16 card is likely a PCIe 2.0 switch to four PCIe x4 HSDL ports. PCIe 3.0 will share essentially the same electrical characteristics as PCIe 2.0, but it doesn't matter since you'd need a new card with a PCIe 3.0 switch and a new IBIS with a PCIe 3.0 SATA controller anyway.
Is OCZ working on a native HSDL controller that'll convert PCI-E to ONFI? Would such a chip be seen as a regular old IDE device for easy OS installation and support for legacy systems? Would such a chip be able to support TRIM? -That's something I'd love to see, and it would "do away with SATA as a final drive interface", but I expect it to come from Intel and not somebody reboxing old tech (OCZ).
I suspect that many companies are working on SSDs that do away with SATA as a final drive interface. Just as we saw companies like OCZ enter the SSD market before Intel, I suspect we'll see the same thing happen with PCIe SSD controllers. When the market is new/small it's easy for a smaller player to quickly try something before the bigger players get involved. The real question is whether or not a company like OCZ or perhaps even SandForce can do that and make it successful. I agree with you that in all likelihood it'll be a company like Intel to do it and gain mainstream adoption, but we've seen funny things happen in the past with the adoption of standards proposed by smaller players.
The difference in time to market is not so much company size as the willingness to take risk. Small companies have to take more risks to carve out their market. I think you will find that Intel was working on SSDs long before OCZ even thought about selling SSDs. The difference is that Intel spends a lot of time getting the product right before it is released. OCZ simply bypasses proper product development and does it in the field with paying customers.
The enthusiast market might be happy trading reliability for performance, but that is not going to happen in the enterprise market. (This is probably a moot point, as I actually doubt that enterprise is the true target market for this product.)
It would be great to see a lot more focus on aspects outside of performance, which, as you have alluded to, is no longer a relevant issue in terms of tangible benefit for the majority of users.
"Is the single port host card seen by the system's operating system or is entirely passive? -Passive, the card just pipes the PCIe lanes to the SATA controller."
Does that mean that Linux (and other non-MS) support is trivially workable?
So if the OS sees the SiI3124 then there's actually no RAID inside at all - the 3124 is a simple 4-port SATA host controller and the RAID there is software RAID.
It would be interesting to see what Linux sees on an IBIS. My guess is it will see a plain 3124 and 4 SSDs behind it with no RAID at all, and you can use dmraid etc. to RAID them together - so you are actually seeing what's happening and there's no magic left.
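If that guess is right, assembling the stripe yourself would look something like the sketch below. This is untested on an IBIS; the /dev/sd* names are placeholders, and driving mdadm from Python is only for illustration - check lsblk first and expect to adapt it.

```python
# Untested sketch: stripe the four exposed SandForce members with Linux md RAID,
# mirroring the RAID0 set the Silicon Image BIOS builds for Windows.
import subprocess

members = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical device names

subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=0",                       # RAID0, no redundancy
     f"--raid-devices={len(members)}",
     *members],
    check=True,  # raise if mdadm exits non-zero
)
# Format and mount /dev/md0 as usual afterwards.
```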
Correct. I just installed the 240GB IBIS into a Karmic machine and the kernel only sees 4 separate 55GB "drives" - effectively JBOD mode. The Silicon Image 240GB raid0 set (which is reported OK in BIOS during post) is not visible to Linux. I think a driver will need to be published for this. Will explore dmraid options next...
OK, so official comment from OCZ is "The drive is not compatible with Linux". Seriously, OCZ must be kidding. And no one at Anandtech seemed to think this little tidbit was newsworthy in 8 pages of gushing praise? I would think a statement to the effect that non-Windows OS support is non-existent would be mandatory in a review.
I went back and checked the OCZ datasheet, and sure enough, it only mentions Windows. So the blame rests with me for ASSUMING anyone introducing such a device would support contemporary OS's. But, I've just wasted $700, so I feel like a complete loser. And that'll CERTAINLY be the last product I ever buy from OCZ.
While it may be of some use to speed freaks and number crunchers who run PCs only to get a few more random numbers from benchmarks than others before them, I don't see the point in this. Yes, of course, bandwidth and stuff, all nice. But:
- You can't RAID a few of them into some proper RAID level (10, 5, 6, 50, 60) because every drive is already "RAID-ed" internally.
- You need a special add-on card which isn't anything standard - not to mention it offers nothing but ports to connect drives.
- There is a high degree of probability that such drives won't run properly with standard RAID cards (Areca, Adaptec, LSI, Intel - take your pick).
Instead of creating some new "standard", OCZ should focus on lowering the cost of SSDs. 2GB/s+ in RAID0 is easily achievable right now: you need 16 SSDs (which is exactly like 4x4 IBIS), a 16-port card (like the new Areca 1880s) and off you go. The only advantage for the OCZ IBIS here is less occupied space with 4 drives instead of 16, but even 16x2.5" SSDs take only 3x5.25" bays with 2x6 and 1x4 backplanes.
And for heavy-duty jobs there are always better solutions, like the GM PowerDrive-LSI for example. It delivers 1500MB/s (R)/1400MB/s (W) straight out of the box, supports all RAID levels from 0 to 60, has 512MB of onboard cache, and needs no special new card. It just works.
For some reason Anandtech seem to have lost objectivity when it comes to OCZ. This along with the Revodrive = epic fail. Consumers want products that are fit for market as opposed to underdeveloped and over priced products that are full of bugs. OCZ’s RMA policy is a substitute for quality control.
The HSDL-part makes me immediately want to skip this SSD.
But the mentioned RevoDrive is quite interesting, as that setup should give you access to TRIM on the FreeBSD operating system. Since the Silicon Image chip works as a normal AHCI SATA controller under non-Windows OSes, passing TRIM should also work.
I cannot confirm this, but in theory you should have TRIM when using the RevoDrive under FreeBSD and likely also Linux (if you disable the pseudo-RAID).
An affordable native PCI Express solution has yet to present itself. It would be very cool if they made an LSI HBA connected to 4 or 8 SandForce controllers for a >1GB/s solution that also supports TRIM (not under Windows). That would be very sleek!
SSDs in traditional drive packaging have one big advantage: they can be used in large disk arrays. This new gadget is not usable in anything other than a single computer or workstation, i.e. it's a DAS solution.
The whole industry from mid-level to high-end is moving to SAN storage (be it FC, iSCSI or InfiniBand). The IBIS has no future...
I would like to echo the comments made by disappointed1, particularly with regard to OCZ's attempt to introduce a proprietary standard when a cabling spec for PCIe already exists.
It's all well and good having intimate relationships with representatives of companies whose products you review, but having read this (and a couple of other) articles, I do find myself wondering who the real beneficiary is...
I recently posted on the Anandtech forums about SSDs and when we hit the law of diminishing returns. Less than 10 days later, Anand seems to have answered every question we discussed in that thread, from connection ports to software usage.
The review pretty much proves my point. After the current-gen SandForce SSDs, we are already hitting the law of diminishing returns. A SATA 6Gbps SSD, or even a quad-SandForce SSD like IBIS, won't give us any perceptible speed improvement in 90% of our day-to-day usage.
Until software or the OS takes advantage of the massive I/O from SSDs, a current SandForce SSD would be the best investment in terms of upgrades.
I forgot to mention: with next-gen SSDs that will be hitting 550MB/s and even slightly higher IOPS, there is absolutely NO NEED for HSDL in the consumer space.
While SATA is only half duplex, benchmarks show no evidence that this limitation causes any latency problems.
Indeed. The next-gen Intel SSD on SATA 3 will most likely deliver the same as this SSD, but without all the proprietary crap. Sure, the numbers will be lower, but the actual performance will most likely be the same, much cheaper, and very flexible (just RAID them if you want, or JBOD them, or whatever).
This stuff is bullshit for customers. It sounds like some geek created a funky setup to combine his SSDs for great performance, and that's it.
Oh, and other than that, I bet the latency will be higher on these OCZ drives just because of all the indirection, and latency is the number one thing that makes you feel the difference between SSDs.
In short, that product is absolutely useless crap.
So far, I'm still happy with my Intel gen1 and gen2. I'll wait a bit to find a new device that gives me a real noticeable difference, and that does not take away any of the flexibility I have right now with my simple 1-SATA-drive setups.
I bet Intel doesn't even know. Not that they care. Their SSDs will deliver much more for the customer: an easy, standards-based connection that exists in ANY current system, RAID-ability, TRIM, and most likely about the same performance experience as this device, but at a much, much lower cost.
Since this is really just a cable-attached SSD card, I don't see the need for yet another protocol/connection standard. The concept of RAID upon RAID also seems somewhat redundant.
I am also unclear as to what market this is aimed at. The price excludes the mass desktop market, and yet it also isn't aimed at the enterprise data center - that only leaves workstation power users, who are not a large market. Given the small target audience, motherboard makers will most likely not invest their resources in supporting HSDL on their motherboards.
It's a very interesting concept, and the performance is of course incredible. But like you mentioned, I just can't see the money being worth it at this point. It is simpler than building your own RAID, as you just plug it in and you're done with it.
But if motherboard makers can get on board, and the interface gains some traction, then I could certainly see it taking over SAS/SATA as the interface of choice in the future. I think OCZ is smart to offer it as a free and open standard. Offering a new standard for free has worked very well for other companies in the past. Especially when they are small.
<<Note that peak low queue-depth write speed dropped from ~233MB/s down to 120MB/s. Now here’s performance after the drive has been left idle for half an hour:>>
Isn't this a problem in a server environment? Maybe some servers never get half an hour of idle.
I suspect your resiliency test is flawed. Doesn't HD Tach essentially write a string of zeros to the drive? And a Sandforce drive would compress that and only write a tiny amount to flash memory. So it seems to me that you have only proved that the drives are resilient when they are presented with an unrealistic workload of highly compressible data.
I think you need to do two things to get a good idea of resiliency:
(1) Write a lot of random (incompressible) data to the drive to get it "dirty"
(2) Measure the write performance of random (incompressible) data while the SSD is "dirty"
It is also possible to combine (1) and (2) in a single test. Start with a "clean" SSD, then configure IO meter to write incompressible data continuously over the entire SSD span, say random 4KB 100% write. Measure the write speed once a minute and plot the write speed vs. time to see how the write speed degrades as the SSD gets dirty. This is a standard test done by Calypso system's industrial SSD testers. See, for example, the last graph here:
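To make that concrete, here is a rough Python sketch of such a test. It is not IOmeter or a Calypso tester: the device name, span, and run time are placeholders, the target device is overwritten (data is destroyed), and a real tool would use unbuffered/O_DIRECT I/O rather than this simplified loop - it only illustrates the shape of the test.

```python
# Sketch of a "dirty-state" write test: hammer the drive with incompressible
# 4KB random writes and log throughput once a minute to watch it degrade.
import os, random, time

DEVICE = "/dev/sdX"            # hypothetical test device - double-check before running
BLOCK = 4096                   # 4KB random writes
SPAN = 100 * 1024**3           # exercise a 100GB span of the drive
RUNTIME = 2 * 60 * 60          # run for two hours
INTERVAL = 60                  # report throughput once a minute

fd = os.open(DEVICE, os.O_WRONLY)
try:
    start = last = time.time()
    written = 0
    while time.time() - start < RUNTIME:
        payload = os.urandom(BLOCK)                      # incompressible data
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        written += os.pwrite(fd, payload, offset)
        now = time.time()
        if now - last >= INTERVAL:
            print(f"{now - start:7.0f}s  {written / (now - last) / 1024**2:8.1f} MB/s")
            written, last = 0, now
finally:
    os.close(fd)
```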
Also, there is a strange problem with Sandforce-controlled "dirty" SSDs having degraded write speed which is not recovered after TRIM, but it only shows up with incompressible data. See, for example:
It boils down to write amplification. I'm working on an article now to quantify exactly how low SandForce's WA is in comparison to other controller makers using methods similar to what you've suggested. In the case of the IBIS I'm simply trying to confirm whether or not the background garbage collection works. In this case I'm writing 100% random data sequentially across the entire drive using iometer, then peppering it with 100% random data randomly across the entire drive for 20 minutes. HDTach is simply used to measure write latency across all LBAs.
I haven't seen any issues with SF drives not TRIMing properly when faced with random data. I will augment our HDTach TRIM test with another iometer pass of random data to see if I can duplicate the results.
What I would like to see is SSDs with a standard mini-SAS2 connector. That would give a bandwidth of 24 Gbps, and it could be connected to any SAS2 HBA or RAID card. Simple, standards-compliant, and fast. What more could you want?
Well, inexpensive would be nice. I guess putting a 4x SAS2 interface in an SSD might be expensive. But at high volume, I would guess the cost could be brought down eventually.
After reading your response to my comment, I re-read the section of your article with HD Tach results, and I am now more confused. There are 3 HD Tach screenshots that show average read and write speeds in the text at the bottom right of the screen. In order, the avg read and writes for the 3 screenshots are:
- Screenshot 1: read 201.4, write 233.1
- Screenshot 2: read 125.0, write 224.3 ("Note that peak low queue-depth write speed dropped from ~233MB/s down to 120MB/s")
- Screenshot 3: read 203.9, write 229.2
I also included your comment from the article about write speed dropping. But are the read and write rates from HD Tach mixed up?
Ah, good catch, that's a typo. On most drives the HDTach pass shows an impact on write latency, but on SF drives the impact is actually on read speed (the writes appear to be mostly compressed/deduped), as there's much more data to track and recover since what's being read was originally stored in its entirety.
My guess is that if you wrote incompressible data to a dirty SF drive, that the write speed would be impacted similarly to the impact you see here on the read speed.
In other words, the SF drives are not nearly as resilient as the HD Tach write scans show, since, as you say, the SF controller is just compressing/deduping the data that HD Tach is writing. And HD Tach's writes do not represent a realistic workload.
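A quick way to see why that matters, using zlib as a rough stand-in for SandForce's compression/dedup logic (the real algorithm is proprietary and certainly different; this only shows how differently a zero-fill and random data behave):

```python
import os, zlib

zero_fill = bytes(1024 * 1024)         # what an HD Tach-style pass writes: all zeros
random_fill = os.urandom(1024 * 1024)  # incompressible, like encrypted or compressed media

print(len(zlib.compress(zero_fill)))    # ~1 KB: almost nothing would reach the flash
print(len(zlib.compress(random_fill)))  # ~1 MB: every byte has to be written
```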
I suggest you do an article revisiting the resiliency of dirty SSDs, paying particular attention to writing incompressible data.
> A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps.
That was also my point (posted at another site today):
What happens when 6G SSDs emerge soon to comply with the current 6G standard? Isn't that what many of us have been waiting for?
(I know I have!)
I should think that numerous HBA reviews will tell us which setups are best for which workloads, without needing to invest in a totally new cable protocol.
For example, the Highpoint RocketRAID 2720 is a modest SAS/6G HBA we are considering for our mostly sequential workload, i.e. updating a static database that we upload to our Internet website, archiving 10GB+ drive images, plus lots of low-volume updates.
That RR2720 uses an x8 Gen2 edge connector, and it provides us with lots of flexibility concerning the size and number of SSDs we eventually will attach i.e. 2 x SFF-8087 connectors for a total of 8 SSDs and/or HDDs.
If we want, we can switch to one or two SFF-8087 cables that "fan out" to discrete HDDs and/or discrete SSDs instead of the SFF-8087 cables that come with the 2720. $20-$30 USD, maybe?
Now, some of you will likely object that the RR2720 is not preferred for highly random high I/O environments; so, for a little more money there are lots of choices that will address those workloads nicely e.g. Areca, LSI etc.
Highpoint even has an HBA with a full x16 Gen2 edge connector.
What am I missing here? Repeating, once again, this important observation already made above:
"A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps."
If what I am reading today is exactly correct, then the one-port IBIS is limited to PCI-E 1.1: x4 @ 2.5 Gbps = 10 Gbps.
p.s. I would suggest, as I already have, that the SATA/6G protocol be modified to do away with the 8b/10b transmission overhead, and that the next SATA/SAS channel support 8 Gbps -- to sync it optimally with PCI-E Gen3's 128b/130b "jumbo frames". At most, this may require slightly better SATA and SAS cables, which is a very cheap marginal cost, imho.
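A quick back-of-the-envelope on what those encodings cost (my arithmetic, using the nominal line rates):

$$
\begin{aligned}
\text{SATA 6G with 8b/10b:}\quad & 6\ \text{Gb/s} \times \tfrac{8}{10} = 4.8\ \text{Gb/s} = 600\ \text{MB/s}\\
\text{Hypothetical 8G link with 128b/130b:}\quad & 8\ \text{Gb/s} \times \tfrac{128}{130} \approx 7.88\ \text{Gb/s} \approx 985\ \text{MB/s}\\
\text{PCI-E 1.1 x4 with 8b/10b:}\quad & 4 \times 2.5\ \text{Gb/s} \times \tfrac{8}{10} = 8\ \text{Gb/s} = 1000\ \text{MB/s per direction}
\end{aligned}
$$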
I for one found this to be a great article, and I always enjoy Anand's prose - and I think he surely has a book or two in him. You certainly can't fault him for politeness, even in the face of some fierce criticism. So come on critics, get real, look what kind of nonsense you have to put up with at Tom's et al!
Thank you for writing the article and shedding some light on this new product and standard from OCZ. Good to see you addressing the objectivity questions posted by some of the other posters. I enjoyed reading the additional information and analysis from some of the more informed posters. I do wonder if the source of the bias in your article is your not being aware of some of the developments in the SSD space, or just being exposed to too much vendor Kool-Aid. Not sure which one is the case, but hopefully my post will help generate a more balanced article.
---
I agree with several of the previous posters that both the IBIS drives and the HSDL interface are nothing new (no matter how hard OCZ marketing might want to make it look like they are). As one of the previous posters said, it is an SSD-specific implementation of a PCIe extender solution.
I'm not sure how HSDL started, but I see it as a bridging solution because OCZ does not have 6G technology today. Recently released 6G dual-port solutions will allow a single SSD drive to transfer up to a theoretical 1200 MB/s.
HSDL does allow higher scaling beyond 1200 MB/s per drive through channel bundling, but the SAS standardization committee is already looking into that option in case 12Gbps SAS ends up becoming too difficult to do. Channel bundling is inherent to SAS and addresses the bandwidth threat brought up by PCIe.
The PCIe channel bundling / IBIS drive solution from OCZ also looks a bit like an uncomfortable balancing act. Why do you need to extend the PCIe interface to the drive level? Is it just to maintain the more familiar "drive"-based use model? Or is it really a way to package 2 or more 3Gbps drives to get higher performance? Why not stick with a pure PCIe solution?
Assuming you don't buy into the SAS channel bundling story, or you need a drive today that has more bandwidth: why another proprietary solution? The SSD industry is working on NVMHCI, which will address the concern of proprietary PCIe card solutions and will allow addressing of PCIe-based cards as storage devices (Intel-backed and evolved from AHCI).
While OCZ's efforts are certainly to be applauded, especially given their aggressive roadmap plans, a more balanced article should include references to the above developments to put the OCZ solution into perspective. I'd love to see some follow-up articles on multi-port SAS and NVMHCI as a primer on how the SSD industry is addressing the technology limitations of today's SSDs. In addition, it might be interesting to talk about the recent SNIA performance spec (soon to include client-specific workloads) and JEDEC's endurance spec.
---
I continue to enjoy your in-depth reporting on the SSD side.
Not quite true, actually. PCIe drives of this type, meaning drives that are essentially multiple SSDs attached to a RAID controller on a single board, show up on *nix OSes as individual drives as well as the array device itself.
So under *nix you don't have to use the onboard RAID (which provides no performance benefit in this case; there is no battery-backed cache), and you can then create a single RAID0 across all the individual drives on all your PCIe cards, however many of those you have.
With the PCI-X converter this is really limited to 1066MB/s, but minus overhead it's probably in the 850-900MB/s range, which is what we see on one of the tests: just above 800MB/s.
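For context, that ceiling comes straight from the PCI-X bus itself (my arithmetic, not from the article):

$$
64\ \text{bits} \times 133\ \text{MHz} \div 8 \approx 1066\ \text{MB/s shared, before protocol overhead}
$$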
I hope the link works; if it doesn't, I linked to an LSI 8204ELP for $210. It has a single 4x SAS connector good for 1200MB/s, and OCZ could link that straight into a 3.5" bay device with 4x SandForce SSDs on it. This would be about the same price or cheaper than this IBIS device, while giving 1200MB/s, which is going to be about 40% faster than the IBIS.
It would make MUCH MUCH more sense to me to simply have a 3.5" bay device with a 4x SAS port on it which can connect to just about any hard drive controller. The HSDL interface thing is completely unnecessary with the current PCI-X converter chip on the SSD.
Why are we concerning ourselves with memory controllers instead of just going directly to DMA? Just because these devices are static RAMs doesn't change the fact that they are memory. Just treat them as such. It's called KISS.
One clarification: these devices and controllers use multi-lane SAS internal ports and cables, but are electrically incompatible. What will happen if someone mistakenly attaches an IBIS to a SAS controller, or a SAS device to the HSDL controller? Do the devices get fried, or is there at least some keying on the connectors so such a mistake can be avoided?
What PCI device do these IBISes present? Is it something standard like AHCI that has drivers for every OS, or something proprietary that needs a new driver written and integrated into all relevant OSes?
OK, it looks like the interface to the host is the SiI 3124. This is a widely supported SATA HBA and has drivers for most operating systems.
But the SiI3124 is just a SATA host controller - no RAID. So the RAID must be done host-side, or softRAID in other words. It also means Linux should see 4 distinct SSD devices.
Hello, I don't remember if I already posted my question, sorry!
But I installed one IBIS 160GB in the following configuration:
ASUS P6T WS Pro (latest BIOS & drivers), Intel Core i7 965 Extreme 3.2GHz, Kingston DDR3 1600MHz - 12GB, NVIDIA Quadro FX4800 graphics card, 2x Seagate SAS 450GB, Microsoft Windows 7 Pro
After installing Win7 without problems, I installed BitDefender antivirus and several apps (including the Adobe package and Microsoft Office Pro), and configured Updates to NOT AUTOMATIC! When I shut down my computer, the system started downloading 92 updates (without my permission)?
When I restarted... crash, error 0x80070002. Impossible to restore (I had made a system image, but the day before!).
Reinstalled, and while I was typing the key codes for Microsoft Visio Pro... another crash! Same problem!
My opinion of the IBIS HSDL box: it's a very poor assembly design! It's impossible to connect the power connector to it, and I had to dismantle the front plate to get access to the power connector! Now I wonder if I have to follow OCZ's advice about the BIOS configuration?
They are saying:
" You must set you BIOS to use "S1 Sleep Mode" for proper operation. Using S3 or AUTO may cause instability ".
And what about the internal HDDs?
Is there any member who has already installed such an IBIS and uses it regularly?
If the answer is yes, can you please tell me how you configured your system?
punjabiplaya - Wednesday, September 29, 2010 - link
If I understood this correctly, OCZ is just using PCIe signaling over a SAS cable (with accompanying card to demux and pass on to the PCIe lanes)? That's ingenious.davecason - Wednesday, September 29, 2010 - link
This method also makes it easy to port it to a laptop interface through an express card socket.vol7ron - Wednesday, September 29, 2010 - link
Isn't that what PCIe RAID controllers do (minus the SAS cable)?TinyTeeth - Saturday, October 2, 2010 - link
I'm pretty sure they communicate with regular SATA SSD drives through the SATA interface, whereas HSDL brings PCIe all the way to the SSD drive. This would be why IBIS supports much higher IOPS than previous PCIe SSD solutions like the Z-Drive (which were limited by SATA RAID). Someone please correct me if I'm wrong about this.Ethaniel - Wednesday, September 29, 2010 - link
675 MB/s in sequential write? I just feel sad all of a sudden. SSDs are so forbidden for me right now, but this is a true monster. Maybe they'll make a more "down-to-Earth" version next time. Good stuff, anyway.vol7ron - Wednesday, September 29, 2010 - link
If only I could burn DVDs that fast.Lerianis - Saturday, October 2, 2010 - link
You still burn DVD's? Hell, I stopped doing that a few months ago when I realized that 99% of the stuff I burned was really a 'watch-once and never again' thing and went out to get one of those 2.5" 1TB external hard drives.Haven't burned another DVD since.
rqle - Wednesday, September 29, 2010 - link
too much proprietary peripherals, if i am going to use a PCIe card, I might as well just stick with a revodrive. pcie slots, proprietary slots connectors, proprietary cable, proprietary disk drive interface, blah. ill just stick with their own revodrive for now and wait for sata or sas controllers to pick up speed.jo-82 - Wednesday, September 29, 2010 - link
SAS cabels aren't that expensive these days, and the most companies who would by one of these use them today anyway.And good luck waiting the next 3-5 years or so for SATA 12GB ;)
vol7ron - Wednesday, September 29, 2010 - link
Gb, not GBclovis501 - Wednesday, September 29, 2010 - link
If this innovation will eventually make it's way down to personal computer, it could simplify board design by allowing us to do away with the ever-changing SATA standard. A PCI Bus for all drives, and so much bandwidth than any bus-level bottleneck would be a thing of the past. One Bus to rule them all!LancerVI - Wednesday, September 29, 2010 - link
One bus to rule them all! That's a great point. One can hope. That would be great!AstroGuardian - Wednesday, September 29, 2010 - link
I want AT to how me how long will it take to install Windows 7 from a fast USB stick, install all the biggest and IO hungry apps. Then i want to see how long will it take to start them all @ the same time having been put in the Start-up folder. Then i want to see how well would work 5 virtual machines doing some synthetic benchmarks (each one @ the same time) under windows 7.Than i will have a clear view of how fast these SSD monsters are.
Minion4Hire - Wednesday, September 29, 2010 - link
Well aren't you demanding... =pPerisphetic - Thursday, September 30, 2010 - link
...or the past.Well it's probably deja vu, sounds like it.
This is exactly what the MCA (Micro Channel architecture) bus did.back in the day. My IBM PS/2 35 SX's hard drive connected directly to this bus which was also the way the plug in cards connected...
jonup - Wednesday, September 29, 2010 - link
"Even our upcoming server upgrade uses no less than fifty two SSDs across our entire network, and we’re small beans in the grand scheme of things."That's why the prices of SSD stay so high. The demand on the server market is way to high. Manufacturers do not need to fight for the mainstream consumer.
mckirkus - Wednesday, September 29, 2010 - link
I hate to feed trolls but this one is so easy to refute...The uptake of SSDs in the enterprise ultimately makes them cheaper/faster for consumers. If demand increases so does production. Also enterprise users buy different drives, the tech from those fancy beasts typically ends up in consumer products.
The analogy is that big car manufacturers have race teams for bleeding edge tech. If Anand bought a bunch of track ready Ferraris it wouldn't make your Toyota Yaris more expensive.
Flash production will ramp up to meet demand. Econ 101.
jonup - Wednesday, September 29, 2010 - link
Except that there is a limited supply of NAND Flash supply is limited while the demand for Ferraris does affect the demand for Yarises. Further, advances in technologies does not have anything to do with the shortage for Flash. Still further, supply for flash is very inelastic due to the high cost of entry and possibly limited supply of raw materials (read silicon).p.s. Do yourself a favor, do not teach me economics.
jonup - Wednesday, September 29, 2010 - link
Sorry for the bump, but in my original massage I simply expressed my supprise. I was not aware of the fact that SSD are so widely available/used in the commercial side.Ushio01 - Wednesday, September 29, 2010 - link
I believe the random write and read graphs have been mixed up.Ushio01 - Wednesday, September 29, 2010 - link
My mistake.Kevin G - Wednesday, September 29, 2010 - link
I have had a feeling that SSD's would quickly become limited by the SATA specification. Much of what OCZ is doing here isn't bad in principle. Though I have plenty of questions about it.Can the drive controller fall back to SAS when plugged into a motherboard's SAS port? (Reading the article, I suspect a very strong 'no' answer.) Can the HSDL spec and drive RAID controller be adapted to support this?
What booting limitations exist? Is a RAID driver needed for the controller inside the IBIS drive?
Are the individual SSD controllers connected to the drive's internal RAID controller seen by the operating system or BIOS? (I'd guess 'no' but not explicitly stated.)
Is the RAID controller on the drive seen as a PCI-E device?
Is the single port host card seen by the system's operating system or is entirely passive? Is the four HDSL port card seen differently?
Does the SAS cabling handle PCI-E 3.0 signaling?
Is OCZ working on a native HDSL controller that'll convert PCI-E to ONFI? Would such a chip be seen as a regular old IDE device for easy OS installation and support for legacy systems? Would such a chip be able to support TRIM?
disappointed1 - Wednesday, September 29, 2010 - link
I've read Anandtech for years and I had to register and comment for the first time on what a poor article this is - the worst on an otherwise impressive site."Answering the call many manufacturers have designed PCIe based SSDs that do away with SATA as a final drive interface. The designs can be as simple as a bunch of SATA based devices paired with a PCIe RAID controller on a single card, to native PCIe controllers."
-And IBIS is not one of them - if there is a bunch of NAND behind a SATA RAID controller, then the final drive interface is STILL SATA.
"Dubbed the High Speed Data Link (HSDL), OCZ’s new interface delivers 2 - 4GB/s (that’s right, gigabytes) of bi-directional bandwidth to a single SSD."
-WRONG, HSDL pipes 4 PCIe 1.1 lanes to a PCIe to PCI-X bridge chip (as per the RevoDrive), connected to a SiI 3124 PCI-X RAID controller, out to 4 RAID'ed SF-1200 drives. And PCIe x4 is 1000MB/s bi-directional, or 2000MB/s aggregate - not that it matters, since the IBIS SSDs aren't going to see that much bandwidth anyway - only the regular old 300MB/s any other SATA300 drive would see. This is nothing new; it's a RevoDrive on a cable. We can pull this out of the article, but you're covering it up with as much OCZ marketing as possible.
Worse, it's all connected through a proprietary interface instead of the PCI Express External Cabling, spec'd Febuary 2007 (http://www.pcisig.com/specifications/pciexpress/pc...
By your own admission, you have a black box that is more expensive and slower than a native PCIe x4 2.0, 4 drive RAID-0. You can't upgrade the number of drives or the drive capacity, you can't part it out to sell, it's bigger than 4x2.5" drives, AND you don't get TRIM - the only advantage to using a single, monolithic drive. It's built around a proprietary interface that could (hopefully) be discontinued after this product.
This should have been a negative review from the start instead of a glorified OCZ press release. I hope you return to objectivity and fix this article.
Oh, and how exactly will the x16 card provide "an absolutely ridiculous amount of bandwidth" in any meaningful way if it's a PCIe x16 to 4x PCIe x4 switch? You'd have to RAID the 4 IBIS drives in software and you're still stuck with all the problems above.
Anand Lal Shimpi - Wednesday, September 29, 2010 - link
I appreciate you taking the time to comment. I never want to bring my objectivity into question but I will be the first to admit that I'm not perfect. Perhaps I can shed more light on my perspective and see if that helps clear things up.A standalone IBIS drive isn't really all that impressive. You're very right, it's basically a RevoDrive in a chassis (I should've mentioned this from the start, I will make it more clear in the article). But you have to separate IBIS from the idea behind HSDL.
As I wrote in the article the exciting thing isn't the IBIS itself (you're better off rolling your own with four SF-1200 SSDs and your own RAID controller). It's HSDL that I'm more curious about.
I believe the reason to propose HSDL vs. PCIe External Cabling has to do with the interface requirements for firmware RAID should you decide to put several HSDL drives on a single controller, however it's not something that I'm 100% on. From my perspective, if OCZ keeps the standard open (and free) it's a non-issue. If someone else comes along and promotes the same thing via PCIe that will work as well.
This thing only has room to grow if controller companies and/or motherboard manufacturers introduce support for it. If the 4-port card turns into a PCIe x16 to 4x PCIe x4 switch then the whole setup is pointless. As I mentioned in the article, there's little chance for success here but that doesn't mean I believe it's a good idea to discourage the company from pursuing it.
Take care,
Anand
Anand Lal Shimpi - Wednesday, September 29, 2010 - link
I've clarified the above points in the review, hopefully this helps :)Take care,
Anand
disappointed1 - Thursday, September 30, 2010 - link
Thanks for the clarification Anand.Guspaz - Wednesday, September 29, 2010 - link
But why is HSDL interesting in the first place? Maybe I'm missing some use case, but what advantage does HSDL present over just skipping the cable and plugging the SSD directly into the PCIe slot?I might see the concept of four drives plugged into a 16x controller, each getting 4x bandwidth, but then the question is "Why not just put four times the controllers on a single 16x card."
There's so much abstraction here... We've got flash -> sandforce controller -> sata -> raid controller -> pci-x -> pci-e -> HSDL -> pci-e. That's a lot of steps! It's not like there aren't PCIe RAID controllers out there... If they don't have any choice but to stick to SATA (and I'd point out that other manufacturers, like Fusion-IO decided to just go native PCIe) due to SSD controller availability, why not just use PCIe RAID controller and shorten the path to flash -> sandforce controller -> raid controller -> pci-e?
This just seem so convoluted, while other products like the ioDrive just bypass all this.
bman212121 - Wednesday, September 29, 2010 - link
What doesn't make sense for this setup is how it would be beneficial over just using a hardware RAID card like the Areca 1880. Even with a 4 port HSDL card you would need to tie the drives together somehow. (software raid? Drive pooling?) Then you have to think about redundancy. Doing raid 5 with 4 Ibis drives you'd lose an entire drive. Take the cards out of the drives, plug them all into one Areca card and you would have redundancy, RAID over all drives + hotswap, and drive caching. (RAM on the Areca card would still be faster than 4 ibis)Not seeing the selling point of this technology at all.
rbarone69 - Thursday, September 30, 2010 - link
In our cabinets filled with blade servers and 'pizza box' severs we have limited space and a nightmare of cables. We cannot afford to use 3.5" boxes of chips and we dont want to manage an extra bundle of cables. Having a small single 'extension cord' allows us to store storage in another area. So having this kind of bandwidth dense external cable does help.Cabling is a HUGE problem for me... Having fewer cables is better so building the controllers on the drives themselves where we can simply plug them into a port is attractive. Our SANs do exactly this over copper using iSCSI. Unfortunatly these drives aren't SANs with redundant controllers, power supplies etc...
Software raid systems forced to use the CPU for parity calcs etc... YUK Unless somehow they can be linked and RAID supported between the drives automagically?
For supercomputing systems that can deal with drive failures better, I can see this drive being the perfect fit.
disappointed1 - Wednesday, September 29, 2010 - link
Can the drive controller fall back to SAS when plugged into a motherboard's SAS port? (Reading the article, I suspect a very strong 'no' answer.) Can the HSDL spec and drive RAID controller be adapted to support this?-No, SAS is a completely different protocol and IBIS houses a SATA controller.
-HSDL is just a interface to transport PCIe lanes externally, so you could, in theory, connect it to an SAS controller.
What booting limitations exist? Is a RAID driver needed for the controller inside the IBIS drive?
-Yes, you need a RAID driver for the SiI 3124.
Are the individual SSD controllers connected to the drive's internal RAID controller seen by the operating system or BIOS? (I'd guess 'no' but not explicitly stated.)
-Neither, they only see the SiI 3124 SATA RAID controller.
Is the RAID controller on the drive seen as a PCI-E device?
-Probably.
Is the single port host card seen by the system's operating system or is entirely passive?
-Passive, the card just pipes the PCIe lanes to the SATA controller.
Is the four HDSL port card seen differently?
-Vaporware at this point - I would guess it is a PCIe x16 to four PCIe x4 switch.
Does the SAS cabling handle PCI-E 3.0 signaling?
-The current card and half-meter cable are PCIe 1.1 only at this point. The x16 card is likely a PCIe 2.0 switch to four PCIe x4 HDSLs. PCIe 3.0 will share essentially the same electrical characteristics as PCIe 2.0, but it doesn't matter since you'd need a new card with a PCIe 3.0 switch and a new IBIS with a PCIe 3.0 SATA controller anyway.
Is OCZ working on a native HDSL controller that'll convert PCI-E to ONFI? Would such a chip be seen as a regular old IDE device for easy OS installation and support for legacy systems? Would such a chip be able to support TRIM?
That's something I'd love to see and it would "do away with SATA as a final drive interface", but I expect it to come from Intel and not somebody reboxing old tech (OCZ).
Anand Lal Shimpi - Wednesday, September 29, 2010 - link
I suspect that many companies are working on SSDs that do away with SATA as a final drive interface. Just as we saw companies like OCZ enter the SSD market before Intel, I suspect we'll see the same thing happen with PCIe SSD controllers. When the market is new/small it's easy for a smaller player to quickly try something before the bigger players get involved. The real question is whether or not a company like OCZ or perhaps even SandForce can do that and make it successful. I agree with you that in all likelihood it'll be a company like Intel to do it and gain mainstream adoption, but we've seen funny things happen in the past with the adoption of standards proposed by smaller players.Take care,
Anand
Ao1 - Wednesday, September 29, 2010 - link
The difference in time to market is not so much the company size it is the willingness to take risk. Small companies have to take more risks to carve out their market. I think you will find that Intel was working on SSD long before OCZ even thought about selling SSD’s. The difference is that Intel spends a lot of time getting the product right before it is released. OCZ simply bypass proper product development and do it in the field with paying customers.The enthusiast market might be happy trading off performance with reliability but that is not going to happen in the enterprise market. (This is probably a moot point as I actually doubt that enterprise is the true target market for this product).
It would be great to see a lot more focus on aspects outside of performance, which as you have eluded to is no longer a relevant issue in terms of tangible benefit for the majority of users.
cjcoats - Wednesday, September 29, 2010 - link
"Is the single port host card seen by the system's operating system or is entirely passive?-Passive, the card just pipes the PCIe lanes to the SATA controller."
Does that mean that Linux (and other non-MS) support is trivially
workable?
mroos - Friday, November 5, 2010 - link
So if the OS sees SiI3124 then there's actually no RAID inside at all - 3124 is simple 4-port SATA host controller and the RAID there is software RAID.It would be interesting to see what Linux sees about IBIS. My guess it it will see plain 3124 and 4 SSD-s behind that with no RAID at all, and you can use dmraid etc to RAID them together - so you are actually seeing what's happening and no magic left.
SiliconLunch - Thursday, December 2, 2010 - link
Correct. I just installed the 240GB IBIS into a Karmic machine and the kernel only sees 4 separate 55GB "drives" - effectively JBOD mode. The Silicon Image 240GB raid0 set (which is reported OK in BIOS during post) is not visible to Linux. I think a driver will need to be published for this. Will explore dmraid options next...SiliconLunch - Thursday, December 2, 2010 - link
OK, so official comment from OCZ is "The drive is not compatible with Linux". Seriously, OCZ must be kidding. And no one at Anandtech seemed to think this little tidbit was newsworthy in 8 pages of gushing praise? I would think a statement to the effect that non-Windows OS support is non-existent would be mandatory in a review.I went back and checked the OCZ datasheet, and sure enough, it only mentions Windows. So the blame rests with me for ASSUMING anyone introducing such a device would support contemporary OS's. But, I've just wasted $700, so I feel like a complete loser. And that'll CERTAINLY be the last product I ever buy from OCZ.
ypsylon - Wednesday, September 29, 2010 - link
While it may be of some use to speed freaks and number crunchers who running PCs only to get few more random numbers from benchmarks than others before them, I don't see the point in this. Yes of course bandwidth and stuff, all nice. But:- You can't RAID few of them into some proper RAID level (10,5,6,50,60) because every drive is already "RAID-ed" internally.
- You need a special add-on card which isn't anything standard - not to mention that offers nothing but ports to connect drives.
- There is high degree of probability that such drives won't run properly with standard RAID cards (Areca, Adaptec, LSI, Intel - take your pick)
Instead creating some new "standard" OCZ should focus on lowering costs of SSDs. 2GB/s+ in RAID0 is easily achievable right now. Need 16 SSDs (which is exactly like 4x4 IBIS), 16 port card (like new Arecas 1880) and off you go. Only advantage for OCZ IBIS here is less occupied space with 4 drives instead 16, but still 16x2.5" SSDs takes only 3x5.25" slots with 2x6 and 1x4 backlpanes.
And for heavy duty jobs there are always better solutions like GM-PowerDrive-LSI for example. Delivers 1500MB (R)/1400 MB (W) straight out of the box. Supports all RAID from 0 to 60, 512 MB of on board cache. Need no special new card. It just works.
Ao1 - Wednesday, September 29, 2010 - link
For some reason Anandtech seem to have lost objectivity when it comes to OCZ. This along with the Revodrive = epic fail. Consumers want products that are fit for market as opposed to underdeveloped and over priced products that are full of bugs. OCZ’s RMA policy is a substitute for quality control.sub.mesa - Wednesday, September 29, 2010 - link
The HSDL-part makes me immediately want to skip this SSD.But the mentioned Revodrive is quite interesting, as that setup should give you access to TRIM on FreeBSD operating system. Since silicon Image works as normal AHCI SATA controller under non-Windows OS, passing TRIM should also work.
I cannot confirm this, but in theory you should have TRIM when using the Revodrive under FreeBSD and likely also Linux (if you disable the pseudoraid).
A native PCI-express solution would still have to present itself that is affordable. It would be very cool if they made a LSI HBA connected to 4 or 8 Sandforce controllers and have a >1GB/s solution that also supports TRIM (not under Windows). That would be very sleek!
haplo602 - Wednesday, September 29, 2010 - link
ssds in traditional drive packaging have one big advantage. they can be used in large disk arrays. this new gadget is not usable in anythyng other than a single computer or worksation i.e. it's a DAS solution.the whole industry from mid level to high end is moving to SAN storage (be it fc, iscsi or infiniband). the IBIS has no future ...
Johnsy - Wednesday, September 29, 2010 - link
I would like to echo the comments made by disappointed1, particularly with regard to OCZ's attempt to introduce a proprietary standard when a cabling spec for PCIe already exists. It's all well and good having intimate relationships with representatives of companies whose products you review, but having read this (and a couple of other) articles, I do find myself wondering who the real beneficiary is...
63jax - Wednesday, September 29, 2010 - link
Although I am amazed by those numbers, you should put the ioDrive in there as a standard of comparison.
iwodo - Wednesday, September 29, 2010 - link
I recently posted on the Anandtech forums about SSDs - "When we hit the law of diminishing returns": http://forums.anandtech.com/showthread.php?t=21068...
Less than 10 days later, Anand seems to have answered every question we discussed in the thread, from connection port to software usage.
The review pretty much proves my point. After the current-gen SandForce SSDs, we are already hitting the law of diminishing returns. A SATA 6Gbps SSD, or even a quad-SandForce SSD like the IBIS, won't give us any perceptible speed improvement in 90% of our day-to-day usage.
Until software or the OS takes advantage of the massive I/O an SSD can deliver, a current SandForce SSD remains the best investment in terms of upgrades.
iwodo - Wednesday, September 29, 2010 - link
I forgot to mention: with next-gen SSDs that will be hitting 550MB/s and even slightly higher IOPS, there is absolutely NO NEED for HSDL in the consumer space. While SATA is only half duplex, benchmarks show no evidence that this limitation causes any latency problems.
davepermen - Thursday, September 30, 2010 - link
Indeed. The next-gen Intel SSD on SATA 3 will most likely deliver the same as this SSD, but without all the proprietary crap. Sure, the numbers will be lower, but the actual performance will most likely feel the same, much cheaper, and very flexible (just RAID them if you want, or JBOD them, or whatever). This stuff is bullshit for customers. It sounds like some geek created a funky setup to combine his SSDs for great performance, and that's it.
Oh, and other than that, I bet the latency will be higher on these OCZ drives just because of all the indirection. And latency is the number-one thing that makes you feel the difference between SSDs.
In short, this product is absolutely useless crap.
So far, I'm still happy with my Intel gen 1 and gen 2 drives. I'll wait a bit for a new device that gives me a real noticeable difference and does not take away any of the flexibility I have right now with my simple one-SATA-drive setups.
Anand and OCZ, always a strange combination :)
viewwin - Wednesday, September 29, 2010 - link
I wonder what Intel thinks about a new competing cable design?
davepermen - Thursday, September 30, 2010 - link
I bet they don't even know. Not that they care. Their SSDs will deliver much more for the customer: an easy, standards-based connection that exists in ANY current system, RAID-ability, TRIM, and most likely about the same performance experience as this device, but at a much, much lower cost.
tech6 - Wednesday, September 29, 2010 - link
Since this is really just a cable-attached SSD card, I don't see the need for yet another protocol/connection standard, and the concept of RAID upon RAID also seems somewhat redundant. I am also unclear as to what market this is aimed at. The price excludes the mass desktop market, and yet it also isn't aimed at the enterprise data center - that only leaves workstation power users, which are not a large market. Given the small target audience, motherboard makers will most likely not invest their resources in supporting HSDL on their boards.
Stuka87 - Wednesday, September 29, 2010 - link
It's a very interesting concept, and the performance is of course incredible. But like you mentioned, I just can't see the money being worth it at this point, even though it is simpler than building your own RAID - you just plug it in and are done with it. But if motherboard makers can get on board and the interface gains some traction, then I could certainly see it taking over from SAS/SATA as the interface of choice in the future. I think OCZ is smart to offer it as a free and open standard. Offering a new standard for free has worked very well for other companies in the past, especially small ones.
nirolf - Wednesday, September 29, 2010 - link
<<Note that peak low queue-depth write speed dropped from ~233MB/s down to 120MB/s. Now here’s performance after the drive has been left idle for half an hour:>>
Isn't this a problem in a server environment? Maybe some servers never get half an hour of idle time.
jwilliams4200 - Wednesday, September 29, 2010 - link
Anand: I suspect your resiliency test is flawed. Doesn't HD Tach essentially write a string of zeros to the drive? A SandForce drive would compress that and only write a tiny amount to flash memory. So it seems to me that you have only proved that the drives are resilient when presented with an unrealistic workload of highly compressible data.
I think you need to do two things to get a good idea of resiliency:
(1) Write a lot of random (incompressible) data to the drive to get it "dirty"
(2) Measure the write performance of random (incompressible) data while the SSD is "dirty"
It is also possible to combine (1) and (2) in a single test. Start with a "clean" SSD, then configure IOmeter to write incompressible data continuously over the entire SSD span, say 4KB random 100% writes. Measure the write speed once a minute and plot write speed vs. time to see how it degrades as the SSD gets dirty (a rough sketch of such a test follows at the end of this comment). This is a standard test done by Calypso Systems' industrial SSD testers. See, for example, the last graph here:
http://www.micronblogs.com/2010/08/setting-a-new-b...
Also, there is a strange problem with Sandforce-controlled "dirty" SSDs having degraded write speed which is not recovered after TRIM, but it only shows up with incompressible data. See, for example:
http://www.bit-tech.net/hardware/storage/2010/08/1...
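As promised above, here is a rough sketch of that kind of test: keep writing incompressible data and log the running average throughput once a minute. It is only an approximation of the IOmeter/Calypso procedure described above - it writes sequentially to a file on the target filesystem rather than randomly across the whole device span - and the target path, block size and duration are placeholders.

```python
#!/usr/bin/env python3
# Sketch of a "dirty drive" write-degradation test: continuously write
# incompressible (random) data and report average throughput once a minute.
# Simplification: writes go to a file, not randomly across the raw device,
# and os.urandom() per block adds CPU overhead; use IOmeter/fio for real runs.
import os
import time

TARGET = "testfile.bin"      # placeholder path on the SSD under test
BLOCK = 4 * 1024             # 4KB writes, matching the profile described above
INTERVAL = 60                # report once a minute
DURATION = 30 * 60           # total run time in seconds

start = time.time()
written = 0
with open(TARGET, "wb", buffering=0) as f:
    next_report = start + INTERVAL
    while time.time() - start < DURATION:
        f.write(os.urandom(BLOCK))      # random data is effectively incompressible
        written += BLOCK
        if time.time() >= next_report:
            os.fsync(f.fileno())        # flush the page cache so the average is honest
            elapsed = time.time() - start
            print(f"{int(elapsed):5d}s  avg {written / elapsed / 1e6:7.1f} MB/s")
            next_report += INTERVAL
os.remove(TARGET)
```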
Anand Lal Shimpi - Wednesday, September 29, 2010 - link
It boils down to write amplification. I'm working on an article now to quantify exactly how low SandForce's WA is in comparison to other controller makers, using methods similar to what you've suggested. In the case of the IBIS I'm simply trying to confirm whether or not the background garbage collection works. In this case I'm writing 100% random data sequentially across the entire drive using IOmeter, then peppering it with 100% random data randomly across the entire drive for 20 minutes. HDTach is simply used to measure write latency across all LBAs. I haven't seen any issues with SF drives not TRIMing properly when faced with random data. I will augment our HDTach TRIM test with another IOmeter pass of random data to see if I can duplicate the results.
Take care,
Anand
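For reference, write amplification is just the ratio of data physically written to flash versus data the host asked to write. A minimal sketch of that bookkeeping follows; the counter values are illustrative only, since real drives expose them through vendor-specific SMART attributes with differing names, IDs and units.

```python
# Minimal sketch of the write-amplification arithmetic mentioned above.
# Inputs are byte counts; real drives report equivalents via vendor-specific
# SMART attributes (lifetime host writes vs. lifetime NAND writes).

def write_amplification(host_bytes_written: float, nand_bytes_written: float) -> float:
    """WA = bytes physically written to flash / bytes the host requested."""
    if host_bytes_written <= 0:
        raise ValueError("host writes must be positive")
    return nand_bytes_written / host_bytes_written

# Illustrative numbers: a compressing controller can land below 1.0 on
# compressible workloads, while small random writes on a dirty drive with a
# non-compressing controller can push WA well above 1.0.
print(write_amplification(host_bytes_written=100e9, nand_bytes_written=60e9))   # 0.6
print(write_amplification(host_bytes_written=100e9, nand_bytes_written=250e9))  # 2.5
```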
jwilliams4200 - Wednesday, September 29, 2010 - link
What I would like to see is SSDs with a standard mini-SAS2 connector. That would give a bandwidth of 24 Gbps, and it could be connected to any SAS2 HBA or RAID card. Simple, standards-compliant, and fast. What more could you want?
Well, inexpensive would be nice. I guess putting a 4x SAS2 interface in an SSD might be expensive. But at high volume, I would guess the cost could be brought down eventually.
LancerVI - Wednesday, September 29, 2010 - link
I found the article interesting. OCZ introducing a new interconnect that is open to all is noteworthy - that's what I took from it. It's cool to see what these companies are trying to do to increase performance, create new products and possibly new markets.
I think most of you missed the point of the article.
davepermen - Thursday, September 30, 2010 - link
The problem is: why? There is NO use for this. There are enough interconnects already, and they are fast enough, too. So, again, why?
Oh, and "open and all" doesn't matter. There won't be any products besides some OCZ stuff.
jwilliams4200 - Wednesday, September 29, 2010 - link
Anand: After reading your response to my comment, I re-read the section of your article with the HD Tach results, and I am now more confused. There are 3 HD Tach screenshots that show average read and write speeds in the text at the bottom right of the screen. In order, the average read and write speeds for the 3 screenshots are:
Screenshot 1: read 201.4 MB/s, write 233.1 MB/s
Screenshot 2: read 125.0 MB/s, write 224.3 MB/s ("Note that peak low queue-depth write speed dropped from ~233MB/s down to 120MB/s")
Screenshot 3: read 203.9 MB/s, write 229.2 MB/s
I also included your comment from the article about the write speed dropping. But are the read and write rates from HD Tach mixed up?
Anand Lal Shimpi - Wednesday, September 29, 2010 - link
Ah, good catch, that's a typo. On most drives the HDTach pass shows an impact to write latency, but on SF drives the impact is actually on read speed (the writes appear to be mostly compressed/deduped), as there's much more data to track and recover since what's being read was originally stored in its entirety.
Take care,
Anand
jwilliams4200 - Wednesday, September 29, 2010 - link
My guess is that if you wrote incompressible data to a dirty SF drive, the write speed would be impacted similarly to the impact you see here on the read speed. In other words, the SF drives are not nearly as resilient as the HD Tach write scans show, since, as you say, the SF controller is just compressing/deduping the data that HD Tach is writing. And HD Tach's writes do not represent a realistic workload.
I suggest you do an article revisiting the resiliency of dirty SSDs, paying particular attention to writing incompressible data.
greggm2000 - Wednesday, September 29, 2010 - link
So how will Light Peak factor into this? Is OCZ working on a Light Peak implementation of this? One hopes that OCZ and Intel are communicating here...
jwilliams4200 - Wednesday, September 29, 2010 - link
The first Light Peak cables are only supposed to be 10 Gbps. A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps. Light Peak loses.
MRFS - Wednesday, September 29, 2010 - link
> A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps.
That was also my point (posted at another site today):
What happens when 6G SSDs emerge soon to comply with the current 6G standard? Isn't that what many of us have been waiting for? (I know I have!)
I should think that numerous HBA reviews will tell us which setups are best for which workloads, without needing to invest in a totally new cable protocol.
For example, the HighPoint RocketRAID 2720 is a modest SAS 6G HBA we are considering for our mostly sequential workload, i.e. updating a static database that we upload to our Internet website, archiving 10GB+ drive images, plus lots of low-volume updates.
http://www.newegg.com/Product/Product.aspx?Item=N8...
That RR2720 uses an x8 Gen2 edge connector, and it provides us with lots of flexibility concerning the size and number of SSDs we eventually attach, i.e. 2 x SFF-8087 connectors for a total of 8 SSDs and/or HDDs. If we want, we can switch to one or two SFF-8087 cables that "fan out" to discrete HDDs and/or discrete SSDs instead of the SFF-8087 cables that come with the 2720. $20-$30 USD, maybe?
Now, some of you will likely object that the RR2720 is not preferred for highly random, high-I/O environments; so, for a little more money there are lots of choices that will address those workloads nicely, e.g. Areca, LSI, etc. HighPoint even has an HBA with a full x16 Gen2 edge connector.
What am I missing here? Repeating, once again, this important observation already made above: "A mini-SAS2 cable has four lanes of 6 Gbps for a total of 24 Gbps."
If what I am reading today is exactly correct, then the one-port IBIS is limited to PCI-E 1.1: x4 @ 2.5 Gbps = 10 Gbps.
P.S. I would suggest, as I already have, that the SATA 6G protocol be modified to do away with the 8b/10b transmission overhead, and that the next SATA/SAS channel support 8 Gbps -- to sync it optimally with PCI-E Gen3's 128b/130b "jumbo frames". At most, this may require slightly better SATA and SAS cables, which is a very cheap marginal cost, imho.
MRFS
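For reference, the numbers being compared in this thread come from a simple calculation: lanes times line rate times physical-layer encoding efficiency. A quick sketch follows; it deliberately ignores SATA/SAS/PCIe protocol overhead above the encoding, so real-world throughput sits below these figures.

```python
# Back-of-the-envelope link bandwidth comparison for the figures in this thread.
# Only the physical-layer encoding (8b/10b or 128b/130b) is accounted for;
# higher-level protocol overhead is ignored, so real throughput is lower.

def usable_gbps(lanes: int, line_rate_gbps: float, enc_payload: int, enc_total: int) -> float:
    return lanes * line_rate_gbps * enc_payload / enc_total

links = {
    "SATA 6Gbps (8b/10b)":     usable_gbps(1, 6.0, 8, 10),     # ~4.8 Gbps, ~600 MB/s
    "mini-SAS2 x4 (8b/10b)":   usable_gbps(4, 6.0, 8, 10),     # ~19.2 Gbps usable of 24 raw
    "PCIe 1.1 x4 (8b/10b)":    usable_gbps(4, 2.5, 8, 10),     # ~8 Gbps usable of 10 raw
    "PCIe 2.0 x4 (8b/10b)":    usable_gbps(4, 5.0, 8, 10),     # ~16 Gbps
    "PCIe 3.0 x4 (128b/130b)": usable_gbps(4, 8.0, 128, 130),  # ~31.5 Gbps
}

for name, gbps in links.items():
    print(f"{name:26s} {gbps:5.1f} Gbps  (~{gbps / 8 * 1000:4.0f} MB/s)")
```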
blowfish - Wednesday, September 29, 2010 - link
I for one found this to be a great article, and I always enjoy Anand's prose - I think he surely has a book or two in him. You certainly can't fault him for politeness, even in the face of some fierce criticism. So come on, critics, get real - look at what kind of nonsense you have to put up with at Tom's et al!
sfsilicon - Wednesday, September 29, 2010 - link
Hi Anand, thank you for writing the article and shedding some light on this new product and standard from OCZ. Good to see you addressing the objectivity questions posted by some of the other posters. I enjoyed reading the additional information and analysis from some of the more informed posters. I do wonder whether the source of the bias in your article is your not being aware of some of the developments in the SSD space, or just being exposed to too much vendor Kool-Aid. Not sure which one it is, but hopefully my post will help generate a more balanced article.
---
I agree with several of the previous posters that both the IBIS drives and the HSDL interface are nothing new (no matter how hard OCZ marketing might want to make them look like they are). As one of the previous posters said, it is an SSD-specific implementation of a PCIe extender solution.
I'm not sure how HSDL started, but I see it as a bridging solution because OCZ does not have 6G technology today. Recently released 6G dual-port solutions will allow single SSD drives to transfer up to a theoretical 1200 MB/s per drive.
It does allow scaling beyond 1200 MB/s per drive through channel bundling, but the SAS standardization committee is already looking into that option in case 12Gbps SAS ends up becoming too difficult to do. Channel bundling is inherent to SAS and addresses the bandwidth threat posed by PCIe.
The PCIe channel bundling / IBIS drive solution from OCZ also looks a bit like an uncomfortable balancing act. Why do you need to extend the PCIe interface to the drive level? Is it just to maintain the more familiar "drive"-based usage model? Or is it really a way to package 2 or more 3Gbps drives to get higher performance? Why not stick with a pure PCIe solution?
And assuming you don't buy into the SAS channel bundling story, or you need a drive today that has more bandwidth: why another proprietary solution? The SSD industry is working on NVMHCI, which will address the concern about proprietary PCIe card solutions and will allow PCIe-based cards to be addressed as storage devices (Intel-backed and evolved from AHCI).
While OCZ's efforts are certainly to be applauded, especially given their aggressive roadmap plans, a more balanced article should include references to the above developments to put the OCZ solution into perspective. I'd love to see some follow-up articles on multi-port SAS and NVMHCI as a primer on how the SSD industry is addressing the technology limitations of today's SSDs. In addition, it might be interesting to talk about the recent SNIA performance spec (soon to include client-specific workloads) and JEDEC's endurance spec.
---
I continue to enjoy your in-depth reporting on the SSD side.
don_k - Wednesday, September 29, 2010 - link
"..or have a very fast, very unbootable RAID."Not quite true actually. PCIe drives of this type, meaning drives that are essentially multiple ssd drives attached to a raid controller on a single board, show up on *nix OS's as individual drives as well as the array device itself.
So under *nix you don't have to use the onboard RAID (which does not provide any performance benefit in this case; there is no battery-backed cache), and you can instead create a single RAID-0 across all the individual drives on all your PCIe cards, however many there are.
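A minimal sketch of that setup under Linux with md/mdadm follows. The device names are placeholders for whatever individual drives the card(s) expose, mdadm must be installed, it needs root, and creating the array destroys any data on those devices.

```python
#!/usr/bin/env python3
# Sketch of the software-RAID-0 setup described above: stripe all the
# individually exposed SSDs from one or more cards into a single md array.
# Device names are examples only; running this for real wipes those devices.
import subprocess

devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # e.g. one IBIS seen as 4 drives

cmd = [
    "mdadm", "--create", "/dev/md0",
    "--level=0",                       # RAID-0: striping only, no redundancy
    f"--raid-devices={len(devices)}",
    "--chunk=128",                     # 128KB stripe chunk; tune for the workload
] + devices

print("Would run:", " ".join(cmd))
# subprocess.run(cmd, check=True)      # uncomment to actually create the array
```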
eva2000 - Thursday, September 30, 2010 - link
Would love to see some more tests on a non-Windows platform, i.e. CentOS 5.5 64-bit with MySQL 5.1.50 database benchmarks - particularly on writes.
bradc - Thursday, September 30, 2010 - link
With the PCI-X converter this is really limited to 1066MB/s, and minus overhead it's probably in the 850-900MB/s range, which is what we see on one of the tests, just above 800MB/s. While this has the advantage of presenting a single SSD for cabling and so on, I really don't see the appeal. Why didn't they just use something like this: http://www.newegg.com/Product/Product.aspx?Item=N8...
I hope the link works; if it doesn't, I linked to an LSI 8204ELP for $210. It has a single 4x SAS connector good for 1200MB/s; OCZ could link that straight into a 3.5" bay device with 4 SandForce SSDs on it. This would be about the same price as or cheaper than this IBIS device, while giving 1200MB/s, which is going to be about 40% faster than the IBIS.
It would make MUCH MUCH more sense to me to simply have a 3.5" bay device with a 4x SAS port on it which can connect to just about any hard drive controller. The HSDL interface thing is completely unnecessary with the current PCI-X converter chip on the SSD.
sjprg2 - Friday, October 1, 2010 - link
Why are we concerning ourselves with memory controllers instead of just going directly to DMA? Just because these devices are static RAM doesn't change the fact that they are memory. Just treat them as such. It's called KISS.
XLV - Friday, October 1, 2010 - link
One clarification: these devices and controllers use multi-lane SAS internal ports and cables, but are electrically incompatible. What will happen if one mistakenly attaches an IBIS to a SAS controller, or a SAS device to the HSDL controller? Do the devices get fried, or is there at least some keying on the connectors so such a mistake can be avoided?
juhatus - Sunday, October 3, 2010 - link
"Even our upcoming server upgrade uses no less than fifty two SSDs across our entire network, and we’re small beans in the grand scheme of things."Why didn't you go for real SAN ? Something like EMC clariion?
You just like fiddling with disks, don't you? :)
randomloop - Tuesday, October 5, 2010 - link
In the beginning, we based our aerial video and image recording system on OCZ SSDs, based on their specs. They all failed after several months of operation.
Aerial systems endure quite a bit of jostling, hence the desire to use SSDs.
We had 5 out of 5 OCZ 128GB SSDs fail during our tests.
We now use other SSDs.
mroos - Friday, November 5, 2010 - link
What PCI device do these IBISes present? Is it something standard like AHCI that has drivers for every OS, or something proprietary that needs a new driver written and integrated into all the relevant OSes?
mroos - Friday, November 5, 2010 - link
OK, it looks like the interface to the host is a SiI 3124. This is a widely supported SATA HBA and has drivers for most operating systems. But the SiI 3124 is just a SATA host controller - no RAID. So the RAID must be done host-side, in other words software RAID. It also means Linux should see 4 distinct SSD devices.
paralou - Saturday, April 9, 2011 - link
Hello, I don't remember if I already posted my question, sorry!
But I installed one 160GB IBIS in the following computer configuration:
ASUS P6T WS Pro (latest BIOS & drivers)
Intel i7 Core 965 Extreme 3.2GHz
Kingston DDR3 1600MHz - 12GB
nVIDIA Quadro FX4800 graphics card
2 Seagate SAS 450GB
Microsoft Windows 7 Pro
After installing Win7 without problems, I installed the BitDefender antivirus and several apps (including the Adobe package and Microsoft Office Pro), and set Windows Updates to NOT AUTOMATIC!
When I shut down my computer, the system started downloading 92 updates (without my permission)?
When I restarted... crash, error 0x80070002.
Impossible to restore (I had made a system image, but from the day before!).
Reinstalled, and while I was typing the key codes for Microsoft Vision Pro...
Another crash! Same problem!
My opinion about the IBIS HSDL box: it has a very poor assembly design!
It's impossible to connect the power supply connector to it, and I had to dismantle the front plate to get access to it!
Now, I wonder if I have to follow OCZ's advice about the BIOS configuration.
They are saying:
" You must set you BIOS to use "S1 Sleep Mode" for proper operation.
Using S3 or AUTO may cause instability ".
And what about the internal HDD's ?
Is there any member who already installed such IBIS and use it regularely.
If the answer is Yes (?) can you please tell me how you configured your system ?
Regards,
Paralou
MySchizoBuddy - Wednesday, February 22, 2012 - link
OCZ doesn't have a PCIe x16 option like the Fusion-io ioDrive Octal, which takes reads up to 6GB/s.