22 Comments

  • davegraham - Thursday, June 2, 2011 - link

    http://fusionio.com/products/iodriveoctal/ <-- the Fusion-io ioDrive Octal handily beats the specs posted by Micron. I believe even the recently announced TMS PCIe solution has its nose up in the rarefied air of enterprise-class PCIe solutions. ;)

    kudos to Micron for starting the journey...they've got a LONG road ahead to prove themselves.

    cheers,

    Dave
  • GullLars - Thursday, June 2, 2011 - link

    This is what I also thought of when I read "Sequential read/write performance is up to 3GB/s and 2GB/s respectively. Random 4KB read performance is up at a staggering 750,000 IOPS, while random write speed peaks at 341,000 IOPS. The former is unmatched by anything I've seen on a single card, while the latter is a number that OCZ's recently announced Z-Drive R4 88 is promising as well. Note that these aren't steady state numbers nor are the details of the testing methodology known so believe accordingly."

    ioDrives have been the benchmark to beat, and it doesn't seem like the P320h can beat them. ioDrives also scale from 1 to 8 controllers on the same card in a cluster-type setup, and come with support for InfiniBand for direct linking outside the host system.

    TMS has both PCIe flash and RAM solutions that can beat this.

    The question is really: will this be able to compete in its price range on a TCO vs. QoS basis? $5600 is more in the ioDrive Duo range.
  • engineer7 - Saturday, October 29, 2011 - link

    I think you mean "kudos to Micron, they've blown up the competition". At least for the range this card is marketed at. If you read the specs a little more closely you will see that the Fusion-io drive is a 150-watt, double-width, full-length card. The P320h is 25W max per this review. That's a huge difference.

    The Octal is about $5000 more according to a quick Google search. So this is really an apples-to-oranges comparison.

    Watt for watt, the P320h wins hands down. It is also a single-width card. I suspect this will be very desirable for the enterprise/server market. I also saw on The Register that they have an HHHL (half-height, half-length) card too...

    FTW
  • blanarahul - Wednesday, December 21, 2011 - link

    Mind you, the Octal is eight, yes, EIGHT ioDrives in RAID 0.
  • vol7ron - Thursday, June 2, 2011 - link

    I like the idea of this (and the size), but of course the pricing is ridiculous.

    Assuming pricing will come down, I'm a bit skeptical about putting my long-term storage near, or right next to, my dedicated GPU. I'm concerned about the heat and its long-term effect on the drive's longevity.

    For gaming systems, GPUs are getting hotter and hotter, and I'm not sure passive cooling will suffice for these PCIe SSDs.
  • jcollett - Thursday, June 2, 2011 - link

    Why on earth would you think this was for your "gaming" system? Loaded to the hilt with SLC NAND, this is for heavily accessed databases in an enterprise environment. When your computers are there to make you money, the cost doesn't look so bad at all.
  • mckirkus - Thursday, June 2, 2011 - link

    Agreed; getting a $300 Vertex 3 SSD would probably give you almost indistinguishable results (level load times) for $5k less. Also, SSDs deal with heat much better than spinning disks do.

    Regarding gaming: if games someday used storage for more than just an installation location, this could prove interesting, but PC games aren't even using 4GB of RAM yet.
  • vol7ron - Thursday, June 2, 2011 - link

    You guys are looking at today and not the big-picture, down-the-road impact. Don't be so short-sighted and nitpick my word choice - it's a comment, not an article ;)

    Sure it's SLC today, but it'll be MLC tomorrow - perhaps with 3D-engineered hardware. Down the road they might not use passive cooling, but the PCIe slots are still near where a lot of the heat is generated, so even without a dedicated card I'm curious about the long-term effects.

    Back to your response: gaming systems tend to be used for more than just gaming, at least by the majority of gamers - their machines are high-end all-purpose systems. Sure, dedicated pro gamers have specific setups and boxes used solely for gaming, but I was generalizing. Teens and amateur gamers are not going to spend the $$ on multiple computers, one to crunch numbers and one to play online. That being said, 4GB is not enough on a 64-bit system with background antivirus/anti-cheat running while recording demos. And a less serious gamer will have desktop widgets and web browsers open, possibly even streaming TV if they have a decent system.

    To counter your point: for those enterprise systems, it'd be more cost-effective to use the non-PCIe alternatives. And if it is an enterprise system, I question whether enough PCIe slots are available to allow RAIDing, or even whether that's possible with this setup, as it might be software-driven.

    That being said, this alternative is great for one thing not mentioned above. Companies are often locked into contracts to buy specific parts from certain vendors, and those contracts let the vendors bump prices astronomically. Having this alternative provides a loophole in such contracts: even though it may be $5.6k for 350GB, that beats the $1k-$2.5k per 100GB SCSI contracts floating around.
  • Zap - Thursday, June 2, 2011 - link

    Wow, this makes the OCZ drives look quite affordable.

    I wonder if anyone won the $200-million Powerball lottery last night? If I won that...
  • peternelson - Thursday, June 2, 2011 - link

    Anand, you're likely aware of cooperation among many leading companies to develop improved standards for this type of product (rather than the onboard SATA-RAID approach).

    I'm thinking of:

    http://www.intel.com/standards/nvmhci/

    When you mention, preview, or review such products, please establish and tell us whether the hardware conforms to the new NVMHCI standards, which should, among other things, improve driver support.

    Thanks!
  • davegraham - Thursday, June 2, 2011 - link

    Actually, SNIA is doing the majority of the standards definition here. Intel != standards body.
  • peternelson - Thursday, June 2, 2011 - link

    Well, certainly SNIA is doing important work, e.g. SNIA CTP 1.4.

    I don't really mean Intel standards per se; I agree Intel is not a standards body, but some of what it is involved in does become de facto or finalised standards over time - e.g. USB did, and Thunderbolt likely will. Some standardisation tracks take a long time - witness Wi-Fi router manufacturers shipping "draft N" for years. I mean things that have (or will have) broad agreement and adoption, whether or not they've been fully "standardised" through some external ratification process yet, or are still emerging in draft form.

    Certainly vendors seem to be cooperating in the field of SSD storage, and where 70+ leading players are talking to each other - including names like IBM, Dell, EMC, Fujitsu, Amphenol, Emulex, Fusion-io, IDT, Marvell, Micron, Molex, PLX, QLogic, SandForce, etc. - we can benefit from their deliberations regardless of who initiated or ultimately "owns" the effort. This seems to be a "coalition of the winning" rather than something necessarily led by Intel. Intel already produces SSD products, but it was also involved in developing PCI Express and the wider PC platform's direction, roadmaps, and technologies, so it has significant influence.

    The latest NVMHCI seems to offer a direct PCIe connection, bypassing the SATA interface bottleneck, which can deliver higher bandwidth and lower latency. I believe that if vendors agree to do this in some common way, it is an improvement for customers who buy these products: it will likely be easier to swap or mix and match between such products, and it results in drivers that are better tested, with lower development cost (which could translate into more competitive pricing).

    If there exists such a specification (let's not use the word "standard" prematurely) that achieves good industry adoption, then it's worth knowing about when discussing related products, and worth noting if some manufacturer decides instead to pursue its own proprietary way of doing things. If they do, they should spell out the benefits of their approach.
  • tygrus - Thursday, June 2, 2011 - link

    The P320h may require a queue depth of >64 for max read IO speed.
    This has 32 channels of SLC, which beats 64 channels of MLC (RAID 0 of 8 controllers with 8 channels each, e.g. Z-Drive R4 m88). Or does it really matter how many dies there are in total?
    The 700GB could be closer to $9000 if the main controller is not duplicated, i.e. $1700 base plus $7.5/GB x 512GB = $5540 (350GB card); $1900 plus $7.5 x 1024 = $9580 (700GB card).
    That's 4-6x the cost of other MLC-based cards.
    At that rate, SAS datasets could be read/written faster than the CPU can process them. Take that, you DATA steps, PROC SORT and PROC FREQ.
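
    As a sketch, that guessed price model is just a linear fit; here it is in Python (the base costs, the $7.5/GB rate, and the 512GB/1024GB raw-NAND figures are my assumptions, not Micron's numbers):

        def estimated_price(raw_gb, base_usd, usd_per_gb=7.5):
            # Guessed model: a fixed controller/board cost plus a
            # per-gigabyte cost for the raw SLC NAND.
            return base_usd + usd_per_gb * raw_gb

        # Assuming 512GB/1024GB of raw NAND behind the 350GB/700GB cards
        # (the remainder presumably goes to overprovisioning and RAIN).
        print(estimated_price(512, 1700))    # 5540.0, near the quoted $5600
        print(estimated_price(1024, 1900))   # 9580.0, the 700GB estimate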
  • Kevin G - Thursday, June 2, 2011 - link

    For enterprise-class SLC-based flash, the pricing isn't bad. I do think Micron should add a low-profile ~350 GB version to their lineup and they'd be set (not all servers accept full-height cards).

    Just need a consumer-centric version that'll be bootable and have a ton of MLC flash on it.
  • Shadowmaster625 - Thursday, June 2, 2011 - link

    The controller chip and its associated NRE (non-recurring engineering) costs are too high for the consumer market.
  • murray13 - Thursday, June 2, 2011 - link

    Yes, the development costs ARE high, but if you can recoup them in the enterprise sector, you're free to move to the consumer market for not much more than manufacturing cost. At that point it depends more on the manufacturing costs than the development costs, which is where enterprise pays, a LOT.
  • Demon-Xanth - Thursday, June 2, 2011 - link

    ...but I want one.

    *posts a picture of this card up next to the Countach from his high school days*
  • Olternaut - Thursday, June 2, 2011 - link

    I skipped right to the part about the read/write speeds. I was like "3GB/s ?????". Then I saw the pricing. :(

    If it wasn't for the pricing, I was definitely going to get this instead of the OCZ RevoDrive 3 for my new build.
    But obviously, the pricing is meant for servers in some company's IT department.

    Oh well, OCZ RevoDrive 3 it is then. 1.5GB/s is nothing to sneeze at.
  • TEAMSWITCHER - Friday, June 3, 2011 - link

    The biggest problem with the PC world is that awesome new technology like this can take forever to reach the mainstream. Why isn't every SSD just a PCI Express card? There is no reason to ship it in a case that makes it look like a hard disk drive. I love the speed of SSDs, but the implementation feels so legacy. I hope technology like this gets adopted fast.
  • FREEPAT75014 - Saturday, June 4, 2011 - link

    Please clarify for all such SSD assemblies whether they are BOOTABLE or not. I'm after an SSD first and foremost to accelerate Windows 7, so a non-bootable device would be useless for me.
  • blanarahul - Wednesday, December 21, 2011 - link

    Oh shit! Run away! It's RAINing!!!!!!!!!!!!!!!!
  • blanarahul - Wednesday, December 21, 2011 - link

    I really wish they made a consumer-level SSD with a "custom SSD controller that interfaces directly to PCIe". We would love that! But isn't this the same thing as ioMemory?
    I wish SandForce would make a controller of this type too.
