21 Comments

  • ABB2u - Thursday, February 15, 2018 - link

    Is Intel VROC really software RAID? No question, RAID is all about software. But since this is running underneath an OS at the chip level, why not call it hardware RAID, just like the RAID software running on an Avago RAID controller? In my experience, I have referred to software RAID as that implemented in the OS through LVM or Disk Management, in the filesystem like ZFS, or as erasure coding at a parallel block level. It all comes down to the difference in latency.
  • saratoga4 - Thursday, February 15, 2018 - link

    >Is Intel VROC really software RAID?

    Yes.

    > In my experience, I have referred to software RAID as that implemented in the OS

    That is what VROC is. Without the driver, you would just have independent disks.
  • Samus - Thursday, February 15, 2018 - link

    So this is basically just Storage Spaces?
  • tuxRoller - Friday, February 16, 2018 - link

    Storage Spaces is more similar to LVM & mdadm (pooling, placement & parity policies, hot spares, and a general storage management interface), while VROC lets the OS handle NVMe device bring-up and then offers pooling + parity without an HBA.
  • HStewart - Thursday, February 15, 2018 - link

    I would think any RAID system has software driving it - it may be running on, say, an ARM microcontroller - but it still has some kind of software to make it work.

    But I doubt you could take Intel's driver and make it work with another vendor's SSD. It probably has specific hardware enhancements to increase its performance.
  • Nime - Thursday, March 21, 2019 - link

    If the RAID controller uses the same CPU as the OS, it might be called soft RAID. If the controller has its own processor to calculate the disk data to read & write, it's a hardware RAID system.
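
    For illustration, the "work" a soft-RAID layer does is essentially striping plus parity math on the host CPU. Below is a minimal Python sketch of RAID 5 style XOR parity (chunk sizes and values are made up for the example; this is not Intel's actual VROC code):

        # Parity math a software-RAID layer runs on the host CPU
        # (illustrative sketch only, not Intel's VROC implementation).
        def xor_parity(chunks):
            """XOR equal-sized chunks together to produce the parity chunk."""
            parity = bytearray(len(chunks[0]))
            for chunk in chunks:
                for i, b in enumerate(chunk):
                    parity[i] ^= b
            return bytes(parity)

        # Three data chunks striped across three drives; parity goes to a fourth.
        data = [bytes([d]) * 4096 for d in (0x11, 0x22, 0x33)]
        parity = xor_parity(data)

        # Any single lost chunk can be rebuilt by XOR-ing the survivors.
        assert xor_parity([data[1], data[2], parity]) == data[0]

    A dedicated RAID card runs this same math on its own controller; with VROC (or mdadm) it uses host CPU cycles instead.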
  • saratoga4 - Thursday, February 15, 2018 - link

    I would be interested to see the performance of normal software RAID vs. VROC, since for most applications I would prefer not to boot off of a high-performance disk array. What benefit, if any, does it offer over more conventional software RAID?
  • JamesAnthony - Thursday, February 15, 2018 - link

    I think the RAID 5 tests, when you are done with them, are going to be an important indicator of the actual performance the platform is capable of.
  • boeush - Thursday, February 15, 2018 - link

    Maybe a stupid question, but - out of sheer curiosity - is there a limit, if any, on the number of VROC drives per array? For instance, could you use VROC to build a 10-drive RAID-5 array? (Is 4 drives the maximum - or if not, why wouldn't Intel supply more than 4 to you, for an ultimate showcase?)

    On a separate note - the notion of paying Intel extra $$$ just to enable functions you've already purchased (by virtue of them being embedded on the motherboard and the CPU) - I just can't get around it; it appears to be nothing but a giant ripoff. Doesn't seem like this would do much to build or maintain brand loyalty... And the notion of potentially paying less to enable VROC when restricted to Intel-only drives reeks of exerting market dominance to suppress competition (i.e. sounds like an anti-trust lawsuit in the making...)
  • stanleyipkiss - Thursday, February 15, 2018 - link

    The maximum number of drives, as stated in the article, depends solely on the number of PCIe lanes available. These being x4 NVMe drives, the lanes dry up quickly.
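
    As a rough worked example of that lane budget (the lane counts here are assumptions; check your particular CPU and board):

        # Back-of-the-envelope PCIe lane budget (assumed numbers, illustration only).
        cpu_lanes = 48        # e.g. a Skylake-SP Xeon exposes 48 PCIe 3.0 lanes
        reserved = 16         # assume x16 is kept for a GPU or other add-in card
        lanes_per_drive = 4   # each of these NVMe SSDs is a x4 device

        max_drives = (cpu_lanes - reserved) // lanes_per_drive
        print(max_drives)     # -> 8 drives before the CPU lanes run out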
  • MrSpadge - Friday, February 16, 2018 - link

    > On a separate note - the notion of paying Intel extra $$$ just to enable functions you've already purchased (by virtue of them being embedded on the motherboard and the CPU) - I just can't get around it appearing as nothing but a giant ripoff.

    We take it for granted that any hardware features are exposed to us via free software. However, by that argument one wouldn't need to pay for any software, as the hardware to enable it (i.e. an x86 CPU) is already there and purchased (albeit probably from a different vendor).

    And on the other hand: it's apparently OK for Intel and the others to sell the same piece of silicon at different speed grades and configurations for different prices. Here you could also argue that "the hardware is already there" (assuming no defects, as is often the case).

    I agree on the antitrust issue of cheaper prices for Intel-only drives.
  • boeush - Friday, February 16, 2018 - link

    My point is that when you buy these CPUs and motherboards, you automatically pay for the sunk R&D and production costs of VROC integration - it's included in the price of the hardware. It has to be - if VROC is a dud and nobody actually opts for it, Intel has to be sure to recoup its costs regardless.

    That means you've already paid for VROC once - but now you have to pay twice to actually use it!

    Moreover, the extra complexity involved with this hardware key-based scheme implies that the feature is necessarily more costly (in terms of sunk R&D as well as BOM) than it could have been otherwise. It's like Intel deliberately set out to gouge its customers from the early concept stage onward. Very bad optics...
  • nivedita - Monday, February 19, 2018 - link

    Why would you be happier if they actually took the trouble to remove the silicon from your CPU?
  • levizx - Friday, February 16, 2018 - link

    > However, by that argument one wouldn't need to pay for any software, as the hardware to enable it

    That's a ridiculous comparison - the same vendor (the SoC vendor, Intel in this case) does NOT produce "any software" (that's MSFT etc.). VROC technology is ALREADY embedded in the hardware/firmware.
  • BenJeremy - Friday, February 16, 2018 - link

    Unless things have changed in the last 3 months, VROC is all but useless unless you stick with Intel-branded storage options. My BIL bought a fancy new Gigabyte Aorus Gaming 7 X299 motherboard when they came out, then waited months to finally get a VROC key. It still didn't allow him to make a bootable RAID-0 array from the 3 Samsung NVMe sticks. We do know that, in theory, the key is not needed to make such a setup work, as a leaked version of Intel's RST allowed a bootable RAID-0 array in "30-day trial mode".

    We need to stop falling for Intel's nonsense. AMD's Threadripper is turning in better numbers in RAID-0 configurations, without all the nonsense of plugging in a hardware DRM dongle.
  • HStewart - Friday, February 16, 2018 - link

    "We need to stop falling for Intel's nonsense. AMD's Threadripper is turning in better numbers in RAID-0 configurations, without all the nonsense of plugging in a hardware DRM dongle."

    Please stop the nonsense of baseless claims about AMD and provide actual proof of those performance numbers. Keep in mind this SSD is an enterprise product designed for CPUs like Xeon, not gaming machines.
  • peevee - Friday, February 16, 2018 - link

    Like it.
    But idle power of 5W is kind of insane, isn't it?
  • Billy Tallis - Friday, February 16, 2018 - link

    Enterprise drives don't try for low idle power because they don't want the huge wake-up latencies to demolish their QoS ratings.
  • peevee - Friday, February 16, 2018 - link

    4-drive RAID0 only pulls ahead of 2-drive RAID0 at QD 512. What kind of server can run 512 threads at the same time? And what kind of server would you need for a full 32-ruler 1U backend (which would require something like 4096 threads to take advantage of all that power)?
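
    The queue-depth arithmetic behind that, as a quick sketch (the crossover point and drive counts come from the numbers above; the rest is an assumption for illustration):

        # Rough queue-depth math for the crossover point (illustration only).
        crossover_qd = 512                      # QD where 4-drive RAID 0 pulls ahead
        drives = 4
        qd_per_drive = crossover_qd // drives   # 128 outstanding I/Os per drive

        rulers_per_1u = 32                      # a fully populated 1U ruler backend
        print(qd_per_drive * rulers_per_1u)     # -> 4096 outstanding I/Os to keep it busy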
  • kingpotnoodle - Sunday, February 18, 2018 - link

    One use could be shared storage for I/O intensive virtual environments, attached to multiple hypervisor nodes, each with multiple 40Gb+ NICs for the storage network.
  • ckrt - Tuesday, February 20, 2018 - link

    That, and the other way around... virtualization for aggregation... With those 32 rulers adding up to a PETABYTE of storage and some neat high-performance computing nodes using submerged liquid cooling, you can have the equivalent of a full small or medium business datacenter in just one 42U rack... man!... the possibilities!
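
    For scale, the capacity math behind that petabyte figure (the per-ruler capacity is an assumption based on Intel's roughly-a-petabyte-per-1U pitch for the form factor):

        # Capacity sketch: 32 rulers per 1U at an assumed ~32 TB each.
        rulers_per_1u = 32
        tb_per_ruler = 32
        print(rulers_per_1u * tb_per_ruler)   # -> 1024 TB, roughly 1 PB in one 1U shelf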
