Random Read Performance

To properly stress a four-drive NVMe RAID array, this test covers queue depths much higher than our client SSD tests: from 1 to 512, using one to eight worker threads each with queue depths up to 64. Each queue depth is tested for four minutes, and the performance average excludes the first minute. The queue depths are tested back to back with no idle time. The individual read operations are 4kB and span the entire drive or array. Prior to running this test, the drives were preconditioned with two full drive writes of random writes.
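For readers who want to approximate the sweep, the sketch below drives fio's libaio engine through the same queue depth scaling; the device node and exact sweep points are illustrative assumptions, not our actual test harness.

    import subprocess

    TARGET = "/dev/md0"  # hypothetical device node for the array under test

    # (numjobs, iodepth) pairs; effective queue depth = numjobs * iodepth,
    # covering QD1 through QD512 with at most eight workers at QD64 each
    SWEEP = [(1, 1), (1, 2), (1, 4), (1, 8), (1, 16), (1, 32), (1, 64),
             (2, 64), (4, 64), (8, 64)]

    for numjobs, iodepth in SWEEP:
        subprocess.run([
            "fio", "--name=randread", "--filename=" + TARGET,
            "--ioengine=libaio", "--direct=1",
            "--rw=randread", "--bs=4k",
            f"--numjobs={numjobs}", f"--iodepth={iodepth}",
            "--time_based", "--runtime=180",  # three measured minutes...
            "--ramp_time=60",                 # ...after a one-minute warm-up
            "--group_reporting",
        ], check=True)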

Sustained 4kB Random Read

The 2TB and 8TB P4510s have the same peak random read performance. The four-drive RAID-0 and RAID-10 configurations both provide about three times the performance of a single drive; because mirrored pairs can serve reads from either copy, RAID-10 stripes random reads across all four drives just as RAID-0 does. The two-drive RAID-0 provides just under twice the performance of a single drive.

The individual P4510 drives don't saturate until at least QD128. The RAID configurations with a bottleneck from the PCIe x8 switch uplink have hit that limit by the end of the test, but the four-drive configurations without that bottleneck could clearly deliver even higher throughput with more worker threads.
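A rough back-of-the-envelope calculation shows where that uplink ceiling sits. The figures below assume PCIe 3.0 signaling and ignore packet-level protocol overhead, so deliverable throughput lands somewhat lower:

    # Theoretical ceiling for 4kB random reads through a PCIe 3.0 x8 uplink
    lanes = 8
    transfer_rate = 8e9     # PCIe 3.0: 8 GT/s per lane
    encoding = 128 / 130    # 128b/130b line coding
    bytes_per_sec = lanes * transfer_rate * encoding / 8
    iops_ceiling = bytes_per_sec / 4096
    print(f"{bytes_per_sec / 1e9:.2f} GB/s, {iops_ceiling / 1e6:.2f}M 4kB IOPS")
    # ~7.88 GB/s, or roughly 1.9M 4kB IOPS before protocol overhead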

Random Write Performance

As with the random read test, this test covers queue depths from 1 to 512, using one to eight worker threads each with queue depths up to 64. Each queue depth is tested for four minutes, and the performance average excludes the first minute. The queue depths are tested back to back with no idle time. The individual write operations are 4kB and span the entire drive or array. This test was run immediately after the random read test, so the drives had already been preconditioned with two full drive writes of random writes.
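The preconditioning itself can be approximated with a short fio job, again with a hypothetical device node. fio's default random map touches each block exactly once per pass, so two loops amount to two full drive writes:

    import subprocess

    TARGET = "/dev/md0"  # hypothetical device node for the array under test

    subprocess.run([
        "fio", "--name=precondition", "--filename=" + TARGET,
        "--ioengine=libaio", "--direct=1",
        "--rw=randwrite", "--bs=4k", "--iodepth=32",
        "--size=100%",  # address the full span of the device
        "--loops=2",    # two complete passes of random writes
    ], check=True)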

Sustained 4kB Random Write

The four-drive RAID-0 configuration manages to provide five times the random write throughput of a single 2TB drive, and even the configuration with a PCIe x8 bottleneck is over four times faster than a single drive. The RAID-10 configurations and the two-drive RAID-0 are only slightly faster than the single 8TB drive, which has more than twice the random write throughput of the 2TB model. Since RAID-10 mirrors every write, the four-drive RAID-10 arrays have only two drives' worth of aggregate write throughput, which is why they land so close to the two-drive RAID-0.

Several of the test runs show performance drops in the second half that we did not have time to debug, but the general pattern seems to be that random write performance saturates at relatively low queue depths, around QD16 or QD32.

Comments

  • ABB2u - Thursday, February 15, 2018 - link

    Is Intel VROC really software RAID? No question RAID is all about software. But since this is running underneath an OS at the chip level, why not call it hardware RAID, just like the RAID software running on an Avago RAID controller? In my experience, I have referred to software RAID as that implemented in the OS through LVM or Disk Management, in the filesystem like ZFS, or as erasure coding at a parallel block level. It is all about the difference in latency.
  • saratoga4 - Thursday, February 15, 2018 - link

    >Is Intel VROC really software RAID?

    Yes.

    > In my experience, I have referred to software RAID as that implemented in the OS

    That is what VROC is. Without the driver, you would just have independent disks.
  • Samus - Thursday, February 15, 2018 - link

    So this is basically just Storage Spaces?
  • tuxRoller - Friday, February 16, 2018 - link

    Storage Spaces is more similar to lvm & mdadm (pooling, placement & parity policies, hot spares, and a general storage management interface), while VROC lets the OS deal with NVMe device bring-up & then offers pooling + parity without an HBA.
  • HStewart - Thursday, February 15, 2018 - link

    I would think any RAID system has software driving it - it may be on, say, an ARM microcontroller - but it still has some kind of software to make it work.

    But I would doubt you can take Intel's driver and make it work on another SSD. It probably has specific hardware enhancements to increase its performance.
  • Nime - Thursday, March 21, 2019 - link

    If the RAID controller uses the same CPU as the OS, it might be called soft. If the controller has its own processor to handle disk reads & writes, it's a hardware RAID system.
  • saratoga4 - Thursday, February 15, 2018 - link

    I would be interested to see the performance of normal software RAID vs. VROC, since for most applications I would prefer not to boot off of a high-performance disk array. What, if any, benefit does it offer over more conventional software RAID?
  • JamesAnthony - Thursday, February 15, 2018 - link

    I think the RAID 5 tests, when you are done with them, are going to be an important indicator of what actual performance the platform is capable of.
  • boeush - Thursday, February 15, 2018 - link

    Maybe a stupid question, but - out of sheer curiosity - is there a limit, if any, on the number of VROC drives per array? For instance, could you use VROC to build a 10-drive RAID-5 array? (Is 4 drives the maximum - or if not, why wouldn't Intel supply more than 4 to you, for an ultimate showcase?)

    On a separate note - the notion of paying Intel extra $$$ just to enable functions you've already purchased (by virtue of them being embedded on the motherboard and the CPU) - I just can't get around it appearing as nothing but a giant ripoff. Doesn't seem like this would do much to build or maintain brand loyalty... And the notion of potentially paying less to enable VROC when restricted to Intel-only drives - reeks of exerting market dominance to suppress competition (i.e. sounds like an anti-trust lawsuit in the making...)
  • stanleyipkiss - Thursday, February 15, 2018 - link

    The maximum number of drives, as stated in the article, depends solely on the number of PCI-E lanes available. These being x4 NVMe drives, the lanes dry up quickly.
