Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for four minutes, with the first minute excluded from the statistics. The test is conducted with eight worker threads and total queue depths of 8, 64 and 512. This test is conducted immediately after the random write test, so the drives have been thoroughly preconditioned with random writes across the entire drive or array.
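The article doesn't name the tool used to generate these workloads, but the sweep is easy to picture in script form. The sketch below is a minimal, hypothetical reconstruction using fio (an assumption, as are the device path and job name), stepping the read percentage from 100 down to 0 in 10% increments, with the first minute of each four-minute mix treated as a ramp and excluded from the statistics:

```python
# Hypothetical sketch of the mixed random sweep, assuming fio as the
# workload generator (the article does not name the tool used).
import subprocess

DEVICE = "/dev/nvme0n1"      # assumed target; substitute the drive or array under test
THREADS = 8                  # eight worker threads, as described above
TOTAL_QDS = [8, 64, 512]     # total queue depth across all threads

for total_qd in TOTAL_QDS:
    iodepth = max(1, total_qd // THREADS)   # per-thread depth so the total matches
    for read_pct in range(100, -1, -10):    # pure reads -> pure writes, 10% steps
        subprocess.run([
            "fio", "--name=mixed-random", f"--filename={DEVICE}",
            "--ioengine=libaio", "--direct=1",
            "--rw=randrw", f"--rwmixread={read_pct}",
            "--bs=4k",
            f"--numjobs={THREADS}", f"--iodepth={iodepth}",
            "--time_based",
            "--ramp_time=60",   # first minute excluded from the statistics
            "--runtime=180",    # remaining three of the four minutes are measured
            "--group_reporting",
        ], check=True)
```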

[Graphs: mixed random performance at QD 8, QD 64, and QD 512]

At the relatively low queue depth of 8, the individual P4510 drives show fairly flat performance across the varied mixes of reads and writes. The RAID configurations help a little bit with the random read performance, but have a much bigger effect on write throughput.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations. The highest queue depth tested here is 256. The range of mixes tested is the same, and the timing of the sub-tests is also the same as above. This test was conducted immediately after the sequential write test, so the drives had been preconditioned with sequential writes over their entire span.
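Under the same fio assumption, only the access pattern, block size, and top queue depth change from the random sweep sketched earlier:

```python
# Variation on the earlier sketch for the mixed sequential test (still assuming
# fio; only the parameters that differ from the random sweep change).
import subprocess

DEVICE = "/dev/nvme0n1"      # assumed target
THREADS = 8
TOTAL_QDS = [8, 64, 256]     # highest queue depth tested here is 256

for total_qd in TOTAL_QDS:
    iodepth = max(1, total_qd // THREADS)
    for read_pct in range(100, -1, -10):
        subprocess.run([
            "fio", "--name=mixed-sequential", f"--filename={DEVICE}",
            "--ioengine=libaio", "--direct=1",
            "--rw=rw", f"--rwmixread={read_pct}",   # 'rw' = mixed sequential access
            "--bs=128k",                            # 128kB transfers instead of 4kB
            f"--numjobs={THREADS}", f"--iodepth={iodepth}",
            "--time_based", "--ramp_time=60", "--runtime=180",
            "--group_reporting",
        ], check=True)
```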

[Graphs: mixed sequential performance at QD 8, QD 64, and QD 256]

At QD8, the single 2TB P4510 again has fairly flat performance across the range of mixes, but the 8TB model picks up speed as the proportion of writes increases. The four-drive RAID-0 shows strong increases in performance as the mix becomes more write heavy, and the two-drive RAID-0 shows a similar but smaller effect over most of the test.

At QD64 and QD256, the huge difference in write performance between the four-drive RAID-0 and RAID-10 configurations is apparent. The configurations with a PCIe x8 bottleneck show entirely different behavior: they peak in the middle of the range, where they can take advantage of the full-duplex nature of PCIe, and are slowest at either end, where one-way traffic saturates the link. For even balances of reads and writes, the PCIe x8 bottleneck barely affects overall throughput.
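A rough back-of-the-envelope calculation shows why the x8 configurations peak in the middle: reads and writes travel in opposite directions over PCIe, so each direction is capped independently, and the aggregate ceiling is set by whichever direction is busier. The snippet below illustrates this with an approximate one-way figure for a PCIe 3.0 x8 link (the ~7.9 GB/s value is an assumption, before protocol overhead):

```python
# Illustration of the full-duplex ceiling on a PCIe 3.0 x8 link.
# ~7.9 GB/s one-way is approximate: 8 GT/s * 8 lanes with 128b/130b
# encoding, ignoring packet and protocol overhead.
ONE_WAY_GBPS = 7.9

for read_pct in range(0, 101, 10):
    read_share = read_pct / 100
    write_share = 1 - read_share
    # Aggregate throughput is limited by the busier direction of the link.
    busier = max(read_share, write_share)
    ceiling = ONE_WAY_GBPS / busier
    print(f"{read_pct:3d}% reads: aggregate link ceiling ≈ {ceiling:.1f} GB/s")
```

At a 50/50 mix the ceiling doubles relative to pure reads or pure writes, which matches the mid-range peak and end-of-range troughs in the graphs above.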

Comments

  • ABB2u - Thursday, February 15, 2018 - link

    Is Intel VROC really software RAID? No question, RAID is all about software. But since this is running underneath an OS at the chip level, why not call it hardware RAID, just like the RAID software running on an Avago RAID controller? In my experience, I have referred to software RAID as that implemented in the OS through LVM or Disk Management, in the filesystem like ZFS, or as erasure coding at a parallel block level. --- It is all about the difference in latency.
  • saratoga4 - Thursday, February 15, 2018 - link

    >Is Intel VROC really software RAID?

    Yes.

    > In my experience, I have referred to software RAID as that implemented in the OS

    That is what VROC is. Without the driver, you would just have independent disks.
  • Samus - Thursday, February 15, 2018 - link

    So this is basically just Storage Spaces?
  • tuxRoller - Friday, February 16, 2018 - link

    Storage Spaces is more similar to LVM & mdadm (pooling, placement & parity policies, hot spares, and a general storage management interface), while VROC lets the OS deal with NVMe device bring-up & then offers pooling + parity without an HBA.
  • HStewart - Thursday, February 15, 2018 - link

    I would think any RAID system has software to drive it - maybe running on, say, an ARM microcontroller - but it still has some kind of software to make it work.

    But I doubt you could take Intel's driver and make it work on another vendor's SSD. It probably has specific hardware enhancements to increase its performance.
  • Nime - Thursday, March 21, 2019 - link

    If a RAID controller uses the same CPU as the OS, it might be called soft RAID. If the controller has its own processor to handle the disk reads & writes, it's a hardware RAID system.
  • saratoga4 - Thursday, February 15, 2018 - link

    I would be interested to see the performance of normal software RAID vs. VROC, since for most applications I would prefer not to boot off of a high-performance disk array. What, if any, benefit does it offer over more conventional software RAID?
  • JamesAnthony - Thursday, February 15, 2018 - link

    I think the RAID 5 tests, when you are done with them, are going to be an important indication of what performance the platform is actually capable of.
  • boeush - Thursday, February 15, 2018 - link

    Maybe a stupid question, but - out of sheer curiosity - is there a limit, if any, on the number of VROC drives per array? For instance, could you use VROC to build a 10-drive RAID-5 array? (Is 4 drives the maximum - or if not, why wouldn't Intel supply more than 4 to you, for an ultimate showcase?)

    On a separate note - the notion of paying Intel extra $$$ just to enable functions you've already purchased (by virtue of them being embedded on the motherboard and the CPU) - I just can't get around it appearing as nothing but a giant ripoff. Doesn't seem like this would do much to build or maintain brand loyalty... And the notion of potentially paying less to enable VROC when restricted to Intel-only drives - reeks of exerting market dominance to suppress competition (i.e. sounds like an anti-trust lawsuit in the making...)
  • stanleyipkiss - Thursday, February 15, 2018 - link

    The maximum number of drives, as stated in the article, depends solely on the number of PCI-E lanes available. These being x4 NVME drives, the lanes dry up quickly.
