Sequential Read Performance

The structure of this test is the same as the random read test, except that the reads are 128kB and arranged sequentially. The test covers queue depths from 1 to 512, using one to eight worker threads, each with a queue depth of up to 64. Each worker thread reads from a different section of the drive. Each queue depth is tested for four minutes, and the performance average excludes the first minute; the queue depths are tested back to back with no idle time. The individual 128kB read operations collectively cover the span of the drive or array. Prior to running this test, the drives were preconditioned by writing the entire drive sequentially, twice over.
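The article does not name the benchmarking tool, but a test like this could be approximated with a fio job file along these lines (the tool choice, device path, and exact slicing are assumptions, not the article's actual configuration):

```ini
; Hypothetical fio job approximating the described sequential read test:
; eight workers issuing 128kB sequential reads, each worker confined to
; its own slice of the drive, four-minute runs with the first minute
; excluded from the reported average.
[global]
ioengine=libaio
direct=1
rw=read
bs=128k
time_based
runtime=240            ; four minutes per queue depth
ramp_time=60           ; exclude the first minute from the average
numjobs=8
size=12%               ; roughly 1/8th of the drive per worker
offset_increment=12%   ; start each worker at its own offset
filename=/dev/nvme0n1  ; assumed device path

[seq-read]
iodepth=2              ; per-thread depth; aggregate QD = numjobs * iodepth
```

Sweeping the aggregate queue depth from 1 to 512 would mean re-running this job with different `numjobs`/`iodepth` combinations.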

Sustained 128kB Sequential Read

The sequential read performance results are largely as expected. The 2TB and 8TB drives have the same peak throughput. The two-drive RAID-0 is almost as fast as the four-drive arrays were when constrained by a PCIe x8 bottleneck; with that bottleneck removed, performance of the four-drive RAID-0 and RAID-10 increases by 80%.

All but the RAID-5 configuration show a substantial drop in throughput from QD1 to QD2 as competition between threads is introduced, but performance quickly recovers. The individual drives reach full speed at QD16 (eight threads, each at QD2). Unsurprisingly, the two-drive configuration saturates at QD32 and the four-drive arrays saturate at QD64.
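One plausible reconstruction of how the aggregate queue depths map onto worker threads, consistent with the data point that QD16 corresponds to eight threads at QD2 (the exact ladder is an assumption; the article only states the endpoints):

```python
# Hypothetical reconstruction of the queue-depth ladder: the aggregate
# queue depth doubles from 1 to 512; the thread count grows to eight
# first, then the per-thread depth grows up to 64 (8 x 64 = QD512).
def qd_ladder(max_threads=8, max_depth=64):
    steps = []
    total = 1
    while total <= max_threads * max_depth:
        threads = min(total, max_threads)
        depth = total // threads
        steps.append((total, threads, depth))
        total *= 2
    return steps

for total, threads, depth in qd_ladder():
    print(f"QD{total}: {threads} threads x QD{depth}")
```

Under this scheme the two-drive array's QD32 saturation point corresponds to eight threads at QD4, and the four-drive arrays' QD64 point to eight threads at QD8.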

Sequential Write Performance

The structure of this test is the same as the sequential read test: queue depths from 1 to 512, using one to eight worker threads, each with a queue depth of up to 64. Each worker thread writes to a different section of the drive. Each queue depth is tested for four minutes, and the performance average excludes the first minute; the queue depths are tested back to back with no idle time. The individual 128kB write operations collectively cover the span of the drive or array. This test was run immediately after the sequential read test, so the drives had already been preconditioned with sequential writes.
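If the test were scripted with fio (an assumption; the article does not name the tool), the write pass would mirror the read job with only the I/O direction changed:

```ini
; Hypothetical fio job for the sequential write pass: same structure as
; the read test, with 128kB sequential writes instead of reads.
[global]
ioengine=libaio
direct=1
rw=write
bs=128k
time_based
runtime=240            ; four minutes per queue depth
ramp_time=60           ; first minute excluded from the average
numjobs=8
size=12%               ; each worker writes its own slice of the drive
offset_increment=12%
filename=/dev/nvme0n1  ; assumed device path

[seq-write]
iodepth=2              ; per-thread depth, scaled up to 64 across runs
```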

Sustained 128kB Sequential Write

The 8TB P4510 delivers far higher sequential write throughput than the 2TB model. The four-drive RAID-10 configuration requires more than PCIe x8 to beat the 8TB drive. The four-drive RAID-0 is about 3.6 times faster than a single 2TB drive, but only 2.4 times faster than the equivalent capacity 8TB drive.

The sequential write throughput of most configurations saturates at queue depths of just 2-4. The 8TB drive takes a bit longer, reaching full speed around QD8. When subject to a PCIe bottleneck, a four-drive array's performance scales up more slowly even before it reaches that upper limit.

Comments

  • MrSpadge - Friday, February 16, 2018 - link

    > On a separate note - the notion of paying Intel extra $$$ just to enable functions you've already purchased (by virtue of them being embedded on the motherboard and the CPU) - I just can't get around it appearing as nothing but a giant ripoff.

    We take it for granted that any hardware features are exposed to us via free software. However, by that argument one wouldn't need to pay for any software, as the hardware to enable it (i.e. an x86 CPU) is already there and purchased (albeit probably from a different vendor).

    And on the other hand: it's apparently OK for Intel and the others to sell the same piece of silicon at different speed grades and configurations for different prices. Here you could also argue that "the hardware is already there" (assuming no defects, as is often the case).

    I agree on the antitrust issue of cheaper prices for Intel drives.
  • boeush - Friday, February 16, 2018 - link

    My point is that when you buy these CPUs and motherboards, you automatically pay for the sunk R&D and production costs of VROC integration - it's included in the price of the hardware. It has to be - if VROC is a dud and nobody actually opts for it, Intel has to be sure to recoup its costs regardless.

    That means you've already paid for VROC once - but you now have to pay twice to actually use it!

    Moreover, the extra complexity involved with this hardware key-based scheme implies that the feature is necessarily more costly (in terms of sunk R&D as well as BOM) than it could have been otherwise. It's like Intel deliberately and intentionally set out to gouge its customers from the early concept stage onward. Very bad optics...
  • nivedita - Monday, February 19, 2018 - link

    Why would you be happier if they actually took the trouble to remove the silicon from your cpu?
  • levizx - Friday, February 16, 2018 - link

    > However, by that argument one wouldn't need to pay for any software, as the hardware to enable it

    That's a ridiculous claim; the same vendor (the SoC vendor, Intel in this case) does NOT produce "any software" (MSFT etc). VROC technology is ALREADY embedded in the hardware/firmware.
  • BenJeremy - Friday, February 16, 2018 - link

    Unless things have changed in the last 3 months, VROC is all but useless unless you stick with Intel-branded storage options. My BIL bought a fancy new Gigabyte Aorus Gaming 7 X299 motherboard when they came out, then waited months to finally get a VROC key. It still didn't allow him to make a bootable RAID-0 array with the 3 Samsung NVMe sticks. We do know that, in theory, the key is not needed to make such a setup work, as a leaked version of Intel's RST allowed a bootable RAID-0 array in "30-day trial mode".

    We need to stop falling for Intel's nonsense. AMD's Threadripper is turning in better numbers in RAID-0 configurations, without all the nonsense of plugging in a hardware DRM dongle.
  • HStewart - Friday, February 16, 2018 - link

    "We need to stop falling for Intel's nonsense. AMD's Threadripper is turning in better numbers in RAID-0 configurations, without all the nonsense of plugging in a hardware DRM dongle."

    Please stop the nonsense of fact-less claims about AMD and provide actual proof of performance numbers. Keep in mind this SSD is an enterprise product designed for CPUs like Xeon, not game machines.
  • peevee - Friday, February 16, 2018 - link

    Like it.
    But idle power of 5W is kind of insane, isn't it?
  • Billy Tallis - Friday, February 16, 2018 - link

    Enterprise drives don't try for low idle power because they don't want the huge wake-up latencies to demolish their QoS ratings.
  • peevee - Friday, February 16, 2018 - link

    4-drive RAID0 only overtakes 2-drive RAID0 by QD 512. What kind of a server can run 512 threads at the same time? And what kind of server will you need for a full 32-Ruler 1U backend (which would require 4192 threads to take advantage of all that power)?
  • kingpotnoodle - Sunday, February 18, 2018 - link

    One use could be shared storage for I/O intensive virtual environments, attached to multiple hypervisor nodes, each with multiple 40Gb+ NICs for the storage network.
