Mixed Read/Write Performance

Workloads consisting of a mix of reads and writes can be particularly challenging for flash-based SSDs. When a write operation interrupts a string of reads, it blocks access to at least one flash chip for a period that is substantially longer than a read operation takes. This hurts the latency of any read operations that were waiting on that chip, and with enough writes in flight, throughput can be severely impacted. If a write command triggers an erase operation on one or more flash chips, the traffic jam gets many times worse.

The occasional read interrupting a string of write commands doesn't necessarily cause much of a backlog, because writes are usually buffered by the controller anyway. But depending on how much unwritten data the controller is willing to buffer, and for how long, a burst of reads could force the drive to begin flushing outstanding writes before they've all been coalesced into optimally sized chunks.
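As a toy illustration of that buffering behavior, the sketch below models a controller that coalesces 4kB host writes into larger program units, with a read burst forcing an early, undersized flush. The class, the 64kB flush size, and all numbers are illustrative assumptions, not details of any real controller.

```python
# Toy model of controller write buffering: small host writes are coalesced
# into larger flash-program units, but a burst of reads can force an early
# flush of a partially filled buffer. All names and sizes are illustrative.

FLUSH_UNIT = 16 * 4096  # hypothetical optimal program size: sixteen 4kB writes

class WriteBuffer:
    def __init__(self):
        self.pending = 0      # bytes buffered but not yet programmed
        self.flushes = []     # sizes of each program operation issued

    def write(self, nbytes):
        self.pending += nbytes
        # Flush only when a full, optimally sized program unit has accumulated.
        while self.pending >= FLUSH_UNIT:
            self.flushes.append(FLUSH_UNIT)
            self.pending -= FLUSH_UNIT

    def read_burst(self):
        # Incoming reads force outstanding data to flash so the chips are
        # free to serve them; the flush may be sub-optimally sized.
        if self.pending:
            self.flushes.append(self.pending)
            self.pending = 0

buf = WriteBuffer()
for _ in range(20):          # twenty 4kB random writes
    buf.write(4096)
buf.read_burst()             # a read burst arrives mid-stream
print(buf.flushes)           # [65536, 16384]: one full flush, one forced partial
```

The forced second flush is a quarter of the optimal size, which is exactly the inefficiency the paragraph above describes.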

The effect of mixing reads and writes still applies to the Optane SSD's 3D XPoint memory, but with greatly reduced severity. Whether an incoming block of reads has much of an effect depends on how the Optane SSD's controller manages the 3D XPoint memory.

Queue Depth 4

Our first mixed workload test is an extension of what Intel describes in their specifications for throughput of mixed workloads. A total queue depth of 16 is achieved using four worker threads, each performing a mix of random reads and random writes. Instead of just testing a 70% read mixture, the full range from pure reads to pure writes is tested at 10% increments.
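A sweep like this is the kind of thing fio expresses directly. The snippet below generates hypothetical fio command lines approximating the described setup (4kB random I/O, four jobs at QD4 each, read percentage stepped from 100% down to 0% in 10% increments); it is a sketch of the methodology, not the review's actual test scripts, and the device path is a placeholder.

```python
# Generate fio invocations approximating the mixed-workload sweep described
# above. The flags are standard fio options; the target device and runtime
# are illustrative assumptions.

def fio_command(read_pct):
    return (
        "fio --name=mixed --rw=randrw "
        f"--rwmixread={read_pct} "
        "--bs=4k --numjobs=4 --iodepth=4 "
        "--direct=1 --ioengine=libaio --runtime=60 --time_based "
        "--filename=/dev/nvme0n1"
    )

# Eleven data points, from pure reads (100) to pure writes (0).
commands = [fio_command(pct) for pct in range(100, -1, -10)]
print(len(commands))   # 11
print(commands[0])     # the all-reads run
```

Four jobs at iodepth=4 gives the total queue depth of 16 used in the test.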

[Chart: Mixed Random Read/Write Throughput (IOPS / MB/s)]

The Optane SSD's throughput does indeed show the bathtub curve shape that is common for this sort of mixed workload test, but the sides are quite shallow and the minimum (at 40% reads/60% writes) is still 83% of the peak throughput (which occurs with the all-reads workload). While the Optane SSD operates near 2GB/s, the flash SSDs spend most of the test only slightly above 500MB/s. When the portion of writes increases to 70%, the two flash SSDs diverge: the Intel P3700 loses almost half its throughput and recovers only a little of it during the remainder of the test, while the Micron 9100 begins to accelerate and comes much closer to the Optane SSD's level of performance.

[Charts: Random Read Latency (Mean, Median, 99th Percentile, 99.999th Percentile)]

The median latency curves for the two flash SSDs show a substantial drop when the median operation switches from a read to a cacheable write. The P3700's median latency even briefly drops below that of the Optane SSD, but at that point the Optane SSD is handling several times the throughput. The 99th and 99.999th percentile latencies for the Optane SSD are relatively flat after jumping a bit when writes are first introduced to the mix. The flash SSDs have far higher 99th and 99.999th percentile latencies through the middle of the test, but far fewer outliers during the pure read and pure write phases.
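For reference, the percentile figures in these charts are simple order statistics over the per-I/O latency samples. A minimal sketch using the nearest-rank definition (an assumption; benchmarking tools may interpolate differently), with synthetic latencies:

```python
# Percentile latency as an order statistic: sort the per-I/O latencies and
# index into the sorted list. A 99.999th percentile needs on the order of
# 100,000+ samples to be meaningful; the values below are synthetic.

def percentile(samples, pct):
    ordered = sorted(samples)
    # Nearest-rank: the smallest value with pct% of samples at or below it.
    rank = max(0, int(len(ordered) * pct / 100.0 + 0.5) - 1)
    return ordered[rank]

latencies_us = [10] * 98 + [50, 500]   # 100 synthetic latency samples, in us
print(percentile(latencies_us, 50))    # 10 (median)
print(percentile(latencies_us, 99))    # 50 (99th percentile)
```

This is why the tail percentiles diverge so sharply from the median: a handful of slow I/Os barely move the median but dominate the 99th-and-above figures.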

Adding Writes to a Drive that is Reading

The next mixed workload test takes a different approach and is loosely based on the Aerospike Certification Tool. The read workload is constant throughout the test: a single thread performing 4kB random reads at QD1. Threads performing 4kB random writes at QD1 and throttled to 100MB/s are added to the mix until the drive's throughput is saturated. As the write workload gets heavier, the random read throughput will drop and the read latency will increase.
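The staged structure of the test can be sketched as follows. The generator below mirrors the description (one constant QD1 read thread, write threads added one at a time, each throttled to 100MB/s), but the helper names and the cap on write threads are illustrative, not the actual harness.

```python
# Sketch of the staged mixed workload: a fixed QD1 4kB random-read thread,
# plus an increasing number of 4kB random-write threads, each throttled to
# 100 MB/s. Names and the stage count are illustrative assumptions.

WRITE_THROTTLE_MBPS = 100

def stages(max_write_threads):
    for n in range(max_write_threads + 1):
        yield {
            "read_threads": 1,                        # constant 4kB randread, QD1
            "write_threads": n,                       # 4kB randwrite, QD1 each
            "target_write_mbps": n * WRITE_THROTTLE_MBPS,
        }

for s in stages(4):
    print(s)
```

Because each write thread is capped, the x-axis of the resulting charts is effectively the aggregate write rate, making read latency directly comparable across drives at a given write load.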

The three SSDs have very different capacities for random write throughput: the Intel P3700 tops out around 400MB/s, the Micron 9100 can sustain 1GB/s, and the Intel Optane SSD DC P4800X can sustain almost 2GB/s. The Optane SSD's average read latency increases by a factor of five, but that is still low enough to deliver about 25k read IOPS. The flash SSDs both see read latency grow by an order of magnitude as write throughput approaches saturation. Even though the Intel P3700 has a much lower capacity for random writes, it provides slightly lower random read latency at its saturation point than the Micron 9100. Under the same write load, the Micron 9100 provides far more random read throughput than the P3700.
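Those latency and IOPS figures are linked by Little's law: for a single worker at QD1, throughput is simply the reciprocal of mean latency. A quick sanity check of the ~25k IOPS figure (the helper names are ours):

```python
# Little's law for a QD1 reader: IOPS = queue_depth / mean_latency.
# With queue depth 1, ~25k read IOPS implies a mean latency of ~40 us.

def qd1_iops(mean_latency_us):
    return 1_000_000 / mean_latency_us

def qd1_latency_us(iops):
    return 1_000_000 / iops

print(round(qd1_latency_us(25_000)))  # 40 us mean latency at 25k IOPS
print(round(qd1_iops(40)))            # 25000 IOPS at 40 us mean latency
```

The same relation explains the flash SSDs' collapse: an order-of-magnitude latency increase at QD1 means roughly an order of magnitude less read throughput.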

Comments

  • ddriver - Friday, April 21, 2017 - link

    So if you populate the dimm slots with hypetane, where does the dram go?
  • kfishy - Friday, April 21, 2017 - link

    You can have a hybrid memory subsystem, the current topology of CACHE-DRAM-SSD/HDD is not the only way to go.
  • tuxRoller - Friday, April 21, 2017 - link

    Why are you mentioning dimms?
    Are you just posting random responses?
    Neither of your posts in this thread actually addressed anything that the posters were discussing.
  • Kakti - Saturday, April 22, 2017 - link

    Have you been living in a cave the past five years? SATA 3.0 has been the limiting factor for SSDs for a while now - all max out around 450MB/sec.

    Now there are plenty of SSDs that connect via PCIe instead of SATA and are able to pull several gigabytes/sec. Examples include the Samsung 960 Pro/Evo, 950 Pro, OCZ RD400, etc. SATA has been the bottleneck for a while, and now that we have NVMe, we're seeing what NAND can really do over M.2 or PCIe connections
  • cfenton - Thursday, April 27, 2017 - link

    That speed is only for high queue depth workloads. Even the 960 Pro only does about 137MB/s average in random reads over QD1, QD2, and QD4. The QD1 numbers are something like 34MB/s. Those numbers are far below the SATA spec. Almost all consumer tasks are low queue depth.

    With this drive, you get about 400MB/s even at QD1, and something like 1.3GB/s at QD4.
  • CajunArson - Thursday, April 20, 2017 - link

    A very very sweet piece of technology assuming you have the right workloads to take advantage of what it can offer. Obviously it's not going to do much for a consumer grade desktop, at least not in this form factor & price.

    It's pretty clear that in at least some of those tests the PCIe interface is doing some bottlenecking too. It will be interesting to see Optane integrated into memory DIMMs where that is no longer an issue.
  • tarqsharq - Thursday, April 20, 2017 - link

    I can imagine this on a heavily trafficked database server would be insanely effective.
  • ddriver - Friday, April 21, 2017 - link

    Not anywhere nearly as fast as an in-memory database.
  • Chaitanya - Thursday, April 20, 2017 - link

    Like most recent Intel products: overpriced, and overhyped.
  • vortmax2 - Thursday, April 20, 2017 - link

    I don't agree. For Gen1, I'd say it's priced about right. It seems that consumer storage advancements are accelerating (SSDs, NAND, and now this, all inside a decade). I for one am happy to see a part of Intel (albeit a joint partnership) pressing ahead and releasing revolutionary tech - soon to be enjoyed by consumers.
