Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
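A 20% duty cycle works out to four times as much idle time as busy time per burst. The sketch below illustrates that relationship; the 2000 MB/s throughput figure is a hypothetical stand-in, not a measured result:

```python
def idle_time_for_duty_cycle(active_s: float, duty_cycle: float) -> float:
    """Idle time needed so that active / (active + idle) == duty_cycle."""
    return active_s * (1.0 / duty_cycle - 1.0)

# Hypothetical drive reading a 128 MB burst at 2000 MB/s:
burst_bytes = 128 * 1024**2
throughput = 2000 * 1000**2            # bytes per second (assumed figure)
active = burst_bytes / throughput      # time spent busy on one burst
idle = idle_time_for_duty_cycle(active, 0.20)   # 4x the busy time
```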

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Samsung PM981 doesn't quite set a new record, but it's pretty close to the top performer and very far ahead of any non-Samsung drive.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
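The scoring rule can be sketched as a plain average over the three lowest queue depths, and a small helper shows which per-QD cap (one minute or 32GB) binds at a given throughput. The throughput numbers plugged in below are illustrative, not measurements from the review:

```python
def qd_score(results_mbps: dict) -> float:
    """Average the QD1, QD2, and QD4 results, per the test's scoring rule."""
    return sum(results_mbps[qd] for qd in (1, 2, 4)) / 3

def qd_runtime_s(throughput_bps: float,
                 data_cap: int = 32 * 1000**3,
                 time_cap: float = 60.0) -> float:
    """Each queue depth runs until one minute elapses or 32 GB is moved."""
    return min(time_cap, data_cap / throughput_bps)

# Hypothetical per-queue-depth results in MB/s:
results = {1: 1800.0, 2: 2400.0, 4: 2600.0}
score = qd_score(results)   # simple mean of the three low-QD results
```

At 2 GB/s, for example, 32GB takes only 16 seconds, so the data cap ends the phase well before the one-minute limit.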

Sustained 128kB Sequential Read

On the longer test with higher queue depths, the best MLC-based drives pull ahead of the PM981 and even the 960 EVO has a slight advantage.

The 1TB PM981 starts out with almost the same performance as the 1TB 960 EVO, but the PM981's performance falls off a bit during the first half of the test while the 960 EVO remains steady. The 512GB PM981 doesn't experience any slowdown, but it is slower than the 1TB model throughout the test.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

Both PM981s deliver the same record-setting burst sequential write performance, a marked improvement over the best of Samsung's previous generation and far ahead of any competing flash-based SSD.


Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.
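The loop structure described above can be sketched as follows. Here `do_io` is a hypothetical callback standing in for a real I/O backend (it is not part of any actual benchmark harness); each queue-depth phase stops at whichever cap is hit first, then the drive idles before the next phase:

```python
import time

QUEUE_DEPTHS = (1, 2, 4, 8, 16, 32)
DATA_CAP = 32 * 1000**3   # stop a phase after 32 GB transferred...
TIME_CAP = 60.0           # ...or after one minute, whichever comes first

def run_phase(qd: int, do_io) -> int:
    """Issue 128kB writes at queue depth qd until a cap is reached.

    do_io(qd) is a placeholder that performs one batch of I/O and
    returns the number of bytes completed.
    """
    written = 0
    start = time.monotonic()
    while written < DATA_CAP and time.monotonic() - start < TIME_CAP:
        written += do_io(qd)
    return written

def run_test(do_io, idle=lambda: None) -> dict:
    """Run every queue depth in order, idling between phases."""
    results = {}
    for qd in QUEUE_DEPTHS:
        results[qd] = run_phase(qd, do_io)
        idle()   # up to one minute for cooldown and garbage collection
    return results
```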

Sustained 128kB Sequential Write

On the longer sequential write test, the 512GB PM981 falls behind most of the other Samsung drives, but the 1TB model remains on top, ahead of even the 960 PROs.

The 1TB PM981 hits full write speed at QD2 and stays there for the rest of the test, holding on to its lead over the 960 PRO. The 512GB PM981 runs out of SLC write cache early on and its performance bounces around with the garbage collection cycles.


53 Comments


  • skavi - Monday, December 4, 2017 - link

    Lol, tech isn't wine. If people aren't working to improve it, it won't get better.
  • WorldWithoutMadness - Friday, December 1, 2017 - link

That and the RAM oligopoly. Almost reminded me of Intel before Ryzen.
  • Drumsticks - Thursday, November 30, 2017 - link

    I doubt we'll see 1TB 3D XPoint in an m.2 form factor until at least the second generation of XPoint. Power consumption looks too high; you'd probably have to severely limit performance to get into m.2, or you'd need a massive unrealistic heatsink.
  • UltraWide - Thursday, November 30, 2017 - link

    Yes, the people want to see 900p destroy these benchmarks!! :)
  • romrunning - Thursday, November 30, 2017 - link

    I would love to see the Optane 900p results included as well.
  • peevee - Thursday, November 30, 2017 - link

    Me too.
  • mczak - Thursday, November 30, 2017 - link

    I miss the power draw numbers.
  • Drazick - Thursday, November 30, 2017 - link

I don't like those m.2 drives. Can't we have a high-bandwidth connection for 2.5-inch drives? It would have fewer thermal issues in desktop configurations.
  • DanNeely - Thursday, November 30, 2017 - link

That's the U.2 connection, which has barely gained any traction outside of the enterprise.
