Sequential Read Performance

Our first test of sequential read performance uses short bursts of 128MB, issued as 128kB operations with no queuing. The test averages performance across eight bursts for a total of 1GB of data transferred from a drive containing 16GB of data. Between each burst the drive is given enough idle time to keep the overall duty cycle at 20%.
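
To make the structure concrete, here is a minimal Python sketch of that burst pattern. It is an illustration only, not the actual test harness: the device path is a placeholder, and a real run needs root privileges and Linux's O_DIRECT support.

```python
import mmap
import os
import time

BURST_BYTES = 128 * 1024 * 1024   # 128MB per burst
OP_BYTES    = 128 * 1024          # 128kB per operation, queue depth 1
BURSTS      = 8                   # 8 bursts = 1GB transferred in total
DUTY_CYCLE  = 0.20                # drive is busy 20% of the time overall

def burst_read_test(path="/dev/nvme0n1"):   # placeholder device path
    # O_DIRECT bypasses the page cache so we measure the drive, not RAM.
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    # An anonymous mmap gives a page-aligned buffer, which O_DIRECT requires.
    buf = mmap.mmap(-1, OP_BYTES)
    speeds = []
    for _ in range(BURSTS):
        start = time.perf_counter()
        done = 0
        while done < BURST_BYTES:
            done += os.readv(fd, [buf])   # one 128kB read at a time (QD1)
        busy = time.perf_counter() - start
        speeds.append(BURST_BYTES / busy / 1e6)   # MB/s for this burst
        # Idle long enough that busy time is 20% of each burst period,
        # i.e. idle for four times as long as the burst took.
        time.sleep(busy * (1 - DUTY_CYCLE) / DUTY_CYCLE)
    os.close(fd)
    return sum(speeds) / len(speeds)   # average across the eight bursts
```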

Burst 128kB Sequential Read (Queue Depth 1)

The burst sequential read performance of the Samsung PM981 doesn't quite set a new record, but it's pretty close to the top performer and very far ahead of any non-Samsung drive.

Our test of sustained sequential reads uses queue depths from 1 to 32, with the performance and power scores computed as the average of QD1, QD2 and QD4. Each queue depth is tested for up to one minute or 32GB transferred, from a drive containing 64GB of data.
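
The headline score therefore weights the low queue depths that dominate client workloads. Below is a sketch of the sweep and scoring logic, with the measurement routine left as a parameter since a real harness would issue reads asynchronously (e.g. via libaio or io_uring) to reach the higher queue depths:

```python
def sustained_read_score(measure):
    # 'measure' is any callable(queue_depth, max_seconds, max_bytes) -> MB/s.
    # Each queue depth runs until one minute has elapsed or 32GB has been
    # transferred, whichever comes first.
    results = {qd: measure(qd, max_seconds=60, max_bytes=32 * 1024**3)
               for qd in (1, 2, 4, 8, 16, 32)}
    # Only the low queue depths count toward the reported score, since
    # client workloads rarely keep many sequential commands in flight.
    return (results[1] + results[2] + results[4]) / 3
```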

Sustained 128kB Sequential Read

On the longer test with higher queue depths, the best MLC-based drives pull ahead of the PM981 and even the 960 EVO has a slight advantage.

The 1TB PM981 starts out with almost the same performance as the 1TB 960 EVO, but the PM981's performance falls off a bit during the first half of the test while the 960 EVO remains steady. The 512GB PM981 doesn't experience any slowdown, but it is slower than the 1TB model throughout the test.

Sequential Write Performance

Our test of sequential write burst performance is structured identically to the sequential read burst performance test save for the direction of the data transfer. Each burst writes 128MB as 128kB operations issued at QD1, for a total of 1GB of data written to a drive containing 16GB of data.

Burst 128kB Sequential Write (Queue Depth 1)

Both PM981s deliver the same record-setting burst sequential write performance, a marked improvement over the best of Samsung's previous generation and far ahead of any competing flash-based SSD.

Our test of sustained sequential writes is structured identically to our sustained sequential read test, save for the direction of the data transfers. Queue depths range from 1 to 32 and each queue depth is tested for up to one minute or 32GB, followed by up to one minute of idle time for the drive to cool off and perform garbage collection. The test is confined to a 64GB span of the drive.

Sustained 128kB Sequential Write

On the longer sequential write test, the 512GB PM981 falls behind most of the other Samsung drives, but the 1TB model remains on top, ahead of even the 960 PROs.

The 1TB PM981 hits full write speed at QD2 and stays there for the rest of the test, holding on to its lead over the 960 PRO. The 512GB PM981 runs out of SLC write cache early on and its performance bounces around with the garbage collection cycles.

Comments

  • mapesdhs - Thursday, November 30, 2017 - link

    And Drazick, what do you mean by 2.5" drives? If you're referring to SATA, well then no, it's already at its limit of 550MB/sec, and producing something akin to SATA4 would be pointless when it's also hobbled by the old AHCI protocol.

    Also, "don't like" is an emotional response; what's your evidence and argument that they're a bad product somehow? Have you used them?
  • WithoutWeakness - Thursday, November 30, 2017 - link

    By 2.5" drives I'm sure he means the same form factor as standard 2.5" SATA SSDs, except using a newer, faster connection just like the U.2 connectors that Dan mentioned. We've definitely hit the limit of what SATA 3 can deliver, and it would be nice to have a new standard that can leverage PCIe NVMe SSDs in a form factor that lets us use cables to put drives elsewhere in a case for better layouts and airflow. U.2 was supposed to be that connector, but there are basically no drives that support the standard and very few boards with more than one U.2 port. There are a few adapters on the market that allow you to install an M.2 drive into a 2.5" enclosure with a U.2 connector on it, but until motherboards have more than one U.2 port, it won't be a real replacement for the ubiquity of SATA.
  • msabercr - Friday, December 1, 2017 - link

    Actually, there are M.2-to-U.2 adapters readily available from most motherboard vendors, and 7mm U.2 datacenter drives are starting to become a thing. See the Intel SSD DC P4501. I wouldn't be surprised if AIC disappears before too long. Limiting the power draw would be the major hurdle in creating such drives, but it's not impossible. EDSFF is going to pave the way for many high-density compact form factors for NVMe moving forward.
  • sleeplessclassics - Thursday, November 30, 2017 - link

    One more thing I think will be different when these drives launch as retail devices is the driver support for the Phoenix controller. While it is always difficult to pinpoint the exact bottlenecks in such bleeding-edge technology, I think a driver that is better optimized for the Phoenix controller will definitely produce better results (ceteris paribus).

    Also, there have been rumors of QLC NAND. If those are true, it could be the differentiator between the EVO and PRO series.
  • romrunning - Thursday, November 30, 2017 - link

    Yes - QLC... more latency, lower endurance, slower writes - what's not to like? :-S
  • Spunjji - Thursday, November 30, 2017 - link

    Lower price..? Higher densities and increased production? That's what it's all about.

    If 3D QLC performs like 2D TLC then it'll do just fine for mass storage.
  • mapesdhs - Thursday, November 30, 2017 - link

    Good point, given that most products seem to be able to tolerate far more writes than they're officially rated for; in that case it's likely most users will want something newer long before a QLC product's endurance has been exhausted. And if one is doing something that will drain the endurance a lot faster, one should be using something more suitable anyway.
  • romrunning - Thursday, November 30, 2017 - link

    Sure, but QLC is just like TLC: once you force it on enough people and declare it "good enough", the higher-performing but costlier flash (like SLC/MLC) is slowly removed from the product portfolio. I'm not in favor of these race-to-the-bottom "advances", which reduce the price a bit for the consumer but more for the manufacturer. You may get a slight bump in capacity, but for me, the performance/endurance trade-off isn't worth a slight reduction in price.

    Now, I suppose it doesn't matter to me anymore, since I'll keep buying the 960 Pro until the Optane 900p reaches better pricing. But the slippery slope is that new product "advances" are usually judged against the "current" state of tech. If the current standard is QLC, then the next "improvement" might only raise performance back to the levels SLC/MLC reached previously, so it may not be much of an improvement at all.
  • bcronce - Thursday, November 30, 2017 - link

    For read-heavy mass storage drives, slower writes are fine. SSDs are getting fast enough that the I/O stack or the CPU is the bottleneck. Higher read latency at low queue depths will hurt performance, but not by a whole lot.

    The endurance is only an issue if you re-write your data a lot, like a paging file or a game drive that sees a lot of updates. A relatively static mass-media drive will probably be just fine.
  • sleeplessclassics - Thursday, November 30, 2017 - link

    Latency can (to some extent) be handled with a bigger DRAM buffer. Also, controllers are the key here, not the NAND type. Today, even TLC can outperform the MLC/SLC of just 2-3 generations ago thanks to better controllers.

    A couple of years ago, and even last year, a 500GB SSD was around $80. If prices were sane, 64-layer 3D TLC would be below $50 for sure.
    And 96-layer QLC could give real competition to HDDs.

    As for lower endurance, that can be handled by slightly higher over-provisioning and slower writes; that would be okay for 95% of mainstream users.
    Enthusiasts have Optane and Z-NAND.
