Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. SSDs do not deliver consistent IO latency because every controller inevitably has to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, since inconsistent performance shows up as application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour and we record instantaneous IOPS every second.
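The exact scripts behind this test aren't published, but as a rough approximation, the sketch below drives fio (a common IO load generator, assumed to be installed) with the parameters described above. The device path is a placeholder and the run destroys all data on the target drive.

```python
# Approximation of the consistency workload using fio (assumed installed).
# WARNING: this destroys all data on DEVICE. The path is a placeholder.
import subprocess

DEVICE = "/dev/sdX"  # placeholder: the SSD under test

# Step 1: sequential 128KB fill so every user-accessible LBA holds data.
subprocess.run([
    "fio", "--name=fill", f"--filename={DEVICE}",
    "--rw=write", "--bs=128k",
    "--direct=1", "--ioengine=libaio",
], check=True)

# Step 2: 4KB random writes at QD32 across all LBAs with incompressible
# data, logging one-second average IOPS for roughly half an hour.
subprocess.run([
    "fio", "--name=consistency", f"--filename={DEVICE}",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--direct=1", "--ioengine=libaio",
    "--norandommap", "--randrepeat=0",
    "--refill_buffers", "--buffer_compress_percentage=0",
    "--time_based", "--runtime=2000",
    "--write_iops_log=consistency", "--log_avg_msec=1000",
], check=True)
```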

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
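In practice, adding over-provisioning this way simply means never writing to the top portion of the LBA range. As a minimal illustration (the sector count below is the standard IDEMA figure for a 256GB-class drive, not a measured M600 value), the highest LBA to exercise can be computed like this:

```python
def lba_limit_for_op(total_lbas: int, extra_op: float) -> int:
    """Highest LBA to exercise so that `extra_op` of the user
    capacity is never written and acts as extra spare area."""
    return int(total_lbas * (1.0 - extra_op))

# Example: 500,118,192 512-byte sectors (standard for a 256GB drive).
# Limiting the workload to 75% of the range leaves 25% as spare area.
print(lba_limit_for_op(500_118_192, 0.25))  # -> 375088644
```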

Each of the three graphs has its own purpose. The first covers the entire duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: full test duration, log scale — Micron M600 256GB; selectable: Default / 25% Over-Provisioning]

The 1TB M600 actually performs significantly worse than the 256GB model, most likely due to the tracking overhead of the higher capacity (more pages to track). Overall IO consistency has not really changed from the MX100, as Dynamic Write Acceleration only affects burst performance. I suspect the firmware architectures for sustained performance are similar between the MX100 and M600, although with added over-provisioning the M600 is a bit more consistent.

[Graph: steady-state zoom from t=1400s, log scale — Micron M600 256GB; selectable: Default / 25% Over-Provisioning]

[Graph: steady-state zoom from t=1400s, linear scale — Micron M600 256GB; selectable: Default / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I filled the 128GB M600 with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.
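For those testing under Linux instead of Windows, a rough analogue of the quick-format step is blkdiscard from util-linux, which issues a TRIM/discard across the entire device. A minimal sketch (destructive; the device path is a placeholder):

```python
# Issue a whole-device TRIM, similar to what a Windows quick format does.
# WARNING: this discards ALL data on DEVICE. The path is a placeholder.
import subprocess

DEVICE = "/dev/sdX"  # placeholder: the SSD under test

subprocess.run(["blkdiscard", DEVICE], check=True)
```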

It appears that TRIM does not fully recover the SLC cache, as the acceleration capacity seems to be only ~7GB. I suspect that giving the drive some idle time would do the trick, because it can take a couple of minutes (or more) for internal garbage collection to finish after a TRIM command is issued.
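For readers who want to estimate the acceleration capacity themselves, one approach is to time fixed-size sequential chunks and watch for the throughput step-down. The sketch below is a rough Linux-only illustration (not our actual methodology): it writes with O_DIRECT to a raw device, so it is destructive and the device path is a placeholder.

```python
# Estimate the write-acceleration capacity by timing 1GiB sequential
# chunks; a sharp throughput drop marks the end of the SLC cache.
# WARNING: destroys data on DEVICE. Linux-only (O_DIRECT); path is a placeholder.
import mmap
import os
import time

DEVICE = "/dev/sdX"          # placeholder: the SSD under test
CHUNK = 1 << 30              # time each 1 GiB of writes
BLOCK = 1 << 20              # 1 MiB per write; page-aligned for O_DIRECT

buf = mmap.mmap(-1, BLOCK)   # anonymous mmap gives a page-aligned buffer
buf.write(os.urandom(BLOCK)) # incompressible payload

fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)
try:
    for gib in range(16):    # probe the first 16 GiB
        start = time.monotonic()
        for _ in range(CHUNK // BLOCK):
            os.write(fd, buf)
        mbps = CHUNK / (time.monotonic() - start) / 1e6
        print(f"GiB {gib:2d}: {mbps:5.0f} MB/s")
finally:
    os.close(fd)
```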

Comments

  • Kristian Vättö - Wednesday, October 1, 2014 - link

    Oh, that one. It's from the M600's reviewer's guide and the numbers are based on Micron's own research.
  • maofthnun - Wednesday, October 1, 2014 - link

    Thanks for the clarification on the power-loss protection feature. I am very disappointed by how it actually works, because that was a major deciding factor in my purchase of the MX100. At the time, the choice was between the MX100 and the Seagate 600 Pro, which was $30 more and also offers power-loss protection. I would have gladly paid the extra $30 had I known how the MX100 actually works.

    Since we're on the topic, I wonder if other relatively recent SSDs within the consumer budget that offer power-loss protection (e.g. Intel 730, Seagate 600 Pro) work the way everyone assumes (flush volatile data)? Would love to hear your comment on this.
  • Kristian Vättö - Wednesday, October 1, 2014 - link

    Seagate 600 Pro is basically an enterprise drive (28% over-provisioning etc), so it does have full power-loss protection. It uses tantalum capacitors like other enterprise SSDs.

    http://www.anandtech.com/show/6935/seagate-600-ssd...

    As for the SSD 730, it too has full power-loss protection, which is because of its enterprise background (it's essentially an S3500 with an overclocked controller/NAND and a more client-optimized firmware). The power-loss protection implementation is the same as in the S3500 and S3700.
  • maofthnun - Wednesday, October 1, 2014 - link

    Thank you. I'll be targeting those two as my future purchase.
  • RAMdiskSeeker - Wednesday, October 1, 2014 - link

    If the 256GB drive were formatted with a 110GB partition, would it operate in Dynamic Write Acceleration 100% of the time? If so, this would be an interesting way to get an SLC drive.
  • Romberry - Friday, January 9, 2015 - link

    I'm really not sure that the AnandTech Storage Bench 2013 does an adequate job of characterizing the performance of this drive or really, any drive in a consumer-class environment. And I'm not sure that filling all the LBAs and looking at the pseudo-SLC step-down as the drive is filled really tells us anything useful (other than where the break points are... and how much use is that?) either.

    Performance consistency? Same deal. Almost no one uses consumer-class drives this way (large, steady, long-term massive writes), and those who do use drives this way likely aren't using consumer-class drives.

    I can really take nothing useful away from this review. And BTW, this whole "Crucial doesn't really have power protection, we didn't actually bother checking but just assumed and repeated the marketing speak before" stuff is not the kind of thing I expect from AnandTech. With that kind of care being taken in these articles, I'll be careful to read things here with the same sort of skepticism I had previously reserved for other sites. I'd sort of suspended that skepticism with AnandTech over the years. My mistake.
