AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Like our earlier Storage Benches, the test is still application trace based: we record all IO requests made to a test system, play them back on the drive we are testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. For readability I'm not including the full description of the test here, so make sure to read our Storage Bench 2013 introduction for the full details.

AnandTech Storage Bench 2013 - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, Bioshock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
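
To make the two metrics concrete, here is a minimal sketch of how they could be computed from a playback log. This is illustrative only, not AnandTech's actual tooling; the record layout (bytes transferred plus per-IO service time) and the function name are assumptions.

```python
# Illustrative sketch only (not AnandTech's actual tooling): how the two
# Destroyer metrics could be derived from a trace-playback log. Each completed
# IO is assumed to be logged as (bytes_transferred, service_time_us).

def destroyer_metrics(completions, total_runtime_seconds):
    """Return (average data rate in MB/s, average service time in microseconds)."""
    total_bytes = sum(size for size, _ in completions)
    total_service_us = sum(svc for _, svc in completions)

    # Average data rate: all bytes moved divided by wall-clock runtime, so any
    # stretch where the drive bogs down drags the whole number down.
    data_rate_mbps = (total_bytes / 1e6) / total_runtime_seconds

    # Average service time: per-IO completion latency, which weighs the bursty,
    # high-queue-depth phases that a run-wide throughput figure tends to hide.
    avg_service_us = total_service_us / len(completions)
    return data_rate_mbps, avg_service_us

# Tiny usage example with three fake IOs over half a second of runtime.
print(destroyer_metrics([(4096, 120), (131072, 900), (4096, 85)], 0.5))
```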

Storage Bench 2013 - The Destroyer (Data Rate)

Wow, this actually looks pretty bad. The 256GB M600 is slower than the 256GB MX100, and I suspect the reason is that under sustained workloads the M600 has to migrate data from SLC to MLC at the same time it is absorbing host IOs, so performance drops due to the internal IO overhead. The 1TB drive fares better thanks to its higher parallelism, but even then the M550 and 840 EVO are faster.
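
The cost of that internal migration is easy to see with a back-of-envelope model. The sketch below is a deliberate simplification, not a description of Micron's firmware: it only assumes that once there is no idle time for the SLC cache to drain, every host write eventually triggers a second internal program into MLC on the same NAND.

```python
# Simplified illustration (not Micron's actual firmware behaviour): effective
# host write speed when SLC-to-MLC migration must run concurrently with
# incoming host writes instead of during idle time.

def effective_write_mbps(raw_nand_mbps, programs_per_host_byte):
    """programs_per_host_byte is ~1 when data lands in its final home directly
    (or migration happens during idle time), and ~2 when every byte is written
    to SLC first and later rewritten to MLC while the host keeps writing."""
    return raw_nand_mbps / programs_per_host_byte

raw = 400  # hypothetical raw NAND program bandwidth in MB/s
print(effective_write_mbps(raw, 1))  # idle time available: migration hides in the background
print(effective_write_mbps(raw, 2))  # sustained load: roughly half the host-visible throughput
```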

Storage Bench 2013 - The Destroyer (Service Time)

Comments

  • makerofthegames - Monday, September 29, 2014

    If the cost is low enough, they might be able to compete with hard drives. A two-disk RAID0 of these 1TB drives could replace my 2TB WD Black, which I store my game library on. And even a slow drive like this is a million times faster than any hard drive.

    That said, it's still a $900 set of SSDs fighting with a $200 hard drive. What we really need is a $200 1TB SSD, even a horribly slow one (is it possible to pack four bits into one cell? Like a QLC or something? That might be the way to do it). That would be able to compete not just in the performance sector, but in the bulk storage arena.

    For people like me, capacity also affects performance, because it means I can install more apps/games to that drive instead of the slow spinning rust. I actually bought a very low-performing Mushkin 180GB SSD for my desktop, because it was the same price as the 120GB drives everyone else was slinging. That meant I could fit more games onto it, even the big ones like Skyrim.
  • sirius3100 - Monday, September 29, 2014

    Afaik QLC has been used in some USB-sticks in the past. But for SSDs the amount of write cycles QLC-NAND would be able to endure might be too low.
  • bernstein - Monday, September 29, 2014

    you are just wrong, it's an order of magnitude BETTER than a M500 & still 5x better than MX100 : http://techreport.com/r.x/micron-m600/db2-100-writ...
  • milli - Monday, September 29, 2014

    That review wasn't up yet when I posted my comment.
    But you can add that it's still 340x worse than the ARC 100 in that same test (which is also a budget drive). It's worse than the MX100 in the read test and 5x worse than the ARC.
    So yeah, service times are just terrible on Crucial's 256GB drives (all models).
  • nirwander - Monday, September 29, 2014

    Obviously, Dynamic Write Acceleration is not meant to be benchmarked. And "client workload" is not about constant high pressure on the SSD, so the drive is basically ok.
  • kmmatney - Monday, September 29, 2014

    Agreed. It seems like the whole premise of Dynamic Write Acceleration requires idle time to move data off the SLC NAND, but benchmarking doesn't allow that to happen (and isn't like real-life client usage). Also, if you just compare the MX100 256GB vs the M600 256GB, the newer SSD does have better write speeds, and does better at everything except the Destroyer test.
  • hojnikb - Monday, September 29, 2014

    I wonder if Crucial is gonna bring DWA to their consumer line as well.
  • Samus - Monday, September 29, 2014

    The M500 sure could have used it back in the day. The 120GB model had appalling write performance.
  • PrivacyIsNotCriminal - Monday, September 29, 2014

    Appreciate the brief write-up on encryption, and I understand this may be a technically challenging area to detail. But in a post-Snowden world with increasingly complex malware and an emphasis on data mining, we should all be pressing for stronger protective technologies.

    Additional article depth on encryption technologies, certification authorities and related technical metrics would be appreciated by many of us who are not IT professionals, but are concerned about protecting our personal LANs and links to our wireless/cellular devices.

    Contrary to the government's and the RIAA's most recent assertions, a desire for privacy and freedom from warrantless searches should be a fundamental American value.

    Thanks for the in depth technical reviews and hope Anand is doing well.
  • kaelynthedove78 - Monday, September 29, 2014

    This explains the data loss issues we've had with the MX100 series, both under Windows 7 and FreeNAS.

    With all C-states enabled (the default and recommended configuration, which AnandTech doesn't use since some highly advertised drives are badly designed and suffer up to a 40% IOPS drop), the drives don't properly handle suspending and resuming the system.

    Under FreeNAS, the zpool would slowly accumulate corruption and during the next scrubbing the whole zpool would get trashed and the only option was to restore all data from backup.

    Under Windows, strange errors, like being unable to properly recognise USB devices or install Windows updates, would appear little by little after every suspend/resume cycle until the machine would refuse to boot at all.

    A workaround is to either disable all power-saving C-states or to disable HIPM and DIPM on *all* disk controllers, even those which don't have Micron drives connected. Or to never suspend/resume.

    We decided to return all our Micron drives, about 350 total, and get Intel SSDs instead. They're not cheap and not the fastest, but at least I don't have to keep re-imaging systems every week.

    For information on how to enable/disable HIPM and DIPM under Windows 7 please see:
    www.sevenforums.com/tutorials/177819-ahci-link-power-management-enable-hipm-dipm.html
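
As a rough companion to the registry tutorial linked in the comment above, here is a hedged sketch of flipping the same setting with powercfg. The GUIDs (the "Hard disk" subgroup and the "AHCI Link Power Management - HIPM/DIPM" setting) and the meaning of index 0 ("Active", i.e. HIPM/DIPM off) are assumptions from memory rather than verified values, so check them against the tutorial before running this from an elevated prompt.

```python
# Hedged sketch, not verified vendor guidance: force AHCI Link Power Management
# to "Active" (HIPM/DIPM off) for the current Windows power plan via powercfg.
# ASSUMPTIONS: the GUIDs below are the "Hard disk" subgroup and the
# "AHCI Link Power Management - HIPM/DIPM" setting, and index 0 means "Active".
import subprocess

DISK_SUBGROUP = "0012ee47-9041-4b5d-9b77-535fba8b1442"
AHCI_LPM_SETTING = "0b2d69d7-a2a1-449c-9680-f91c70521c60"

commands = [
    # Unhide the setting so it also shows up in the Power Options UI.
    ["powercfg", "-attributes", DISK_SUBGROUP, AHCI_LPM_SETTING, "-ATTRIB_HIDE"],
    # Set "Active" (0) for both AC and DC on the current scheme, then re-apply it.
    ["powercfg", "-setacvalueindex", "SCHEME_CURRENT", DISK_SUBGROUP, AHCI_LPM_SETTING, "0"],
    ["powercfg", "-setdcvalueindex", "SCHEME_CURRENT", DISK_SUBGROUP, AHCI_LPM_SETTING, "0"],
    ["powercfg", "-setactive", "SCHEME_CURRENT"],
]
for cmd in commands:
    subprocess.run(cmd, check=True)  # requires an elevated (administrator) prompt
```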
