AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Like our earlier Storage Benches, the test is still application trace based: we record all IO requests made to a test system, play them back on the drive we are testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. For readability I'm not including the full description of the test here, so make sure to read our Storage Bench 2013 introduction for the full details.
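
As a rough illustration of what the playback looks like, the Python sketch below shows the core of a trace replay loop. The (op, offset, size) trace format is hypothetical; real tooling also reproduces the recorded queue depths and inter-IO timing and uses direct IO to bypass the page cache, which this simplified synchronous (QD1) loop does not.

```python
# Minimal sketch of trace-based playback: replay recorded IO requests
# against a target file and log each request's service time.
# The trace format (op, offset, size) is a hypothetical example; it is
# not AnandTech's actual trace layout.
import os
import time

def replay_trace(trace, path):
    """Replay IO requests against `path`; return (op, size, service_s) per IO."""
    results = []
    fd = os.open(path, os.O_RDWR)
    try:
        for op, offset, size in trace:
            start = time.perf_counter()
            os.lseek(fd, offset, os.SEEK_SET)
            if op == "read":
                os.read(fd, size)
            else:
                os.write(fd, b"\x00" * size)
            # Service time for this request, queue depth 1.
            results.append((op, size, time.perf_counter() - start))
    finally:
        os.close(fd)
    return results
```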

AnandTech Storage Bench 2013 - The Destroyer

| Workload | Description | Applications Used |
| --- | --- | --- |
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, Bioshock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, back up the system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the drive's throughput while it was running the test workload, which can be a very good indication of overall performance. What average data rate doesn't capture well is the response time of very bursty (read: high queue depth) IO. By reporting average service time we weight latency for queued IOs heavily. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
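
To make the relationship between the two metrics concrete, here is a rough sketch that computes both from a per-IO log like the one produced by the playback sketch above. Note that with real queued IO the data rate would be computed over wall-clock run time rather than summed service time; the point of the sketch is that every queued IO contributes its full latency to the service time average, so slow high-queue-depth bursts pull it up even when throughput still looks healthy.

```python
# Sketch of the two reported metrics, computed from a per-IO log of
# (op, size, service_s) tuples such as replay_trace() above returns.
def summarize(results):
    total_bytes = sum(size for _, size, _ in results)
    total_service = sum(t for _, _, t in results)
    # Throughput over the time spent servicing IOs; a real run with
    # queued IO would divide by wall-clock time instead.
    avg_data_rate_mbs = (total_bytes / 1e6) / total_service
    # Mean per-IO latency: every queued IO contributes its full
    # service time, so high-QD bursts weigh heavily here.
    avg_service_time_us = (total_service / len(results)) * 1e6
    return avg_data_rate_mbs, avg_service_time_us
```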

Storage Bench 2013 - The Destroyer (Data Rate)

Wow, this actually looks pretty bad. The 256GB M600 is slower than the 256GB MX100, likely because under sustained workloads the M600 has to migrate data from SLC to MLC at the same time it is taking in host IOs, so performance drops due to the internal IO overhead. The 1TB drive does better thanks to its higher parallelism, but even then the M550 and 840 EVO are faster.
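
As a back-of-the-envelope illustration of that overhead: once the SLC cache has to be folded into MLC, each host byte costs roughly three back-end transfers (the original write to SLC, a read from SLC, and a rewrite to MLC). The model below is purely illustrative; the 3x factor and the throughput figure are assumptions, not Micron firmware behavior or spec numbers.

```python
# Illustrative model of SLC-to-MLC folding overhead. The 3x cost and
# the 450 MB/s back-end figure are assumptions for illustration only,
# not Micron's specifications.
def sustained_host_write_mbs(backend_mbs, folding):
    # With folding active, each host byte costs ~3 back-end transfers:
    # write to SLC, read from SLC, rewrite to MLC.
    return backend_mbs / (3.0 if folding else 1.0)

print(sustained_host_write_mbs(450.0, folding=False))  # burst: 450.0 MB/s
print(sustained_host_write_mbs(450.0, folding=True))   # sustained: 150.0 MB/s
```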

Storage Bench 2013 - The Destroyer (Service Time)

56 Comments

  • Kristian Vättö - Tuesday, September 30, 2014

    We used to do that a couple of years ago but then we reached a point where SSDs became practically indistinguishable. The truth is that for light workloads what matters is that you have an SSD, not what model the SSD actually is. That is why we are recommending the MX100 for the majority of users as it provides the best value.

    I think our Light suite already does a good job of characterizing performance under typical consumer workloads. The differences between drives are small, which reflects the minimal difference one would notice in the real world with light usage. It doesn't overly promote high-end drives like purely synthetic tests do.

    Then again, that applies to all components. It's not like we test CPUs and GPUs under typical usage -- it's just the heavy use cases. I mean, we could test the application launch speed in our CPU reviews, but it's common knowledge that CPUs today are all so fast that the difference is negligible. Or we could test GPUs for how smoothly they can run Windows Aero, but again it's widely known that any modern GPU can handle that just fine.

    The issue with testing heavy usage scenarios in the real world is the number of variables I mentioned earlier. There tends to be a lot of multitasking involved, so creating a reliable test is extremely hard. One huge problem is the variability of user input speed (i.e. how quickly you click things etc. -- this varies from round to round during testing). That can be fixed with excellent scripting skills, but unfortunately I have a total lack of those.

    FYI, I spent a lot of time playing around with real world tests about a year ago, but I was never able to create something that met my criteria. Either the test was so basic (like installing an app) that it showed no difference between drives, or the results wouldn't be consistent when adding more variables. I'm not trying to avoid real world tests, not at all; it's just that I haven't been able to create a suite that would be relevant and accurate at the same time.

    Also, once we get some NVMe drives in for review, I plan to revisit my real world testing since that presents a chance for a greater difference between drives. Right now AHCI and SATA 6Gbps limit performance because they account for the largest share of latency, which is why you don't really see differences between drives under light workloads: the AHCI and SATA latency absorbs any latency advantage that a particular drive provides.
  • AnnonymousCoward - Tuesday, September 30, 2014

    Thanks for explaining The State of SSDs.

    I suspect a lot of people don't realize there's negligible performance difference across SSDs. And I think lots of people put SSDs in RAID0! Reviews I've seen show zero real-world benefit.

    This isn't a criticism, but it's practically misleading for a review to only include graphs with a wide range of performance. What a real-world test does is get us back to reality. I think ideally a review should start with real-world, and all the other stuff almost belongs in an appendix.

    Users should prioritize SSDs with:
    1. Good enough (excellent) performance.
    2. High reliability and data protection.
    3. Low cost.

    If #1 is too easy, then #2 and #3 should get more attention. I generally recommend Intel SSDs because I suspect they have the best reliability standards, but I really don't know, and most people probably don't either. OCZ wouldn't have shipped as many drives as they did if people had been aware of their reliability problems.
  • leexgx - Saturday, November 1, 2014

    Nowadays you can't buy a bad SSD (unless it's Phison based; they normally make cheap USB flash pen drives). Even JMicron-based SSDs are OK now.

    It's only compatibility problems that make an SSD bad with some setups.

    The JMicron JMF602 was a very, very, very bad SSD controller when they made their first two (did I say that too many times?): http://www.anandtech.com/show/2614/8 (1 second write delay)
  • Impulses - Monday, September 29, 2014

    Probably because top-tier SSDs reached a point a while ago where the differences in performing basic tasks like that are down to milliseconds, which would tell the reader even less.

    For large transfers the sequential tests are wholly representative of the task.

    I think Anand used to have a test in the early days of SSD reviews where he'd time opening five apps right after boot, but it'd basically be a dead heat with any decent drive these days.
  • Gigaplex - Monday, September 29, 2014

    It would tell the reader that any of the drives being tested would fit the bill. Currently, readers might see that drive A is 20% faster than drive B and think that will give 20% better real-world performance.

    Both types of tests are useful, doing strictly real-world tests would miss information too.
  • AnnonymousCoward - Tuesday, September 30, 2014

    > is basically milliseconds, which would tell the reader even less.

    Wrong; that tells the reader MORE! If all modern video cards produced within 1fps of each other, would you rather see that, or solely relative performance graphs that show an apparent difference?
  • Wolfpup - Monday, September 29, 2014

    Darn, that's a shame these don't have full data loss protection. I assumed they did too! Still, Micron/Crucial and Intel are my top choices for drives :)
  • Wormstyle - Tuesday, September 30, 2014

    Thanks for posting the information here. I think you are a bit soft on them with the power failure protection marketing, but you did a good job explaining what they were doing and hopefully they will now accurately reflect the capability of the product in their marketing collateral. A lot of people have bought these products with the wrong expectations on power failure, although for most applications they are still very good drives. What is the source for the market data you posted in the article?
  • Kristian Vättö - Tuesday, September 30, 2014

    It's straight from the M500's product page.

    http://www.micron.com/products/solid-state-storage...
  • Wormstyle - Tuesday, September 30, 2014

    The size of the SSD market by OEM, channel, industrial and OEM breakdown of notebook, tablet, and desktop? I'm not seeing it at that link.
