A Note on Real World Performance

The majority of our SSD test suite is focused on I/O bound tests. These are benchmarks that intentionally shift the bottleneck to the SSD and away from the CPU/GPU/memory subsystem in order to give us the best idea of which drives are the fastest. Unfortunately, as many of you correctly point out, these numbers don't always give you a good idea of how tangible the performance improvement is in the real world.

Some of them do. Our 128KB sequential read/write tests as well as the ATTO and AS-SSD results give you a good indication of large file copy performance. Our small file random read/write tests tell a portion of the story for things like web browser cache accesses, but those are difficult to directly relate to experiences in the real world.
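To make it concrete, here is a minimal sketch in Python of what a 128KB sequential read test boils down to. The file name is a placeholder, and this simplified version reads through the filesystem (and the OS cache) rather than issuing direct I/O against the raw device the way tools like Iometer do, so treat the output as illustrative only.

```python
import time

CHUNK = 128 * 1024                 # 128KB transfer size, matching the test's name
TEST_FILE = "large_test_file.bin"  # placeholder: any big, pre-existing file

start = time.perf_counter()
total = 0
with open(TEST_FILE, "rb", buffering=0) as f:  # unbuffered: one syscall per 128KB request
    while True:
        data = f.read(CHUNK)                   # sequential requests, one after another
        if not data:
            break
        total += len(data)
elapsed = time.perf_counter() - start

# The OS page cache can inflate this figure; real benchmarks use direct I/O.
print(f"Read {total / 1e6:.0f} MB sequentially at {total / elapsed / 1e6:.1f} MB/s")
```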

So why not exclusively use real world performance tests? It turns out that although the move from a hard drive to a decent SSD is tremendous, finding differences between individual SSDs is harder to quantify in a single real world metric. Take application launch time for example. I stopped including that data in our reviews because the graphs ended up looking like this:

All of the SSDs performed the same. It's not just application launch times, though. Here is data from our Chrome Build test, which times how long it takes to compile the Chromium project:

[Chart: Build Chrome]

Even going back two generations of SSDs, at the same capacity nearly all of these drives perform within a couple of percent of one another. Note that the Vertex 3 is a 6Gbps drive, yet it still doesn't outperform its predecessor.

So do all SSDs perform the same then? The answer is a little more complicated. As I mentioned at the start of this review, I do long term evaluation of every drive I recommend in my own personal system. If I recommend a drive particularly highly, I'll actually hand out samples for use in other AnandTech editors' systems. For example, back when I wanted to measure actual write amplification on SandForce drives, I sent three Vertex 2s to three different AnandTech editors. I had them use the drives normally for two to three months and then looked at the resulting wear on the NAND.
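For context, write amplification is simply the ratio of data physically written to NAND versus data the host asked to write. A back-of-the-envelope sketch with hypothetical counter values, since which SMART attributes expose these figures (and in what units) varies by controller:

```python
# Write amplification = NAND writes / host writes. Both values below are
# hypothetical examples of lifetime counters read from a drive's SMART data.
host_writes_gib = 1850.0   # e.g. a "lifetime writes from host" counter
nand_writes_gib = 1240.0   # e.g. a controller-reported NAND writes counter

write_amplification = nand_writes_gib / host_writes_gib
print(f"Write amplification: {write_amplification:.2f}x")
# SandForce's real-time compression can push this ratio below 1.0 for typical
# desktop data, which is exactly what the Vertex 2 experiment set out to measure.
```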

In doing these real world tests I get a good feel for whether a drive is actually faster or slower than another. My experience typically tracks with the benchmark results, but it's always important to feel it first hand. What I've noticed is that although single tasks perform very similarly on all SSDs, it's during periods of heavy I/O activity that you can feel the difference between drives. Unfortunately these periods of heavy I/O activity aren't easily measured, at least not in a repeatable fashion. Getting file copies, compiles, web browsing, application launches, IM log updates and searches to all start at the same time while properly measuring overall performance is nearly impossible without some sort of automated tool. Unfortunately, most system-wide benchmarks are geared towards CPU or GPU performance and as a result try to minimize the impact of I/O.
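To illustrate why this is hard to script, here is a rough sketch of the idea: kick off several I/O-heavy jobs at the same moment and time the whole mix. The task functions are placeholders standing in for real workloads; the hard part in practice is making the mix repeatable and capturing responsiveness rather than just total wall-clock time.

```python
import threading
import time

# Placeholder workloads standing in for a large file copy, a compile job and a
# cache-heavy scan. In a real harness each of these would actually hit the disk.
def copy_files():
    time.sleep(2.0)

def compile_project():
    time.sleep(3.0)

def scan_browser_cache():
    time.sleep(1.5)

tasks = [copy_files, compile_project, scan_browser_cache]
threads = [threading.Thread(target=task) for task in tasks]

start = time.perf_counter()
for thread in threads:   # launch everything at (roughly) the same instant
    thread.start()
for thread in threads:
    thread.join()
print(f"Wall-clock time for the whole mix: {time.perf_counter() - start:.1f}s")
```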

The best we can offer is our Storage Bench suite. In those tests we play back I/O requests captured from my own use of a PC over a long period of time. All other bottlenecks are excluded from the performance measurement, but the source of the workload is real world in nature.
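Conceptually, trace playback looks something like the sketch below: a log of captured requests (operation, offset, size) is replayed against a target file or device and only the storage time is measured. The three-entry trace and its format are invented for illustration; the real Storage Bench capture is far larger and also preserves details like queue depth and request timing.

```python
import time

# Each entry is (operation, byte offset, transfer size) -- a simplified,
# made-up trace format.
trace = [
    ("read", 0, 4096),
    ("write", 1_048_576, 131_072),
    ("read", 65_536, 8_192),
]

TARGET = "replay_target.bin"      # placeholder target file
with open(TARGET, "wb") as f:
    f.truncate(8 * 1024 * 1024)   # pre-size it so every offset in the trace is valid

start = time.perf_counter()
with open(TARGET, "r+b", buffering=0) as f:
    for op, offset, size in trace:
        f.seek(offset)
        if op == "read":
            f.read(size)
        else:
            f.write(b"\0" * size)
elapsed = time.perf_counter() - start
print(f"Replayed {len(trace)} requests in {elapsed * 1000:.2f} ms")
```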

What you have to keep in mind is that a performance advantage in our Storage Bench suite isn't going to translate linearly into the same overall performance impact on your system. Remember, these are I/O bound tests, so a 20% increase in your Heavy 2011 score means the drive you're looking at will be 20% faster in that particular type of heavy I/O bound workload. Most desktop PCs aren't under that sort of load constantly, so that 20% advantage may only be felt 20% of the time. The rest of the time your drive may be no quicker than a model from last year.
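The arithmetic behind that caveat is just a weighted average. A quick sketch, treating both 20% figures above as round assumptions:

```python
# Amdahl's-law style estimate: a faster drive only helps during the fraction of
# time the system is actually storage bound. Both inputs are assumptions.
io_speedup        = 1.20   # drive is 20% faster in the I/O bound workload
io_bound_fraction = 0.20   # system is storage bound roughly 20% of the time

new_time = io_bound_fraction / io_speedup + (1.0 - io_bound_fraction)
print(f"Overall speedup: {1.0 / new_time:.3f}x")   # ~1.03x, i.e. about 3% overall
```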

The point of our benchmarks isn't to tell you that only the newest SSDs are fast, but rather to show you the best performing drive at a given price point. The best values in SSDs are, without a doubt, going to be last year's models. I'd say the 6Gbps drives are interesting mostly for folks who do a lot of large file copies; for most general use you're fine with an older drive. Almost any SSD is better than a hard drive, and as long as you choose a good one you won't regret the jump.

I like the SF-2281 series because, despite things like the BSOD issues, SandForce has put a lot more development and validation time into this controller than it did into its predecessor. Even Intel's SSD 320 is supposed to be more reliable than the X25-M G2 that came before it. Improvements do happen from one generation to the next, but they're evolutionary; they just aren't going to be as dramatic as the jump from a hard drive to an SSD.

So use these numbers for what they tell you (which drive is the fastest) but keep in mind that a 20% advantage in an I/O bound scenario isn't going to mean that your system is 20% faster in all cases.


Comments

  • Chloiber - Thursday, June 23, 2011 - link

    Hi Anand,

    Is it possible to run the same (1 hour) torture tests on other SSDs such as the Intel 320, Intel 510 and C300/m4? It would be interesting to see how the (in my opinion) huge performance hit on the SandForce drives compares to other SSDs/controllers.
  • Impulses - Thursday, June 23, 2011 - link

    I think he's done similar tests in past reviews, though probably not the very same 60 min test. Crucial drives had issues recovering from similar situations, and Intel drives were the most resilient (shocking, right?). The SF drives are particularly susceptible to that sort of degradation when hammered with incompressible data, due to the very nature of how their compression algorithm works.

    That's one reason I've never been very high on SF drives... Currently I have two Intel drives being used as OS drives (where that sorta scenario is improbable), but if I decided to upgrade the desktop OS drive I could very well end up using one of those smaller drives as a scratch disk for working with video, or as a spare disk for game installs. SF wouldn't necessarily be ideally suited for that.
  • Chloiber - Friday, June 24, 2011 - link

    Yes, but without the same 60 minute test the comparison is pretty much useless, sadly. You can see this very well in the Agility 3 review: nearly no performance drop with the 20 minute torture test.
    I know that the SF drives drop to about 65% of their write performance, both SF1 and SF2. And it's not just a state you reach by torturing the drive: nearly everyone who runs an AS SSD benchmark some months after initial use sees the lower performance (in the case of SF2 that's 70-90MB/s seq. write).
    But I'd like to see a direct comparison from Anand; that would just be great.

    And yes, that's also a reason why I won't buy SF drives. I just don't like how they try to confuse customers. They claim 450MB/s+ writes... yeah right, in a very special case. And even worse, it drops down even further over time. Intel is honest about the performance of its SSDs; that's what I like about it. But I'm pretty sure SF gained countless customers just because of those "incredible" performance stats.
  • Phil NBR - Thursday, June 23, 2011 - link

    "So why not exclusively use real world performance tests? It turns out that although the move from a hard drive to a decent SSD is tremendous, finding differences between individual SSDs is harder to quantify in a single real world metric. "

    I don't think it's that hard. Sites like Hardwareheaven and Techspot show meaningful differences between SSDs in real world settings. I would like to see Anandtech include truly real world benchmarks again. I/O bound benchmarks don't tell the whole story.
  • ckryan - Thursday, June 23, 2011 - link

    It's my belief that these real world tests are contrived in and of themselves to some degree.
  • Impulses - Thursday, June 23, 2011 - link

    I don't frequent Hardware Heaven often but I do like the way they compare and present results in their GPU reviews, so I went looking for their "real world" SSD tests when I saw that comment. Out of the 5 or 6 tests, 3 or 4 are just large sequential read/write tests... Sure, seeing 200 minutes vs 210 minutes might be somewhat more intuitive than a generic benchmark score, but it doesn't tell you a whole lot more, tbh. It's all basically just OS/game install tests and file transfer/scan tests, with two exceptions...

    One is their OS boot up test, where the difference between all current drives is usually 2-3 seconds at most (time to hibernate and resume might be more valuable imo), and the other is an HD video capture test that might actually be the only real world test they're doing of any real value. It showcases the biggest disparity between the drives (thanks to sequential write speeds with raw uncompressed footage), and it really is something you could be doing day in and day out that isn't easily represented by synthetic benchmarks or some of the other test scenarios Anand uses. Worth looking into...
  • cjs150 - Thursday, June 23, 2011 - link

    Seems to be a lot of conspiracy theorists about today.

    I read Anandtech because I do not detect bias. When he gets something wrong he will tell us. Sometimes I do not understand what he is saying, but that is because I am an amateur geek, not a full time pro!

    Now my noob question.

    What is the best way of setting up a system with an SSD and a traditional HD? Should I use the SSD for the OS and programs and the HD for the Windows swap file, or would it be fine to use the SSD for all OS functions? I'm happy to partition the HD so that there is a small partition for the swap file.
  • Impulses - Thursday, June 23, 2011 - link

    Leave the swap file alone; Windows manages it just fine, and a Windows engineer was quoted during the launch of Win7 as saying that SSDs are particularly well suited to the swap file's purpose... If you have enough RAM it's gonna see little use besides the background maintenance Windows does on active processes. Just install your OS and apps on the SSD as you normally would, let Win7 partition it (or Vista; if you're using XP you'll wanna look into proper partition alignment), and then use your HDD for data and for large game installs that don't fit on the SSD.

    If you have lots of games installed at any one time it's worth looking into symbolic links or junction points; they provide an easy way to move game directories to the SSD and back w/o altering or affecting the existing install (or w/o messing w/ registry keys, it's like an OS level shortcut that's transparent to the programs).

    If you have a small SSD (and particularly if you have lots of RAM), it's worth turning off hibernation, as the hibernation file will take up a few GB of space on the drive (it scales with the amount of RAM). The swap file should stay dynamic and shouldn't grow too large if it's rarely used.
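    For anyone wanting to try the junction trick, a rough sketch in Python (Windows only; the paths are made-up examples, and mklink is a cmd.exe builtin so it has to be invoked through "cmd /c"):

    ```python
    # Move a game folder to the SSD and leave a junction behind at the old
    # location so the game (and any registry entries) still see the original path.
    import shutil
    import subprocess

    hdd_path = r"D:\Games\SomeGame"   # original install location on the hard drive
    ssd_path = r"C:\Games\SomeGame"   # destination on the SSD

    shutil.move(hdd_path, ssd_path)   # physically move the game's files
    subprocess.run(["cmd", "/c", "mklink", "/J", hdd_path, ssd_path], check=True)
    ```

    Moving it back is just the reverse: delete the junction and move the folder to its original location.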
  • jwilliams4200 - Thursday, June 23, 2011 - link

    Did I miss where you commented on the Desktop Iometer - 4KB Random Read chart?

    The 120GB Vertex 3 Max IOPS and the Patriot Wildfire were in the basement, with 35 MB/s or lower performance.

    What is going on?
  • Anand Lal Shimpi - Thursday, June 23, 2011 - link

    The 240GB Vertex 3 results were a typo; I've updated/corrected that entry. The Toshiba 32nm drives are even slower, likely due to the specific characteristics of that NAND vs. the IMFT devices.

    Random read performance is a weak area of many drives this generation for some reason. Even Crucial's m4 is slower than last year's C300 in this department.

    Take care,
    Anand
