Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire run. We use both standard pseudo-randomly generated data (data is random within a write, but duplicated between writes) as well as fully random data (data is random within a write and random across most writes) to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values you see in the graphs for each drive. For an understanding of why the type of data you're writing matters, read our original SandForce article.
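The access pattern of this test can be sketched in a few lines. This is a hypothetical illustration, not Iometer itself: it writes incompressible 4KB blocks at random aligned offsets within a fixed span using a handful of concurrent writer threads, then reports average MB/s. The span and duration are scaled far down from the real test's 8GB/3-minute parameters so the sketch runs anywhere.

```python
import os
import random
import threading
import time

BLOCK = 4 * 1024  # 4KB transfer size, as in the Iometer test


def random_write_mbps(path, span, duration, qd=3):
    """Write 4KB blocks at random aligned offsets within `span` bytes
    for `duration` seconds using `qd` concurrent writer threads, and
    return the average throughput in MB/s."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        os.ftruncate(fd, span)
        buf = os.urandom(BLOCK)  # "fully random" (incompressible) payload
        counts = [0] * qd        # one counter per thread, no shared state
        deadline = time.monotonic() + duration

        def worker(i):
            while time.monotonic() < deadline:
                # pick a 4KB-aligned offset anywhere in the span
                offset = random.randrange(span // BLOCK) * BLOCK
                os.pwrite(fd, buf, offset)  # positional write, thread-safe
                counts[i] += 1

        threads = [threading.Thread(target=worker, args=(i,))
                   for i in range(qd)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return sum(counts) * BLOCK / duration / 1e6
    finally:
        os.close(fd)
```

Raising `qd` to 32 mirrors the higher-queue-depth variant later on this page. Note that run against an ordinary filesystem file this measures the OS page cache as much as the drive; the real test talks to the drive directly, so treat this only as an illustration of the access pattern.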

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Random write performance has always been the weak spot of Toshiba’s controllers, and this latest combination of controller and firmware is no different. Compared to the other SSDs here, the Toshiba based SSDNow V+ 100 doesn’t look very good; it’s even slower than the old Indilinx based Corsair Nova. It’s still over 2x the speed of the fastest 3.5” desktop hard drive, however, and enough to give you the feel of an SSD for the most part.

Crucial loses a decent amount of performance when going from 128GB to 64GB. The RealSSD C300 drops below even the worst case scenario performance of the Corsair Force F40.

Note that not all SandForce drives are created equal here. If a manufacturer doesn’t meet SandForce’s sales requirements, its drives are capped at a maximum of 50MB/s in this test. That’s the case with the Patriot Inferno, while OCZ’s Agility 2 enforces the limit voluntarily.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Kingston's performance doesn't change regardless of queue depth. The SandForce drives either stay the same or get a little faster as they have more requests to parallelize and write sequentially.

Our random read test is similar to the random write test, except that we lift the 8GB LBA space restriction:

Iometer - 4KB Random Read, QD=3

Random read speed is much better, but still not as good as the competition. At 19.7MB/s, the SSDNow V+ 100 is well over an order of magnitude faster than the 600GB VelociRaptor. The Indilinx based Corsair Nova is faster still, however, and nothing can top the RealSSD C300.


  • Dorin Nicolaescu-Musteață - Thursday, November 11, 2010 - link

    Anand, what about the Samsung 470 Series?

    It's been out since August and looks like a very nice drive. Why in the world have reviews only started appearing online this week?
  • Nickel020 - Thursday, November 11, 2010 - link

    Thanks for the review! I've got a few suggestions/questions though.

    I've been out of the loop for a while and am looking at SandForce and other newer drives. So some SandForce drives have the I/O limitations that were intended for the SF-1200, and some have SF-1500-like performance?

    I'm surprised the Corsair F40 does so well. I thought the lower capacity drives performed worse than the 120GB versions, but it holds up really well. Or is this just a special case with the 40GB model, and the 60GB is actually worse than the 40GB one? The 60GB SandForce drives are also much better value than the 40GB ones: 50% more capacity for >20% more price. I find it strange you didn't include them, yet called the 64GB C300 the value drive at that price point.

    I'm pretty sure the Indilinx 60GB, the unlocked SF-1200 60GB and the X25-M 80GB are the most popular drives out there, which makes them great reference points, but they're not in the review. The former two aren't in Bench either, unfortunately. Do you have any around that you could test?

    You tested the Crucial drives on the ICH10R, right?

    Also, I would appreciate some blog posts or small articles about developments with newer FWs. I remember FW development improving the Indilinx drives significantly, and I'm always wondering how accurate your older reviews still are given that newer FWs are out now. It would also be nice if you could list the tested FW version in Bench.

    It would also be great if you could look at SSD performance in MacBooks. I want to put one in my MacBook Pro (Late 2008), but all the talk of freezing has me hesitating, and I haven't seen an in-depth look at this issue. Is it related to what kind of SSD you use, and does it make a difference whether you have a late 2008 or mid 2009 model? It would also be interesting to see how the lack of TRIM actually affects different drives under OS X.

    That's all for now, thanks again!
  • retnuh - Thursday, November 11, 2010 - link

    I've had an OWC Mercury Extreme Pro 240GB in my late 2008 MBP since May; not one issue, no freezing. Best upgrade you can do.

  • iwod - Thursday, November 11, 2010 - link

    I've posted and asked on many forums and never found an answer to my MBA (MacBook Air) question.

    Why does the MBA do so well in testing when its performance numbers are below those of the king of SSD controllers, SandForce?

    No one could answer. A number of reviews pointed out that their MBA actually feels snappier than their MacBook with a SandForce or Intel SSD. That sounds impossible at first, but numerous other review sites seem to confirm similar findings. Of course there is no way to test it directly, since the MBA does not have a regular SATA slot.

    Now this article actually shows the truth. The same Toshiba SSD controller used in the MBA's SSD is at the top of the chart in BOTH the synthetic benchmarks and the real-world (AnandTech Storage Bench) benchmarks. What we've been treating as the holy grail of SSD performance, 4K random read/write, didn't seem to matter: Toshiba was literally at the bottom of the chart in those tests.

    There is a reason why Apple chose an inferior part (to us, at the time) instead of SandForce. The argument that it was chosen for its always-on GC doesn't make sense, since SandForce has the same capability within its firmware.

    One reason would be that Toshiba is a NAND manufacturer itself, and buying NAND and controller directly from Toshiba would be cheaper. The other is that Toshiba (probably with SanDisk involved as well, given their JV) had a controller that is very fast.

    There has to be a missing piece in our performance tests, something these companies know and we don't.
  • Chloiber - Thursday, November 11, 2010 - link

    I'd like to see more real-world tests - and I don't consider the AnandTech Storage Bench to be "real world" - it's still a bench, like PCMark.

    But yes, you are right: synthetic tests tell us little about the performance you actually get from an SSD. There are more unknown variables than we think.
    You may see big differences in benches like SYSmark or PCMark - and even bigger differences in synthetic tests like AS SSD or even Iometer. But these scores tell us little about REAL world performance - and by REAL I mean things like:
    - "How long does it take to start Photoshop while running Virus Scan?"
    - "How long does it take to start iTunes while unzipping a not-so-much-compressed zip-file?"

    Those are the things I care about. And interestingly, you often get COMPLETELY different results than what you would expect when looking at synthetic tests or "half-synthetic" tests like PCMark or the AnandTech Storage Bench.
  • Anand Lal Shimpi - Thursday, November 11, 2010 - link

    I used to run a lot of those types of tests; however, I quickly found that if you choose your Iometer and other benchmarks appropriately, they don't add any new data. And oftentimes they are so limited in scope (e.g. launching an application with virus scan in the background) that you don't see any appreciable differences between drives. Most high end SSDs are fast enough to do most of these types of tasks just as quickly as one another. It's when you string a bunch of operations together and look for cumulative differences in response time or performance that you can really begin to see which one is faster. These types of scenarios are virtually impossible to perform consistently by hand; that's where our test suite comes in.

    AnandTech Storage Bench, PCMark and even SYSmark do what is necessary: they measure performance across a more complex usage case. PCMark Vantage is a great showcase of truly light-workload I/O performance, while SYSmark is more CPU bound and shows you how small the differences can be. Our own benchmark offers a more modern set of usage models (we actually do run Photoshop while virus scan is active and actually edit images in Photoshop, all while doing other things as well).

    All of these tests are application based; they are simply scripted or isolate the I/O component specifically. They give us a look into bursts of activity that are, again, nearly impossible to reproduce by hand with a stopwatch.

    Benchmarking a specific task usually just repeats information we've already presented, fails to present the bigger picture, or shows no repeatable difference between drives. I can absolutely add those types of benchmarks back in; however, I originally pulled them out because I believed they didn't add anything appreciable to the reviews.

    Of course this is your site, if you guys would like me to present some of that data I definitely can :)

    Take care,
  • Nickel020 - Thursday, November 11, 2010 - link

    The problem is that the synthetic tests you do are hard to interpret for just about anyone. "What drive is the best for this usage profile?" is still really hard to answer after reading your reviews (not that anyone else does a better job).

    And even if there is little difference between today's drives in the level-loading-time tests you used to do, we don't know that unless you show it. Right now the average AT reader reads this review and doesn't know that the more expensive drives won't load his games noticeably faster or perform better for video editing.

    Maybe you should give recommendations for certain usage profiles, like video editing, photo editing, gaming, etc. Even if you're just saying that there's not going to be a noticeable difference.
  • wumpus - Thursday, November 11, 2010 - link

    It might help if you included statements like "you don't see any appreciable differences between drives. Most high end SSDs are fast enough to do most of these types of tasks just as quickly as one another." a bit more often in the articles. While we might be interested in the technical data, it would usually be foolish to buy SSDs by anything other than size, price, and reliability.
  • Chloiber - Saturday, November 13, 2010 - link

    But that's the thing. You don't see any difference in an application a "normal" home user would use. We see huge differences in those synthetic tests, but in reality you don't get any faster loading times.

    Of course you can test it like this and say:
    "You don't see much difference between these three SSDs in "real world" application tests. Get the cheapest SSD (or most robust, whatever)."

    Or another position (I think the one you are currently in) is:
    "You don't see much difference between these three SSDs in "real world" application tests, so let's stress them some more and base our verdict on those stress tests."

    The thing is:
    a) You don't know how the SSDs would REALLY react if you stressed them like this in reality. They are still synthetic tests, and unless you can prove there are scenarios where differences appear (without any influence from some kind of bench program), they don't tell us that much.

    b) I think we have to begin to widen our horizons a little bit. Why exactly is it that you don't see any benefit from, let's say, a 50k IOPS drive over a 15k IOPS drive? Shouldn't you see significantly faster load times?

    I'm telling you this because of future SSDs. We get 30k IOPS, soon 60k IOPS, and in one year maybe over 100k IOPS. The score in your benches gets bigger and bigger...and bigger...
    And what exactly does the user get? NOTHING, because everything else in his computer is limiting his SSD (which is already happening right now!)!

    I agree that you have to test hardware in scenarios where nothing else is limiting your subject. That's why you use a 4GHz i7 when testing GPUs, and why you test CPU gaming performance at a very low resolution.
    But I think it's really important that you also test scenarios a user experiences in reality. And that means, in this case, "real world" benches. And yes, there will be nearly no difference there. But isn't that exactly what I want to know? If I spend $600 on a fking RevoDrive and nothing loads faster, I WANNA KNOW ABOUT IT!

    I hope you see my point :)
  • Out of Box Experience - Thursday, November 11, 2010 - link

    Real-world testing of SSDs should be done in a worst-case scenario on the lowest common denominator.

    They should be plug and play on XP machines, without any tweaks, on the slowest computer you have, to amplify the differences between drives.

    I use a copy/paste test on Atom CPUs to gauge the real-world differences between platter drives and SSDs.

    Using 200MB of data (900+ files in 80 or so folders), I simply time a copy/paste of that data on the Atom computer.

    Using a faster computer WILL reduce the "Relative" speed gap between drives to the point where it becomes hard to tell which of two drives is actually the fastest

    Using Windows 7 with its funky caching scheme will make ALL the drives appear to copy and paste at the same speed on the Atom core, and therefore it cannot be used for this test.

    A 40GB Vertex 2 can copy and paste this data in 55 seconds (3.6MB/sec)
    A 5400RPM Western Digital Laptop drive does it in 54 seconds
    A 7200RPM Western Digital Desktop Drive takes 17 seconds

    ALL testing was done under XP-SP2 without ANY tweaks!
    All tests were repeated for accuracy
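    The stopwatch method described above can be approximated in a few lines. This is a sketch, not the commenter's actual procedure: it builds a tree of random-content files, copies it, and reports elapsed time and MB/s; the folder/file counts and sizes are placeholders scaled down from the ~200MB / 900-file set described.

```python
import os
import shutil
import time


def make_tree(root, folders, files_per_folder, size):
    """Build a directory tree of small random-content files to copy.
    (The commenter's set was ~200MB in 900+ files across ~80 folders;
    the parameters here are placeholders.)"""
    for i in range(folders):
        d = os.path.join(root, "folder%02d" % i)
        os.makedirs(d)
        for j in range(files_per_folder):
            with open(os.path.join(d, "file%03d.bin" % j), "wb") as f:
                f.write(os.urandom(size))


def timed_copy(src, dst):
    """Copy the tree and return (elapsed seconds, MB/s), mirroring the
    stopwatch copy/paste test: total bytes divided by wall-clock time."""
    total = sum(
        os.path.getsize(os.path.join(dirpath, name))
        for dirpath, _, names in os.walk(src)
        for name in names
    )
    start = time.monotonic()
    shutil.copytree(src, dst)
    elapsed = time.monotonic() - start
    return elapsed, total / max(elapsed, 1e-9) / 1e6
```

    As with any file-copy test, the OS write cache dominates unless it is flushed or bypassed, which is exactly the Windows 7 caching effect the commenter mentions.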

    SandForce SSDs are HORRIBLE at handling data that is NOT compressible or that is already on the drive in compressed form.

    Any drive that requires Windows 7 or multiple tweaks just to give you "synthetic" numbers that have no bearing on the real world is worthless.

    Show us how they compare in a worst-case scenario on the lowest common denominator, for results we can actually use, please.

    I'm tired of hearing how great SandForce drives are when they can't even beat a 5400RPM laptop drive in a real-world test such as the one I've just described.
