Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). As I've explained in the comments in previous reviews, simulating the type of random access you see in a desktop workload is difficult to do. Small file desktop accesses aren't usually sequential but they're not fully random either. By limiting the LBA space to 8GB we somewhat simulate a constrained random access pattern, but again it's still more random than what you'd see on your machine. Your best bet for real world performance is to look at our Storage Bench charts near the end of the review as they accurately record and play back traces of real world workloads.
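
To make the "constrained random" idea concrete, here is a minimal sketch in Python of 4KB writes at random, aligned offsets limited to an 8GB span. This is illustrative only: our actual tests use Iometer with direct I/O, whereas this goes through the OS cache, and the file name here is made up.

import os, random

SPAN = 8 * 1024**3   # limit random offsets to an 8GB LBA space
BLOCK = 4096         # 4KB transfer size

fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)        # reserve the full 8GB span
buf = os.urandom(BLOCK)

for _ in range(10000):
    # every write lands on a random 4KB-aligned offset inside the span;
    # the narrower the span, the more "constrained" the randomness
    offset = random.randrange(SPAN // BLOCK) * BLOCK
    os.pwrite(fd, buf, offset)

os.close(fd)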

For our random access tests I perform three concurrent IOs and run the test for three minutes. The results reported are the average MB/s over the entire run. We use both standard pseudo-randomly generated data (data is random within a write, but duplicated between writes) as well as fully random data (data is random within a write and random across most writes) to show you both the maximum and minimum performance SandForce based drives offer in these tests. The average performance of SF drives will likely fall somewhere between the two values you see in the graphs for each drive. For an understanding of why the type of data you write to SF drives matters, read our original SandForce article.
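
The difference between the two data patterns comes down to how each 4KB buffer is filled before being written. A hedged illustration (the function names are mine, not Iometer's internals):

import os

BLOCK = 4096

# "Pseudo random" data: the buffer is random internally, but the same
# payload is reused for every write. SandForce's compression/dedup can
# collapse the duplicates, so this produces the drive's best-case numbers.
repeated_buf = os.urandom(BLOCK)

def repeatable_payload():
    return repeated_buf           # identical 4KB payload every time

# "Fully random" data: a fresh random buffer for every write. Nothing
# compresses or deduplicates, so this produces the worst-case numbers.
def incompressible_payload():
    return os.urandom(BLOCK)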

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

The Corsair Nova is our Indilinx Barefoot representative in this preview, and you can see how performance has improved with the Martini controller. While the original Indilinx Barefoot traded good sequential performance for slower-than-Intel random performance, Martini fixes the problem. It's not in the class of SandForce's SF-1200, but Indilinx appears to have achieved random write performance equal to Intel's X25-M G2.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32
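
For readers wondering what a higher queue depth looks like from the software side, here is a hedged Python sketch that keeps 32 writes in flight using a thread pool. Iometer issues asynchronous I/Os rather than threads, so treat this as an approximation of the access pattern, not our actual harness:

import os, random
from concurrent.futures import ThreadPoolExecutor

SPAN = 8 * 1024**3    # same 8GB LBA space as the QD=3 test
BLOCK = 4096
QUEUE_DEPTH = 32      # 32 outstanding writes instead of 3

fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)
buf = os.urandom(BLOCK)

def one_write(_):
    # each task issues a single 4KB write at a random aligned offset
    offset = random.randrange(SPAN // BLOCK) * BLOCK
    os.pwrite(fd, buf, offset)

# the pool keeps up to 32 writes outstanding at any given moment,
# approximating a queue depth of 32 from the application's side
with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    pool.map(one_write, range(100000))

os.close(fd)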

Our random read test is similar to the random write test, except that we lift the 8GB LBA space restriction:

Iometer - 4KB Random Read, QD=3

Random read performance falls short of Intel's and basically hasn't changed since the Barefoot. It's not bad at all, but it's not industry leading either.

Comments

  • leexgx - Tuesday, November 16, 2010

    I agree, 128GB seems like the minimum I would get.

    I got the 256GB M225 myself before the £100 price increase and have 50GB free on it. I had a Corsair S128 before that: a bit slow at writing, but load times were quite fast. The main thing that got faster was disk-based installs like Steam preload decryption (when you preload a Steam game it's encrypted and has to be decrypted, which is very disk intensive since it has to read and write a lot). That would make my Corsair S128 stall the system for short stretches as latencies went up to 1000; the M225 system was fully usable while it was decrypting Black Ops.

    Apart from JMicron and older first/second gen Samsung SSDs, you'd be hard pressed to notice the difference unless you were running server loads (if I went from an M225 to an SF-1200 based SSD, my PC might boot up one second faster; same goes for games and programs).

    The only reason you'd see me replacing this SSD is if I were getting a 512GB version (like I've got money to burn :) ) or 2x SF-1200 based SSDs in RAID 0 (since GC works well on them).

    The Seagate XT drives may not seem that good, but if you're mainly playing games or opening the same files often, you could put 3-4 of them in RAID 0 and that gives you up to 16GB of cache data to work with.
  • iamezza - Wednesday, November 17, 2010

    I would consider myself a power user: I spend much of the day on my PC working and then later gaming, and I get by just fine with an 80GB SSD. I'm currently using less than half its capacity. Moving the user files in Windows 7 is a piece of piss: it takes about 5 minutes and then it's done, just drag and drop onto your storage drive.
    The only thing I can't do is install my games directory onto the SSD; even 128GB wouldn't be enough for that.

    I recently installed a 30GB SSD in my HTPC and it works a treat. Being an HTPC, everything apart from the OS goes onto a storage drive, so 30GB is more than enough.
  • Mugur - Thursday, November 18, 2010

    You are right regarding this particular case, but you can see from other small drives, like the Corsair F40, the Intel X25-V 40GB, etc., how the performance scales down. The F40 looks almost identical to the F120, and we all know the Intel 40GB has half the channels, so sequential write is the most affected, not so much the other factors.
  • Crucial - Tuesday, November 16, 2010

    All these reviews keep making me happy with my purchase of the 128GB Crucial drive. It seems to be a solid all-around performer that still stays toward the top of the heap.
  • Mr Perfect - Tuesday, November 16, 2010

    Not to divert this nifty SSD article, but if the drive manufacturers are so dead set on using round, metric numbers for their bytes, then I think I'm going to start calling them Metric Gigabytes and Imperial Gigabytes. It follows the current naming schemes much better than these ridiculous gibibytes. Who came up with that name anyhow?
  • akedia - Tuesday, November 16, 2010

    *sighs*

    The prefix giga- is metric, while the prefix gibi- is binary. Your phrase "metric gigabyte" is redundant, while your "imperial gigabyte" is nonsensical.

    A Gigabyte is 10^9, or 1,000,000,000, bytes. https://secure.wikimedia.org/wikipedia/en/wiki/Gig...
    A Gibibyte is 2^30, or 1,073,741,824, bytes. https://secure.wikimedia.org/wikipedia/en/wiki/Gib...

    Using the word gigabyte for those nice, round numbers is correct. The problem is operating system vendors whose systems compute sizes in gibibytes but label them as gigabytes. I'm not sure about others, but OS X now correctly displays gigabytes, erasing the apparent (but not actual) discrepancy between the box the drive came in and what the system reports about the drive.
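
    To put numbers on that apparent discrepancy: a drive sold as 80GB holds 80,000,000,000 bytes, and 80 × 10^9 / 2^30 ≈ 74.5, so an operating system that divides by 2^30 but still prints "GB" reports it as a 74.5GB drive even though no capacity is actually missing.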

    In answer to your question about who named them, from the Wikipedia entry for Binary Prefix:

    "The set of binary prefixes that were eventually adopted, now referred to as the "IEC prefixes," were first proposed by the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) in 1995. ... The new prefixes kibi (kilobinary), mebi (megabinary) and gibi (gigabinary) were also proposed at the time, and the proposed symbols for the prefixes were kb, Mb and Gb respectively, rather than Ki, Mi and Gi. The proposal was not accepted at the time.

    "The Institute of Electrical and Electronic Engineers (IEEE) began to collaborate with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) to find acceptable names for binary prefixes. The IEC proposed kibi, mebi, gibi and tebi, with the symbols Ki, Mi, Gi and Ti respectively, in 1996.

    "The names for the new prefixes are derived from the original SI prefixes combined with the term binary, but contracted, by taking the first two letters of the SI prefix and 'bi' from binary. The first letter of each such prefix is therefore identical to the corresponding SI prefixes, except for "K", which is used interchangeably with "k", whereas in SI, only the lower-case k represents 1000."

    That's quite the user name for someone who can't manage a couple of Wikipedia lookups. The right to be outraged comes with the obligation to be informed.
  • Mr Perfect - Thursday, November 18, 2010

    Yes, it was supposed to be nonsensical. I apologize for not putting more ;) smilies in my post.
  • Mr Perfect - Thursday, November 18, 2010

    Also, it wasn't just a troll post, even if it apparently looks that way. I really AM annoyed at how the whole thing is being handled. Even though a 128GB SSD really DOES have 128GB of flash on it, and is sold AS a 128GB drive, the user-accessible space will be far less, depending on the controller model. We're using the right units of measure, but people are STILL ending up with less than they thought they were getting. This was probably our one shot at getting accurate labeling on drives.

    I suppose I should have just said that, rather than trying to have some fun with it. :|
  • FunBunny2 - Tuesday, November 16, 2010

    The real use case for SSDs is high-normal-form RDBMSs. Let's get a TPC-C test of these things, using both flat-file type schemas and BCNF type schemas, on an HDD (pick one for all tests going forward) and on the SSD under test. Then we'll know whether they're worth the cost.
  • dbt - Tuesday, November 16, 2010

    Garbage collection - really for the masses? Vendor claims that it will help where TRIM support is not available are confusing me.

    Which filesystems does garbage collection support? I'll bet FAT16/32/64 (exFAT) are covered, and NTFS as well.

    On other filesystems - ext4, ZFS, <anyother>FS - how does the SSD controller know which blocks are "free"?
