No, it’s not the new Indilinx JetStream controller - that’ll arrive in the second half of the year at the earliest. And it’s definitely not Intel’s 3rd generation X25-M; we won’t see that until Q4. The SSD I posted a teaser of last week is a modified version of OCZ’s Agility 2.

The modification? Instead of around 28% of the drive’s NAND set aside as spare area, this version of the Agility 2 has 13%. You get more capacity to store data, at the expense of potentially lower performance. How much lower? That’s exactly what I’ve spent the past several days trying to find out.

The drive looks just like a standard Agility 2. OCZ often makes special runs of drives for testing with no official labels or markings; in fact, that's how my first SandForce drive arrived late last year. Internally the drive looks identical to the Agility 2 we reviewed not too long ago.

OCZ lists the firmware as 1.01 compared to the standard 1.0 firmware on the shipping Agility 2. The only difference I'm aware of is the amount of NAND set aside as spare area.

SandForce and Spare Area

When you write data to a SandForce drive the controller attempts to represent the data you’re writing with fewer bits. What’s stored isn’t your exact data, but a smaller representation of it plus a hash or index so that you can recover the original data. This results in potentially lower write amplification, but greater reliance on the controller and firmware.

SandForce stores some amount of redundant information in order to deal with decreasing reliability of smaller geometry NAND. The redundant data and index/hash of the actual data being written are stored in the drive’s spare area.
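As a loose analogy (SandForce's actual algorithm is proprietary, and the names below are invented for illustration), you can picture the idea as storing a compressed representation of the payload plus a hash used to verify recovery, falling back to the raw data when compression doesn't help:

```python
import hashlib
import zlib

def store(payload: bytes):
    """Toy sketch: keep a compressed representation plus a hash of the
    original, or the raw bytes when compression doesn't save space.
    An analogy for the concept, not SandForce's real scheme."""
    packed = zlib.compress(payload)
    digest = hashlib.sha256(payload).digest()  # lets us verify recovery
    if len(packed) < len(payload):
        return ("compressed", packed, digest)
    return ("raw", payload, digest)

def load(record):
    kind, blob, digest = record
    data = zlib.decompress(blob) if kind == "compressed" else blob
    assert hashlib.sha256(data).digest() == digest  # integrity check
    return data

# Highly compressible data is written with far fewer bits (lower write
# amplification); already-random data gets stored as-is.
rec = store(b"A" * 4096)
```

The upshot is the same trade the article describes: fewer bits physically written for compressible workloads, at the cost of trusting the controller to reconstruct your data correctly.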

While most consumer SSDs dedicate around 7% of their total capacity to spare area, SandForce’s drives have required ~28% until now. As I mentioned at the end of last year, however, SandForce would be bringing a more consumer-focused firmware to market after the SF-1200 with only 13% overprovisioning. That’s what’s loaded on the drive OCZ sent me late last week.

SandForce Overprovisioning Comparison

Advertised Capacity   Total Flash   Formatted Capacity (28% OP)   Formatted Capacity (13% OP)
50GB                  64GB          46.6GB                        55.9GB
100GB                 128GB         93.1GB                        111.8GB
200GB                 256GB         186.3GB                       223.5GB
400GB                 512GB         372.5GB                       447.0GB
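The table's figures work out if you assume "Total Flash" is raw NAND in binary GiB while advertised capacity is decimal GB, and that formatted capacity is what the OS reports in GiB. A quick sanity check (the exact spare fraction comes out to roughly 27%, which gets quoted as 28%):

```python
GIB = 2**30  # a "GB" as the operating system reports it

def formatted_gib(advertised_gb: float) -> float:
    """Formatted capacity in GiB for a drive advertised in decimal GB."""
    return advertised_gb * 10**9 / GIB

def overprovisioning(advertised_gb: float, raw_flash_gib: float) -> float:
    """Fraction of raw NAND held back as spare area."""
    return 1 - advertised_gb * 10**9 / (raw_flash_gib * GIB)

# The 50GB drive exposes ~46.6GiB of its 64GiB of NAND; a 60GB drive
# on the same NAND would expose ~55.9GiB.
print(round(formatted_gib(50), 1))            # → 46.6
print(round(formatted_gib(60), 1))            # → 55.9
print(round(overprovisioning(50, 64) * 100))  # → 27
print(round(overprovisioning(60, 64) * 100))  # → 13
```

This also answers the base-1000 vs. base-1024 question: the "formatted capacity" column is what your OS will actually show.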

As always, if you want to know more about SandForce read this, and if you want to know more about how SSDs work read this.

When Does Spare Area Matter?

In addition to the SandForce-specific uses of spare area, all SSDs use it for three purposes: 1) read-modify-writes, 2) wear leveling and 3) bad block replacement.

If an SSD is running out of open pages and a block full of invalid data needs to be cleaned, its valid contents are copied to a new block allocated from the spare area and the two blocks swap positions. The old block is erased and tossed into the spare area pool, and the formerly spare block is put into regular use.

Recreated from diagram originally produced by IBM's Zurich Research Lab

The spare area is also used for wear leveling. NAND blocks in the spare area are constantly being moved in and out of user space to make sure that all parts of the drive wear evenly.

And finally, if a block does wear out (either expectedly or unexpectedly), its replacement comes from the spare area.
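A toy model can make the three roles concrete. Everything here is invented for illustration (class names, pool sizes); a real flash translation layer is vastly more sophisticated:

```python
class Block:
    """A NAND block in this toy model: some valid page data plus a wear counter."""
    def __init__(self):
        self.pages = []        # payloads of currently valid pages
        self.erase_count = 0   # how many times this block has been erased

    def erase(self):
        self.pages = []
        self.erase_count += 1

class ToySSD:
    def __init__(self, user_blocks=6, spare_blocks=2):
        self.user = [Block() for _ in range(user_blocks)]
        self.spare = [Block() for _ in range(spare_blocks)]

    def clean(self, idx):
        """Read-modify-write: copy a block's valid pages into a spare
        block, erase the old block, and swap it into the spare pool."""
        # Wear leveling: pick the least-worn spare block as the target.
        self.spare.sort(key=lambda b: b.erase_count)
        target = self.spare.pop(0)
        old = self.user[idx]
        target.pages = list(old.pages)  # valid data survives the move
        old.erase()
        self.user[idx] = target         # spare block enters user space
        self.spare.append(old)          # old block rests in the spare pool

    def retire(self, idx):
        """Bad block replacement: a worn-out user block is permanently
        swapped for a spare one (the bad block is simply dropped here)."""
        self.user[idx] = self.spare.pop(0)
```

After `ssd.clean(0)` the data lives in a different physical block while the just-erased one waits in the spare pool, which is exactly why a larger spare area gives the controller more room to spread wear and absorb writes.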

The Impact of Spare Area on Performance
Comments

  • DigitalFreak - Monday, May 3, 2010 - link

    Apparently IBM trusts Sandforce's technology.
  • MrSpadge - Monday, May 3, 2010 - link

    A 60 GB Vertex 2 for the price of the current 50 GB one would make me finally buy an SSD. Actually, even a 60 GB Agility 2 would do the trick!
  • Impulses - Monday, May 3, 2010 - link

    Interesting, Newegg's got the Agility 2 in stock for $399... Vertex 2 is OOS but has an ETA. That makes my choice of what drive to give my sister a lil' harder (I promised her a SSD as a birthday gift last month, gonna install it on her laptop when I visit her soon). The old Vertex/Agility drives are 20GB more for $80 less... I dunno whether the performance bump and capacity loss would be worth it.

    Do the SandForce and Crucial drives feel noticeably faster than an X25-M or Indilinx Barefoot drive in everyday tasks, or are they all so fast that the difference isn't really appreciable outside of heavy multi-tasking or certain heavy tasks? I own an X25-M and an X25-V and I'm ecstatic with both...
  • MadMan007 - Monday, May 3, 2010 - link

    Hello Anand, thanks for the review. I am posting the same comment regarding capacity that I've posted before - I hope it doesn't get ignored this time :) While it's nice to say 'formatted capacity' it is not 100% clear whether that is in HD-style gigabytes (10^9 bytes) or gibibytes (base 1024 - what OSes actually report.) This is very important information imo because people want to know 'How much space am I really getting' or they have a specific space target they need to hit.

    Please clarify this in future reviews! (If not this one too :)) Thanks.
  • anurax - Tuesday, May 4, 2010 - link

    I've had 2 brand new OCZ Vertex Limited Edition drives die on me in the span of 2 weeks, so you guys should really take reliability into consideration when buying a new SSD. Like Anand says, WE are the test pigs here, and the manufacturers don't really care about us or the inconvenience we experience when we have to re-install and reload our systems.

    My Vertex Limited Edition drive just died all of a sudden without any prompt or S.M.A.R.T. notification; it simply cannot be detected anymore. It's so damn frustrating to have such poor reliability standards.

    One thing is 100% sure, OCZ and SandForce are a NO NO NO, they have played me out enough and me forking out hard earned $$$ to be their test pig is simply not acceptable.

    To all you folks out there, seriously be careful about reliability and be even more careful about doing things to hamper the reliability, cuz in the end its your data, time and efforts that are at stake here (unless we are Anand whose job is to fully stress and review these new toys everyday)
  • mattmc61 - Wednesday, May 5, 2010 - link

    Sorry to hear you lost two drives, that must be pretty rare. I lost a 120GB Vertex Turbo myself. No warning, just "poof", and it was gone. I think that's the nature of the beast: there are no moving parts to let S.M.A.R.T. technology know when an SSD is slowly dying. One thing is for sure, you are right, we are guinea pigs when it comes to a technology in its infancy such as SSDs, which are experiencing growing pains. Anand did warn us a while back that we should proceed at our own risk with these drives. He had a few SSDs go poof on him as well.

    It just surprises me when guys buy bleeding edge technology, which usually costs a premium and has a high risk of failure, then proceed to trash-mouth the manufacturer or the technology itself when it fails them. I think some people want the latest and greatest so badly that they have an "aah, that won't happen to me" attitude and go ahead and buy the product. Then when it fails they are shocked and take it personally, like someone deliberately sabotaged them.

    If you did your homework on that OCZ drive like you should have, you would know that the manufacturer really does care about how their SSDs out in the wild are performing. I can tell you from personal experience that when my drive died, they quickly replaced it. OCZ also has a great support forum. I'm sure you won't lose all the money you spent if you just send back the drives for replacement. The bottom line: if you want reliability, go back to mechanical hard drives. If you want bleeding edge, then accept the risks and stop whining.

  • thebeastie - Tuesday, May 4, 2010 - link

    There is no point letting sequential performance have any bearing on your choice of SSD; if you like sequential speed, just buy a mechanical hard drive. But you have been there and know how crap it makes your end user experience.

    That's why Intel is still great value for an SSD: through all the latest random read and write benchmarks Anandtech has come up with, they are still killer speed, while the Indilinx controllers run at 0.5MB/s aligned Windows 7-type performance.

    In other words, anyone choosing an SSD on sequential performance alone is missing the point entirely.
  • Chloiber - Wednesday, May 5, 2010 - link

    Actually, Indilinx is faster on 4k Random Reads with 1 Queue Depth.
  • stoutbeard - Tuesday, May 11, 2010 - link

    So what about when you get the agility 2? How do you get the newest sf-1200 firmware (1.01)? It's not on OCZ's site.
  • hartmut555 - Tuesday, May 25, 2010 - link

    I guess it might be a little late to comment here and expect a response, but I have been reading a few posts on forums suggesting leaving a portion of a mainstream SSD unpartitioned, so that the drive has a little more spare area to work with. Basically, it is the opposite of what this article is about - instead of recovering some of the spare area capacity for normal use, you are setting aside some of the normal use capacity for spare area. (And yes, they are talking about SSDs, not short-stroking a HDD.)

    In this article, it states that both the Intel and SandForce controllers appear to be dynamic in that they use any unused sectors as spare area. However, the tests show that the SandForce controller can have pretty much equivalent performance even when the spare area is decreased. This makes me think that there is some point at which more spare area ceases to provide a performance advantage after the drive has been filled (both user area and spare area) - the inevitable case if you are using SSDs in a RAID setup, since there is no TRIM support.

    The spare area acts as a sort of "buffer", but the controller implementation would make a big difference as to how much advantage a larger buffer might provide. The workload used for testing might also make a big difference in benchmarks, depending on the GC implementation. For instance, if the SSD controller is "lazy" and only shuffles stuff around when a write command is issued, and only enough to make room for the current write, then spare area size will have virtually no impact on performance. However, if the controller is "active" and lines up a large pool of pre-erased blocks, then having a larger spare area would increase the amount of rapid-fire writes that could happen before the pre-erased blocks were all used up and it had to resort to shuffling data around to free up more erase blocks. Finally, real world workloads almost always include a certain amount of idle time, even on servers. If the GC for the SSD is scheduled for drive idle time, then benchmarks that consist of recorded disk activity which are played back all at once would not allow time for the GC to occur.

    Having a complex controller between the port and the flash cells really complicates the evaluation of these drives. It would be nice if we had at least a little info from the manufacturers about stuff like GC scheduling and dynamic spare area usage. Also, it would be interesting to see a benchmark test that is run over a constant time with real-world idle periods (like actually reading the web page that is viewed), and measures wait times for disk activity.

    Has anyone tested the effects of increasing spare area (by leaving part of the drive unpartitioned) for drives like the X25-M that have a small spare area, when TRIM is not available and the drive has reached its "used" (degraded) state?
