Buying an SSD for your notebook or desktop is nice. You get more consistent performance. Applications launch extremely fast. And if you choose the right SSD, you really curb the painful slowdown of your PC over time. I’ve argued that an SSD is the single best upgrade you can do for your computer, and I still believe that to be the case. However, at the end of the day, it’s a luxury item. It’s like saying that buying a Ferrari will help you accelerate quicker. That may be true, but it’s not necessary.

In the enterprise world, however, SSDs are even more important. Our own Johan de Gelas had his first experience with an SSD in one of his enterprise workloads a year ago. His OLTP test looked at the performance difference between 15,000RPM SAS drives and SSDs in a database server, with Johan trying each as both the data and log drives.

Using a single SSD (Intel’s X25-E) for a data drive and a single SSD for a log drive is faster than running eight 15,000RPM SAS drives in RAID 10, plus another two in RAID 0 as a logging drive.

Not only is performance higher, but total power consumption is much lower. Under full load eight SAS drives use 153W, compared to 2 - 4W for a single Intel X25-E. There are also reliability benefits. While mechanical storage requires redundancy in case of a failed disk, SSDs don’t. As long as you’ve properly matched your controller, NAND and ultimately your capacity to your workload, an SSD should fail predictably.
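As a rough sketch of what that power gap means over a year of 24/7 operation (using the 153W figure above and the worst-case 4W for the X25-E; this is just arithmetic, not a TCO claim, and it ignores cooling):

```python
# Back-of-the-envelope energy savings from replacing the SAS array
# with a single SSD, using the load power figures quoted above.
SAS_ARRAY_WATTS = 153.0  # eight 15,000RPM SAS drives under full load
SSD_WATTS = 4.0          # single Intel X25-E, worst case

HOURS_PER_YEAR = 24 * 365  # 8760 hours, assuming 24/7 operation

saved_watts = SAS_ARRAY_WATTS - SSD_WATTS
saved_kwh_per_year = saved_watts * HOURS_PER_YEAR / 1000.0

print(f"Savings: {saved_watts:.0f}W continuous, "
      f"~{saved_kwh_per_year:.0f} kWh per year")
```

Multiply that by every drive array in a datacenter and the attraction is obvious, even before you count the reduced cooling load.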

The overwhelming number of poorly designed SSDs on the market today is one reason most enterprise customers are unwilling to consider SSDs. The high margins available in the enterprise market are the main reason SSD makers are so eager to conquer it.

Micron’s Attempt

Just six months ago we were first introduced to Crucial’s RealSSD C300. Not only was it the first SSD we tested with a native 6Gbps SATA interface, but it was also one of the first to truly outperform Intel across the board. A few missteps later, we found the C300 to be a good contender, but still our second choice behind SandForce-based drives like the Corsair Force or OCZ Vertex 2.

Earlier this week Micron, Crucial’s parent company, called me up to talk about a new SSD. This drive would only ship under the Micron name as it’s aimed squarely at the enterprise market. It’s the Micron RealSSD P300.

The biggest difference between the P300 and the C300 is that the former uses SLC (Single Level Cell) NAND Flash instead of MLC NAND. As you may remember from my earlier SSD articles, SLC and MLC NAND are nearly identical - they just store different amounts of data per NAND cell (1 vs. 2).


SLC (left) vs. MLC (right) NAND

The benefits of SLC are higher performance and a longer lifespan. The downside is cost. SLC NAND is at least 2x the price of MLC NAND. You take up the same die area as MLC but you get half the storage. It’s also produced in lower quantities so you get at least twice the cost.

              SLC NAND flash    MLC NAND flash
Random Read   25 µs             50 µs
Erase         2ms per block     2ms per block
Programming   250 µs            900 µs
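To get a feel for what those program times mean, here's a rough sketch of the per-die write throughput they imply. Note the 4KB page size is my assumption for illustration; the table above doesn't specify page size:

```python
# Rough per-die program throughput implied by the timing table above.
# Assumes a 4KB NAND page (an assumption, not a figure from the table).
PAGE_BYTES = 4 * 1024

def program_bandwidth_mb_s(program_time_us: float) -> float:
    """MB/s for one die programming back-to-back pages."""
    pages_per_second = 1_000_000 / program_time_us
    return pages_per_second * PAGE_BYTES / 1_000_000

slc = program_bandwidth_mb_s(250)  # SLC program time
mlc = program_bandwidth_mb_s(900)  # MLC program time
print(f"SLC: ~{slc:.1f} MB/s per die, MLC: ~{mlc:.1f} MB/s per die")
```

The roughly 3.5x per-die advantage is why controllers paired with SLC NAND can sustain much higher write speeds from the same number of channels.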

Micron wouldn’t share pricing but it expects drives to be priced under $10/GB. That’s actually cheaper than Intel’s X25-E, despite being 2 - 3x more than what we pay for consumer MLC drives. Even if we’re talking $9/GB that’s a bargain for enterprise customers if you can replace a whole stack of 15K RPM HDDs with just one or two of these.

The controller in the P300 is nearly identical to what was in the C300. The main differences are twofold. First, the P300’s controller supports ECC/CRC from the controller down into the NAND. Micron was unable to go into any more specifics on what was protected via ECC vs. CRC. Secondly, in order to deal with the faster write speed of SLC NAND, the P300’s internal buffers and pathways operate at a quicker rate. Think of the P300’s controller as a slightly evolved version of what we have in the C300, with ECC/CRC and SLC NAND support.


The C300

The rest of the controller specs are identical. We still have the massive 256MB external DRAM and unchanged cache size on-die. The Marvell controller still supports 6Gbps SATA although the P300 doesn’t have SAS support.

Micron P300 Specifications
                                  50GB                    100GB                   200GB
Formatted Capacity                46.5GB                  93.1GB                  186.3GB
NAND Capacity                     64GB SLC                128GB SLC               256GB SLC
Endurance (Total Bytes Written)   1 Petabyte              1.5 Petabytes           3.5 Petabytes
MTBF                              2 million device hours  2 million device hours  2 million device hours
Power Consumption                 < 3.8W                  < 3.8W                  < 3.8W
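Those endurance ratings translate into a very healthy number of drive writes per day. A quick sketch, assuming a 5-year service life (the 5-year period is my assumption, not a figure from Micron):

```python
# Converting the Total Bytes Written ratings above into drive writes
# per day (DWPD), assuming a 5-year service life (assumed, not from
# Micron's datasheet).
def drive_writes_per_day(tbw_tb, formatted_gb, years=5):
    total_gb_written = tbw_tb * 1000  # TB -> GB, decimal units
    return total_gb_written / (years * 365 * formatted_gb)

# 50GB model: 1 Petabyte = 1000 TB of rated writes
print(f"50GB:  {drive_writes_per_day(1000, 46.5):.1f} DWPD")
# 200GB model: 3.5 Petabytes = 3500 TB
print(f"200GB: {drive_writes_per_day(3500, 186.3):.1f} DWPD")
```

Writing the entire drive ten-plus times per day, every day, for five years is the sort of headroom write-heavy database and logging workloads demand from SLC.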

The P300 will be available in three capacities: 50GB, 100GB and 200GB. The drives ship with 64GB, 128GB and 256GB of SLC NAND on them by default. Roughly 27% of the drive capacity is designated as spare area for wear leveling and bad block replacement. This is in line with other enterprise drives like the original 50/100/200GB SandForce drives and the Intel X25-E. Micron’s P300 datasheet seems to imply that the drive will dynamically use unpartitioned LBAs as spare area. In other words, if you need more capacity or have a heavier workload you can change the ratio of user area to spare area accordingly.
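The ~27% figure is easy to check from the raw NAND and formatted capacities in the spec table, taking the numbers at face value (binary vs. decimal gigabyte accounting muddies this slightly, but the ratio is what matters):

```python
# Verifying the ~27% spare area figure from the raw NAND and
# formatted capacities listed in the spec table above.
def spare_area_pct(raw_gb, formatted_gb):
    return (1 - formatted_gb / raw_gb) * 100

for raw, fmt in [(64, 46.5), (128, 93.1), (256, 186.3)]:
    print(f"{raw}GB raw, {fmt}GB formatted -> "
          f"{spare_area_pct(raw, fmt):.1f}% spare")
```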

Micron shared some P300 performance data with me:

Micron P300 Performance Specifications
                        Peak              Sustained
4KB Random Read         Up to 60K IOPS    Up to 44K IOPS
4KB Random Write        Up to 45.2K IOPS  Up to 16K IOPS
128KB Sequential Read   Up to 360MB/s     Up to 360MB/s
128KB Sequential Write  Up to 275MB/s     Up to 255MB/s
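To put the sustained random I/O numbers in mechanical-disk terms, here's a rough equivalence sketch. The ~180 IOPS per 15K RPM SAS drive is a common rule of thumb, not a figure from this article:

```python
# How many 15K RPM SAS spindles it would take to match the P300's
# sustained random I/O, assuming ~180 IOPS per drive (a rough rule
# of thumb for small random accesses, not a measured figure).
import math

SAS_15K_IOPS = 180

def drives_needed(target_iops: int) -> int:
    return math.ceil(target_iops / SAS_15K_IOPS)

print(f"Sustained 4KB random read:  ~{drives_needed(44_000)} spindles")
print(f"Sustained 4KB random write: ~{drives_needed(16_000)} spindles")
```

Even if real-world numbers come in well below spec, replacing racks of spindles with a handful of 2.5" drives is the pitch in a nutshell.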

The data looks good, but I’m working on our Enterprise SSD test suite right now so I’ll hold off any judgment until we get a drive to test. Micron is sampling drives today and expects to begin mass production in October.

49 Comments

  • Gilbert Osmond - Friday, August 13, 2010 - link

    Whether a Ferrari is considered a luxury or not also depends on the user/usage...


    A Ferrari is a luxury for 99.99997328471% of Ferrari owners. Aside from German police units on the Autobahn (who would probably use German high-end cars anyhow), is there any necessary or serious use for Ferraris? Not really.
  • jabberwolf - Friday, August 27, 2010 - link

    It does depend on the task, and you are right:
    99% wouldn't use it for anything other than flash.

    But let's say I want to use it for 100-200 virtual desktops on a server, and these VDI sessions eat IOPS.
    Not to mention, I need that IOPS capacity as well as a drive that can erase stale data when the VDI sessions are deprovisioned and the next group needs to log in.

    These drives are a godsend to enterprise users who are looking for this and don't want to go and buy 20 SAS drives and a drive array just to get the same IOPS.

    If you don't know what I am talking about, then you're probably not in that 1% market ;)
  • eanazag - Thursday, August 12, 2010 - link

    I am glad to see more enterprise offerings. I hope the pricing is right. The spec numbers look good. I have followed the articles closely on SSDs. I own 3 Intel X-25M 80GBs and a 60GB Mushkin Callisto Enhanced Deluxe. I passed on the Crucial 64GB for performance drops without TRIM and the 256GB model really has the enviable specs. I am using these in a ESXi server running training VMs of Windows XP with a MS SQL DB employed. The performance allows me to skip RAID and really pack in the VMs.

    I want to see more OPAL spec encryption on SSDs other than Samsung's on the market. The current Samsung drives' only appeal to me is the OPAL spec encryption. The new Samsung drives may be reasonable. The company I work for could potentially gobble these drive types up over the next few years to replace software encryption on client machines. RAID 5/6 and the like would be nice to have supported also.
  • ///// - Thursday, August 12, 2010 - link

    How come SSDs don't require redundancy? Are you saying they never fail prematurely? Is there any data?
  • Gilbert Osmond - Thursday, August 12, 2010 - link

    For all-you-can-read (and more) about SSD speed, reliability, marketplace analysis, history, etc., I *highly* recommend this site:

    http://www.storagesearch.com

    See this link for a specific discussion on data integrity:
    http://www.storagesearch.com/sandforce-art1.html

    The site is one person's experienced, informed, independent, technically competent perspective on the past, present, and future of the SSD market. The editor's focus is on the enterprise, but that's because until very recently SSDs have been almost exclusively an enterprise-level product.

    Some of his newer articles address the newly-emerging lower levels of the market, i.e. SOHO / consumer applications such as laptops and small NAS.
  • ///// - Thursday, August 12, 2010 - link

    Seriously? I want data and you give me this?
  • Gilbert Osmond - Friday, August 13, 2010 - link

    Seriously? I'm going to waste my time doing your homework for you? Show some gratitude and use your mouse-button to find your own answers; they're out there on the Intarweb waiting for you.
  • yknott - Thursday, August 12, 2010 - link

    SSDs definitely require redundancy. An SSD is just like any other electronic item: it can and will fail at some point. In the tests that Johan performed above, I'm guessing the SSDs were in some sort of RAID 10 configuration. The single SSD configuration was purely for illustration; no one would run that kind of configuration in a real production system.

    I'm not sure I would even question if/when SSDs fail. It doesn't matter: if the data you are writing to your SSD is important and uptime is valuable, then redundancy is required.

    On a side note, Anand/Johan, I'd love to find out the specs for the server you ran those tests on. I'm curious to see if the RAID controller was being maxed out at any point. A simple test would be to see how the results scaled as you moved from 4->6->8 drives in your array. I'm willing to bet that you're starting to hit a limit where the controller can't keep up with all of the i/o requests coming through.
  • jimhsu - Thursday, August 12, 2010 - link

    Read: http://research.microsoft.com/pubs/63596/USENIX-08...
    http://forums.anandtech.com/showthread.php?t=20710...

    Briefly:

    SSDs have more predictable failure modes than hard drives. Read/write errors are directly correlated with block lifetime. Raw error rates in general are at least 10 times lower than hard drives. Parity varies from industry standard (URE 10^-14) to extreme (e.g. SandForce). Controller failures are not covered.

    I highly recommend searching the internet (http://tinyurl.com/32lzc6s) Plenty of useful info.
  • ///// - Friday, August 13, 2010 - link

    The internet tends to focus on easily measured or predicted failure modes that may not lead to sudden drive failures at all.
    What is known about "Controller failures" as in here
    http://forums.anandtech.com/showpost.php?p=2887340...
