In 2008 Intel introduced its first SSD, the X25-M, and with it ushered in a new era of primary storage based on non-volatile memory. Intel may have been there at the beginning, but it missed out on most of the evolution that followed. It wasn't until late 2012, four years later, that Intel showed up with another major controller innovation. The Intel SSD DC S3700 added a focus on IO consistency, which had a remarkable impact on both enterprise and consumer workloads. Once again Intel found itself at the forefront of innovation in the SSD space, only to let others catch up in the years that followed. Now, roughly two years later, Intel is back with another significant evolution of its solid state storage architecture.

Nearly all prior Intel drives, as well as those of its most capable competitors, have played within the confines of the SATA interface. Designed for and limited by the hard drives that came before, SATA was the interface SSDs used to sneak in and take over the high performance market - out of necessity, not preference. The SATA interface and the hard drive form factors that went along with it were the sheep's clothing to the SSD's wolf. It became clear early on that a higher bandwidth interface was necessary to really give SSDs room to grow.

We saw a quick transition from 3Gbps to 6Gbps SATA for SSDs, but rather than move to 12Gbps SATA only to saturate it a year later, most SSD makers set their sights on PCIe. With PCIe 3.0 x16 already capable of delivering 128Gbps of bandwidth, it was clear this was the appropriate IO interface for SSDs. Many SSD vendors saw the writing on the wall early on, but their initial PCIe based SSD solutions typically leveraged a handful of SATA SSD controllers behind a PCIe RAID controller. Only a select few PCIe SSD makers developed their own native controllers. Micron was among the first to really push a native PCIe solution with its P320h and P420m drives.
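For context, the rough arithmetic behind those interface figures is below. This is just a sketch: the usable-bandwidth numbers are approximations that account for line encoding overhead, which is why a PCIe 3.0 x16 link lands slightly under the raw 128Gbps quoted above.

```python
# Approximate usable bandwidth of the interfaces discussed, after line encoding overhead.
def usable_gbps(lanes: int, gt_per_s: float, encoding: float) -> float:
    return lanes * gt_per_s * encoding

links = {
    "SATA 6Gbps (8b/10b)":      usable_gbps(1, 6.0, 8 / 10),     # ~4.8 Gbps  (~0.6 GB/s)
    "PCIe 3.0 x4 (128b/130b)":  usable_gbps(4, 8.0, 128 / 130),  # ~31.5 Gbps (~3.9 GB/s)
    "PCIe 3.0 x16 (128b/130b)": usable_gbps(16, 8.0, 128 / 130), # ~126 Gbps  (~15.8 GB/s)
}

for name, gbps in links.items():
    print(f"{name}: {gbps:.1f} Gbps usable (~{gbps / 8:.2f} GB/s)")
```

The x4 figure is the relevant one for the drives discussed later in this review, and even that leaves SATA far behind.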

Bandwidth limitations were only one reason to ditch SATA. The other bit of legacy that needed shedding was AHCI, the protocol used for communication between host machines and their SATA HBAs (Host Bus Adaptors). AHCI was designed for a world where low latency NAND based SSDs didn't exist. It ends up being a fine protocol for communicating with high latency mechanical disks, but one that consumes an inordinate number of CPU cycles when driving high performance SSDs.

In Intel's example, the Linux AHCI stack alone requires around 27,000 CPU cycles per IO, meaning you need roughly 10 Sandy Bridge CPU cores to drive 1 million IOPS. The solution is a new lightweight, low latency interface - one designed around SSDs and not hard drives. That interface is NVMe (Non-Volatile Memory Express, otherwise known as the NVM Host Controller Interface Specification, or NVMHCI). In the same example, total NVMe overhead drops to 10,000 cycles, or roughly 3.5 Sandy Bridge cores needed to drive 1 million IOPS.
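Those core counts follow directly from the cycle figures. A quick back-of-envelope sketch is below; the ~2.8GHz Sandy Bridge clock speed is our assumption for the math, not a figure Intel quotes.

```python
# Back-of-envelope check on the protocol overhead figures above.
# Assumed clock: Sandy Bridge cores at roughly 2.8GHz (our assumption).
CORE_CLOCK_HZ = 2.8e9
TARGET_IOPS = 1_000_000

def cores_needed(cycles_per_io: int) -> float:
    """Cores consumed by protocol overhead alone at the target IOPS."""
    return cycles_per_io * TARGET_IOPS / CORE_CLOCK_HZ

print(f"AHCI (~27,000 cycles/IO): {cores_needed(27_000):.1f} cores")  # ~9.6, i.e. about 10
print(f"NVMe (~10,000 cycles/IO): {cores_needed(10_000):.1f} cores")  # ~3.6, close to the quoted 3.5
```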

NVMe drives do require updated OS/driver support. Windows 8.1 and Server 2012 R2 both include NVMe support out of the box; older OSes require a miniport driver to enable it. Booting to NVMe drives shouldn't be an issue either.

NVMe is a standard that appears to have broad industry support: Samsung has already launched its own NVMe drives, SandForce announced NVMe support with its SF3700, and today Intel is announcing a family of NVMe SSDs.

The Intel SSD DC P3700, P3600 and P3500 are all PCIe SSDs built around a custom Intel NVMe controller. The controller is an evolution of the design used in the S3700/S3500, with improved internal bandwidth via an expanded 18-channel design, reduced internal latencies and NVMe support built in. The controller connects to as much as 2TB of Intel's own 20nm MLC NAND. The three drives target different endurance and performance needs.

The pricing is insanely competitive for brand new technology. The highest endurance P3700 is priced at around $3/GB, which is similar to what enthusiasts were paying for their SSDs not too long ago. The P3600 trades some performance and endurance for $1.95/GB, and the P3500 drops pricing down to $1.495/GB. The P3700 ships with Intel's highest endurance NAND and the highest over-provisioning percentage (25% spare area vs. 12% on the P3600 and 7% on the P3500). DRAM capacities range from 512MB to 2.5GB of on-board DDR3L. All drives will be available as half-height, half-length PCIe 3.0 x4 add-in cards or 2.5" SFF-8639 drives.

Intel sent us a 1.6TB DC P3700 for review. Based on Intel's per-GB launch pricing, the drive we're reviewing should retail for around $4828.
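The arithmetic behind that estimate is simple; the per-GB numbers below are the launch figures quoted above, and the small gap to $4828 comes from the P3700's actual per-GB price working out to slightly more than an even $3.

```python
# Rough retail estimates from the per-GB launch pricing quoted above.
price_per_gb = {"P3700": 3.00, "P3600": 1.95, "P3500": 1.495}

capacity_gb = 1600  # the 1.6TB P3700 sample reviewed here
estimate = capacity_gb * price_per_gb["P3700"]
print(f"1.6TB P3700 estimate: ${estimate:,.0f}")  # ~$4,800 vs. the quoted $4,828
```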

IO Consistency

A cornerstone of Intel's DC S3700 architecture was its IO consistency. As the P3700 leverages the same basic controller architecture as the S3700, I'd expect a similar IO consistency story. I ran a slightly modified version of our IO consistency test, but the results should still give us some insight into the P3700's behavior from a consistency standpoint.
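For readers unfamiliar with the methodology, the general idea is to hit the drive with a sustained random write workload, log performance every second, and then look at how tightly the per-second results cluster. Below is a minimal sketch of the analysis half of such a test; the log filename and one-value-per-line format are hypothetical stand-ins, not our actual test harness.

```python
# Minimal sketch: summarize per-second IOPS samples from a sustained random write run.
# "iops_log.txt" (one IOPS value per line) is a hypothetical log format.
import statistics

with open("iops_log.txt") as f:
    samples = sorted(float(line) for line in f if line.strip())

avg = statistics.mean(samples)
p01 = samples[int(0.01 * len(samples))]  # roughly the worst 1% of seconds
ratio = p01 / avg                        # the closer to 1.0, the more consistent the drive

print(f"avg: {avg:,.0f} IOPS, 1st percentile: {p01:,.0f} IOPS, consistency ratio: {ratio:.2f}")
```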

IO consistency looks pretty solid. The IOs aren't as tightly grouped as we've seen elsewhere, but the P3700 still appears reasonably consistent, and it does attempt to increase performance over time.

Comments

  • Ammaross - Tuesday, June 3, 2014 - link

The M.2 SATA-protocol-on-PCIe drives? A comparison would mean Apple would need to have support for NVMe first, or the ability to even slot in such a card (which rules out the 3 offerings you outlined).
  • extide - Tuesday, June 3, 2014 - link

    What are you talking about? "The M.2 SATA-protocol-on-PCIe drives?" doesn't even make sense.

All you need to do to compare them is run the benchmarks on the apple hardware, possibly while running under a windows OS. OR, if the drives use the regular m.2 style connector, you could just stick them in a desktop. The fact they run AHCI over PCIe does not make a comparison impossible; in fact, all of the other PCIe cards that were benchmarked against in this review were also AHCI based cards. Seems like NVMe is confusing people a lot more than it should.
  • Galatian - Wednesday, June 4, 2014 - link

    I think he tried to say that you can't stick one of those new NVMe drives into a Mac, since OS X does not yet support NVMe.

    That being said, Apple discontinued the old Mac Pro where you could put a PCIe device inside, so the point is moot no matter what.
  • gospadin - Tuesday, June 3, 2014 - link

    I'd have liked to see the drive in 25W mode too
  • extide - Tuesday, June 3, 2014 - link

    Yeah, I would as well. I am assuming the 25W mode has specific cooling requirements? More info on this would be nice. Also what is the default TDP?
  • eanazag - Tuesday, June 3, 2014 - link

That's also the first thing I thought. I wanted to see the boost level. That bottom price is pretty close to where I would consider splurging for my desktop with a 400GB. If you consider a RAID card and a few drives, then $600 is justifiable.

    I stayed away from the PCIe SSDs because of boot issues and quality concerns. A lot of those were OCZ.
  • Galatian - Tuesday, June 3, 2014 - link

    I might have just overlooked it, but I guess those drives are not bootable?
  • 457R4LDR34DKN07 - Tuesday, June 3, 2014 - link

    " Booting to NVMe drives shouldn't be an issue either."
  • Galatian - Tuesday, June 3, 2014 - link

Ah great...so this might be a nice alternative to the lackluster state M.2 is in right now after all!
  • dopp - Tuesday, June 3, 2014 - link

    NVMe won't necessarily be a replacement for M.2. M.2 is just the connector, and the M.2 standard supports both SATA and NVMe as protocols to control the SSD. That said, you need a motherboard that's wired to give PCIe-over-M.2 as well as a drive that supports NVMe, and NVMe M.2 drives will likely be much better than SATA ones.
