Introduction and Testbed Setup

A couple of weeks back, Western Digital updated their NAS-specific drive lineup with 5 and 6 TB Red drives. In addition, 7200 RPM Red Pro models in 2 - 4 TB capacities were also introduced. We have already looked at the performance of the WD Red, and it is now time for us to take the WD Red Pro for a spin. In our 4 TB NAS drive roundup from last year, we indicated that we would add more drives to the mix along with an updated benchmarking scheme involving RAID-5 volumes. The Red Pro gives us an opportunity to present results from the evaluation of the various drives that have arrived in our labs since then.

The SMB / SOHO / consumer NAS market has been experiencing rapid growth over the last few years. With PC sales declining and SSDs becoming more affordable, hard drive vendors have scrambled to make up for the deficit and increase revenue by targeting the NAS market. The good news is that this growth is expected to accelerate in the near future, thanks to the increasing amount of user-generated data from mobile devices. In addition, security threats such as SynoLocker have also underscored the necessity of frequent backups.

Back in July 2012, Western Digital began the trend of hard drive manufacturers bringing out dedicated units for the burgeoning SOHO / consumer NAS market with the 3.5" Red hard drive lineup. The firmware was tuned for 24x7 operation in SOHO and consumer NAS units. 1 TB, 2 TB and 3 TB versions were made available at launch. Later, Seagate also jumped into the fray with a hard drive series carrying similar firmware features. Over the last two years, the vendors have been optimizing the firmware features as well as increasing the capacities. On the enterprise side, hard drive vendors have been supplying different models for different applications, but all of them are quite suitable for 24x7 NAS usage. While mission-critical applications tend to use SAS drives, it is the nearline SATA versions that are more suitable for home / SMB users. These enterprise drives provide better reliability / longer warranties compared to the NAS-specific WD Red and the Seagate NAS HDD lineups.

The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while evaluating ten different hard drives targeting the NAS market. One of the most glaring omissions in our list is HGST's Deskstar NAS. Due to HGST's strange sampling scheme, we are still trying to obtain enough drives for our NAS-specific benchmarking, but they did send us their 4 TB SAS drive for participation in this roundup. Other than the HGST SAS drive, the other nine drives all carry a SATA interface.

  1. WD Red Pro (WD4001FFSX-68JNUN0)
  2. Seagate Enterprise Capacity 3.5" HDD v4 (ST4000NM0024-1HT178)
  3. WD Red (WD40EFRX-68WT0N0)
  4. Seagate NAS HDD (ST4000VN000-1H4168)
  5. WD Se (WD4000F9YZ-09N20L0)
  6. Seagate Terascale (ST4000NC000-1FR168)
  7. WD Re (WD4000FYYZ-01UL1B0)
  8. Seagate Constellation ES.3 (ST4000NM0033-9ZM170)
  9. Toshiba MG03ACA400
  10. HGST Ultrastar 7K4000 SAS (HUS724040ALS640)

The above drives do not all target the same market. For example, the WD Red and Seagate NAS HDD are meant for 1 - 8 bay NAS systems in the tower form factor. The WD Red Pro is meant for rackmount units with up to 16 bays, but is not intended to be a replacement for drives such as the WD Re, Seagate Constellation ES.3, Seagate Enterprise Capacity v4 and the Toshiba MG03ACA400, which target enterprise applications requiring durability under heavy workloads. The WD Se and the Seagate Terascale target the capacity-sensitive cold storage / data center market.

Testbed Setup and Testing Methodology

Unlike our previous evaluation of 4 TB drives, we managed to obtain enough samples of the new drives to test them in a proper NAS environment. As usual, we will start off with a feature set comparison of the various drives, followed by a look at the raw performance when connected directly to a SATA 6 Gbps port. In the same PC, we also evaluate the performance of each drive using some aspects of our direct attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configured three drives of each model in a RAID-5 volume and processed selected benchmarks from our standard NAS review methodology. Since our NAS drive testbed supports both SATA and SAS drives but our DAS testbed doesn't, the HGST SAS drive was not subjected to any of the DAS benchmarks. We plan to provide more detailed coverage of the HGST SAS unit in a future SAS-specific roundup.
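
As a quick aside, the parity overhead of these three-drive RAID-5 test volumes is easy to work out. The snippet below is a minimal sketch of the arithmetic (our own illustration, not part of the review's benchmark tooling):

    # RAID-5 stores one drive's worth of parity spread across the array,
    # so usable space is (n - 1) drives. Vendors quote decimal TB (10^12 bytes).
    def raid5_usable_tb(drives: int, drive_tb: float) -> float:
        """Usable decimal TB of a RAID-5 array built from equal-sized disks."""
        assert drives >= 3, "RAID-5 needs at least three drives"
        return (drives - 1) * drive_tb

    # Three 4 TB drives -> 8.0 TB usable; a third of raw capacity goes to parity.
    print(raid5_usable_tb(3, 4.0))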

We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.

AnandTech DAS Testbed Configuration
Motherboard: Asus Z97-PRO Wi-Fi ac ATX
CPU: Intel Core i7-4790
Memory: Corsair Vengeance Pro CMY32GX3M4A2133C11 32 GB (4x 8GB) DDR3-2133 @ 11-11-11-27
OS Drive: Seagate 600 Pro 400 GB
Optical Drive: Asus BW-16D1HT 16x Blu-ray Write (w/ M-Disc Support)
Add-on Card: Asus Thunderbolt EX II
Chassis: Corsair Air 540
PSU: Corsair AX760i 760 W
OS: Windows 8.1 Pro

Thanks to Asus and Corsair for the build components.

In the above testbed, the hot-swap bays of the Corsair Air 540 deserve special mention: they allowed us to cycle the drives through the benchmarking process quickly and efficiently. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. It is very similar to the unit we reviewed last year, except that it has a slightly faster CPU, more RAM and support for both SATA and SAS drives.

The NAS setup itself was subjected to benchmarking using our standard NAS testbed.

AnandTech NAS Testbed Configuration
Motherboard: Asus Z9PE-D8 WS Dual LGA2011 SSI-EEB
CPU: 2 x Intel Xeon E5-2630L
Coolers: 2 x Dynatron R17
Memory: G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x 8GB) CAS 10-10-10-30
OS Drive: OCZ Technology Vertex 4 128GB
Secondary Drive: OCZ Technology Vertex 4 128GB
Tertiary Drive: OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD)
Other Drives: 12 x OCZ Technology Vertex 4 64GB (offline in the host OS)
Network Cards: 6 x Intel ESA I-340 Quad-GbE Port Network Adapter
Chassis: SilverStoneTek Raven RV03
PSU: SilverStoneTek Strider Plus Gold Evolution 850W
OS: Windows Server 2008 R2
Network Switch: Netgear ProSafe GSM7352S-200

Thank You!

We thank the companies behind the components listed above for helping us out with our NAS testbed.

Comments

  • sin_tax - Thursday, August 21, 2014 - link

    Transcoding != streaming. Most all NAS boxes are underpowered when it comes to transcoding.
  • jaden24 - Friday, August 8, 2014 - link

    Correction.

    WD Red Pro: Non-Recoverable Read Errors / Bits Read 10^15
  • Per Hansson - Friday, August 8, 2014 - link

    I'm not qualified to say for certain, but I think it's just marketing bullshit to make the numbers look better when in fact they are the same?

    Non-Recoverable Read Errors per Bits Read:
    <1 in 10^14 WD Red
    <10 in 10^15 WD Red Pro
    <10 in 10^15 WD Se

    Just for reference the Seagate Constellation ES.3, Hitachi Ultrastar He6 &
    7K4000 are all rated for: 1 in 10^15
  • shodanshok - Friday, August 8, 2014 - link

    Hi, look at the specs carefully: WD claims <10 errors per 10^15 bits read. Previously, it was <1 per 10^14.

    In other words, they increased the exponent (14 vs 15), but the rate is (more or less) the same!
  • jaden24 - Friday, August 8, 2014 - link

    Nice catch. I just went and looked up all of them.

    RE = 10 in 10^16
    Red Pro = 10 in 10^15
    SE = 10 in 10^15
    Black = 1 in 10^14
    Red = 1 in 10^14
    Green = 1 in 10^14

    It looks like they switched to this marketing on the RE and SE. The terms are there in black and white, but it is a deviation from the established measurement scheme, and can only be construed as deceiving in my book. I love WD, but this pisses me off.
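
To put numbers on the spec discussion above, here is a quick back-of-the-envelope sketch (ours, not the commenters'): it computes the expected number of unrecoverable read errors when reading a 4 TB drive end to end under each quoted spec.

    # Expected UREs per full end-to-end read of a 4 TB drive under each
    # quoted spec. Note that <1 per 10^14 and <10 per 10^15 are the same rate.
    DRIVE_BITS = 4e12 * 8  # 4 TB (decimal) in bits

    specs = {
        "Red / Black / Green (<1 per 10^14)": 1 / 1e14,
        "Red Pro / Se (<10 per 10^15)": 10 / 1e15,
        "Re (<10 per 10^16)": 10 / 1e16,
    }

    for name, rate in specs.items():
        print(f"{name}: {DRIVE_BITS * rate:.3f} expected UREs per full read")
    # -> 0.320, 0.320 and 0.032 respectively: the Pro/Se spec is identical
    #    to the plain Red spec, just written with a bigger exponent.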
  • isa - Friday, August 8, 2014 - link

    For my home/home office use, the most important aspect by far for me is reliability/failure rates, mainly because I don't want to invest in more than 2 drives or go beyond RAID 1. I realize the most robust reliability info is based on several years of statistics in the field, but is there any kind of accelerated life test that AnandTech can do or get access to that has been proven to be a fairly reliable indicator of reliability/failure-rate differences across the models? I'm aware of the manufacturer specs, but I don't regard those as objective or measured apples-to-apples across manufacturers.

    If historical reliability data is fairly consistent across multiple drives in a manufacturer's line, perhaps at least provide that as a proxy for predicted actual reliability. Thanks for considering any of this!
  • NonSequitor - Friday, August 8, 2014 - link

    If you're really concerned about reliability, you need double parity rather than RAID-1. In a double-parity situation the system knows which drive is returning bad data. With RAID-1 it doesn't know which drive is right. Either way, of course, you should have a robust backup infrastructure in place if your data matters.
  • shodanshok - Friday, August 8, 2014 - link

    Mmm... it depends.

    If you mean silent data corruption (i.e. read errors that aren't detected by the drive's built-in ECC), RAIDZ2 and _some_ RAID6 implementations should be able to isolate the wrong data. However, this requires that both parity blocks (P and Q) be recalculated on every read, hammering the disk subsystem and the controller's CPU. RAIDZ2, by design, does precisely this (N disks in a single RAIDZ array give you the same IOPS as a single disk), but many RAID6 implementations simply don't, for performance reasons (Linux MDRAID, for example, doesn't).

    If you are referring to UREs, even a single-parity scheme such as RAID5 is sufficient in non-degraded conditions. The real problem is the degraded scenario: in this case, on most hardware RAID implementations, a single URE will kill the entire array (note: MDRAID behaves differently and lets you recover _even_ from this scenario, albeit with some corrupted data, in ways that vary with its version). Here, a double-parity scheme is a big advantage.

    On the other hand, while mirroring is not 100% free from URE risk, it only needs to re-read the contents of a _single_ disk, not the entire array. In other words, it is less exposed to URE problems than RAID5 simply because it has to read less data to recover from a failure (though with current capacities, RAID6 is even less exposed to this kind of error).

    Regards.
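
The exposure difference shodanshok describes is easy to quantify. The sketch below (our own illustration, treating the quoted spec as a constant per-bit error probability, which real drives only approximate) estimates the chance of hitting at least one URE while rebuilding an array of 4 TB drives:

    # P(>=1 URE) while re-reading the surviving disks during a rebuild.
    # A mirror re-reads one disk; an n-drive RAID-5 re-reads n-1 disks.
    URE_RATE = 1e-14       # errors per bit read (consumer-class spec)
    DRIVE_BITS = 4e12 * 8  # one 4 TB drive, in bits

    def p_at_least_one_ure(bits_read: float) -> float:
        """Probability of at least one URE, treating bits independently."""
        return 1 - (1 - URE_RATE) ** bits_read

    print(f"mirror rebuild : {p_at_least_one_ure(1 * DRIVE_BITS):.0%}")  # ~27%
    print(f"3-disk RAID-5  : {p_at_least_one_ure(2 * DRIVE_BITS):.0%}")  # ~47%
    print(f"8-disk RAID-5  : {p_at_least_one_ure(7 * DRIVE_BITS):.0%}")  # ~89%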
  • NonSequitor - Friday, August 8, 2014 - link

    MDRAID will recheck all of the parity during a scrub. That's really enough to catch silent data corruption before your backups go stale. It's not perfect but is a good balance between safety and performance. The problem with RAID5 compared to RAID6 is that in a URE situation the RAID5 will still silently produce garbage as it has no way to know if the data is correct. RAID6 is at least capable of spotting those read errors.
  • shodanshok - Friday, August 8, 2014 - link

    I think you are confusing UREs with silent data corruption. A URE in a non-degraded RAID 5 will not produce any data corruption; it will simply trigger a stripe reconstruction.

    A URE in a degraded RAID 5 will not produce any data corruption either, but it will result in a "dead" or faulty array.

    Regular scrubs will prevent unexpected UREs, but if a drive suddenly starts returning garbage, even regular scrubs cannot do anything about it.

    In theory, RAID6 can identify which drive is returning garbage because it has two independent sets of parity data. However, as stated above, that kind of check is usually skipped due to the severe performance penalty it implies.

    RAIDZ2 takes the safe path and performs a full check on every read, but as a result its performance is quite low.

    Regards.
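
To make the single- vs. dual-parity point above concrete, here is a toy illustration of RAID5-style XOR parity (a sketch of the general technique, not any vendor's implementation). With one parity block you can rebuild a block you know is missing, but if a drive silently returns garbage the stripe merely fails to verify; the parity alone cannot say which block is wrong. RAID6's second, independently computed parity is what makes that identification possible in principle.

    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-sized blocks byte by byte (RAID5-style P parity)."""
        return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]  # three data blocks of one stripe
    parity = xor_blocks(data)

    # Rebuild: drive 1 is known to be dead, so XOR the survivors with parity.
    assert xor_blocks([data[0], data[2], parity]) == data[1]

    # Silent corruption: drive 1 returns garbage instead of reporting an error.
    corrupted = [data[0], b"ZZZZ", data[2]]
    print(xor_blocks(corrupted) == parity)  # False: inconsistent, but which block?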
