Direct-Attached Storage Benchmarks

Our evaluation routine for hard-drive-based direct-attached storage devices borrows heavily from the testing methodology for flash-based direct-attached storage devices. The testbed hardware (the Thunderbolt 3 / USB 3.1 Gen 2 Type-C port enabled by the Alpine Ridge host controller in the Hades Canyon NUC) is reused. CrystalDiskMark is used for a quick performance overview. Real-world performance testing is done with our custom test suite involving robocopy benchmarks and PCMark 8's storage bench.

CrystalDiskMark uses four different access traces for reads and writes over a configurable region size. Two of the traces are sequential accesses, while the other two are 4K random accesses. Internally, CrystalDiskMark uses the Microsoft DiskSpd storage testing tool. The 'Seq Q32T1' sequential traces use a 128KB block size with a queue depth of 32 from a single thread, while the '4K Q32T1' traces perform random 4K accesses with the same queue and thread configuration. The plain 'Seq' traces use a 1MB block size, and the plain '4K' traces are similar to '4K Q32T1' except that only a single queue and a single thread are used.
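
The four trace configurations described above can be summarized in code. The sketch below is an illustrative helper of our own (not part of CrystalDiskMark or DiskSpd); it also converts a bandwidth figure into the equivalent IOPS for a given block size.

```python
# The four CrystalDiskMark access traces, as described in the text.
TRACES = {
    #  name         (block_bytes, queue_depth, threads, random)
    "Seq Q32T1": (128 * 1024,   32, 1, False),
    "4K Q32T1":  (4 * 1024,     32, 1, True),
    "Seq":       (1024 * 1024,   1, 1, False),
    "4K":        (4 * 1024,      1, 1, True),
}

def iops_from_bandwidth(mbps: float, block_bytes: int) -> float:
    """Convert a MB/s figure into I/O operations per second for a block size."""
    return mbps * 1_000_000 / block_bytes

# Example: 1 MB/s of 4K random traffic corresponds to ~244 IOPS.
print(round(iops_from_bandwidth(1.0, TRACES["4K"][0])))  # → 244
```

This makes it easy to see why 4K random numbers look so much smaller than sequential ones: the same IOPS budget moves far fewer bytes per second at a 4K block size.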

Comparing the '4K Q32T1' and '4K' numbers can quickly tell us whether the storage device supports NCQ (native command queuing) / UASP (USB-attached SCSI protocol). If the numbers for the two access traces are in the same ballpark, NCQ / UASP is not supported; this assumes that the host port and drivers on the PC support UASP. We can see that the Seagate Backup Plus external storage drives do support NCQ and UASP. The performance numbers are typical of what one might expect from a 5400 RPM hard drive, with peak performance close to 150 MBps for the 5TB Backup Plus Portable and around 135 MBps for the 2TB Backup Plus Slim.
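
The ballpark comparison can be expressed as a simple heuristic. The function and the 1.5x threshold below are our own illustrative assumptions, not figures from the review:

```python
def likely_supports_ncq(mbps_4k_q32: float, mbps_4k_q1: float,
                        ratio_threshold: float = 1.5) -> bool:
    """Heuristic: if the deep-queue 4K random throughput clearly exceeds the
    single-queue figure, the device is likely honoring NCQ / UASP.
    The 1.5x threshold is an illustrative assumption, not a standard."""
    return mbps_4k_q32 >= ratio_threshold * mbps_4k_q1

# Hypothetical numbers for the two cases described in the text.
print(likely_supports_ncq(4.5, 1.5))  # QD32 scales well beyond QD1 → True
print(likely_supports_ncq(1.6, 1.5))  # same ballpark → False
```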

HDD-Based Direct-Attached Storage Benchmarks - CrystalDiskMark

Moving on to the real-world benchmarks, we first look at the results from our custom robocopy test. In this test, we transfer three folders with the following characteristics.

  • Photos: 15.6 GB collection of 4320 photos (RAW as well as JPEGs) in 61 sub-folders
  • Videos: 16.1 GB collection of 244 videos (MP4 as well as MOVs) in 6 sub-folders
  • BR: 10.7 GB Blu-ray folder structure of the IDT Benchmark Blu-ray (the same one that we use in our robocopy tests for NAS systems)

The test starts off with the Photos folder on a RAM drive in the testbed. robocopy is used with default arguments to mirror it onto the storage drive under test. The contents of the RAM drive are then deleted, and robocopy is used again to transfer the content back, this time from the storage drive under test to the RAM drive. The first segment gives the write speed of the storage device, while the second gives its read speed. Each segment ends with a purge of the contents from the storage device. This process is repeated thrice, and the average of the three runs is recorded as the performance number. The same procedure is adopted for the Videos and BR folders.
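
A simplified, cross-platform sketch of the write / read / purge loop looks like the following. It substitutes Python's shutil for robocopy and uses hypothetical paths; it is an illustration of the procedure, not the actual test harness:

```python
import shutil
import time
from pathlib import Path

def folder_bytes(path: Path) -> int:
    """Total size of all files under a folder, in bytes."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

def das_copy_test(src: Path, dut_dir: Path, runs: int = 3):
    """Mimic the robocopy segments: mirror src onto the drive under test
    (write segment), delete the source, copy back (read segment), then
    purge. Returns average write and read speeds in MBps over `runs`."""
    size_mb = folder_bytes(src) / 1e6
    write_speeds, read_speeds = [], []
    for _ in range(runs):
        dest = dut_dir / src.name
        t0 = time.perf_counter()
        shutil.copytree(src, dest)            # write segment
        write_speeds.append(size_mb / (time.perf_counter() - t0))

        shutil.rmtree(src)                    # delete the RAM-drive copy
        t0 = time.perf_counter()
        shutil.copytree(dest, src)            # read segment
        read_speeds.append(size_mb / (time.perf_counter() - t0))

        shutil.rmtree(dest)                   # purge the drive under test
    return sum(write_speeds) / runs, sum(read_speeds) / runs
```

In the real suite, robocopy's default retry and mirroring behavior handles deep folder trees like the Photos collection; the timing-and-averaging structure is the same.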

Photos Read

The 5TB Backup Plus Portable comes out on top in a couple of workloads, as does the 2TB Backup Plus Slim. However, the relative positions of the drives are not consistent across the different workloads, indicating that performance under sustained traffic is not predictable for these drives.

High-performance external storage devices can also be used for editing multimedia files directly off the unit, or as OS-to-go boot drives. Evaluation of this aspect is done using PCMark 8's storage bench. The storage workload involves games as well as multimedia editing applications. The command-line version allows us to cherry-pick storage traces to run on a target drive. We chose the following traces.

  • Adobe Photoshop (Light)
  • Adobe Photoshop (Heavy)
  • Adobe After Effects
  • Adobe Illustrator

Usually, PCMark 8 reports the time to complete a trace, but the detailed log report includes the read and write bandwidth figures that we present in our performance tables. Note that the bandwidth numbers in the results do not involve idle time compression. The results might appear low, but that is a characteristic of the workload. Since the same CPU is used for all configurations, the numbers for each trace can be compared across different DAS units.
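
The distinction matters because the figures in the tables divide the bytes transferred by the full trace duration, idle gaps included, rather than by device-busy time alone. A sketch of the two ways of computing it, using hypothetical trace records rather than PCMark's actual log format:

```python
def bandwidths(ios, total_duration_s):
    """Each I/O record is (bytes, busy_seconds). Returns bandwidth in MBps
    both without idle time compression (what our tables show) and with it
    (idle gaps removed, busy time only)."""
    total_bytes = sum(b for b, _ in ios)
    busy_s = sum(t for _, t in ios)
    uncompressed = total_bytes / total_duration_s / 1e6  # idle time included
    compressed = total_bytes / busy_s / 1e6              # busy time only
    return uncompressed, compressed

# Hypothetical trace: 200 MB moved in 2 s of device-busy time over a 10 s run.
print(bandwidths([(100_000_000, 1.0), (100_000_000, 1.0)], 10.0))  # → (20.0, 100.0)
```

With 8 s of the run spent idle, the uncompressed figure is a fifth of the busy-time figure, which is why the numbers can look low despite the device itself being reasonably fast.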

Adobe Photoshop Light Read

The lack of performance consistency is more pronounced in these benchmarks. In fact, the two drives being reviewed today appear in the bottom half of the graphs more often than not. The reason for this requires deeper investigation into SMR characteristics, and this is presented in the next section.

Comments

  • sheh - Tuesday, June 18, 2019 - link

    Seagate's had a few bad 4TB models. ST4000DM001, and maybe DM005 and DX001. Models like DM000 seem better:

    https://www.backblaze.com/blog/hard-drive-stats-fo...
    https://www.backblaze.com/blog/hard-drive-stats-fo...
  • sheh - Tuesday, June 18, 2019 - link

    (This was a reply to oRAirwolf. Anandtech's commenting system fails to create a sub-comment without Javascript.)
  • sheh - Tuesday, June 18, 2019 - link

    22 hours to write the whole 5TB drive?!
  • abufrejoval - Wednesday, June 19, 2019 - link

    I am a little surprised that a consistent sequential write to an SMR drive should drop the data rate below that of a non-shingled drive. AFAIK, only updates-in-place of a shingle should trigger the SMR write amplification, unless the firmware actually always writes to a non-shingled section of the drive first, similar to an SLC buffer on TLC/QLC SSDs. That seems to happen with the fio workload, but not with the backup: otherwise its performance would have to drop similarly.

    I guess that is where SMR drives would have a command set which allows applications to steer that behavior by hinting at how data should be handled. And perhaps they should support a variant of TRIM, by which an OS could signal which parts of a shingle no longer need preservation, avoiding the write amplification.

    The problem is that without some low-level tool, as a user you currently don't really have control over an SMR drive's behavior. The OS could/would/should perhaps know that the large set of files you are copying is in fact intended to replace your last backup, but at the block level of the drive, without some help from the OS or a hint via a tool, all that useful information is lost and the firmware needs to second-guess your intentions.

    I don't think I have heard of any SMR specific optimizations on Windows, and to be honest not even on Linux. And then this isn't just OS but also file system specific and AFAIK exFAT isn't known for its sophistication.

    In any case I'd expect your experience to vary over the lifetime of the drive. The first time you fill it, it might be OK enough, but once you're into incremental backups replacing smallish files near the medium's capacity, the 25% capacity increase may turn out too expensive in extra time.

    If you're doing full backups only, erasing first might help. But only, if there is a way to tell the drive that entire shingles don't need preservation.
  • Kastriot - Wednesday, June 19, 2019 - link

    Pricing for 5TB model is very tempting.
  • ballsystemlord - Wednesday, June 19, 2019 - link

    Only one grammar error, good work Ganesh!

    "Sustained sequential writes for a hour or more are not realistic workloads for a majority of the retail consumers."
    "an" not "a" (Yes, its an English idiosyncrasy, not the typical "an" before vowel "a" before consonent):
    "Sustained sequential writes for an hour or more are not realistic workloads for a majority of the retail consumers."
  • ballsystemlord - Wednesday, June 19, 2019 - link

    *consonant
  • austonia - Monday, June 24, 2019 - link

    The 5TB drives are often $95 or $100 at Costco. I have a half dozen; they work fine.
  • badbanana - Tuesday, June 25, 2019 - link

    For those using such devices for backup: over time, the files stored on these external HDDs would eventually fail (according to my findings). Therefore I make sure to have another backup somewhere, like the cloud, to ensure that the files will be readable in the coming years. That's my plan B.

    For the rest of you, what are your plan Bs?
  • Chloiber - Saturday, June 29, 2019 - link

    Interesting read, thanks!
    We often use external hard disks from WD to create archives of certain datasets (usually very large, single files) and had to use the 5TB Seagate version for the first time as we exceeded 4TB.
    I did notice that it took very long for the copy - I don't think I have kept the logs, but this would explain a lot!
