Microwave Assisted Magnetic Recording (MAMR)

The WD Breakthrough

Western Digital's Microwave Assisted Magnetic Recording (MAMR) drives use platters very similar to those in current-generation PMR drives*. This means that the innovation enabling MAMR lies mainly in the heads that perform the read and write operations.

As part of the MAMR design, WD pointed to its shift to the damascene process for fabricating the write heads as the key enabler of the MAMR breakthrough. The process allows the company to build a spin torque oscillator (STO) capable of creating precisely controlled energy fields without any additional overhead. The oscillator embedded in the head is tuned to generate microwaves at a frequency of 20-40 GHz, and this provides the 'energy assist' that makes the bits easier to write (technically, it lowers the coercivity of the underlying recording media).

* Current drives use an aluminium substrate with a cobalt-platinum layer.
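
A rough order-of-magnitude check on that 20-40 GHz figure (our own back-of-envelope, not a WD number): the resonance frequency of a magnetic grain scales with the effective field acting on it at roughly 28 GHz per tesla, so the quoted range corresponds to effective fields of very roughly a tesla, the same ballpark as the switching fields of high-coercivity perpendicular media.

    f \approx \frac{\gamma}{2\pi}\,B_{\mathrm{eff}} \approx 28~\mathrm{GHz/T} \times B_{\mathrm{eff}}
    \quad\Longrightarrow\quad 20\text{--}40~\mathrm{GHz} \;\leftrightarrow\; B_{\mathrm{eff}} \approx 0.7\text{--}1.4~\mathrm{T}

Driving the grains into precession near this resonance is what lets the write pole's field, which on its own would be too weak for such high-anisotropy media, complete the switch.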

WD pointed out that MAMR requires no external heating of the media of the kind that could lead to reliability issues. The temperature profiles of MAMR HDDs (for both the platters and the drive itself) are expected to be similar to those of current-generation HDDs, and WD indicated that the MAMR drives would meet all current data center reliability requirements.

Based on the description of how MAMR operates, it appears that HAMR has no future in its current form. Almost all hard drive industry players hold far more patents on HAMR than on MAMR, so it remains to be seen whether the intellectual property created on the HAMR side is put to use elsewhere.

Western Digital has talked about timeframes for the introduction of MAMR drives, and it had working prototypes on display at the press and analyst event yesterday. WD's datacenter customers have their own four- to six-month qualification cycles, and MAMR drives for that purpose are expected towards the middle of next year. Production-level HDDs based on MAMR technology are expected to start shipping in 2019.

Western Digital sees plenty of value in MAMR, and it is not hard to see why. MAMR technology allows the areal density of individual platters to scale beyond 4 Tb/sq.in., and WD believes it is well-positioned to bring 40TB drives to market by 2025 using MAMR alone.
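
As a quick sanity check on those numbers, here is a minimal sketch under our own illustrative assumptions (roughly 1.75 TB per platter at today's ~1.1 Tb/sq.in., capacity scaling linearly with areal density, and a nine-platter helium design); none of these figures come from WD:

    # Back-of-envelope: what areal density does a 40 TB drive need?
    # Assumptions (ours, purely illustrative): ~1.75 TB/platter at ~1.1 Tb/sq.in.
    # today, capacity scaling linearly with areal density, nine platters.

    CURRENT_DENSITY = 1.1            # Tb/sq.in., current PMR media
    CURRENT_TB_PER_PLATTER = 1.75    # assumed, e.g. a 14TB drive with 8 platters
    PLATTERS = 9                     # assumed platter count for a future helium design

    def capacity_tb(density_tb_per_sqin: float, platters: int = PLATTERS) -> float:
        """Drive capacity if per-platter capacity scales linearly with areal density."""
        per_platter = CURRENT_TB_PER_PLATTER * (density_tb_per_sqin / CURRENT_DENSITY)
        return per_platter * platters

    for density in (1.1, 2.0, 2.8, 4.0):
        print(f"{density:.1f} Tb/sq.in. -> ~{capacity_tb(density):.0f} TB")

On these assumptions, roughly 2.8 Tb/sq.in. is already enough for a 40TB drive, so a ceiling beyond 4 Tb/sq.in. leaves plenty of headroom.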

Technologies such as SMR and TDMR are complementary to MAMR. Currently, WD does not use TDMR in any shipping enterprise drive, and SMR is restricted to a few host-managed models. It is possible that some MAMR drives will use those technologies to reach higher capacity points than conventional drives. WD's working prototype on display was a helium drive (HelioSeal), but WD again stressed that helium is not a compulsory requirement for MAMR drives. It was also confirmed that drives of 16TB and more would have to be MAMR-based.

In 2005, when the shift from longitudinal recording to PMR happened, most vendors managed to release drives based on the new technology within a few years of each other. The shift to helium in 2012, though expected by everyone in the industry, proved to be a big win for HGST: it had the high-capacity, low-power, low-TCO market segments to itself for almost three years before Seagate eventually caught up, and Toshiba has yet to publicly release a helium drive. It is going to be interesting to see how Seagate and Toshiba respond to this unexpected MAMR announcement from Western Digital.

The players in the hard drive industry have a robust cross-licensing program, and it is highly likely that other manufacturers will not face significant patent bottlenecks in bringing out MAMR drives of their own. That said, WD stressed that the development is a multi-year effort, particularly if the heads are still being manufactured with the older dry pole process.

High-volume mature hard drives are often manufactured with the help of third-party suppliers, such as Showa Denko for the recording media and TDK for the heads. In the case of the MAMR drives, WD mentioned that all the components are being designed and manufactured in-house. It is possible for the competition to catch up faster if some of those third-party manufacturers are further along in their own R&D; TDK, in particular, has also been investing in MAMR R&D recently. Toshiba has shown interest in MAMR as well, but it is not clear how far along it is in the commercial development cycle. For now, we believe WD has a clear lead in MAMR technology; it just remains to be seen how long it takes for the competition to catch up.

Comments

  • Jaybus - Monday, October 16, 2017 - link

    1 gigabit Ethernet has an upper limit of 125 MB/s. It would take a minimum of 320,000 seconds, or around 3.7 days, of continuous 125 MB/s transfers to back up a 40 TB drive. Cloud backup of large volumes of data isn't going to be practical until at least 20-gigabit connections are commonplace.
  • tuxRoller - Thursday, October 12, 2017 - link

    Backing up to the cloud is really slow.
    Retrieving from the cloud is really slow.
    When gigabit becomes ubiquitous, it will make more sense.
    Even then, you should keep at least one copy of the data you care about offline and local.
    Btw, I agree that RAID is dead, but for different reasons. Namely, we now have much more flexible erasure coding schemes (much more sophisticated than the simple XOR parity used by some of the RAID levels) that let you apply arbitrary amounts of redundancy and decide on the placement of data. That's what the data centers have been moving towards.
  • alpha754293 - Thursday, October 12, 2017 - link

    Even with Gb WAN, it'd still be slow.

    I have GbE LAN and I'm starting to run into bottlenecks with it being slow enough that, once I have the funds to do so, I'm likely going to move over to 4x FDR InfiniBand.
  • tuxRoller - Thursday, October 12, 2017 - link

    Slower than an all-SSD, 10G LAN, but how many people have that? 1G is roughly HDD speed.
  • BurntMyBacon - Friday, October 13, 2017 - link

    @tuxRoller

    1 Gbps = 125 MBps. cekim seems to think that 250 MBps is a better estimate, and alpha754293 suggests that these drives will increase in speed well beyond that. Granted, this will not hold up for small file writes, but for large sequential data sets, the days of 1G Ethernet being roughly equal to HDD speed are soon coming to an end.
  • tuxRoller - Friday, October 13, 2017 - link

    Even now, HDDs have sequential (non-cached) speeds in excess of 300 MB/s (for enterprise 15K drives), and 250 MB/s+ is currently available with the 8TB+ 7200 RPM drives.
    Those are best-case figures, but they might also be readily achievable depending on how your backup software works (e.g., a block-based object store vs. NTFS/ZFS/XFS/etc.).
  • alpha754293 - Thursday, October 12, 2017 - link

    @cekim

    Your math is a little bit off. If the areal density increases from 1.1 Tb/in^2 to 4 Tb/in^2, then so too will the data transfer speeds.

    It has to.

    Check that and update your calcs.

    @imaheadcase

    RAID is most definitely not dead.

    RAID HBAs addressing SANs are still far more efficient: it is easier to map (even with GPT) one logical array than lots of physical partition tables.

    You do realise that there are rackmount enclosures that hold like 72 drives, right?

    If that were hosted as a SAN (or over iSCSI), there isn't an add-in card (AIC) you could install that would allow a host to control 72 JBOD drives simultaneously.

    It'd be insanity, not to mention the cabling nightmare.
  • bcronce - Thursday, October 12, 2017 - link

    Here's an interesting topic on RAID rebuilds for ZFS. While it can't fix the issue of writing 250 MiB/s to a many-TiB storage device, it is fun.

    Parity Declustered RAID for ZFS (DRAID)

    A quick overview is that ZFS can quickly rebuild a storage device if the storage device was mostly empty. This is because ZFS only needs to rebuild the data, not the entire drive. On the other hand, as the device gets fuller, the rebuild gets slower because walking the tree causes random I/O. DRAID allows for a two-pass approach where it optimistically writes out the data via a form of parity, then scrubs the data afterwards to make sure it is actually correct. This allows the device to be quickly rebuilt by deferring the validation.
  • alpha754293 - Thursday, October 12, 2017 - link

    My biggest issue with ZFS is that there are ZERO data recovery tools available for it. You can't do a bit read on the media in order to recover the data if the pool fails.

    I was a huge proponent of ZFS throughout the mid-2000s. Now, I am completely back to NTFS because at least if an NTFS array fails, I can do a bit-read on the media to try and recover the data.

    (I actually spoke with the engineers who originally developed ZFS at Sun, now Oracle, and they confirmed that there are no data recovery tools like that for ZFS. Their solution to a problem like that: restore from backup.)

    (Except that in my case, the ZFS server was the backup.)
  • BurntMyBacon - Friday, October 13, 2017 - link

    Are there any freely available tools to do this for NTFS? If so, please post them, as I'm sure more than a few people here would be interested in acquiring said tools. If not, what is your favorite non-free tool?

    I've been a huge fan of ZFS, particularly after my basement flooded: despite my NAS being submerged, I was able to recover every last bit of my data. It took a lot of work using dd and ddrescue, but I eventually got it done. That all said, a bit-read tool would be nice.
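
A few back-of-envelope sketches on points raised in the comments above; these are our own illustrations, not figures from the commenters or from WD. First, on transfer speeds: sequential throughput tracks the linear bit density along a track rather than areal density directly, so if bits-per-inch and tracks-per-inch improve by the same factor, the speedup goes roughly as the square root of the areal-density gain:

    \text{speedup} \approx \sqrt{\frac{4~\mathrm{Tb/in^2}}{1.1~\mathrm{Tb/in^2}}} \approx 1.9,
    \qquad 250~\mathrm{MB/s} \times 1.9 \approx 480~\mathrm{MB/s}

So sequential rates should comfortably outgrow 1G Ethernet, but they scale far more slowly than capacity, which is why backup and rebuild times keep getting longer.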
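
Second, on erasure coding versus RAID: below is a minimal sketch (with made-up data blocks) of the simple XOR parity that single-parity RAID levels rely on; general erasure codes such as Reed-Solomon extend the same idea to arbitrary numbers of parity blocks and arbitrary placement.

    # Minimal single-parity (RAID-5 style) XOR sketch: any one lost block can be
    # rebuilt by XOR-ing the survivors. Erasure codes generalise this to tolerate
    # several arbitrary losses. Data values below are made up for illustration.
    from functools import reduce

    def xor_blocks(blocks: list[bytes]) -> bytes:
        """Byte-wise XOR of equally sized blocks."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
    parity = xor_blocks(data)            # parity block on a fourth drive

    # Simulate losing data[1] and rebuilding it from the survivors plus parity:
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]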
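
Finally, on the DRAID comment: the main win of declustering is that rebuild traffic is spread across spare space on all surviving drives instead of funneling into a single replacement drive, while ZFS additionally rebuilds only allocated data. A rough model with purely illustrative throughput figures:

    # Rough rebuild-time model (illustrative numbers, not benchmarks).
    # Traditional RAID: the rebuild is bottlenecked by the write speed of the one
    # replacement drive. Declustered RAID: spare capacity is spread over all
    # surviving drives, so rebuild writes proceed in parallel.

    def rebuild_hours(data_tb: float, drives: int, per_drive_mbps: float,
                      declustered: bool) -> float:
        """Hours to rebuild `data_tb` of allocated data after losing one drive."""
        writers = (drives - 1) if declustered else 1
        return data_tb * 1e6 / (writers * per_drive_mbps) / 3600

    # Example: 30 TB of allocated data on a 12-drive vdev, with ~100 MB/s of
    # rebuild bandwidth available per drive (a made-up figure).
    print(f"traditional : {rebuild_hours(30, 12, 100, declustered=False):.0f} h")
    print(f"declustered : {rebuild_hours(30, 12, 100, declustered=True):.1f} h")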
