Microwave Assisted Magnetic Recording (MAMR)

The WD Breakthrough

Western Digital's Microwave Assisted Magnetic Recording (MAMR) drives use platters very similar to those in current-generation PMR drives*. This means that the innovation enabling MAMR lies mainly in the heads that perform the read and write operations.

As part of the MAMR design, WD pointed to its shift to the damascene process for building the write heads as the key enabler of the MAMR breakthrough. The process allows WD to fabricate a spin torque oscillator (STO) capable of creating precise energy fields without any additional overheads. The oscillator embedded in the head is tuned to generate microwaves with a frequency of 20-40 GHz, and these provide the 'energy assist' that makes it easier to write to the bits (technically, they lower the coercivity of the underlying recording media).

* Current drives use an aluminium substrate with a cobalt-platinum layer.
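For a sense of the field strengths involved: ferromagnetic resonance relates precession frequency to effective field through the electron gyromagnetic ratio (about 28 GHz per tesla). The short Python sketch below applies only that textbook relation - it is our own plausibility check on the quoted band, not WD's design math.

```python
# Sanity-checking the 20-40 GHz figure against ferromagnetic resonance.
# f ~ (gamma / 2*pi) * B_eff, with gamma/2*pi ~ 28 GHz/T for an electron.
GAMMA_GHZ_PER_TESLA = 28.0

def resonant_field_tesla(freq_ghz: float) -> float:
    """Effective field that precesses at the given microwave frequency."""
    return freq_ghz / GAMMA_GHZ_PER_TESLA

for f_ghz in (20, 40):
    print(f"{f_ghz} GHz -> ~{resonant_field_tesla(f_ghz):.2f} T")
# 20 GHz -> ~0.71 T, 40 GHz -> ~1.43 T: on the order of the switching
# fields of high-coercivity perpendicular recording media.
```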

WD pointed out that MAMR requires none of the external heating of the media that could lead to reliability issues. The temperature profiles of MAMR HDDs (both the platters and the drive itself) are expected to be similar to those of current-generation HDDs. It was indicated that the MAMR drives would meet all current data center reliability requirements.

Based on the description of MAMR's operation, it is hard to avoid the conclusion that HAMR has no future in its current form. Almost all hard drive industry players hold far more patents on HAMR than on MAMR. It remains to be seen whether the intellectual property created on the HAMR side is put to use elsewhere.

Western Digital has talked about timeframes for the introduction of MAMR drives, and had working prototypes on display at the press and analyst event yesterday. WD's datacenter customers have their own four- to six-month qualification cycles, and MAMR drives for that purpose are expected towards the middle of next year. Production-level HDDs based on MAMR technology are expected to start shipping in 2019.

Western Digital sees plenty of value in MAMR, and it is not hard to see why. MAMR allows the bit density of individual platters to scale beyond 4 Tb/sq.in., and WD believes it is well-positioned to bring out 40TB drives by 2025 using MAMR alone.
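As a rough plausibility check on those figures, the sketch below converts areal density into drive capacity. The usable recording area per surface and the platter count are our own assumptions rather than numbers WD has shared, so the output is a back-of-the-envelope estimate only.

```python
# Back-of-the-envelope: areal density -> drive capacity.
# The usable area and platter count below are assumptions for
# illustration, not figures disclosed by Western Digital.
TBIT = 1e12  # bits per terabit

def drive_capacity_tb(density_tb_per_in2: float,
                      usable_area_in2: float = 5.0,   # assumed usable band per surface
                      surfaces_per_platter: int = 2,
                      platters: int = 8) -> float:
    """Rough capacity in decimal TB, format overheads folded into the area figure."""
    bits = (density_tb_per_in2 * TBIT * usable_area_in2
            * surfaces_per_platter * platters)
    return bits / 8 / 1e12  # bits -> bytes -> TB

print(f"{drive_capacity_tb(4.0):.0f} TB")  # ~40 TB at 4 Tb/sq.in.
print(f"{drive_capacity_tb(1.1):.0f} TB")  # ~11 TB at today's ~1.1 Tb/sq.in.
```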

Technologies such as SMR and TDMR are complementary to MAMR. Currently, WD does not use TDMR in any shipping enterprise drive, and SMR is restricted to a few host-managed models. It is possible that some MAMR drives will use those technologies to achieve higher capacity points than conventional drives. WD's working prototype on display was a helium drive (HelioSeal), but WD again stressed that helium is not a compulsory requirement for MAMR drives. It was also confirmed that drives of 16TB and more would have to be MAMR-based.

In 2005, when the shift from longitudinal recording to PMR happened, most vendors managed to release drives based on the new technology within a few years of each other. The shift to helium in 2012, though expected by everyone in the industry, proved to be a big win for HGST - they had the high-capacity, low-power, low-TCO markets to themselves for almost three years before Seagate eventually caught up. Toshiba is yet to publicly release a helium drive. It is going to be interesting to see how Seagate and Toshiba respond to this unexpected MAMR announcement from Western Digital.

The players in the hard drive industry have a robust cross-licensing program, and it is highly likely that other manufacturers will not face significant patent bottlenecks in bringing out MAMR drives of their own. WD stressed that the development is a multi-year effort, particularly if the heads are still being manufactured with the older dry pole process.

Mature, high-volume hard drives are often manufactured with the help of third-party suppliers - such as Showa Denko for the recording media and TDK for the heads. In the case of the MAMR drives, WD mentioned that all the components are being designed and manufactured in-house. It is possible for the competition to catch up faster if some of the third-party manufacturers are further along in their own R&D. In particular, TDK has also been investing in MAMR R&D recently. Toshiba has shown interest as well, but it is not clear how far along they are in the commercial development cycle. Currently, we believe WD has a clear lead in MAMR technology. It just remains to be seen how long the competition takes to catch up.

Comments

  • cekim - Thursday, October 12, 2017 - link

    The bigger concern is throughput - if it takes the bulk of the MTBF of a drive to write and then read it, we are gonna have a bad time... quick math - maybe I goofed, but given 250MB/s and a 40TB drive (with TB = 1024^4 bytes), that's 167,772s, or 2796m, or 46 hours to read the entire drive. Fun times waiting 2 days for a RAID rebuild...
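That arithmetic checks out; here it is as a minimal Python sketch, with the 40TB capacity taken from the article:

```python
# Full-drive read time for a 40TB drive at 250 MB/s, in binary units
# as used in the comment above.
capacity_bytes = 40 * 1024**4      # 40 TB, TB = 1024^4 bytes
rate_bytes_per_s = 250 * 1024**2   # 250 MB/s, MB = 1024^2 bytes

seconds = capacity_bytes / rate_bytes_per_s
print(f"{seconds:,.0f} s = {seconds/60:,.0f} min = {seconds/3600:.1f} h")
# -> 167,772 s = 2,796 min = 46.6 h of sustained sequential reading
```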
  • imaheadcase - Thursday, October 12, 2017 - link

    If you are using this for home use, you should not be using RAID anyway. You will have an SSD in your computer, and if it's a server, bandwidth is not a concern since it's on the LAN. And backing up to the cloud is what 99% of people do in that situation.

    RAID is dead for the most part.
  • qap - Thursday, October 12, 2017 - link

    It's dangerous not only for RAID, but also for that "cloud" you speak of and the underlying object storage. A typical object store keeps 3 replicas. With 250MBps peak write/read speed, you are not looking at two days to replicate all the files. In reality it's more like two weeks to one month, because you are handling lots of small files and transferring over the LAN, and in that case both reads and writes suffer. Over the course of several weeks, there is too high a probability of 3 random drives failing.
    We were considering 60TB SATA SSDs for our object storage, but it simply doesn't add up even with SSD-class read/write speeds.
    Especially if there is only a single supplier of such drives, the chance of a synchronized failure of multiple drives is too high (we had one such scare).
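Putting rough numbers on that scenario (the small-file throughput below is an assumed figure for illustration, not a measurement):

```python
# How long re-replicating a failed 60TB drive takes at different
# effective throughputs. The 25 MB/s small-file figure is assumed.
def rereplication_days(capacity_tb: float, effective_mb_per_s: float) -> float:
    """Days to copy capacity_tb (decimal TB) at the given throughput."""
    return capacity_tb * 1e6 / effective_mb_per_s / 86_400

for label, mb_s in [("peak sequential", 250.0), ("small files (assumed)", 25.0)]:
    print(f"{label}: {rereplication_days(60, mb_s):.1f} days")
# peak sequential: ~2.8 days; at a tenth of that, ~28 days,
# the "two weeks to one month" window described above.
```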
  • LurkingSince97 - Friday, October 20, 2017 - link

    That is not how it works. If you have 3 replicas, and one drive dies, then all of that drive's data has two other replicas.

    Those two other replicas are _NOT_ just on two other drives. A large clustered file system will have the data split into blocks, and blocks randomly assigned to other drives. So if you have 300 drives in a cluster, a replica factor of 3, and one drive dies, then that drive's data has two copies, evenly spread out over the other 299 drives. If those are spread out across 30 nodes (each with 10 drives) with 10gbit network, then we have aggregate ~8000 MB/sec copying capacity, or close to a half TB per minute. That is a little over an hour to get the replication factor back to 3, assuming no transfers are local, and all goes over the network.

    And that is a small cluster. A real one would have closer to 100 nodes and 1000 drives, with higher aggregate network throughput and more intelligent block distribution. The real-world result is that on today's drives it can take less than 5 minutes to re-replicate a failed drive. Even with 40TB drives, sub-30-minute times would be achievable.
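The same estimate in a few lines of Python, reusing the aggregate-throughput figure assumed in the comment above:

```python
# Time to restore the replication factor after a single drive failure,
# assuming rebuild traffic is spread evenly across the cluster.
def rebuild_minutes(failed_drive_tb: float, aggregate_mb_per_s: float) -> float:
    """Minutes to re-copy failed_drive_tb (decimal TB) of data."""
    return failed_drive_tb * 1e6 / aggregate_mb_per_s / 60

print(f"{rebuild_minutes(40, 8_000):.0f} min")   # ~83 min: 30 nodes, ~8000 MB/s aggregate
print(f"{rebuild_minutes(40, 80_000):.0f} min")  # ~8 min with 10x the aggregate bandwidth
```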
  • bcronce - Thursday, October 12, 2017 - link

    RAID isn't dead. The same people who used it in the past are still using it. It was never popular outside of enterprise/enthusiast use. I need a place to store all of my 4K videos.
  • wumpus - Thursday, October 12, 2017 - link

    [non-0] RAID almost never made sense for home use (although there was a brief period before SSDs when it was cool to partition two drives so /home could be mirrored and /everything_else could be striped).

    Backblaze uses some pretty serious RAID, and I'd expect that all serious datacenters use similar tricks. Redundancy is the heart of storage reliability (SSDs and more old-fashioned drives have all sorts of redundancy built in), and there is always the benefit of having somebody swap out the actual hardware (which will always be easier with RAID).

    RAID isn't going anywhere for the big boys, but unless you have a data hoarding hobby (and tons of terabytes to go with it), you probably don't want RAID at home. If you do, then you will probably only need to RAID your backups (RAID on your primary only helps for high availability).
  • alpha754293 - Thursday, October 12, 2017 - link

    I can see people using RAID at home thinking that it will give them a latency advantage (when they think about "speed").

    (i.e. higher MB/s != lower latency, and lower latency is what gamers probably actually want when they put two SSDs in RAID0)
  • surt - Sunday, October 15, 2017 - link

    Not sure what game you are playing, but at least 90% of the tier 1 games out there care mostly about throughput, not latency, when it comes to hard drive speed. Hard drive latency in general is too great for any reasonable game design to assume anything other than a streaming architecture.
  • Ahnilated - Thursday, October 12, 2017 - link

    Sorry, but if you back up to the cloud you are a fool. All your data is freely accessible to anyone from the script kiddies on up, and transferring it over the web is a huge risk in itself.
  • Notmyusualid - Thursday, October 12, 2017 - link

    @ Ahnilated

    I never liked the term 'script kiddies'.

    What is the alternative? Waste your time / bust your ass writing your own exploit(s) - when so many cool exploits already exist?

    Some of us who dabble with said scripts have significant other networking / Linux knowledge, so it doesn't fit to denigrate us just because we can't be arsed to write new exploits ourselves.

    We've better things to be doing with our time...

    I bet you don't make your own clothes, even though you possibly can.
