Scaling Hard Drive Capacities

Hard disk drives using magnetic recording have been around for 60+ years. Despite using the same underlying technology, today's hard drives look nothing like the refrigerator-sized units of the 1960s. The more interesting part of the story, though, is the set of advancements that have happened since the turn of the century.

At a high level, hard disks are composed of circular magnetic plates, or 'platters', on which data is recorded as patterns of magnetization laid out in tracks. The patterns are created, altered, and read back by 'heads' mounted on an actuator that performs the read and write operations. Modern hard disks have more than one platter in a stack, with each platter served by its own individual 'head' for reading and writing.

There are additional hardware components - the motor, spindle, and electronics - but the components of interest from a capacity perspective are the platters and the heads. The slide below shows two ways to increase the capacity of a platter: increasing the number of tracks per inch (TPI) and/or increasing the number of bits per inch (BPI) in a single track. Their product yields the areal density, which the industry quotes in bits per square inch, such as gigabits per square inch (Gb/in2) or terabits per square inch (Tb/in2).
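As a quick back-of-the-envelope illustration (the TPI and BPI figures below are hypothetical round numbers, not the specs of any particular drive), areal density is simply the product of the two:

```python
# Areal density as the product of track density (TPI) and linear bit
# density (BPI). The input densities are illustrative values only.
def areal_density_gb_per_in2(tpi: float, bpi: float) -> float:
    """Return areal density in gigabits per square inch."""
    return tpi * bpi / 1e9

# e.g. 400,000 tracks/inch x 2,300,000 bits/inch = 920 Gb/in2,
# in the ballpark of recent high-capacity PMR drives.
print(areal_density_gb_per_in2(400_000, 2_300_000))  # 920.0
```

This is why manufacturers can trade the two dimensions off against each other: a modest gain in either TPI or BPI multiplies directly into the headline areal density figure.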

Hard drives in the early 2000s primarily relied on longitudinal recording, with the data bits aligned horizontally in relation to the spinning platter - this is shown in the first half of the image below. One of the first major advancements after the turn of the century was the introduction of perpendicular magnetic recording (PMR) in 2005.

At the time PMR made its breakthrough, Hitachi commissioned an amusing video called 'Get Perpendicular' to demonstrate the technology and its ability to reach 230 gigabits per square inch. The video can be found here.

PMR was developed as a solution to the previous areal density limit of around 200 Gb/in2 caused by the 'superparamagnetic effect', where bits packed too densely would spontaneously flip magnetic orientation and corrupt data. PMR, by itself, can theoretically reach around 1.1 Tb/in2.

Alongside PMR, more technologies have come into play. The most recently launched hard drives (the Seagate 12TB ones) have an areal density of 923 Gb/in2. The industry came up with a number of solutions to keep increasing hard drive capacity while remaining within the theoretical areal density limits of PMR technology:

Helium-filled drives: One of the bottlenecks in modern drives is the physical resistance on the heads from the air around the platters. Using helium reduces that resistance, albeit with the requirement of sealed enclosures. The overall effect is improved head stability and a reduction in internal turbulence. This allows for a shorter distance between platters, giving manufacturers the ability to stack up to seven in a single 3.5" drive (rather than the usual six). Helium drives were first introduced to the market in 2012 by HGST. The latest helium drives come with as many as eight platters.

Shingled magnetic recording (SMR): In this technology, the track layouts are modified so that adjacent tracks overlap, similar to how roof shingles are laid (hence the name). While this creates challenges when rewriting areas that already contain data (valid data in the overlapped tracks must not be destroyed), there are sub-technologies and methods to mitigate some of these issues; the challenges can be handled on either the host side or the drive side. Seagate was the first to ship drive-managed SMR drives in 2013.
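A toy model can make the SMR rewrite problem concrete (the zone size and the `ShingledZone` class below are illustrative inventions, not any vendor's actual firmware interface): because tracks overlap, modifying one block in place disturbs everything "downstream" of it in the zone, so that data must be read and rewritten too.

```python
class ShingledZone:
    """Toy model of a shingled zone: overlapping tracks mean a write at
    block i disturbs blocks i+1..end, which must be rewritten as well."""

    def __init__(self, size: int):
        self.blocks = [0] * size

    def write(self, offset: int, value: int) -> int:
        """Write one logical block; return how many blocks were
        physically rewritten (read-modify-write of the downstream tail)."""
        tail = self.blocks[offset + 1:]     # read the downstream data first
        self.blocks[offset] = value         # write the new block
        self.blocks[offset + 1:] = tail     # rewrite everything after it
        return 1 + len(tail)

zone = ShingledZone(256)
print(zone.write(255, 7))   # write at the end of the zone: 1 block touched
print(zone.write(10, 7))    # random write near the start: 246 blocks touched
```

This is why SMR drives favor sequential, append-style writes, and why either the host (host-managed SMR) or the drive's own firmware and media cache (drive-managed SMR) has to absorb the cost of random writes.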

Improvements in actuator technology: In the last few years, Western Digital has been shipping 'micro actuators' that allow for finer positioning and control compared to traditional actuator arms. This directly translates to drives with a higher bit density.

Improvements in head manufacturing: Traditionally, PMR heads have been manufactured using the Dry Pole process, involving material deposition and ion milling. Recently, Western Digital has moved to the Damascene process, which involves an etched pattern filled using electroplating. This offers a host of advantages, including a higher bit density.

We briefly mentioned earlier in this section that PMR technology has theoretical limits. Traditional PMR can deliver up to 1.1 Tb/in2 with improved actuators and heads. Use of SMR and TDMR (Two Dimensional Magnetic Recording) can drive this up to 1.4 Tb/in2.

At those areal densities, the TPI and BPI need to be so high that the media grain pitch (the smallest size of the metallic grains that store individual bits) is around 7-8 nm. Such small grains present a number of challenges; for example, the head may not be capable of creating a strong enough magnetic field for stable recording.
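A rough sanity check on that grain pitch (the grains-per-bit figure is an illustrative assumption - real media designs vary): dividing the area available per bit among the grains that store it lands close to the quoted range.

```python
# Estimate grain pitch from areal density, assuming each bit is stored
# across roughly 10 grains (an illustrative assumption, not a spec).
NM_PER_INCH = 2.54e7  # 1 inch = 2.54 cm = 2.54e7 nm

def grain_pitch_nm(areal_density_tb_in2: float, grains_per_bit: float = 10) -> float:
    bits_per_in2 = areal_density_tb_in2 * 1e12
    bit_area_nm2 = NM_PER_INCH ** 2 / bits_per_in2   # area available per bit
    return (bit_area_nm2 / grains_per_bit) ** 0.5    # side of a square grain

# At the ~1.4 Tb/in2 SMR+TDMR limit, the pitch works out to roughly 7 nm,
# consistent with the 7-8 nm figure above.
print(round(grain_pitch_nm(1.4), 1))  # 6.8
```

Fewer grains per bit would push the pitch up slightly; more grains would push it below 7 nm, which is where the writability problems described next become acute.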

One solution is to make it easier to write the data to the grain. Decreasing the grain's resistance to magnetization (technically, lowering its coercivity) allows the head's field to modify the grain's magnetic state. This requires extra energy, such as thermal energy, to be applied directly to the grain for the short time needed to write a bit. This is where the 'energy-assist' aspect comes into the picture.

Over the last several years, much of the focus has been on heat-assisted magnetic recording (HAMR), where the lowered resistance (coercivity) is achieved by locally heating the grains with a laser. This brings in a number of concerns that have so far prevented mass production of HAMR-based drives.

MAMR, on the other hand, uses microwaves to assist recording. A primary reason MAMR has not so far been considered a viable technology by industry analysts is the complexity of designing a write head that includes a microwave generator. In the next section, we take a look at how Western Digital was able to address this.

Moving to MAMR: Introduction Part 2: Microwave Assisted Magnetic Recording - The WD Breakthrough
Comments

  • Glaurung - Thursday, October 12, 2017

    Mac OS has calculated storage capacity using TB rather than TiB for years now.
  • lmcd - Thursday, October 12, 2017

    That'll happen when general users refer to Gibibytes instead of Gigabytes, etc.
  • melgross - Thursday, October 12, 2017

    For crying out loud. I wish we could get over this nonsense. You do realize that it's the same amount of storage? It doesn't matter which number is used, as long as everyone uses the same way of describing it.
  • mapesdhs - Thursday, October 12, 2017

    It matters because computing by its very nature lends itself to the binary world, powers of 2, hex, etc., and the idea of not doing this for describing disk capacities only started as a way of making customers think they were getting more storage than they actually were. When I was at uni in the late 1980s, nobody in any context used MB, GB, etc. based on a power of ten, as everything was derived from the notion of bytes and KB, which are powers of 2. Like so many things these days, this sort of change is just yet more dumbing down, oh we must make it easier for people! Rubbish, how about for once we insist that people actually improve their intellects and learn something properly.

    Anyway, great article Ganesh, thanks for that! I am curious though how backup technologies are going to keep up with all this, eg. what is the future of LTO? Indeed, as consumer materials become ever more digital, surely at some point the consumer market will need viable backup solutions that are not ferociously expensive. It would be a shame if in decades' time, the future elderly have little to remember their youth because all the old digital storage devices have worn out. There's something to be said for a proper photo album...
  • BrokenCrayons - Thursday, October 12, 2017

    I have a few decrepit 5.25 inch full height hard drives (the sorts that included a bad sector map printed on their label made by companies long dead to this world) sitting in a box in my house that were from the 80s. They used a power of ten to represent capacity even before you attended university. This capacity discussion is absolutely not a new concern. It was the subject of lots of BBS drama carried out over 2400 baud modems.
  • bcronce - Thursday, October 12, 2017

    For the longest time there was no common definition of a "byte". 5 bit byte? 6 bit byte? 7 bit byte? 11 bit byte? Most storage devices were labeled in bits, which is labeled in base 10.
  • melgross - Thursday, October 12, 2017

    Sheesh, none of that has any importance whatsoever outside of the small geeky areas in this business.
  • alpha754293 - Thursday, October 12, 2017

    ...except for the companies that got sued for fraudulent advertising, because 'Murica!
  • Ratman6161 - Thursday, October 12, 2017

    "It doesn't matter which number is used, as long as everyone uses the same way of describing it."

    You would be amazed at how many times I still get the question : "Did I get ripped off? Windows says my hard drive is smaller than what <insert PC OEM here> said in their specs!"
  • melgross - Thursday, October 12, 2017

    That's why changing the way we describe this every few years is a problem. We need a standard to be used everywhere, no matter which one. Quite frankly, almost no one will ever do what your friends, according to you, do. Most don't even know offhand, who makes their computer, much less how much storage it comes with.
