Kicking off another busy Spring GPU Technology Conference for NVIDIA, this morning the graphics and accelerator designer is announcing that they are going to once again design their own Arm-based CPU/SoC. Dubbed Grace – after Grace Hopper, the computer programming pioneer and US Navy rear admiral – the CPU is NVIDIA’s latest stab at more fully vertically integrating their hardware stack by being able to offer a high-performance CPU alongside their regular GPU wares. According to NVIDIA, the chip is being designed specifically for large-scale neural network workloads, and is expected to become available in NVIDIA products in 2023.

With two years to go until the chip is ready, NVIDIA is playing things relatively coy at this time. The company is offering only limited details for the chip – it will be based on a future iteration of Arm’s Neoverse cores, for example – as today’s announcement is a bit more focused on NVIDIA’s future workflow model than it is on speeds and feeds. If nothing else, the company is making it clear early on that, at least for now, Grace is an internal product for NVIDIA, to be offered as part of their larger server offerings. The company isn’t directly gunning for the Intel Xeon or AMD EPYC server market; instead they are building their own chip to complement their GPU offerings, creating a specialized chip that can directly connect to their GPUs and help handle enormous, trillion-parameter AI models.

NVIDIA SoC Specification Comparison

                         Grace                              Xavier                       Parker (Tegra X2)
CPU Cores                ?                                  8                            2
CPU Architecture         Next-Gen Arm Neoverse (Arm v9?)    Carmel (Custom Arm v8.2)     Denver 2 (Custom Arm v8)
Memory Bandwidth         >500GB/sec LPDDR5X (ECC)           137GB/sec LPDDR4X            60GB/sec LPDDR4
GPU-to-CPU Interface     >900GB/sec NVLink 4                PCIe 3                       PCIe 3
CPU-to-CPU Interface     >600GB/sec NVLink 4                N/A                          N/A
Manufacturing Process    ?                                  TSMC 12nm                    TSMC 16nm
Release Year             2023                               2018                         2016

More broadly speaking, Grace is designed to fill the CPU-sized hole in NVIDIA’s AI server offerings. The company’s GPUs are incredibly well-suited for certain classes of deep learning workloads, but not all workloads are purely GPU-bound, if only because a CPU is needed to keep the GPUs fed. NVIDIA’s current server offerings, in turn, typically rely on AMD’s EPYC processors, which are very fast for general compute purposes, but lack the kind of high-speed I/O and deep learning optimizations that NVIDIA is looking for. In particular, NVIDIA is currently bottlenecked by the use of PCI Express for CPU-GPU connectivity; their GPUs can talk quickly amongst themselves via NVLink, but not back to the host CPU or system RAM.
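
To put rough numbers on that imbalance, here is a quick back-of-the-envelope sketch for a current A100-class node. The bandwidth figures are public spec-sheet values and the exact topology varies from server to server, so treat this as an illustration rather than a measurement:

```python
# Rough spec-sheet figures, per direction, in GB/s; exact numbers vary by SKU and system.
hbm2_local_bw      = 1555  # A100 40GB on-package HBM2 bandwidth
nvlink3_gpu_to_gpu = 300   # NVLink 3 per direction (600 GB/s total per GPU)
pcie4_x16_to_host  = 32    # PCIe 4.0 x16 link back to the host CPU and system RAM

print(f"GPU to its own HBM2 vs GPU to host memory: ~{hbm2_local_bw / pcie4_x16_to_host:.0f}x gap")
print(f"GPU to peer GPU (NVLink 3) vs GPU to host: ~{nvlink3_gpu_to_gpu / pcie4_x16_to_host:.0f}x gap")
```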

The solution to the problem, as was the case even before Grace, is to use NVLink for CPU-GPU communications. Previously NVIDIA worked with the OpenPOWER foundation to get NVLink into POWER9 for exactly this reason; however, that relationship is seemingly on its way out, both as POWER’s popularity wanes and as POWER10 skips NVLink. Instead, NVIDIA is going their own way by building an Arm server CPU with the necessary NVLink functionality.

The end result, according to NVIDIA, will be a high-performance and high-bandwidth CPU that is designed to work in tandem with a future generation of NVIDIA server GPUs. With NVIDIA talking about pairing each NVIDIA GPU with a Grace CPU on a single board – similar to today’s mezzanine cards – not only do CPU performance and system memory scale up with the number of GPUs, but in a roundabout way, Grace will serve as a co-processor of sorts to NVIDIA’s GPUs. This, if nothing else, is a very NVIDIA solution to the problem, not only improving their performance, but giving them a counter should the more traditionally integrated AMD or Intel try some sort of similar CPU+GPU fusion play.

By 2023 NVIDIA will be up to NVLink 4, which will offer at least 900GB/sec of cumulative (up + down) bandwidth between the SoC and GPU, and over 600GB/sec cumulative between Grace SoCs. Critically, this is greater than the memory bandwidth of the SoC, which means that NVIDIA’s GPUs will have a cache coherent link to the CPU that can access the system memory at full bandwidth, and which also allows the entire system to have a single shared memory address space. NVIDIA describes this as balancing the amount of bandwidth available in a system, and they’re not wrong, but there’s more to it. Having an on-package CPU is a major means of increasing the amount of memory NVIDIA’s GPUs can effectively access and use, as memory capacity continues to be the primary constraining factor for large neural networks – you can only efficiently run a network as big as your local memory pool.
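
As a minimal sketch of what "balanced" means here, the quoted link bandwidth can be compared against the memory it fronts. The NVLink 4 and LPDDR5X values below are NVIDIA's ">900GB/sec" and ">500GB/sec" figures; the PCIe comparison point is an assumption added for contrast:

```python
# Figures quoted by NVIDIA, cumulative (up + down), in GB/s.
nvlink4_grace_to_gpu = 900   # ">900GB/sec" GPU-to-Grace link
grace_lpddr5x        = 500   # ">500GB/sec" Grace memory bandwidth
pcie4_x16_cumulative = 64    # today's PCIe 4.0 x16 host link, for contrast (assumed comparison point)

# With the link wider than the DRAM behind it, a GPU reaching into system
# memory through Grace is limited by the memory itself, not the interconnect.
effective = min(nvlink4_grace_to_gpu, grace_lpddr5x)
print(f"Effective GPU access to system memory: ~{effective} GB/s (DRAM-limited)")
print(f"Versus a PCIe 4.0 x16 host link: ~{effective / pcie4_x16_cumulative:.0f}x more bandwidth")
```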

CPU & GPU Interconnect Bandwidth
(all figures cumulative, both directions)

                        Grace                  EPYC 2 + A100                  EPYC 1 + V100
GPU-to-CPU Interface    >900GB/sec NVLink 4    ~64GB/sec PCIe 4 x16           ~32GB/sec PCIe 3 x16
CPU-to-CPU Interface    >600GB/sec NVLink 4    304GB/sec Infinity Fabric 2    152GB/sec Infinity Fabric

And this memory-focused strategy is reflected in the memory pool design of Grace, as well. Since NVIDIA is putting the CPU on a shared package with the GPU, they’re going to put the RAM down right next to it. Grace-equipped GPU modules will include a to-be-determined amount of LPDDR5x memory, with NVIDIA targeting at least 500GB/sec of memory bandwidth. Besides being what’s likely to be the highest-bandwidth non-graphics memory option in 2023, NVIDIA is touting the use of LPDDR5x as a gain for energy efficiency, owing to the technology’s mobile-focused roots and very short trace lengths. And, since this is a server part, Grace’s memory will be ECC-enabled, as well.

As for CPU performance, this is actually the part where NVIDIA has said the least. The company will be using a future generation of Arm’s Neoverse CPU cores, where the initial N1 design has already been turning heads. But other than that, all the company is saying is that the cores should break 300 points on the SPECrate2017_int_base throughput benchmark, which would be comparable to some of AMD’s second-generation 64-core EPYC CPUs. The company also isn’t saying much about how the CPUs are configured or what optimizations are being added specifically for neural network processing. But since Grace is meant to support NVIDIA’s GPUs, I would expect it to be stronger where GPUs in general are weaker.

Otherwise, as mentioned earlier, NVIDIA’s big-picture goal for Grace is to significantly cut down the time required to train the largest neural network models. NVIDIA is gunning for 10x higher performance on 1 trillion parameter models, and the company projects that a 64-module Grace+A100 system (with theoretical NVLink 4 support) would bring the training time for such a model down from a month to three days. Alternatively, an 8-module system would be able to do real-time inference on a 500 billion parameter model.
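
To see why memory capacity, and the bandwidth of the path to it, keeps coming up, here is a rough sketch of the training-time memory footprint of a trillion-parameter model. The per-parameter byte counts are standard rules of thumb for mixed-precision training with an Adam-style optimizer, not figures from NVIDIA:

```python
params = 1_000_000_000_000  # 1 trillion parameters

# Common rule-of-thumb bytes per parameter during mixed-precision training:
fp16_weights = 2
fp16_grads   = 2
fp32_master  = 4
adam_moments = 8   # two FP32 moment tensors (momentum + variance)

bytes_per_param = fp16_weights + fp16_grads + fp32_master + adam_moments  # 16 bytes
state_tb = params * bytes_per_param / 1e12                                # ~16 TB
hbm_tb   = 64 * 80 / 1000                                                 # 64x A100 80GB = ~5.1 TB

print(f"Training state (before activations): ~{state_tb:.0f} TB")
print(f"Pooled HBM of a 64-GPU A100 system:  ~{hbm_tb:.1f} TB")
```

Even before counting activations, the working set is several times larger than the pooled GPU memory of such a system, which is where a fast, coherent path to a large pool of system memory comes in.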

Overall, this is NVIDIA’s second real stab at the data center CPU market – and the first that is likely to succeed. NVIDIA’s Project Denver, which was originally announced just over a decade ago, never really panned out as NVIDIA expected. The family of custom Arm cores was never good enough, and never made it out of NVIDIA’s mobile SoCs. Grace, in contrast, is a much safer project for NVIDIA; they’re merely licensing Arm cores rather than building their own, and those cores will be in use by numerous other parties as well. So NVIDIA’s risk is largely reduced to getting the I/O and memory plumbing right, as well as keeping the final design energy efficient.

If all goes according to plan, expect to see Grace in 2023. NVIDIA is already confirming that Grace modules will be available for use in HGX carrier boards, and by extension DGX and all the other systems that use those boards. So while we haven’t seen the full extent of NVIDIA’s Grace plans, it’s clear that they are planning to make it a core part of future server offerings.

First Two Supercomputer Customers: CSCS and LANL

And even though Grace isn’t shipping until 2023, NVIDIA has already lined up their first customers for the hardware – and they’re supercomputer customers, no less. Both the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory are announcing today that they’ll be ordering supercomputers based on Grace. Both systems will be built by HPE’s Cray group, and are set to come online in 2023.

CSCS’s system, dubbed Alps, will be replacing their current Piz Daint system, a Xeon plus NVIDIA P100 cluster. According to the two companies, Alps will offer 20 ExaFLOPS of AI performance, which is presumably a combination of CPU, CUDA core, and tensor core throughput. When it’s launched, Alps should be the fastest AI-focused supercomputer in the world.


An artist's rendition of the expected Alps system

Interestingly, however, CSCS’s ambitions for the system go beyond just machine learning workloads. The institute says that they’ll be using Alps as a general purpose system, working on more traditional HPC-type tasks as well as AI-focused tasks. This includes CSCS’s traditional research into weather and the climate, which the pre-AI Piz Daint is already used for as well.

As previously mentioned, Alps will be built by HPE, who will be basing it on their previously-announced Cray EX architecture. This would make NVIDIA’s Grace the second CPU option for Cray EX, along with AMD’s EPYC processors.

Meanwhile Los Alamos’ system is being developed as part of an ongoing collaboration between the lab and NVIDIA, with LANL set to be the first US-based customer to receive a Grace system. LANL is not discussing the expected performance of their system beyond the fact that it’s expected to be “leadership-class,” though the lab is planning on using it for 3D simulations, taking advantage of the large data set sizes afforded by Grace. The LANL system is set to be delivered in early 2023.

Comments

  • CiccioB - Monday, April 12, 2021 - link

    500GB/sec is the aggregate memory bandwidth across all CPUs (there are 4 of them in the slide)
  • mode_13h - Wednesday, April 14, 2021 - link

    While 500 GB/s does sound high for a single CPU, 125 GB/s sounds low for a server CPU in 2023. I guess this is a bit of a special case, being on a card with the GPU, but it's specifically one where they're trying to optimize bandwidth.
  • CiccioB - Wednesday, April 14, 2021 - link

    Bandwidth requirements depend on the computational capacity of the chip and the amount of cache it has (as well as the type of computations being performed).
    Here we are not talking about 125GB/s for a 96/128-core chip with SMT (like those AMD will create next year). These chips have a much smaller number of cores and are not the ones that will process all the data, as current x86 architectures have to.
    You are evaluating this new system architecture with the same criteria you use for today's servers, which are based on the absolute centrality of a single CPU carrying out every task, with all resources connected to that one CPU: from network to storage to external accelerators like GPUs.
    That's why you say "a server CPU in 2023". This is not "a server CPU"; it is just a (small) slice of the computing CPUs in a 2023 server, where more of them are going to work in parallel to do the work.

    From what I understand, the idea is to create a system based on "chiplets", but spread across the entire motherboard rather than constrained to a single substrate. More chips working in parallel, each with its own local resources, communicating through a very fast bus. You can see the analogy with AMD's chiplets-in-a-package architecture.
    This would allow performance to scale linearly, exactly as chiplets on a substrate do, provided you give them enough bandwidth (and energy) for everything. But as their number increases, it becomes difficult to feed them. This is the next scaling solution.
  • mode_13h - Thursday, April 15, 2021 - link

    Your whole discussion of the CPU's bandwidth needs misses the point completely. The problem they're attempting to solve with this architecture is to reduce bottlenecks in the GPUs' access to main memory. So the CPU's memory bandwidth had better be large, not so much for the sake of the ARM cores, but more by way of acting as a bridge to that memory pool for the GPUs.

    Sometimes it helps to take a step back and think, before launching into these verbose posts.
  • JayNor - Sunday, May 2, 2021 - link

    PCIe 5 is 32GT/sec.
    SPR has 80 lanes, so 80 x 32 x 2 / 8 (bidirectional) = 640 GB/sec.
    If PCIe 6 happens in the 2023 timeframe, that doubles, though it requires PAM4 transceivers.
    Run CXL on top of that for the biased cache coherency.
    So, why go to NVLink?
  • Raqia - Monday, April 12, 2021 - link

    The uncore is underappreciated in general, and their ownership of Mellanox will pay dividends in the plumbing of this server design.
  • mode_13h - Wednesday, April 14, 2021 - link

    It's funny to me how Nvidia has been building GPUs with a couple hundred SMs and a couple TB/s of memory bandwidth, yet somehow they need outside expertise to work out the interconnect fabric for a CPU with a fraction of that bandwidth? I have trouble seeing that.
  • Zizy - Monday, April 12, 2021 - link

    AMD and Intel are switching to PCIe 5, so total bandwidth should be comparable. But Grace still sounds interesting because it should have much better bandwidth to a single GPU, whereas PCIe 5 x16 is still a mere ~60GB/s per direction (~120GB/s cumulative). Also, considering all the "cumulative bandwidth" numbers, I wonder if NV will keep upload/download symmetry or move to, say, 10 links to the card and just 2 links from the card.
  • mode_13h - Wednesday, April 14, 2021 - link

    Can PCIe cope with mesh networks?

    BTW, in 2023 you'll likely see PCIe 6 and CXL 2 products introduced.
  • CiccioB - Monday, April 12, 2021 - link

    I have been wondering about this for quite a while now: how long will it take NVIDIA to put some ARM cores directly ONTO the GPU board (first) and then directly into the GPU die?
    That could easily offload a lot of tasks from the main CPU (which could then be weaker and support more GPUs at once, limited only by total I/O) and better exploit the GPU they belong to.
