Improvements to the Cache Hierarchy

The biggest under-the-hood change for the Ryzen 2000-series processors is in cache latency. AMD claims it was able to knock one cycle off the L1 and L2 caches, shave several cycles from the L3, and improve DRAM performance. Because pure core IPC is intimately intertwined with the caches (their size, latency, and bandwidth), these new numbers lead AMD to claim that these processors offer a +3% IPC gain over the previous generation.

The numbers AMD gives are:

  • 13% Better L1 Latency (1.10ns vs 0.95ns)
  • 34% Better L2 Latency (4.6ns vs 3.0ns)
  • 16% Better L3 Latency (11.0ns vs 9.2ns)
  • 11% Better Memory Latency (74ns vs 66ns at DDR4-3200)
  • Increased DRAM Frequency Support (DDR4-2666 vs DDR4-2933)

It is interesting that in the official slide deck AMD quotes latency as time, whereas in private conversations during our briefing it was discussed in terms of clock cycles. Latency measured as time can take advantage of other internal enhancements, such as frequency; a pure engineer, however, prefers to discuss clock cycles.
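As a quick worked example of why the two views differ (the clock speeds here are purely illustrative, not AMD's measurement conditions): latency in nanoseconds is simply cycles divided by the core clock, so a 12-cycle cache running at 3.6 GHz responds in 12 / 3.6 ≈ 3.3 ns, while the same 12-cycle cache at 4.3 GHz responds in roughly 2.8 ns. A time-based figure therefore improves with frequency alone, whereas the cycle count only moves if the cache design itself changes.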

Naturally we went ahead and tested the two aspects of this equation: are the cache latencies actually lower, and do we get an IPC uplift?

Cache Me Ousside, How Bow Dah?

For our testing, we use a memory latency checker over the stride range of the cache hierarchy of a single core (a sketch of this style of test follows the processor list below). For this test we used the following:

  • Ryzen 7 2700X (Zen+)
  • Ryzen 5 2400G (Zen APU)
  • Ryzen 7 1800X (Zen)
  • Intel Core i7-8700K (Coffee Lake)
  • Intel Core i7-7700K (Kaby Lake)
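The sketch below is a minimal pointer-chase latency test of our own construction, not the exact tool used for these results: it builds a randomly permuted circular chain of cache lines inside a buffer, walks it with dependent loads, and reports the average nanoseconds per load. Sweeping the buffer size from a few KB to tens of MB pushes the measurement through L1, L2 and L3 and out to DRAM. The function and constant names are ours; compile with optimisations and pin the process to a single core for stable numbers.

  /* Minimal pointer-chase latency sketch (illustrative only). */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define LINE 64                          /* cache line size in bytes */

  static double chase(size_t bytes, size_t hops)
  {
      size_t lines = bytes / LINE;
      void **buf = malloc(lines * LINE);
      size_t *order = malloc(lines * sizeof *order);

      /* Fisher-Yates shuffle of line indices to defeat the prefetchers. */
      for (size_t i = 0; i < lines; i++) order[i] = i;
      for (size_t i = lines - 1; i > 0; i--) {
          size_t j = (size_t)rand() % (i + 1);
          size_t t = order[i]; order[i] = order[j]; order[j] = t;
      }

      /* Each visited line stores a pointer to the next line in the cycle. */
      for (size_t i = 0; i < lines; i++) {
          void **cur = (void **)((char *)buf + order[i] * LINE);
          *cur = (char *)buf + order[(i + 1) % lines] * LINE;
      }

      struct timespec t0, t1;
      void **p = buf;
      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (size_t i = 0; i < hops; i++)
          p = (void **)*p;                 /* dependent load: latency bound */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      if (p == NULL) puts("");             /* keep the chain from being optimised out */
      double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
      free(order); free(buf);
      return ns / hops;
  }

  int main(void)
  {
      for (size_t kb = 4; kb <= 64 * 1024; kb *= 2)
          printf("%8zu KB : %.2f ns per load\n",
                 kb, chase(kb * 1024, 20 * 1000 * 1000));
      return 0;
  }

The flat steps in the output of a test like this correspond to the L1, L2, L3 and DRAM regions plotted in the graphs here.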

The most obvious comparison is between the AMD processors. Here we have the Ryzen 7 1800X from the initial launch, the Ryzen 5 2400G APU that pairs Zen cores with Vega graphics, and the new Ryzen 7 2700X processor.

This graph is logarithmic in both axes.

This graph shows that at every level of the cache hierarchy, the newest Ryzen 7 2700X requires fewer core clocks. The biggest difference is in the L2 cache latency, though the L3 shows a sizeable gain as well. The reason the L2 gain is so large, especially between the 1800X and the 2700X, is an interesting story.

When AMD first launched the Ryzen 7 1800X, the L2 latency was tested and listed at 17 clocks. This was a little high: it turns out that the engineers had originally intended an L2 latency of 12 clocks, but ran out of time to tune the firmware and layout before sending the design off to be manufactured, leaving 17 cycles as the best compromise the design could manage without causing issues. With Threadripper and the Ryzen APUs, AMD tweaked the design enough to hit an L2 latency of 12 cycles, which was not specifically promoted at the time despite the benefits it provides. Now with the Ryzen 2000-series, AMD has reduced it further to 11 cycles. We were told that this was due both to the new manufacturing process and to additional tweaks made to ensure signal coherency. In our testing, we actually saw an average L2 latency of 10.4 cycles, down from 16.9 cycles on the Ryzen 7 1800X.

The L3 difference is a little unexpected: AMD stated a 16% improvement in latency, from 11.0 ns to 9.2 ns, yet we saw a change from 10.7 ns to 8.1 ns, a drop from 39 cycles to 30 cycles.

Of course, we could not go without comparing AMD to Intel, and this is where it got very interesting. The cache configurations of the Ryzen 7 2700X and Core i7-8700K are different:

CPU Cache uArch Comparison

               AMD                          Intel
               Zen (Ryzen 1000) /           Kaby Lake (Core 7000) /
               Zen+ (Ryzen 2000)            Coffee Lake (Core 8000)
  L1-I Size    64 KB/core                   32 KB/core
  L1-I Assoc   4-way                        8-way
  L1-D Size    32 KB/core                   32 KB/core
  L1-D Assoc   8-way                        8-way
  L2 Size      512 KB/core                  256 KB/core
  L2 Assoc     8-way                        4-way
  L3 Size      8 MB/CCX (2 MB/core)         2 MB/core
  L3 Assoc     16-way                       16-way
  L3 Type      Victim                       Write-back

AMD has a larger L2 cache; however, the AMD L3 cache is a non-inclusive victim cache, which, unlike the Intel L3 cache, cannot be prefetched into.
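To illustrate that structural difference, below is a purely conceptual sketch (our own illustration, not AMD's or Intel's actual cache-controller logic) of the two fill paths. With a victim L3, a line arriving from DRAM, whether by demand miss or prefetch, is placed in the L2 only, and the L3 receives lines only when the L2 later evicts them; with the Intel-style write-back (inclusive) L3, the incoming line can be placed in the L3 directly, which is what allows the prefetchers to stage data there.

  /* Conceptual fill-path sketch: victim L3 vs inclusive-style write-back L3.
   * This is our own illustration, not AMD's or Intel's controller logic. */
  #include <stdio.h>

  enum l3_policy { VICTIM_L3, INCLUSIVE_L3 };

  /* Where does a line fetched from DRAM (demand miss or prefetch) land? */
  static void fill_from_dram(enum l3_policy p, long line)
  {
      printf("fill   %#lx -> L2", line);
      if (p == INCLUSIVE_L3)
          printf(" and L3");        /* prefetchers can stage data in the L3 */
      printf("\n");
  }

  /* What happens when the L2 later evicts that line? */
  static void evict_from_l2(enum l3_policy p, long line)
  {
      if (p == VICTIM_L3)
          printf("evict  %#lx -> L3 (victim fill: the only way lines enter this L3)\n", line);
      else
          printf("evict  %#lx already in L3 (inclusive); written back only if dirty\n", line);
  }

  int main(void)
  {
      puts("Victim L3 (Zen / Zen+):");
      fill_from_dram(VICTIM_L3, 0x1000);
      evict_from_l2(VICTIM_L3, 0x1000);

      puts("Write-back inclusive-style L3 (Kaby Lake / Coffee Lake):");
      fill_from_dram(INCLUSIVE_L3, 0x2000);
      evict_from_l2(INCLUSIVE_L3, 0x2000);
      return 0;
  }

The practical consequence is that on Zen the L3 mostly holds data a core has already touched, while Intel can warm its L3 ahead of demand.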

This was an unexpected result, but we can see clearly that AMD has a latency advantage across the L2 and L3 caches. There is a sizeable difference in DRAM latency as well, although the metrics that matter most for core performance are here in the lower levels of the cache.

We can expand this out to include the three AMD chips, as well as Intel’s Coffee Lake and Kaby Lake cores.

This graph uses cycles rather than time: Intel has a small L1 advantage; however, because AMD's Zen designs have larger L2 caches, Intel falls out to its higher-latency L3 earlier. Intel makes quick work of DRAM cycle latency, however.
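To put a concrete (and hypothetical) working set against the table above: a test footprint of around 384 KB still fits inside Zen+'s 512 KB L2 and is served at L2 latency, but has already spilled out of Coffee Lake's 256 KB L2 into its L3. Push past 512 KB and both designs are answering from the L3, and once the footprint exceeds the L3 capacity itself, both head out to DRAM, where Intel's lower cycle count works in its favour.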

Comments

  • Vesperan - Sunday, April 22, 2018 - link

    If by 'pulling a number out of thin air' you mean that I looked at the same Steam hardware survey as you did and also a (year-old) TechReport survey (https://techreport.com/news/31542/poll-what-the-re... ) - then yes, I absolutely pulled a number out of thin air. I actually think 10% of the entire market is, if anything, significantly too high as a maximum for x1080 resolution plus high-refresh-rate monitors, as the market will have a lot of old or cheap monitors out there.

    The fact is, once you say Ryzen is perfectly fine for x1080 (at 60 Hz) gaming and anything at or above x1440 because you're GPU limited (and I'm not saying there is no difference - but is it significant enough?), the argument is no longer 'Ryzen is worse at gaming', but is instead 'Ryzen is just as good for gaming as its Intel counterparts, unless you have a high refresh rate x1080 monitor and a high end graphics card.'

    Which is a bloody corner case. It might be an important one to a bunch of people, but as I said - it is a distinct minority and it is nonsensical to condemn or praise a CPU architecture for gaming in general because of one corner case. The conclusion is too general and sweeping.
  • Targon - Monday, April 23, 2018 - link

    This is where current benchmarks, other than the turn-length benchmark in Civ 6, are not doing enough to show where slowdowns come from. Framerates don't matter as much if the game adds complexity based on CPU processing capability. AI in games, for example, will benefit from additional CPU cores (when you don't use your maxed-out video card for AI, of course).

    I agree that treating game framerates as the be-all and end-all that people look at is far too limited, and we do see other things, Cinebench for example, that help expand the picture, but it doesn't go far enough. I just know that I personally find anything below 8 cores feels sluggish with the number of programs I tend to run at once.
  • GreenReaper - Wednesday, April 25, 2018 - link

    Monitors in use do lag the market. All of my standalone monitors are over a decade old. My laptop and tablet are over five years old. Many people have 4K TVs, but rarely hook them up to their PC.

    It's hard to tell, of course, because some browsers don't fully communicate display capabilities, but 1920x1080 is a popular resolution with maybe 22.5% of the market on it (judging by the web stats of a large art website I run). Another ~13.5% is on 1366x768.

    I think it's safe to say that only ~5% have larger than 1080p - 2560x1440 has kinda taken off with gamers, but even then it only has 3.5% in the Steam survey - and of course, this mainly counts new installations. 4K is closer to 0.3%.

    Performance for resolutions not in use *now* may matter for a new CPU because you might well want to pair it with a new monitor and video card down the road. You're buying a future capability - maybe you don't need HEVC 10-bit 4K 60FPS decode now, but you might later. However, it could be a better bet to upgrade the CPU/GPU later, especially since we may see AV1 in use by then.

    Buying capabilities for the future is more important for laptops and all-in-one boxes, since they're least likely to be upgradable - Thunderbolt and USB display solutions aside.
  • Bourinos - Friday, April 20, 2018 - link

    Streaming at 144Hz? Are you mad???
  • Luckz - Monday, April 23, 2018 - link

    It would be gaming at 144 Hz while streaming at 60 Hz - unless, in Akkuma's fantasy world of 240 Hz monitors, the majority of stream viewers would want 144 Hz streams too ;)
  • Shaheen Misra - Sunday, April 22, 2018 - link

    That's a great point. Every time I have upgraded it has been due to me not hitting 60 fps. I have no interest in 144 Hz/240 Hz monitors. Had a Q9400 till GTA IV released. Bought an FX 8300 due to lag. Used that till COD WW2 stuttered (still not sure why, really). Now I own a 7700K paired with a 1060 6GB. Not the kind of thing you should say out loud, but I'm not gonna buy a GTX 1080 Ti for 1080p/60 Hz. The PCIe x16 slot is here to stay, so I can upgrade whenever. The CPU socket on my Z270 board, on the other hand, is obsolete a year after purchase.
  • Targon - Monday, April 23, 2018 - link

    Just wait until you upgrade to 4k, at which point you will be waiting for a new generation of video card to come out, and then you find that even the new cards can't handle 4k terribly well. I agree about video card upgrades not making a lot of sense if you are not going above 1080p/60Hz.
  • Luckz - Monday, April 23, 2018 - link

    For 4K you've so far always needed SLI, and SLI was always either bad, bugged, or -as of recently- retired. Why they still make multi GPU mainboards and bundle SLI bridges is beyond me.
  • Lolimaster - Thursday, April 19, 2018 - link

    Zen 2 should easily surpass 200 pts in CB15 ST: a minimum of 5-10% [IPC] + a minimum of 5-10% higher clocks, and that's being extremely negative.
  • Lolimaster - Thursday, April 19, 2018 - link

    IPC and clock, no edit button gg.
