Translating to IPC: All This for 3%?

Contrary to popular belief, increasing IPC is difficult. Ensuring that each execution port is fed every cycle requires wide decoders, large out-of-order queues, fast caches, and the right execution port configuration. It might sound easy to pile all of that on, but both physics and economics get in the way: the chip still has to be thermally efficient, and it has to make money for the company. Every generational design update goes for what is called the ‘low-hanging fruit’: the identified changes that give the most gain for the smallest effort. Reducing cache latency, for example, is rarely an easy task, and to non-semiconductor engineers (myself included) it sounds like a lot of work for a small gain.

For our IPC testing, we use the following rules. Each CPU is allocated four cores, without extra threading, and power modes are disabled so that the cores run at a specific frequency only. The DRAM is set to what the processor officially supports, so for the new CPUs that is DDR4-2933, and for the previous generation DDR4-2666. I have recently seen threads disputing whether this is fair: this is an IPC test, not an instruction efficiency test. Official DRAM support is part of the hardware specification, just as much as the size of the caches or the number of execution ports. Running the two CPUs at the same DRAM frequency gives one of them an unfair advantage (effectively a memory overclock or underclock) and deviates from the intended design.

So for this test, we take the new Ryzen 7 2700X, the first generation Ryzen 7 1800X, and the pre-Zen Bristol Ridge based A12-9800, which also sits on the AM4 platform and uses DDR4. We set each processor to four cores, no multi-threading, and 3.0 GHz, then ran through some of our tests.

For this graph we have set the first generation Ryzen 7 1800X as our 100% marker, with the blue columns showing the Ryzen 7 2700X. The problem with trying to identify a 3% IPC increase is that 3% could easily fall within the noise of a benchmark run: if the cache is not fully warmed up before the run, the result can vary. As shown above, a good number of tests fall within that +/- 2% range.
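To make the noise point concrete, here is a minimal Python sketch of the kind of sanity check involved: compare the average gain between two sets of repeated runs against the run-to-run scatter of each set. The scores below are purely illustrative placeholders, not our measured data.

```python
from statistics import mean, stdev

def gain_vs_noise(baseline_runs, new_runs):
    """Relative gain of new vs. baseline, plus the run-to-run noise of each set."""
    base, new = mean(baseline_runs), mean(new_runs)
    gain_pct = (new / base - 1.0) * 100.0
    noise_pct = max(stdev(baseline_runs) / base, stdev(new_runs) / new) * 100.0
    return gain_pct, noise_pct

# Hypothetical Cinebench 1T scores over repeated runs (illustrative numbers only)
r7_1800x = [136.2, 135.8, 136.5, 135.9]
r7_2700x = [140.1, 139.6, 140.4, 139.9]

gain, noise = gain_vs_noise(r7_1800x, r7_2700x)
print(f"gain: {gain:+.1f}%, run-to-run noise: ~{noise:.1f}%")
# A ~3% gain is only meaningful if it clearly exceeds the noise band.
```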

However, compute-heavy tasks do show 3-4% benefits: Corona, LuxMark, Cinebench, and GeekBench are the ones here. We haven’t included the GeekBench sub-test results in the graph above, but most of those fall into the 2-5% range for gains.

If we take out the Cinebench R15 nT result and the GeekBench memory tests, the average of all of the tests comes out to a +3.1% gain for the new Ryzen 7 2700X. That sounds bang on the money for what AMD stated it would do.
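For clarity, the averaging here is straightforward: each test is normalized so the Ryzen 7 1800X scores 100%, and the per-test percentages are averaged. A minimal sketch of that calculation is below; the individual percentages are hypothetical stand-ins, only the method matches what the graph shows.

```python
# Hypothetical per-test results, each normalized so the Ryzen 7 1800X = 100%.
# The test names are from the article; the percentages are illustrative only.
relative_scores = {
    "Corona":        103.5,
    "LuxMark":       104.0,
    "Cinebench 1T":  103.3,
    "GeekBench 1T":  103.8,
    "FCAT":          101.6,
    "Other tests":   102.4,
}

# Simple arithmetic mean of the normalized scores, expressed as a gain over 100%.
average_gain = sum(relative_scores.values()) / len(relative_scores) - 100.0
print(f"Average IPC gain vs. Ryzen 7 1800X: +{average_gain:.1f}%")
```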

Cycling back to that Cinebench R15 nT result, which showed a 22% gain: we also ran some other IPC testing at 3.0 GHz but with 8C/16T (which we could not compare to Bristol Ridge), and a few of those tests also showed 20%+ gains. This is probably a sign that AMD has also adjusted how it manages its simultaneous multi-threading, though this requires further testing.

AMD’s Overall 10% Increase

Given some of the benefits of the 12LP manufacturing process, a few editors internally have questioned exactly why AMD hasn’t redesigned certain elements of the microarchitecture to take advantage of it. Ultimately it would appear that the ‘free’ frequency boost makes reusing the same design worthwhile – as mentioned previously, 12LP is based on 14LPP with performance improvements, and in the past it might not have been marketed as a separate process at all. So pushing through the same design is an easy win, allowing the teams to focus on the next major core redesign.

That all being said, AMD has already stated its intentions for the Zen+ core design – going back to CES at the beginning of the year, AMD said that it wanted Zen+ and future products to go above and beyond the ‘industry standard’ of a 7-8% performance gain each year.

Clearly a 3% IPC gain is not enough on its own, so AMD is combining it with the +250 MHz increase, which is about another 6% in peak frequency, plus better turbo behavior from Precision Boost 2 / XFR2. Together this comes to about 10%, on paper at least. Benchmarks to follow.
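As a quick sanity check on that figure, the two gains multiply rather than add. Here is the back-of-the-envelope arithmetic, assuming roughly 3% IPC and roughly 6% peak frequency, and ignoring any extra benefit from the improved turbo behavior:

```python
# Back-of-the-envelope combination of the two gains: (1 + ipc) * (1 + freq) - 1
ipc_gain = 0.03   # ~3% IPC at fixed clocks, from the testing above
freq_gain = 0.06  # ~6% higher peak frequency (+250 MHz), per AMD's figures

combined = (1 + ipc_gain) * (1 + freq_gain) - 1
print(f"Combined on-paper gain: ~{combined * 100:.1f}%")  # ~9.2%, i.e. roughly 10%
```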

Comments

  • Aichon - Tuesday, April 24, 2018 - link

    Wouldn’t it actually suggest that there’s a difference between your setup and theirs that favors yours? For instance, in the case of your earlier benchmark, you admittedly overclocked your 1600X and they didn’t, so don’t you think that might account for the 10% difference you saw over theirs? And in the case of the 8700K, you omitted key contextual information (e.g. is your system updated, and if so, which updates to what components?) that would allow others to verify that it was an apples-to-apples comparison.

    Ryan may very well have made a mistake and you may very well be entirely correct about all of this, but claiming he’s a liar on the basis of your overclocked system and then following it up with claims about the 8700K that lack the information necessary for someone else to verify your data does not help your case.

    Meanwhile, my horse in this race died years ago. The latest product I bought from either team red or blue was a 2011 Mac Mini that had an Intel CPU and an AMD GPU. All of which is to say, I’m a fan of passionate debate, but let’s keep aspersions to a minimum and focus on getting to the truth.
  • tn_techie - Thursday, April 19, 2018 - link

    On the first paragraph, Ian writes the following:
    "This is not AMD’s next big microarchitecture, which we know is called Rome (or Zen 2) on 7nm."
    That is incorrect. Rome is the codename for the upcoming EPYC 2nd Gen CPUs that will replace the current Naples products, and not the codename for AMD's next gen CPU core arch.
  • msroadkill612 - Thursday, April 19, 2018 - link

    "anything that is hard on a single-threaded, such as our FCAT test or Cinebench 1T, Intel wins hands down"

    Yeah, I know, it’s just an indicator, but it’s telling that the test seems as silly as the emphasis on IPC due to shrill/shill gamers - who would use single thread for Cinebench?
  • Luckz - Monday, April 23, 2018 - link

    Single thread Cinebench 15 score is *the* indicator of IPC used in meme-filled debates on online forums. It's just an important metric right now. And unlike, uh, GeekBench, CPU-Z, and whatever else claims to judge single thread score, it's pretty accurate.
  • peevee - Thursday, April 19, 2018 - link

    "ranging from the Silent 65W Wraith models"

    You mean Stealth, right?
  • Ryan Smith - Thursday, April 19, 2018 - link

    Indeed we do. Thanks!
  • fallaha56 - Thursday, April 19, 2018 - link

    Techradar also confirming massive performance hit from Intel patches

    E.g. 1000 points in single-core Geekbench
  • ACE76 - Thursday, April 19, 2018 - link

    Techradar has AMD beating Intel in pretty much everything...I guess Intel fanboys could just run their systems unpatched and claim to be kings...lol.
  • msroadkill612 - Thursday, April 19, 2018 - link

    My memory is bad, but not that bad.

    I read a lot of CPU reviews, and this is the first that has made it clear that these results are post-security-patch.

    Could this be the first honest comparison review of Ryzen - new OR old?

    It certainly stirred a fanboi wasp nest.
  • ACE76 - Thursday, April 19, 2018 - link

    Anandtech isn't alone... Techradar has AMD winning on fully patched systems as well...the sites that have Intel winning are using either old scores or unpatched systems for Smeltdown.
