Translating to IPC: All This for 3%?

Contrary to popular belief, increasing IPC is difficult. Ensuring that each execution port is fed every cycle requires wide decoders, large out-of-order queues, fast caches, and the right execution port configuration. It might sound easy to simply pile it all on, but both physics and economics get in the way: the chip still has to be thermally efficient, and it has to make money for the company. Every generational design update goes for what is called the ‘low-hanging fruit’: the identified changes that give the most gain for the least effort. Reducing cache latency, for example, is rarely the easiest task, and to non-semiconductor engineers (myself included) it sounds like a lot of work for a small gain.

For our IPC testing, we use the following rules. Each CPU is allocated four cores without extra threading, and power modes are disabled so that the cores run at one specific frequency only. The DRAM is set to what the processor officially supports: DDR4-2933 for the new CPUs, and DDR4-2666 for the previous generation. I have recently seen threads disputing whether this is fair: this is an IPC test, not an instruction efficiency test. Official DRAM support is part of the hardware specification, just as much as the size of the caches or the number of execution ports. Running the two CPUs at the same DRAM frequency would give an unfair advantage to one of them, amounting to either a memory overclock or underclock, and would deviate from the intended design.

So in our test, we take the new Ryzen 7 2700X, the first generation Ryzen 7 1800X, and the pre-Zen Bristol Ridge-based A12-9800, which also sits on the AM4 platform and uses DDR4. We set each processor to four cores, no multi-threading, and 3.0 GHz, then ran through some of our tests.

For this graph we have set the first generation Ryzen 7 1800X as our 100% marker, with the blue columns showing the Ryzen 7 2700X. The problem with trying to identify a 3% IPC increase is that 3% can easily fall within the noise of a benchmark run: if the caches are not fully warmed before the run, for example, performance can vary. As shown above, a good number of tests fall into that +/- 2% range.
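Because both chips run the same core count at the same fixed 3.0 GHz, the ratio of benchmark scores stands in directly for the ratio of IPC. Below is a minimal sketch of the normalization used for the graph; the scores are hypothetical placeholders, not our actual results.

```python
# Minimal sketch: normalize fixed-frequency benchmark scores to a baseline.
# At an identical clock and core count, the score ratio equals the IPC ratio.
# Scores below are hypothetical placeholders, not the article's data.

NOISE_BAND = 2.0  # +/- 2% treated as run-to-run noise

scores_1800x = {"Corona": 100.0, "LuxMark": 250.0, "CineBench R15 1T": 120.0}
scores_2700x = {"Corona": 103.5, "LuxMark": 259.0, "CineBench R15 1T": 121.8}

for test, base in scores_1800x.items():
    pct = (scores_2700x[test] / base - 1.0) * 100.0  # gain vs 1800X = 100%
    verdict = "within noise" if abs(pct) <= NOISE_BAND else "real difference"
    print(f"{test}: {pct:+.1f}% ({verdict})")
```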

However, for compute-heavy tasks, there are 3-4% benefits: Corona, LuxMark, CineBench, and GeekBench are the ones here. We haven’t included the GeekBench sub-test results in the graph above, but most of those fall into the 2-5% range for gains.

If we take out the Cinebench R15 nT result and the GeekBench memory tests, the average of all of the tests comes out to a +3.1% gain for the new Ryzen 7 2700X. That sounds bang on the money for what AMD stated it would do.
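The arithmetic here is nothing exotic: drop the outliers, then take a plain mean of the per-test gains. A sketch of that step, using hypothetical values rather than our full per-test data:

```python
# Sketch of the averaging step: drop known outliers, take a plain mean.
# Gains are percentages versus the 1800X; the values are hypothetical
# placeholders (the full suite in our data averages out to +3.1%).
gains = {
    "Corona": 3.6,
    "LuxMark": 3.4,
    "CineBench R15 1T": 3.0,
    "CineBench R15 nT": 22.0,   # outlier, likely an SMT effect (see below)
    "GeekBench memory": -1.0,   # memory sub-test, excluded
    "Dolphin": 1.5,
}
excluded = {"CineBench R15 nT", "GeekBench memory"}
kept = [gain for name, gain in gains.items() if name not in excluded]
print(f"average gain: {sum(kept) / len(kept):+.1f}%")
```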

Cycling back to that Cinebench R15 nT result, which showed a 22% gain: we also did some IPC testing at 3.0 GHz but with 8C/16T (which we couldn’t compare to Bristol Ridge), and a few of those tests also showed 20%+ gains. This may be a sign that AMD has also adjusted how it manages its simultaneous multi-threading. This requires further testing.

AMD’s Overall 10% Increase

Given some of the benefits of the 12LP manufacturing process, a few editors internally have questioned exactly why AMD hasn’t redesigned certain elements of the microarchitecture to take advantage of it. Ultimately it would appear that the ‘free’ frequency boost made reusing the same design worthwhile: as mentioned previously, 12LP is based on 14LPP with performance improvements, and in the past it might not even have been marketed as a separate process. Pushing through the same design is an easy win, allowing the teams to focus on the next major core redesign.

That all being said, AMD has already stated its intentions for the Zen+ core design: rolling back to CES at the beginning of the year, AMD said that it wanted Zen+ and future products to go above and beyond the ‘industry standard’ of a 7-8% performance gain each year.

Clearly 3% IPC is not enough on its own, so AMD is combining that gain with a +250 MHz increase, which is about another 6% of peak frequency, plus better turbo behavior through Precision Boost 2 / XFR 2. This adds up to about 10%, on paper at least. Benchmarks to follow.
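As a quick back-of-the-envelope check, the two gains compound multiplicatively rather than add; the ~4.1 GHz baseline clock below is our assumption for the peak-frequency math, not an AMD figure.

```python
# Back-of-the-envelope check on the 'about 10%' claim.
# Gains compound multiplicatively; the 4.1 GHz baseline is an assumption.
ipc_gain = 0.03          # ~3% average IPC uplift measured above
freq_gain = 250 / 4100   # +250 MHz on an assumed ~4.1 GHz peak, ~6%
combined = (1 + ipc_gain) * (1 + freq_gain) - 1
print(f"combined uplift: {combined:.1%}")  # ~9.3%, 'about 10%' before turbo
```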

Comments

  • Vesperan - Sunday, April 22, 2018

    If by 'pulling a number out of thin air' you mean that I looked at the same Steam hardware survey as you did, and also a (year old) TechReport survey (https://techreport.com/news/31542/poll-what-the-re... ) - then yes, I absolutely pulled a number out of thin air. I actually think 10% of the entire market will prove significantly too high as a maximum for x1080 high refresh rate monitors, as the market will have a lot of old or cheap monitors out there.

    The fact is, once you say Ryzen is perfectly fine for x1080 (at 60 Hz) gaming and anything at or above x1440 because you're GPU limited (and I'm not saying there is no difference - but is it significant enough?), the argument is no longer 'Ryzen is worse at gaming', but is instead 'Ryzen is just as good for gaming as Intel counterparts, unless you have a high refresh rate x1080 monitor and high end graphics card.'

    Which is a bloody corner case. It might be an important one to a bunch of people, but as I said - it is a distinct minority and it is nonsensical to condemn or praise a CPU architecture for gaming in general because of one corner case. The conclusion is too general and sweeping.
  • Targon - Monday, April 23, 2018

    This is where current benchmarks, other than the turn length benchmark in Civ 6, are not doing enough to show where slowdowns come from. Framerates don't matter as much if the game adds complexity based on CPU processing capability. AI in games, for example, will benefit from additional CPU cores (when you don't use your maxed-out video card for AI, of course).

    I agree that game framerates as the end-all, be-all that people look at is far too limited, and we do see other things, Cinebench for example, that help expand things, but they don't go far enough. I just know that I personally find anything below 8 cores feels sluggish with the number of programs I tend to run at once.
  • GreenReaper - Wednesday, April 25, 2018

    Monitors in use do lag the market. All of my standalone monitors are over a decade old. My laptop and tablet are over five years old. Many people have 4K TVs, but rarely hook them up to their PC.

    It's hard to tell, of course, because some browsers don't fully communicate display capabilities, but 1920x1080 is a popular resolution with maybe 22.5% of the market on it (judging by the web stats of a large art website I run). Another ~13.5% is on 1366x768.

    I think it's safe to say that only ~5% have larger than 1080p - 2560x1440 has kinda taken off with gamers, but even then it only has 3.5% in the Steam survey - and of course, this mainly counts new installations. 4K is closer to 0.3%.

    Performance for resolutions not in use *now* may matter for a new CPU because you might well want to pair it with a new monitor and video card down the road. You're buying a future capability - maybe you don't need HEVC 10-bit 4K 60FPS decode now, but you might later. However, it could be a better bet to upgrade the CPU/GPU later, especially since we may see AV1 in use by then.

    Buying capabilities for the future is more important for laptops and all-in-one boxes, since they're least likely to be upgradable - Thunderbolt and USB display solutions aside.
  • Bourinos - Friday, April 20, 2018

    Streaming at 144Hz? Are you mad???
  • Luckz - Monday, April 23, 2018

    It would be gaming at 144 Hz while streaming at 60 Hz - unless, in Akkuma's fantasy world of 240 Hz monitors, the majority of stream viewers would want 144 Hz streams too ;)
  • Shaheen Misra - Sunday, April 22, 2018

    That's a great point. Every time I have upgraded it has been due to me not hitting 60 fps. I have no interest in 144 Hz/240 Hz monitors. Had a Q9400 till GTA IV released. Bought an FX 8300 due to lag. Used that till COD WW2 stuttered (still not sure why, really). Now I own a 7700K paired with a 1060 6GB. Not the kind of thing you should say out loud, but I'm not gonna buy a GTX 1080 Ti for 1080p/60Hz. The PCIe x16 slot is here to stay, I can upgrade whenever. The CPU socket on my Z270 board, on the other hand, was obsolete a year after purchase.
  • Targon - Monday, April 23, 2018

    Just wait until you upgrade to 4k, at which point you will be waiting for a new generation of video card to come out, and then you find that even the new cards can't handle 4k terribly well. I agree about video card upgrades not making a lot of sense if you are not going above 1080p/60Hz.
  • Luckz - Monday, April 23, 2018

    For 4K you've so far always needed SLI, and SLI was always either bad, bugged, or -as of recently- retired. Why they still make multi GPU mainboards and bundle SLI bridges is beyond me.
  • Lolimaster - Thursday, April 19, 2018

    Zen2 should easily surpass 200 pts in CB15 ST: a minimum of 5-10% plus a minimum of 5-10% higher clocks, even being extremely negative.
  • Lolimaster - Thursday, April 19, 2018

    IPC and clock, no edit button gg.
