After an already packed Computex 2016 event with Radeon Polaris and Bristol Ridge/Stoney Ridge news, AMD CEO Dr. Lisa Su had one final surprise to close out the company’s presentation: Zen, in the flesh.

Zen taped out earlier this year, and AMD is now working on bringing it up in their labs. To that end, Dr. Su pulled out a very early engineering sample of what will become Summit Ridge, AMD's 8-core Zen-based CPU. Summit Ridge will use AMD's new AM4 socket – currently being rolled out for Bristol Ridge – making it a drop-in platform replacement.

Little in the way of new details on Summit Ridge and Zen was released, but Dr. Su confirmed that AMD is still targeting a 40% IPC increase. On the development front, the chip still has some work ahead of it, but AMD is at the point where they will begin providing engineering samples to their top-tier, high-profile customers in a few weeks. Wider sampling to the larger OEM base will in turn take place in Q3 of this year. AMD has not mentioned a retail launch date, but keep in mind that there is typically a significant lag between OEM sampling and retail products.

Finally, Dr. Su also reiterated that Zen will be the basis of a range of products for AMD. Along with the desktop CPU, AMD will be using Zen as the basis of their 8th generation APU. Further down the line, it will appear in server and embedded products as well.

Comments

  • tamalero - Wednesday, June 8, 2016 - link

    I understood that reference!
  • 0ldman79 - Thursday, June 9, 2016 - link

    I didn't. Need more coffee.
  • Morawka - Wednesday, June 1, 2016 - link

    So with a 40% IPC increase, where does this theoretically put them relative to a Skylake i7?

    Also, is hyperthreading something new compared to current AMD CPUs?
  • Morawka - Wednesday, June 1, 2016 - link

    Also, why so little cache, when Intel gives each core 1.5 or 2MB?
  • darkvader75 - Wednesday, June 1, 2016 - link

    AMD uses L3 cache as well, so be cautious about comparing cache sizes until the full specifications are announced.
  • name99 - Wednesday, June 1, 2016 - link

    The sheer size of the cache is becoming ever less important in the face of more sophisticated cache management. The question that matters is not how big the caches are, but how smart the management algorithms are. These include issues like:
    - where are new lines placed in the recency chain?
    - how rapidly are lines promoted up this chain?
    - do you handle prefetched lines differently?
    - do you handle I-lines differently?
    - are your caches inclusive, exclusive, or neither?
    - how do your outer caches know that inner-cache lines are aggressively in use and so should not be dropped? (Relevant to inclusive caches.)
    - how do you prevent thrashing/streaming cores from trashing the LLC for everyone else?
    etc., etc.
    Getting these questions right is worth 30% or more in performance, a whole lot more than just adding a MiB of cache here or there. (A toy sketch of the insertion-point idea follows below.)
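    To make the insertion/promotion point concrete, here is a minimal toy sketch (my own illustration in Python, not any vendor's actual replacement policy): a fully associative cache where new lines either enter at the MRU end (classic LRU) or are parked at the LRU end and only promoted on reuse, roughly the spirit of thrash-resistant bimodal/RRIP-style policies.

    ```python
    from collections import OrderedDict

    class ToyLLC:
        """Fully associative toy cache with a configurable insertion point.

        insert_mru=True  -> classic LRU: new lines enter at the MRU end.
        insert_mru=False -> new lines are parked at the LRU end and only get
                            promoted if they are actually reused (roughly the
                            spirit of bimodal-insertion / RRIP-style policies).
        """
        def __init__(self, num_lines, insert_mru=True):
            self.num_lines = num_lines
            self.insert_mru = insert_mru
            self.lines = OrderedDict()   # ordered from LRU end to MRU end
            self.hits = 0
            self.misses = 0

        def access(self, addr):
            if addr in self.lines:
                self.hits += 1
                self.lines.move_to_end(addr)              # promote on reuse
                return
            self.misses += 1
            if len(self.lines) >= self.num_lines:
                self.lines.popitem(last=False)            # evict from the LRU end
            self.lines[addr] = None                       # insert at the MRU end
            if not self.insert_mru:
                self.lines.move_to_end(addr, last=False)  # park at the LRU end

    # Interleave a small, hot working set (reused many times) with a long
    # streaming scan that is never reused.
    hot = [i for _ in range(50) for i in range(16)]   # 16 hot lines, 50 passes
    stream = list(range(1000, 1800))                  # 800 one-shot lines
    trace = [a for pair in zip(hot, stream) for a in pair]

    for insert_mru in (True, False):
        llc = ToyLLC(num_lines=24, insert_mru=insert_mru)
        for addr in trace:
            llc.access(addr)
        print(f"insert_mru={insert_mru}: hits={llc.hits}, misses={llc.misses}")
    ```

    On a mixed trace like this, the park-at-LRU variant keeps most of the hot working set resident while classic LRU gets thrashed by the streaming scan, which is (very roughly) the kind of win smarter insertion and promotion policies are after.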
  • dgingeri - Wednesday, June 1, 2016 - link

    Intel only gives their cores 64KB of L1 cache and 256KB of L2 cache. Their L3 cache is usually about 1.5-2.5MB per core (lower end chips have less; the Core i5-6600K, for example, has only 6MB of L3 across 4 cores), with all of it accessible to all cores, unlike the L1 and L2 caches.

    AMD currently gives their cores double the L1 cache and quadruple the L2 cache. From what I've seen of the Zen architecture rumors, it will still be double on both counts, with more L3 cache. I don't see any mention of cache in the above article.

    So why do you ask "why so little cache"?
  • JoeMonco - Wednesday, June 1, 2016 - link

    "Their L3 cache is usually about 1.5-2.5MB per core (lower end chips have less, like the Core i5 6600k has only 6MB of L3 across 4 cores) "

    Your parenthetical statement doesn't make sense. 6MB / 4 = 1.5 MB. How is that less than the 1.5-2MB range you state just prior?
  • dgingeri - Wednesday, June 1, 2016 - link

    I just said "less" not "less than 1.5MB". The lower end chips have less cache, usually 1.5MB, while the higher end chips have more, 2-2.5MB/core. It was more of a qualifying remark to the "1.5-2.5MB per core" statement. Sorry if that got you confused.

    Now, on to what I've learned about Zen. From what I've read, AMD has arranged Zen cores in quad-core modules, each with a shared 8MB L3 cache. So an 8-core Zen-based chip is said to have 16MB of L3. That's 2MB of L3 per core.

    The newest FX and A-series chips with integrated graphics (Bristol Ridge) are said to use Excavator cores with a shared 2MB L3 cache. I think that's probably where Morawka got the "so little cache" idea. The trick is that AMD uses an exclusive cache scheme (possibly at the cost of a little performance), so the data in the L1 and L2 caches isn't duplicated in the L3 cache. Intel uses an inclusive cache, and 2MB of their L3 is lost to copies of each core's L1 and L2.

    An FX processor with the latest core would therefore have caching roughly equivalent to 6.25MB of Intel-style cache (4x 64KB L1 + 2x 2MB L2 + 2MB L3 = 6.25MB), whereas a Core i5-6600K has only 6MB of usable cache, because the L1 data is duplicated in the L2 and L3 and the L2 is duplicated in the L3. So the 6MB of L3 is all the Core i5 has.
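    For what it's worth, here is that back-of-the-envelope arithmetic as a tiny script. The figures are the ones assumed in the comment above (not confirmed specs), and the model ignores partial inclusion, victim caches, and everything else real hierarchies do:

    ```python
    def usable_capacity_mb(l1_kb_per_core, cores, l2_mb_each, l2_count, l3_mb, inclusive_l3):
        """Very rough 'unique data' capacity of a cache hierarchy, in MB.

        Exclusive hierarchy: each level holds distinct lines, so sizes add up.
        Fully inclusive L3: everything in the L1s/L2s is duplicated in the L3,
        so the L3 size is the effective total.
        """
        l1_total = (l1_kb_per_core * cores) / 1024.0
        l2_total = l2_mb_each * l2_count
        if inclusive_l3:
            return l3_mb
        return l1_total + l2_total + l3_mb

    # Exclusive AMD-style example from the comment: 4x 64KB L1 + 2x 2MB L2 + 2MB L3
    print(usable_capacity_mb(64, 4, 2.0, 2, 2.0, inclusive_l3=False))  # 6.25

    # Inclusive Intel-style example from the comment: Core i5-6600K with 6MB L3
    print(usable_capacity_mb(64, 4, 0.25, 4, 6.0, inclusive_l3=True))  # 6.0
    ```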
  • bcronce - Thursday, June 2, 2016 - link

    Exclusive caches can take sizable performance hits in cases where data is shared among cores: they have to do a lot of cache snooping, which increases latency and needlessly consumes bandwidth. Less wasted space, but at a cost.
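    A crude way to picture the difference (my own toy model, not how any specific CPU implements coherence; real designs add snoop filters and directories precisely to avoid this): with an inclusive LLC, a miss in the LLC proves that no private cache holds the line, while without that guarantee every peer core may have to be probed.

    ```python
    def peer_probes(found_in_llc, peer_count, inclusive_llc):
        """How many peer private caches must be snooped to locate a line.

        Toy model only: an inclusive LLC acts as a filter (a miss there means no
        private cache has the line), while a non-inclusive/exclusive LLC gives no
        such guarantee, so every peer has to be probed.
        """
        if found_in_llc or inclusive_llc:
            return 0            # LLC answers, or inclusion rules the peers out
        return peer_count       # must broadcast a snoop to every other core

    print(peer_probes(found_in_llc=False, peer_count=7, inclusive_llc=True))   # 0
    print(peer_probes(found_in_llc=False, peer_count=7, inclusive_llc=False))  # 7
    ```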
