Taking Small Steps Forward

AMD today unveiled its new Opteron 6300 series server processors, code-named Abu Dhabi. The Opteron 6300 contains the new Piledriver cores, an evolutionary improvement over the Bulldozer cores.

We did an in-depth analysis of the Bulldozer core and came to the conclusion that three primary weak spots were responsible for its underwhelming performance:

  1. The L1 instruction cache: when running two threads simultaneously, the cache miss rate increased significantly; the associativity is too low (see the sketch after this list).
  2. The branch misprediction penalty
  3. Lower than expected clock speed
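
To see why low associativity hurts once two threads share one instruction cache, consider the toy model below: a small LRU set-associative cache is fed by hot code blocks that alias into the same sets. The combined footprint fits in the cache, so the extra misses in the two-thread, 2-way case are pure conflict misses. The capacity, line size, and block placement are illustrative choices for this sketch, not an exact model of Bulldozer's shared L1I.

```c
/* icache_sketch.c -- toy LRU set-associative cache model (illustrative only,
 * not a cycle-accurate model of Bulldozer's shared L1I).
 * Two "threads" fetch from hot code blocks placed 64KB apart, so the blocks
 * alias into the same sets.  The combined footprint (32KB) fits in the 64KB
 * cache, so any extra misses are pure conflict misses from low associativity.
 * Build: cc -O2 icache_sketch.c */
#include <stdio.h>
#include <string.h>

#define LINE     64
#define CAPACITY (64 * 1024)
#define MAXWAYS  8
#define MAXSETS  (CAPACITY / LINE)

static unsigned long tag[MAXSETS][MAXWAYS];
static unsigned long age[MAXSETS][MAXWAYS];
static unsigned long clk, hits, misses;

static void fetch(unsigned long addr, int ways)         /* ways: 1..MAXWAYS */
{
    int sets = CAPACITY / (LINE * ways);                /* capacity held constant */
    unsigned long ln = addr / LINE;
    int s = (int)(ln % (unsigned long)sets);
    unsigned long t = ln / (unsigned long)sets + 1;     /* +1: 0 means "empty"    */
    int victim = 0;
    for (int w = 0; w < ways; w++) {
        if (tag[s][w] == t) { age[s][w] = ++clk; hits++; return; }
        if (age[s][w] < age[s][victim]) victim = w;
    }
    misses++;                                           /* evict the LRU way      */
    tag[s][victim] = t;
    age[s][victim] = ++clk;
}

/* Each thread executes two 8KB "functions" in a loop; the two threads' fetch
 * streams are interleaved line by line, as on a shared front end. */
static void run(int ways, int threads)
{
    const unsigned long fsize = 8 * 1024, stride = 64 * 1024;
    memset(tag, 0, sizeof tag); memset(age, 0, sizeof age);
    clk = hits = misses = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int f = 0; f < 2; f++)
            for (unsigned long off = 0; off < fsize; off += LINE)
                for (int th = 0; th < threads; th++)
                    fetch((unsigned long)(th * 2 + f) * stride + off, ways);
    printf("%d-way, %d thread(s): miss rate %5.1f%%\n",
           ways, threads, 100.0 * misses / (hits + misses));
}

int main(void)
{
    run(2, 1);   /* one thread, 2-way:  only cold misses                     */
    run(2, 2);   /* two threads, 2-way: the aliasing blocks thrash the sets  */
    run(4, 2);   /* two threads, 4-way: same capacity, conflicts disappear   */
    return 0;
}
```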

Secondary bottlenecks were the high latency and low bandwidth of the L2 cache and the very high latency of the L3 cache, which significantly increased the overall memory latency.

The lack of clock speed has been partially addressed in Piledriver with the use of hard-edge flops and a resonant clock mesh, which is especially useful at clock speeds beyond 3GHz. Vishera, the desktop chip with Piledriver cores, runs at clock speeds of up to 4GHz, 11% higher than Bulldozer, without any measurable increase in power consumption. As you can see further below, the clock speed increases are a lot smaller for the Opteron 6300: about 4-6%. The fastest but hottest (140W TDP) Opteron now clocks at 2.8GHz instead of 2.7GHz, and the "regular" Opteron 6380 now runs at 2.5GHz instead of 2.4GHz (Opteron 6278). That means the Opteron is still not able to fully leverage its deeply pipelined, high clock speed architecture: the 115W power envelope is still limiting the maximum clock speed. The more complex and less deeply pipelined Intel Xeon E5 runs at 2.7GHz with a 115W TDP.

Piledriver also comes with a few small improvements in the branch prediction unit. Two of the three worst bottlenecks have thus been widened somewhat. The most important bottleneck, the L1 instruction cache, will only be fixed in the next iteration, Steamroller.

The L2 cache latency and bandwidth have not changed, but AMD did make quite a few optimizations. From AMD engineering:

"While the total bandwidth available between the L2 and the rest of the core did not change from Bulldozer to Piledriver, the existing bandwidth is now used more effectively. Some unnecessary instruction decode hint data writes to the L2 that were present in Bulldozer have been removed in Piledriver. Also, some misses sent to the L2 that would get canceled in Bulldozer are prevented from being sent to the L2 at all in Piledriver. This allows the L2’s existing resources to be applied toward more useful work.”

We talked about the whole list of other improvements when we looked at Trinity:

  • Smarter prefetching
  • A perceptron branch predictor that supplements the primary BPU (sketched after this list)
  • Larger L1 TLB
  • Schedulers that free up tokens more quickly
  • Faster FP and integer dividers and faster SYSCALL/SYSRET (system call instructions)
  • Faster Store-to-Load forwarding
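
The perceptron predictor mentioned in the list above lends itself to a short code illustration: it keeps a table of small signed weight vectors, predicts a branch from the sign of the dot product between the selected weights and the recent global branch history, and nudges the weights whenever the prediction was wrong or the confidence was low. The sketch below follows the well-known Jiménez/Lin formulation; the table size, history length, and threshold are illustrative assumptions, as AMD has not published the details of its own implementation.

```c
/* perceptron_bp.c -- sketch of a perceptron branch predictor in the style of
 * Jimenez & Lin.  Parameters are illustrative; AMD has not published
 * Piledriver's actual design.  Build: cc -O2 perceptron_bp.c */
#include <stdio.h>
#include <stdlib.h>

#define HIST  16                              /* global history length       */
#define TABLE 256                             /* number of perceptrons       */
#define THETA ((int)(1.93 * HIST + 14))       /* common training threshold   */

static int w[TABLE][HIST + 1];                /* weights; w[][0] is the bias */
static int hist[HIST];                        /* +1 = taken, -1 = not taken  */

static int predict(unsigned pc, int *y_out)
{
    int *wt = w[pc % TABLE];
    int y = wt[0];
    for (int i = 0; i < HIST; i++)
        y += wt[i + 1] * hist[i];             /* dot product with history    */
    *y_out = y;
    return y >= 0;                            /* predict taken if y >= 0     */
}

static void train(unsigned pc, int y, int taken)
{
    int t = taken ? 1 : -1;
    int *wt = w[pc % TABLE];
    /* Update only on a misprediction or when confidence |y| is low. */
    if ((y >= 0) != taken || abs(y) <= THETA) {
        wt[0] += t;
        for (int i = 0; i < HIST; i++)
            wt[i + 1] += t * hist[i];
    }
    for (int i = HIST - 1; i > 0; i--)        /* shift outcome into history  */
        hist[i] = hist[i - 1];
    hist[0] = t;
}

int main(void)
{
    /* A loop branch taken 7 times, then not taken.  A plain two-bit counter
     * mispredicts the loop exit every iteration; the perceptron learns the
     * pattern from its global history. */
    long correct = 0, total = 100000;
    for (long n = 0; n < total; n++) {
        int taken = (n % 8) != 7;
        int y, pred = predict(0x400123u, &y);
        correct += (pred == taken);
        train(0x400123u, y, taken);
    }
    printf("prediction accuracy: %.1f%%\n", 100.0 * correct / total);
    return 0;
}
```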

Lastly, the new Opteron 6300 now supports one DDR3 DIMM per channel at 1866MHz. With two DIMMs per channel (2 DPC), you get 1600MHz at 1.5V.
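
As a quick sanity check on what that buys you: a DDR3 channel is 64 bits (8 bytes) wide, so the theoretical peak per G34 socket (four channels, assuming every channel runs at the rated speed) works out as in the back-of-the-envelope sketch below; sustained bandwidth is of course lower.

```c
/* ddr3_peak.c -- back-of-the-envelope peak bandwidth for the supported DDR3
 * speeds.  Assumes the standard 64-bit (8-byte) DDR3 channel and four
 * channels per G34 socket; sustained bandwidth is lower in practice. */
#include <stdio.h>

int main(void)
{
    const double bytes_per_transfer = 8.0;     /* 64-bit channel            */
    const int    channels = 4;                 /* per G34 socket            */
    const int    speeds[] = { 1866, 1600 };    /* MT/s: 1 DPC vs 2 DPC      */

    for (int i = 0; i < 2; i++) {
        double gbs = (double)speeds[i] * 1e6 * bytes_per_transfer * channels / 1e9;
        printf("DDR3-%d: %5.1f GB/s theoretical peak per socket\n", speeds[i], gbs);
    }
    return 0;
}
```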

We're still working to get hardware in house for testing, but we wanted to provide some analysis of what to expect with Abu Dhabi in the meantime.

Comments

  • Notperk - Monday, November 5, 2012 - link

    Wouldn't it be better to compare these CPUs to Intel's E7 series enterprise server CPUs? I ask because an Opteron 6386 SE is technically two CPUs in one. Therefore, two of these would actually be four CPUs and would be a direct competitor (at least in terms of class) to four E7-4870s. If you went even further, four of those Opterons would be a competitor to eight E7-8870s. I understand that, performance-wise, these are more similar to the E5s, but it just makes more sense to me to place them higher, as enterprise server CPUs.
  • MrSpadge - Monday, November 5, 2012 - link

    It's actually the other way around: there may be two dies inside each CPU, but even combined they get less work done than the Intel chips in most situations. However, comparing a 4-socket Opti system with a 2-socket Intel system, which cost approximately the same to purchase, can get very interesting: massive memory capacity and bandwidth, lots of threads for integer throughput, and quite a few FPUs. With the drawback of much higher running costs due to electricity, of course.
  • leexgx - Tuesday, November 6, 2012 - link

    Happy that the reviewer got the module/core distinction right (the integer cores are more like Hyper-Threading, but not quite).

    In any case, AMD's module count should be compared to Intel's CPU core count (AMD should be marketing them the same way: 4-module CPUs with core assist, which are slower than or about the same as a Phenom X4 in real-world use; it's like saying an i7 is an 8-core CPU when it's about the same speed as an i5 that lacks HT).
  • thebluephoenix - Tuesday, November 6, 2012 - link

    E7 is Nehalem, old technology. The E5-2687W and E5-2690 are the actual competition (~double 2600K vs. ~double FX-8350).
  • JohanAnandtech - Tuesday, November 6, 2012 - link

    Minor nitpick: E7 is Westmere, improved Nehalem.

    http://www.anandtech.com/show/4285/westmereex-inte...

    But the E5 is indeed the real competition. The E7 is less about performance/watt and more about RAS and high scalability (core counts of 40, up to 80 threads).
  • alpha754293 - Monday, November 5, 2012 - link

    I don't know if I would say that. Of course, I'm biased because I'm somewhat in HPC. But I think that HPC benchmarks will also give an idea of how well (or how poorly) a highly multi-threaded, multi-processor-aware application is going to perform.

    In some HPC cases, having more integer cores is probably going to be WORSE, since they're still fighting for FPU resources. And running it on more processors isn't always necessarily better either (higher inter-core communication traffic).
  • MrSpadge - Monday, November 5, 2012 - link

    If you compare a 4-socket Opti to a 2-socket Intel (comparable purchase cost) you can get massive memory bandwidth, which might be enough to tip the scale in the Opti's favor in some HPC applications. They need to profit from many cores and require this bandwidth, though.

    Personally, for general HPC jobs I prefer fewer cores with higher IPC and clock speed (i.e. Intel), as they're more generally useful.
  • alpha754293 - Friday, November 9, 2012 - link

    I can tell you from experience that it really depends on the type of HPC workload.

    For FEA, if you can store the large matrices in the massive amount of memory that the Opterons can handle (up to 512GB for a quad-socket system), it can potentially help*.

    *You have to disable swap so that you don't get bottlenecked by the swap I/O performance.

    Then you'd really be able to rock 'n roll being able to solve the matrices entirely in-core.

    For molecular dynamics though - it's not really that memory intensive (compared to structural mechanics FEA) but it's CPU intensive.

    For CFD, that's also very CPU intensive.

    And even then it also depends too on how the solvers are written and what you're doing.

    CFD - you need to pass the pressure, velocity, and basically the state information about the fluid parcel from one cell to another; so if you partition the model at that junction and need to transfer information from a cell on one core on one socket to another core sitting on another CPU in another physical socket, then it's memory I/O limited. And most commercial CFD codes that I know of that enable MPI/MPP processing actually do somewhat of a local remesh at the partition boundaries, so they create extra elements just to facilitate the data/information transfer and exchange (and to make sure that the data/information is stable); a minimal sketch of that kind of boundary exchange follows the comments.

    So there's a LOT that goes into it.

    Same with crash safety simulations and explicit dynamics structural mechanics (like LS-DYNA), because that's an energy solver: what happens to one element will influence what happens at your current element, and that in turn will influence what happens at the next element. And for LS-DYNA, *MPP_DECOMPOSITION_<OPTION> lets you specify exactly how you want the problem to be broken down (and you can do some pretty neat stuff with it) in order to make the MPI/MPP solver even more efficient.

    If you have a problem where what happens to one element doesn't really have that much of an impact on another element (such as fatigue analysis, done at the finite element level), you can process all of the elements individually, so having lots of cores means you can run it a lot faster.

    But for problems where there's a lot of "bi-directional" data/communication (hypersonic flow/shock waves, for example), then I THINK (if I remember correctly) the communication penalty is something like O(n^2) or O(n^3). So the CS side of an HPC problem is trying to optimize between these two: run as many cores as possible, with as little communication as possible (so it doesn't slow you down), as fast as possible, as independently as possible, and pass ONLY the information you NEED to pass along, WHEN you need to pass it along, and try to do as much of it in-core as possible.

    And then, to throw a wrench into that whole thing, the physics of the simulations is basically a freakin' hurricane on that whole parade (the physics makes a lot of that either very difficult, near impossible, or outright impossible).
  • JohanAnandtech - Monday, November 5, 2012 - link

    I would not even dream of writing that! HPC software can be so much more fun than other enterprise software: no worries about backups or complex high availability setups. No, just focusing on performance and enjoying the power of your newest chip.

    I was talking about the HPC benchmarks that AMD reports. Not all HPC benchmarks can be recompiled and not all of them will show such good results with FMA. Those are very interesting, but they only give a very limited view.

    For the rest: I agree with you.
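
The partition-boundary traffic described in the HPC discussion above is typically implemented as an MPI halo (ghost cell) exchange: each rank owns a slab of the domain plus one layer of ghost cells mirroring its neighbours' edge values, and those ghosts are refreshed every time step. Below is a minimal 1D sketch of that pattern; the field name, sizes, and the placeholder smoothing update are assumptions for this sketch and are not taken from any particular solver.

```c
/* halo_1d.c -- minimal 1D halo (ghost cell) exchange, the communication
 * pattern partitioned CFD/FEA solvers use at domain boundaries.
 * Hypothetical field and sizes; build with mpicc, run with mpirun. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NLOCAL 1024                      /* interior cells per rank */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* u[0] and u[NLOCAL+1] are ghost cells holding the neighbours' edge values. */
    double *u = calloc(NLOCAL + 2, sizeof(double));
    for (int i = 1; i <= NLOCAL; i++) u[i] = rank;   /* dummy initial state */

    int left  = (rank == 0)         ? MPI_PROC_NULL : rank - 1;
    int right = (rank == nproc - 1) ? MPI_PROC_NULL : rank + 1;

    /* One halo exchange per time step: send my edge cells, receive theirs. */
    for (int step = 0; step < 100; step++) {
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[NLOCAL], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Local update (placeholder for the real solver): a simple sweep
         * that reads the freshly received ghost values. */
        for (int i = 1; i <= NLOCAL; i++)
            u[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1];
    }

    if (rank == 0) printf("done: %d ranks, %d cells each\n", nproc, NLOCAL);
    free(u);
    MPI_Finalize();
    return 0;
}
```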
