Bulldozer for Servers: Testing AMD's "Interlagos" Opteron 6200 Series
by Johan De Gelas on November 15, 2011 5:09 PM EST

Benchmark Configuration
Since AMD sent us a 1U Supermicro server, we had to resort to testing our 1U servers again, which is why we went back to the ASUS RS700 for the Xeon. That is a bit unfortunate, as 1U servers on average have a worse performance/watt ratio than other form factors such as 2U and blades. Of course, 1U still makes sense in low cost, high density HPC environments.
Supermicro A+ server 1022G-URG (1U Chassis)
CPU | Two AMD Opteron "Bulldozer" 6276 at 2.3GHz or two AMD Opteron "Magny-Cours" 6174 at 2.2GHz |
RAM | 64GB (8x8GB) DDR3-1600 Samsung M393B1K70DH0-CK0 |
Motherboard | SuperMicro H8DGU-F |
Internal Disks | 2 x Intel SLC X25-E 32GB or 1 x Intel MLC SSD510 120GB |
Chipset | AMD Chipset SR5670 + SP5100 |
BIOS version | v2.81 (10/28/2011) |
PSU | SuperMicro PWS-704P-1R 750Watt |
The AMD CPUs have four memory channels per CPU. The new Interlagos Bulldozer CPU supports DDR3-1600, so our dual-CPU configuration gets eight DIMMs, one per channel, for maximum bandwidth.
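As a quick sanity check on those channel counts, the theoretical peak memory bandwidth per socket can be sketched with a back-of-the-envelope calculation (our own illustration; the helper function is hypothetical, not from any benchmark tool):

```python
# Back-of-the-envelope peak DDR3 bandwidth per socket.
def peak_bandwidth_gbps(mt_per_s, channels, bus_width_bytes=8):
    """Theoretical peak in GB/s: transfers/s x bytes per transfer x channels.

    Each DDR3 channel is 64 bits (8 bytes) wide; mt_per_s is the
    data rate in megatransfers per second (e.g. 1600 for DDR3-1600).
    """
    return mt_per_s * 1e6 * bus_width_bytes * channels / 1e9

# Interlagos: four DDR3-1600 channels per socket
print(peak_bandwidth_gbps(1600, 4))  # 51.2 GB/s per socket
# Westmere-EP Xeon: three DDR3-1333 channels per socket
print(peak_bandwidth_gbps(1333, 3))  # ~32.0 GB/s per socket
```

These are theoretical ceilings; real-world sustained bandwidth is considerably lower, but the ratio shows why the Opteron platform wants one DIMM per channel populated.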
Asus RS700-E6/RS4 1U Server
CPU | Two Intel Xeon X5670 at 2.93GHz (6 cores) or two Intel Xeon X5650 at 2.66GHz (6 cores) |
RAM | 48GB (12x4GB) Kingston DDR3-1333 FB372D3D4P13C9ED1 |
Motherboard | Asus Z8PS-D12-1U |
Chipset | Intel 5520 |
BIOS version | 1102 (08/25/2011) |
PSU | 770W Delta Electronics DPS-770AB |
To speed up testing, we tested the Intel Xeon and AMD Opteron systems in parallel. As we didn't have more than eight 8GB DIMMs, we used our 4GB DDR3-1333 DIMMs in the Xeon system. It only gets 48GB, but this is no disadvantage, as our benchmark with the highest memory footprint (vApus FOS, 5 tiles) uses no more than 36GB of RAM.
We measured the difference in power draw between 12x4GB and 8x8GB of RAM and corrected our power measurements accordingly (the differences were very small). There was no alternative, as our Xeon has three memory channels per CPU and cannot be outfitted with the same amount of RAM as our Opteron system (four channels).
We chose the Xeons based on AMD's positioning. The Xeon X5649 is priced at the same level as the Opteron 6276, but we didn't have an X5649 in the labs. As we suggested earlier, the Opteron 6276 should reach the performance of the X5650 to be attractive, so we tested with the X5670 and X5650. Because of time constraints, some tests were run only with the X5670.
Common Storage System
For the virtualization tests, each server gets an Adaptec 5085 PCIe x8 RAID controller (driver aacraid v1.1-5.1[2459] b 469512) connected to six Cheetah 300GB 15,000 RPM SAS disks (RAID-0) inside a Promise JBOD J300s. The virtualization testing requires more storage IOPS than our standard Promise JBOD with six SAS drives can provide, so we added internal SSDs:
- We installed the Oracle Swingbench VMs (vApus Mark II) on two internal X25-E SSDs (no RAID). The Oracle database is only 6GB in size. We test with two tiles, and on each SSD, each OLTP VM accesses its own database data. All other VMs (web, SQL Server OLAP) are stored on the Promise JBOD (see above).
- With vApus FOS, Zimbra is the I/O intensive VM. We spread the Zimbra data over the two Intel X25-E SSDs (no RAID). All other VMs (web, MySQL OLAP) get their data from the Promise JBOD (see above).
We monitored disk activity, and the physical disk adapter latency (as reported by VMware vSphere) stayed between 0.5 and 2.5 ms.
Software configuration
All vApus testing was done on vSphere 5, or more specifically VMware ESXi 5.0.0 (b 469512 - VMkernel SMP build-348481 Jan-12-2011 x86_64). All VMDKs are thick provisioned, independent, and persistent. The power policy is "Balanced Power" unless indicated otherwise. All other testing was done on Windows 2008 R2 SP1.
Other notes
Both servers were fed by a standard European 230V (16 Amps max.) power line. The room temperature was monitored and kept at 23°C by our Airwell CRACs.
We used the Racktivity ES1008 Energy Switch PDU to measure power. Using a PDU for accurate power measurements might seem pretty insane, but this is not your average PDU. The measurement circuits of most PDUs assume that the incoming AC is a perfect sine wave, but it never is. The Racktivity PDU, however, measures true RMS current and voltage at a very high sample rate: up to 20,000 measurements per second for the complete PDU.
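To illustrate why true RMS sampling matters, here is a small sketch (our own illustration, not Racktivity's firmware): for a distorted current waveform, the true RMS value differs noticeably from what a meter that assumes a perfect sine wave (peak divided by √2) would report.

```python
import math

def true_rms(samples):
    """True RMS: square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One period of a distorted current waveform (fundamental plus a 30%
# third harmonic), typical of switch-mode PSU loads.
n = 20000  # samples per period, mimicking a high sample rate
wave = [math.sin(2 * math.pi * i / n) + 0.3 * math.sin(3 * 2 * math.pi * i / n)
        for i in range(n)]

peak = max(wave)
print(true_rms(wave))       # true RMS of the distorted wave, ~0.74
print(peak / math.sqrt(2))  # what a sine-assuming meter reports, ~0.65
```

With this waveform the sine-assuming estimate undershoots the true RMS by roughly 12%, which is exactly the kind of error high-sample-rate true RMS measurement avoids.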
106 Comments
mino - Wednesday, November 16, 2011 - link
It most likely had to do with you running it on NetBurst (judging by the lack of a VT-x moniker). As much to do with VT-x as with a crappy CPU with a bus architecture... ah, thank god they are dead.
JustTheFacts - Wednesday, November 16, 2011 - link
Please explain why there is no comparison between the latest AMD processors and Intel's flagship two-way server processors, the Intel Westmere-EX E7-28xx family. Lest you forget about them, you can find your own benchmarks of this flagship Intel processor here: http://www.anandtech.com/show/4285/westmereex-inte...
Take the gloves off and compare flagship against flagship please, and then scale the results to reflect the price difference if you have to, but there's no good reason not to compare them that I can see. Thanks.
duploxxx - Thursday, November 17, 2011 - link
Westmere-EX in 2-socket form is dead; it will be killed by Intel's own platform, Romley, which will have 2P and 4P versions. It was a stupid platform from the start, overrated by sales/consultants with their so-called huge memory support.
aka_Warlock - Wednesday, November 16, 2011 - link
I think you should have done a more thorough VM test than you did. 64GB RAM? We all know single-threaded performance is weak, but I still feel the servers are underutilized in your test.
These CPUs are screaming for heavy multi-threaded workloads. Many VMs. Many vCPUs.
What would the performance be if you had, say, at least 192GB of RAM and 50 (maybe more) VMs on it?
And of course, storage should not be a bottleneck.
I think this is where this 8-module/16-thread CPU would shine.
A dual-socket rack/blade. 16 modules/32 threads.
Loads of RAM and a bunch of VMs.
iwod - Wednesday, November 16, 2011 - link
It is power hungry, isn't any better than Intel, and is only slightly cheaper, at the cost of a higher electricity bill. So unless some software optimization magically shows AMD is good at something, I think they are pretty much doomed.
It is like the Pentium 4, except Intel could afford to make one or two mistakes; AMD cannot.
mino - Wednesday, November 16, 2011 - link
Then the article served its purpose well.

SunLord - Wednesday, November 16, 2011 - link
So is the AMD system running 8GB DDR3-1600 DIMMs or 4GB DDR3-1333? Because you list the same DDR3-1333 model for both systems, and if the server supports 16 DIMMs, well, 16*4 is 64GB.

JohanAnandtech - Thursday, November 17, 2011 - link
Copy and paste error, fixed. We used DDR3-1600 (Samsung).

Johnmcl7 - Wednesday, November 16, 2011 - link
I have wondered about this: with more cores per socket and virtualisation (organising a new set of servers and buying far less hardware for the same functionality), I'd have thought less server hardware is being purchased in total. Clearly that isn't the case though; is the money made back from more expensive servers?

John
bruce24 - Wednesday, November 16, 2011 - link
Sure, with each new generation of servers you need much less hardware to do the same amount of work, but worldwide people are looking for servers to do much more work. Each year, companies like Google, Facebook, Amazon, Microsoft, and Apple add much more computing power than they could get by simply refreshing their current servers.