The Intel Broadwell-E Review: Core i7-6950X, i7-6900K, i7-6850K and i7-6800K Tested
by Ian Cutress on May 31, 2016 2:01 AM EST
Posted in: CPUs, Intel, Enterprise, Prosumer, X99, 14nm, Broadwell-E, HEDT
Turbo Boost Max 3.0 (TBM3), aka Turbo Boost Max or Turbo Boost 3.0
When Intel released the enterprise-focused Broadwell-EP Xeon CPUs, a few features were added to the platform over the previous Haswell-EP generation. One of these has come through to the consumer parts, though in a slightly different form.
For Broadwell-EP, one of the new features was the ability for each core to adjust its frequency independently depending on whether it is running an AVX or non-AVX workload. Previously, when an AVX load was detected, all of the cores would reduce in frequency; beginning with BDW-EP, they act separately. Intel has taken this enterprise feature and expanded it a little into something it calls ‘Turbo Boost Max 3.0’.
Turbo Boost 2.0 is what Intel calls its maximum Turbo or ‘peak’ frequency. In the case of the i7-6950X, the base frequency is 3.0 GHz and the Turbo Boost 2.0 frequency is 3.5 GHz. The CPU will run at that peak frequency under light workloads and decrease the frequency of the cores as the load increases, in order to keep power consumption more consistent. Turbo Boost 2.0 frequencies are advertised alongside the CPU on the box - TBM3 works slightly differently and is not advertised.
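This load-dependent behavior is easy to observe yourself by sampling the reported core frequencies while varying the workload. Here is a minimal sketch using Python's psutil library; this is our illustration rather than anything Intel ships, and per-core readings are platform-dependent (some OSes report only a single package-level frequency).

```python
# Sample the reported CPU frequencies once per second. Under a light,
# single-threaded load the readings should sit near the 3.5 GHz Turbo
# Boost 2.0 peak of the i7-6950X; under an all-core load they drop back
# toward the 3.0 GHz base. Per-core output depends on OS support.
import time
import psutil

for _ in range(10):
    freqs = psutil.cpu_freq(percpu=True)  # may return one entry on some OSes
    print(" ".join(f"{f.current:7.0f}" for f in freqs), "MHz")
    time.sleep(1)
```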
TBM3, in a nutshell, will boost the frequency of a single CPU core when a single-threaded program is being used.
Like Skylake’s Speed Shift feature (which is not in Broadwell-E), TBM3 requires a driver. This should be distributed in new X99 motherboard driver packages, and will also be rolled out in Windows 10 in due course. It also comes with a user interface, which makes it easier to explain:
Each of the cores in the processor can be individually accessed by the OS with the new driver, and the cores will be rated based on their performance and efficiency as they come out of Intel. In the image above, Core 9 is rated the best, with Core 0 at the bottom. This means that for TBM3, the driver will primarily use Core 9.
When enabled, TBM3 operates in one of two modes: on the foreground application, or from a priority list. In foreground mode, when the software detects a single-threaded workload in play, it will attempt to pin that software to the best core (similar to changing the affinity in Task Manager to one core) and then boost the frequency. In priority mode, the driver will look for any application in the left-hand panel (which has to be populated manually); if an application with higher priority is present, the software will unpin the current program and pin the higher-priority one instead.
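The pinning half of this is ordinary processor affinity, the same mechanism exposed in Task Manager. As a rough sketch of what the driver does (the frequency boost itself happens below the OS and cannot be reproduced from user code), here is a hypothetical example using Python's psutil; the process name and the best-core index (Core 9, from the screenshot above) are illustrative.

```python
# Pin a named process to the best-rated core, mimicking the affinity
# half of what the TBM3 driver does. The boost itself is applied by the
# CPU/BIOS, not by this code. Process name and core index are examples.
import psutil

BEST_CORE = 9  # the core the driver rated highest on our sample

def pin_to_best_core(process_name):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity([BEST_CORE])  # restrict to the single best core
            return proc
    return None

pin_to_best_core("wprime.exe")  # hypothetical single-threaded workload
```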
When pinned, the software will boost the frequency of that core only. The only questions now are how large the boost is, and what the effect on performance is. Unfortunately, both of those questions have bad answers.
Intel refuses to state the effect of TBM3, saying that ‘each CPU is different and could boost by different amounts’. Now, you might think that makes sense. However…
Turbo Boost Max 3.0 has to be supported by the motherboard manufacturer: its settings live in the BIOS, which means the usefulness of the feature ultimately comes down to the motherboard manufacturers. But they know how to do it right, right? Well, here’s where it can get worse.
On the MSI motherboard we used for most of our testing, Turbo Boost Max 3.0 was disabled by default in the BIOS. We asked about this, and were told it was a conscious decision made by management a couple of weeks prior. This makes TBM3 useless for the majority of users who never touch the BIOS. So simply enabling it in the BIOS should fix things, right?
Well, the BIOS also sets how much the CPU can boost by. Ultimately it doesn’t matter how much the CPU might like to boost in frequency; the system will only boost by the amount specified in the BIOS, which is set by the motherboard manufacturer. In the case of the MSI BIOS, it was set to ‘Auto’. In my case, ‘Auto’ meant a boost of zero, despite the MSI BIOS ‘suggesting’ 4000 MHz. I had to manually set Core 9 to a 40x multiplier (i.e. 4.0 GHz). Then it worked.
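For clarity, the BIOS expresses the boost as a core multiplier on the 100 MHz base clock, so the numbers above work out as follows (a trivial sketch, using the i7-6950X figures from this review):

```python
BCLK_MHZ = 100         # standard X99 base clock
TBM3_MULTIPLIER = 40   # manually set for Core 9 in the MSI BIOS
TB2_PEAK_MHZ = 3500    # advertised Turbo Boost 2.0 peak on the i7-6950X

tbm3_target = BCLK_MHZ * TBM3_MULTIPLIER
print(tbm3_target)                 # 4000 MHz, matching the BIOS 'suggestion'
print(tbm3_target - TB2_PEAK_MHZ)  # 500 MHz of headroom over Turbo Boost 2.0
```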
All in all, TBM3 was only enabled after I changed two settings and specifically set the correct core in the BIOS. If that is what it takes, this is hardly a plug-and-play feature. That’s not to mention how Multi-Core Turbo also comes into the mix, which still works at Turbo Boost 2.0 speeds by default. Based on what we've seen, it would seem that TBM3 isn’t being readily embraced at this time.
It should be noted that we also had one of the new ASUS motherboards in for testing; however, time was too limited before leaving for Computex to verify whether the same is true there. ASUS has told me that they have/will have a software package that enables TBM3 to be applied to multiple cores at once, whereas the Intel software will only accelerate a single program. It should be interesting to test.
The Reviewer’s Problem With Turbo Boost Max 3.0
In the options menu for TBM3, there are two primary options to take note of. The first is the utilization threshold: the percentage of a core’s utilization at which the software will take control of the single-threaded application and pin it to a core. By default, this is set at 90%.
The other option is where a dilemma arises: the evaluation interval, or the period of time between the checks the software makes in order to accelerate a program. The version of the software we had started with a value of 10 seconds, meaning that whether the check fires one second or nine seconds into a benchmark run can affect the score. The answer would be to make the evaluation interval very small, but the software only has a one-second resolution. So benchmarks that run for only a few seconds (anyone benchmarking wPrime or SuperPi, for example) might either fail to be accelerated at all at the default setting, or only partway through the run when set at one second.
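Functionally, the two options describe a simple polling loop: every evaluation interval, check whether anything is saturating a core past the threshold, and pin it if so. A minimal sketch of that logic in Python with psutil follows; the defaults mirror the utility's 90% and 10-second values, and everything else is our illustration, not Intel's implementation.

```python
# Rough model of the TBM3 evaluation loop. Every EVALUATION_INTERVAL
# seconds, find a process using at least UTILIZATION_THRESHOLD percent
# of one core and pin it to the best core. A benchmark shorter than the
# interval can finish before the check ever fires.
import time
import psutil

UTILIZATION_THRESHOLD = 90.0  # percent of a single core (utility default)
EVALUATION_INTERVAL = 10      # seconds between checks (default; minimum 1)
BEST_CORE = 9

def evaluate_once():
    for proc in psutil.process_iter():
        try:
            # Process.cpu_percent() is relative to one core, so ~100%
            # means a fully loaded single-threaded workload.
            if proc.cpu_percent(interval=0.1) >= UTILIZATION_THRESHOLD:
                proc.cpu_affinity([BEST_CORE])
                return proc
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return None

while True:
    evaluate_once()
    time.sleep(EVALUATION_INTERVAL)
```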
As you can imagine, if a reviewer does not know whether TBM3 is enabled, there may be benchmark results that differ from what you might expect. It should be noted that, because of the BIOS issue and the potential for motherboard manufacturers to handle the feature differently on every product, we ran our benchmarks with TBM3 disabled; readers should check whether reviewers specify how TBM3 is being used when data is published.
Package Differences: It’s Thin
When the Skylake mainstream platform was launched, it was noted that the processor packages and substrates were thinner compared to the previous generation. It would appear that Intel is using the same packaging technology for Broadwell-E as well.
On the left is the Haswell-E based Core i7-5960X, and on the right is the Broadwell-E based Core i7-6950X. Both of these platforms use a FIVR (Fully Integrated Voltage Regulator), which Intel added to this microarchitecture in order to increase power efficiency. Usually the presence of the FIVR requires additional layers for power management in the package, but it would seem that Intel has been optimizing this to a certain extent. Each individual layer is certainly thinner, but it is likely that Intel has also reduced the number of layers, though I cannot discern by eye exactly how many are in each CPU (and I don’t have a microscope on hand to check).
A couple of questions will crop up from readers regarding the thinner package. Firstly, on the potential for bending the package, especially in regard to a minor story on Skylake where a couple of CPUs were found to have bent under extreme cooler force. As far as Intel is aware, Broadwell-E should not have a problem, for a number of reasons mostly related to the dual-latch socket design and cooler implementation. Intel’s HEDT platforms, from Sandy Bridge-E onward, have been rated as requiring 30-40% more pressure per square inch than the mainstream platforms. As a result the sockets have been designed with this in mind, ensuring the pressure of the latch and cooler stays on the heatspreader.
The other question that comes to mind concerns the heatspreader itself. Intel has stated that it is not doing anything new with the thermal interface material here compared to previous designs, and it is clear that the heatspreader itself is taller to compensate for the z-height difference in the processor PCB.
If we compare the ‘wing’ arrangement between the Haswell-E and Broadwell-E processors, Intel has made the layout somewhat more robust by adding more contact area between the heatspreader and the PCB, especially in the corners and sides. One would assume this is to aid the thinner PCB, although without proper stress testing tools I can’t verify that claim.
205 Comments
JimmiG - Tuesday, May 31, 2016 - link
What's worse than the price premium is that you're also paying for the previous generation architecture.

I really don't see why anyone would want one of those CPUs. For gaming and most typical applications, the mainstream models are actually faster because of their more modern architecture and higher clock speeds. If you're a professional user, you should really be looking at Xeons rather than these server rejects.
K_Space - Tuesday, May 31, 2016 - link
Exactly. I think that's the whole point: Intel realizes that - realistically - little profit will be made from these B-Es given the small incremental increase in performance, so why not use them as an advert for the Xeons (which they have been aggressively marketing for HEDT, not just servers, over the last few months). Anyone considering these will consider the Xeons now.

Ratman6161 - Tuesday, May 31, 2016 - link
There are a few benchmarks where they do make sense, if and only if you are doing that particular task for your job, i.e. an environment where time is money. For the rest of us: if I need to do a video conversion of some kind, it's relatively rare and I can always start it before I go to bed.

retrospooty - Tuesday, May 31, 2016 - link
People belittle AMD because even though Intel has dramatically slowed down the pursuit of speed, AMD still can't catch up. It's actually worse than that though. If AMD were competitive at all in the past decade, Intel would still be pursuing speed and would be further ahead. It's a double-edged sword sort of thing.

Flunk - Tuesday, May 31, 2016 - link
Yes, Intel has slowed down for AMD to catch up before. Cough, Pentium 4.

retrospooty - Tuesday, May 31, 2016 - link
Yup... and back then AMD took advantage of it. I was the happy owner of a Thunderbird, then an Athlon, then an Athlon X2... Then Intel woke up and AMD went to sleep. For the past decade AMD has been too far behind to even matter. In the desktop CPU space there is Intel and then ... no-one.

Flunk - Tuesday, May 31, 2016 - link
You're right, it's totally Intel's fault. They could launch a line of high-end consumer chips that cost the same as the current i5/i7 line but have 2-3X as many cores and no iGPU. They'd cost Intel the same to fabricate. They're the only ones to blame for their slowing sales.

khon - Tuesday, May 31, 2016 - link
I could see people buying the i7-6850K for gaming: 6 cores at decent speeds plus 40 PCIe lanes, and $600 is not that bad when you consider that some people have $700 1080s in SLI.

However, the i7-6900/6950 look like they are for professional users only.
RussianSensation - Tuesday, May 31, 2016 - link
40 PCIe lanes are worthless when the i7-6700K can reliably overclock to 4.7-4.8 GHz and has extra PCIe 3.0 lanes off the chipset. The 6850K will be lucky to hit 4.5 GHz, and will still lose in 99% of gaming scenarios. Z170's PCIe lanes are sufficient for 1080 SLI plus PCIe 3.0 x4 in RAID.

The 6850K is the worst processor in the entire Broadwell-E line.
Impulses - Tuesday, May 31, 2016 - link
Well, if you're about gaming only you might as well compare it with the 6600K... AFAIK HT doesn't do much for gaming, does it? The 6800K isn't much better either when you can just save a few bucks with the 5820K.

I feel like they could've earned some goodwill despite the high-end price hikes by just putting out a single 68xx SKU for like $500; it'd still be a relative price hike for entry into HEDT, but could be more easily seen as a good value.
Are the 6800K bad die harvests or something? Seems dumb to keep that artificial segmentation in place otherwise when HEDT is already pretty far removed from the mainstream platform.
When I chose the 6700K over the 5820K I thought it'd be the last quad core I'd buy, but at this pace (price hikes, HEDT lagging further behind, lower end SKU still lane limited) I don't know if that'll be true.