The AMD A8-7650K APU Review, Also New Testing Methodology
by Ian Cutress on May 12, 2015 10:00 AM EST

The staggered birth of Kaveri has been an interesting story to cover, but it has been difficult to keep all the pieces at the forefront of memory. The initial launch in January 2014 saw a small number of SKUs, such as the A10-7850K and the A8-7600, and since then a slow trickle of one or two new models a quarter has hit the shelves. We've also seen 65W SKUs, such as the A10-7800, which offer 45W modes as well. Today we're reviewing the most recent Kaveri processor to hit the market, the A8-7650K, rated at 95W and officially priced at $105/$95.
AMD's APU Strategy
Integrated graphics is one of the cornerstones of both the mobile and the desktop space. Despite the love we might harbor for a fully discrete graphics solution, the truth of the matter is that integrated graphics still powers most systems in both the consumer and business markets. Whenever I meet with AMD, the question from them is always a simple one: when you build a system at a given price point, what do you get from AMD versus Intel? The APU series tackles the sub-$200 price bracket from head to toe:
CPU/APU Comparison

| AMD Kaveri | Amazon Price on 5/12 | Intel Haswell |
|---|---|---|
| | $236 | i5-4690K (4C/4T, 88W), 3.5-3.9 GHz, HD 4600 |
| | $199 | i5-4590 (4C/4T, 84W), 3.3-3.7 GHz, HD 4600 |
| | $189 | i5-4460 (4C/4T, 84W), 3.2-3.4 GHz, HD 4600 |
| A10-7850K (2M/4T, 95W), 3.7-4.0 GHz, 512 SPs | $140 | i3-4330 (2C/4T, 54W), 3.5 GHz, HD 4600 |
| A10-7800 (2M/4T, 65W), 3.5-3.9 GHz, 512 SPs | $135 | |
| A10-7700K (2M/4T, 95W), 3.4-3.8 GHz, 384 SPs | $120 | i3-4130 (2C/4T, 54W), 3.4 GHz, HD 4400 |
| A8-7650K (2M/4T, 95W), 3.3-3.8 GHz, 384 SPs | $104 | |
| A8-7600 (2M/4T, 65W), 3.1-3.8 GHz, 384 SPs | $96 | Pentium G3430 (2C/2T, 53W), 3.3 GHz, HD (Haswell) |
| X4 860K (2M/4T, 95W), 3.7-4.0 GHz, No IGP | $83 | |
| | $70 | Pentium G3258 (2C/2T, 53W), 3.2 GHz, HD (Haswell) |
| A6-7400K (1M/2T, 65W), 3.5-3.9 GHz, 256 SPs | $64 | Celeron G1830 (2C/2T, 53W), 2.8 GHz, HD (Haswell) |
I first created this table with launch pricing, and several of the APUs/CPUs sat in different positions. But since the release dates of these processors vary on both sides, the prices of individual SKUs have since been adjusted to compete. Perhaps appropriately, we get a number of direct matchups, including the A10-7700K and the Core i3-4130 at $120 right now. This table is by no means complete, due to Intel's 20+ other SKUs that fight around the same price points but vary slightly in frequency, but it tells us a lot about each side's attack on the market. Some of AMD's recently announced price cuts are reflected here, but for consistency our results tables will list launch pricing, as we have no mechanism for dynamic pricing.
Testing AMD's APUs over the years has shown that they are not necessarily aimed at the high end, such as multi-GPU systems totaling $2000+, although AMD wouldn't mind if you built a high-end system around one. The key element of the APU has always been the integrated graphics, and the ability to dedicate more performance (and a greater share of transistors) to graphics than the competition does at various price points, irrespective of TDP. Ultimately AMD likes to promote that, compared to a similarly priced Intel+NVIDIA solution, a user can enable dual graphics with an APU plus an R7 discrete card for better performance. That being said, the high-end APUs have also historically been considered for single discrete GPU gaming, where the most expensive thing in the system is the GPU, as we showed in our last gaming CPU roundup (although we need to run a new one of those soon).
Part of the new set of tests for this review is to highlight the usefulness of dual graphics, as well as to compare both AMD and NVIDIA discrete graphics in low-end, mid-range and high-end gaming arrangements.
The A8-7650K
The new APU slots into the stack above the 65W A8-7600 and below the A10 models, which start with the A10-7700K. It offers slightly lower clock speeds than the A10s, but as the K moniker indicates it is built (in part) for overclocking. The integrated graphics under the hood provides 384 SPs at 720 MHz as part of AMD's 4+6 compute core strategy. The A8-7650K is designed to fill out the processor stack to that end.
AMD Kaveri Lineup

| | A10-7850K | A10-7800 | A10-7700K | A8-7650K | A8-7600 | X4 860K | A6-7400K |
|---|---|---|---|---|---|---|---|
| Price | $140 | $135 | $120 | $104 | $96 | $83 | $64 |
| Modules | 2 | 2 | 2 | 2 | 2 | 2 | 1 |
| Threads | 4 | 4 | 4 | 4 | 4 | 4 | 2 |
| Core Freq. (GHz) | 3.7-4.0 | 3.5-3.9 | 3.4-3.8 | 3.3-3.8 | 3.1-3.8 | 3.7-4.0 | 3.5-3.9 |
| Compute Units | 4+8 | 4+8 | 4+6 | 4+6 | 4+6 | 4+0 | 2+4 |
| Streaming Processors | 512 | 512 | 384 | 384 | 384 | N/A | 256 |
| IGP Freq. (MHz) | 720 | 720 | 720 | 720 | 720 | N/A | 756 |
| TDP | 95W | 65W | 95W | 95W | 65W | 95W | 65W |
| DRAM Frequency | 2133 | 2133 | 2133 | 2133 | 2133 | 1866 | 1866 |
| L2 Cache | 2x2MB | 2x2MB | 2x2MB | 2x2MB | 2x2MB | 2x2MB | 1MB |
At a list price of $105 (currently $104), we were in a quandary over what to test against it from team blue. The Pentium G3258 sits at $72 with two cores at 3.2 GHz and HD (Haswell) GT1 graphics. The next one up the stack is the i3-4130, a dual core with Hyper-Threading and HD 4400 graphics, but it sits at $120. Ultimately there is no direct price competitor, but AMD assured us it was confident in the positioning of the SKUs, particularly where gaming is concerned. Based on what I have in the testing lab, the nearest competitors are the i3-4330, a model with a larger L3 cache and a list price of $138, and the i3-4130T, which is a low-power SKU.
177 Comments
Gigaplex - Tuesday, May 12, 2015
What happened to the DX12 benchmarks? Do we need to remind you that DX12 hasn't even been released yet, so it is completely unsuitable for comparing hardware?

akamateau - Tuesday, May 12, 2015
Porting a CURRENT game designed and CODED to DX11 MAX SPEC over to DX12 does not mean that it will automatically look better or play better, if you do not consider faster fps as the main criterion for quality gameplay. In fact, DX11 game benchmarks will not show ANY increase in performance using Mantle or DX12.

And logically, continuing to write to this DX11 MAXSPEC will NOT improve gaming community-wide in general. Let's be clear, a higher spec game will cost more money, so the studio must balance cost and projected sales. So I would expect incremental increases in game quality to occur over the next few years as studios become more confident spending more of the gaming budget on a higher MINSPEC DX12 game. Hey, it is ALL ABOUT THE MONEY.
If a game was written with the limitations or, better, say the maximums or MAXSPEC of DX11 then that game will in all likelihood not look any better with DX12. You will run it at faster frame rates but if the polygons, texture details and AI objects aren't there then the game will only be as detailed as the original programming intent will allow.
However, what DX12 will give you is a game that is highly playable with much less expensive hardware.
For instance, using the 3DMark API Overhead test, it is revealed that with DX11 an Intel i7-4960 with a GTX 980 can produce 2,000,000 draw calls at 30 fps. Switch to DX12 and it is revealed that a single $100 AMD A6-7400 APU can produce 4,400,000 draw calls at 30 fps. Of course these aren't rendered, but you can't render the object if it hasn't been drawn.
If you are happy with the level of performance that $1500 will get you with DX11, then you should be ecstatic to get very close to the same level of play that DX12 and a $100 A6 AMD APU will get you!!!!
That was the whole point behind Mantle, er (cough, cough) DX12. Gaming is opened up to more folks without massive amounts of surplus CASH.
silverblue - Tuesday, May 12, 2015
Yes, yes, I see your point about AMD's iGPUs benefitting a lot from DirectX 12/Mantle, however I don't think you needed so many posts to make it. Additionally, not benchmarking a specific way doesn't make somebody a liar, it just means they didn't benchmark a specific way.

Draw calls don't necessarily mean better performance, and if you're memory or ROP limited to begin with... what's more, the performance difference between the 384-shader 7600 and the 512-shader 7850K is practically nothing. Based on this, why would I opt for the 7850K when the 7600 performs similarly for less power? The 7400K is only a little behind, but is significantly slower in DX11 testing. Does that mean we don't need the 7600 either if we're playing DX12 titles? Has the test highlighted a significant memory bottleneck with the whole Kaveri product stack that DX12 simply cannot solve?
In addition, consider the dGPU results. Intel still smokes AMD on a per-FPU basis. By your own logic, AMD will not gain any ground on Intel at all in this area if we judge performance purely on draw calls.
DirectX 11 is still current. There aren't many Mantle games out there to provide much for this comparison, but I'm sure somebody will have those results on another site for you to make further comparisons.
akamateau - Tuesday, May 12, 2015
There is ONLY ONE BENCHMARK that is relevant to gamers: the 3DMark API Overhead Test!
If I am considering a GPU purchase, I am not buying it because I want to calculate Pi to a BILLION decimal places. I want better gameplay.
When I am trying to decide on an AMD APU or Intel IGP, that decision is NOT based on CineBench but rather on which silicon produces QUALITY GAMEPLAY.
You are DELIBERATELY IGNORING DX12 API Overhead Tests and that makes you a liar.
The 3DMark API Overhead Test measures how many draw calls can be issued before the frame rate drops below 30 fps. As the following numbers will show, the AMD APUs will give the BEST GAMING VISUAL EXPERIENCE.
So what happens when this benchmark is run on AMD APUs and Intel IGPs?
AMD A10-7700k
DX11 = 655,000 draw calls.
Mantle = 4,509,000 draw calls.
DX12 = 4,470,000 draw calls.
AMD A10-7850K
DX11 = 655,000 draw calls
Mantle = 4,700,000 draw calls
DX12 = 4,454,000 draw calls.
AMD A8-7600
DX11 = 629,000 draw calls
Mantle = 4,448,000 draw calls.
DX12 = 4,443,000 draw calls.
AMD A6-7400k
DX11 = 513,000 draw calls
Mantle = 4,047,000 draw calls
DX12 = 4,104,000 draw calls
Intel Core i7-4790
DX11 = 696,000 draw calls.
DX12 = 2,033,000 draw calls
Intel Core i5-4690
DX11 = 671,000 draw calls
DX12 = 1,977,000 draw calls.
Intel Core i3-4360
DX11 = 640,000 draw calls.
DX12 = 1,874,000 draw calls
Intel Core i3-4130T
DX11 = 526,000 draw calls.
DX12 = 1,692,000 draw calls.
Intel Pentium G3258
DX11 = 515,000 draw calls.
DX12 = 1,415,000 draw calls.
These numbers were gathered from an AnandTech piece written on March 27, 2015.
Intel IGP is hopelessly outclassed by AMD APUs using DX12. AMD outperforms Intel by over 100%!!!
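As a quick sanity check, here is a minimal Python sketch that recomputes the DX11-to-DX12 scaling and the AMD-versus-Intel DX12 gap using nothing more than the draw-call figures quoted above (the script itself is just an illustrative calculation, not part of the benchmark):

```python
# Quick check of the DX11 -> DX12 scaling and the AMD vs Intel gap,
# using only the 3DMark API Overhead draw-call figures quoted in this thread.
results = {
    # chip: (DX11 draw calls, DX12 draw calls)
    "AMD A10-7850K":       (655_000, 4_454_000),
    "AMD A10-7700K":       (655_000, 4_470_000),
    "AMD A8-7600":         (629_000, 4_443_000),
    "AMD A6-7400K":        (513_000, 4_104_000),
    "Intel i7-4790":       (696_000, 2_033_000),
    "Intel i5-4690":       (671_000, 1_977_000),
    "Intel i3-4360":       (640_000, 1_874_000),
    "Intel i3-4130T":      (526_000, 1_692_000),
    "Intel Pentium G3258": (515_000, 1_415_000),
}

# Per-chip scaling factor going from DX11 to DX12.
for chip, (dx11, dx12) in results.items():
    print(f"{chip:22s} DX12/DX11 scaling: {dx12 / dx11:.1f}x")

# Compare the best DX12 result on each side.
best_amd = max(dx12 for chip, (_, dx12) in results.items() if chip.startswith("AMD"))
best_intel = max(dx12 for chip, (_, dx12) in results.items() if chip.startswith("Intel"))
print(f"Best AMD vs best Intel under DX12: {best_amd / best_intel:.2f}x "
      f"({(best_amd / best_intel - 1) * 100:.0f}% more draw calls)")
```

On these figures the Kaveri APUs scale roughly 7-8x going from DX11 to DX12, the Haswell parts roughly 3x, and the best AMD DX12 result comes out at about 2.2x the best Intel DX12 result.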
JumpingJack - Wednesday, May 13, 2015
"There is ONLY ONE BENCHMARK that is relevant to gamers.3dMark API Overhead Test!"
NO, that is a synthetic; it simply states how many draw calls can be made. It does not measure the capability of the entire game engine.
There is only ONE benchmark of concern to gamers -- actual performance of the games they play. Period.
Get ready for a major AMD DX12 let down if this is your expectation.
akamateau - Tuesday, May 12, 2015
Legacy benchmarks?????? I am going to spend money based on OBSOLETE BENCHMARKS???

CineBench 11.5 was released in 2010 and is obsolete. It is JUNK.
TrueCrypt???? TrueCrypt development ended in MAY 2014. Another piece of JUNK.
Where is the 3DMark API Overhead Test? That is brand new.
Where is STARSWARM?? That is brand new.
akamateau - Tuesday, May 12, 2015
Where are your DX12 BENCHMARKS?

akamateau - Tuesday, May 12, 2015
Where are your DX12 BENCHMARKS?

rocky12345 - Tuesday, May 12, 2015
To those whining about no DX12 test: just take the info that was given, learn from it, and wait for a released DX12 program that can truly be tested. Testing DX12 at this point has very little to offer because it is still a beta product and the code is far from finished, and by the time it is done all the tests you are screaming to have done will not be worth a pinch of raccoon crap.

galta - Tuesday, May 12, 2015
Back when DX11 was about to be released, AMD fans said the same: nVidia is better at DX10, but with DX11, Radeon's superior I-don't-know-what will rule.

Time passed and nVidia smashed Radeon's new - and rebranded - GPUs.
I suspect it will be the same this time.