What's Next? ARM's Cortex A15

Comparing to Qualcomm's APQ8060A gives us a much better idea of how Atom fares in the modern world. Like Intel, Qualcomm appears to prioritize single-threaded performance and builds its SoCs on a leading-edge LP process. If this were the middle of 2012, the Qualcomm comparison is where we'd stop. However, this is a new year, and there's a new kid in town: ARM's Cortex A15.

We've already looked at Cortex A15 performance and found it to be astounding. While Intel's 5-year-old Atom core can still outperform most of the other ARM-based designs on the market, the Cortex A15 easily outperforms it. But at what power cost?

To find out, we looked at a Google Nexus 10 featuring a Samsung Exynos 5250 SoC. The 5250 (aka Exynos 5 Dual) features two ARM Cortex A15s running at up to 1.7GHz, coupled with an ARM Mali-T604 GPU. The testing methodology remains identical.

Idle Power

As the Exynos 5250 isn't running Windows RT, we don't need to go through the same song and dance of waiting for live tiles to stop animating. The Android home screen is static to begin with, so all swings in power consumption at this point have more to do with WiFi:

At idle, the Nexus 10 platform uses more power than any of the other tablets. This shouldn't be too surprising as the display requires much more power, so I don't think we can draw any conclusions about the SoC just yet. But just to be sure, let's look at power delivery to the 5250's CPU and GPU blocks themselves:

Ah, the wonderful world of power gating. Despite having much more power hungry CPU cores, when they're doing nothing the ARM Cortex A15 looks no different than Atom or even Krait.

Mali-T604 looks excellent here. With virtually nothing happening on the display, the GPU doesn't have a lot of work to do to begin with, but I believe we're also seeing some of the benefits of Samsung's 32nm LP (HK+MG) process.

Remove WiFi from the equation and things remain fairly similar: total platform power is high thanks to a more power hungry display, but at the SoC level idle power consumption is competitive. The GPU power consumption continues to be amazing, although it's possible that Samsung simply doesn't dangle as much off of the GPU power rail as its competitors do.
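Rail-level comparisons like the ones above ultimately come down to integrating sampled power over time. Here is a minimal sketch of that bookkeeping, assuming hypothetical (timestamp, watts) samples from an instrumented power rail; the names and numbers are illustrative, not AnandTech's actual test rig.

```python
# Hypothetical sketch: turning sampled rail power into average power and energy.
# Sample format and values are illustrative assumptions, not measured data.

def average_power(samples):
    """Mean power in watts over a list of (time_s, watts) samples."""
    return sum(w for _, w in samples) / len(samples)

def energy_joules(samples):
    """Trapezoidal integration of power over time, yielding joules."""
    total = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        total += (w0 + w1) / 2.0 * (t1 - t0)
    return total

# Example: a CPU rail sampled every 0.5s while idle.
cpu_rail = [(0.0, 0.10), (0.5, 0.12), (1.0, 0.11), (1.5, 0.10)]
print(f"avg power: {average_power(cpu_rail):.4f} W")
print(f"energy:    {energy_joules(cpu_rail):.4f} J")
```

Averaging smooths out the WiFi-driven swings mentioned above, which is why idle comparisons are usually quoted as mean rail power over a window rather than instantaneous draw.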

Comments

  • kumar0us - Friday, January 4, 2013 - link

    My point was that for a CPU benchmark, say SunSpider, the code generated by x86 compilers would be better than that generated by ARM compilers.

    Could the better compilers available for the x86 platform be a (partial) reason for Intel's faster performance? Or are compilers for the ARM platform mature and fast enough that this angle can be discarded?
  • iwod - Friday, January 4, 2013 - link

    Yes, not just the compiler but general software optimization on x86, which gives Intel some advantage. However, with the recent surge of the ARM platform and the software running on it, my (wild) guess is that this is less than 5% in the best case scenario, and only in the worst case, or in individual cases like SunSpider not running fully well.
  • jwcalla - Friday, January 4, 2013 - link

    Yes. And it was a breath of fresh air to see Anand mention that in the article.

    Look at, e.g., the difference in SunSpider benchmarks between the iPad and Nexus 10. Completely different compilers and completely different software. As the SunSpider website indicates, the benchmark is designed to compare browsers on the same system, not across different systems.
  • monstercameron - Friday, January 4, 2013 - link

    it would be interesting to throw an amd system into the benchmarking, maybe the current z-01 or the upcoming z-60...
  • silverblue - Friday, January 4, 2013 - link

    AMD has thrown a hefty GPU on die, which, coupled with the 40nm process, isn't going to help with power consumption whatsoever. The FCH is also separate as opposed to being on-die, and AMD tablets seem to be thicker than the competition.

    AMD really needs Jaguar and its derivatives, and it needs them now. A dual-core model with a simple 40-shader GPU might be a competitive part, though I'm always hearing about the top-end models, which really aren't aimed at this market. Perhaps AMD will use some common sense and go for small, volume parts over the larger, higher performance offerings, and actually get themselves into this market.
  • BenSkywalker - Friday, January 4, 2013 - link

    There is an AMD design in there: Qualcomm's part.

    A D R E N O
    R A D E O N

    Not a coincidence: Qualcomm bought AMD's ultra portable division from them for $65 million a few years back.

    Anand - If this is supposed to be a CPU comparison, why go overboard with the terrible browser benchmarks? Based on numbers you have provided, Tegra 3, as a generic example, is up to 100% faster under Android than WinRT depending on the bench you are running. If this were an article about how the OSes handle power tasks I would say that's reasonable, but given that you are presenting this as a processor architecture article, I would think you would want to use the OS that works best with each platform.
  • powerarmour - Friday, January 4, 2013 - link

    Agreed, those browser benchmarks seem a pretty poor way to test general CPU performance; in fact, browser benchmarks mainly just test how optimized a particular browser is on a particular OS.

    In fact I can beat most of those results with a lowly dual-A9 Galaxy Nexus smartphone running Android 4.2.1!
  • Pino - Friday, January 4, 2013 - link

    I remember AMD having a dual core APU (Ontario) with a 9W TDP, on a 40nm process, back in 2010.

    They should invest in an SoC.
  • kyuu - Friday, January 4, 2013 - link

    That's what Temash is going to be. They just need to get it on the market and into products sooner rather than later.
  • jemima puddle-duck - Friday, January 4, 2013 - link

    Impressive though all this engineering is, in the real world what is the unique selling point for this? Normal people (not solipsistic geeks) don't care what's inside their phone, and the promise of their new phone being slightly faster than another phone is irrelevant. And for manufacturers, why ditch decades of ARM knowledge to lock yourself into one supplier? The only differentiator is cost, and I don't see Intel undercutting ARM any time soon.

    The only metric that matters is whether normal human beings get any value from it. This just seems like (indirect) marketing by Intel for a chip that has no raison d'etre. I'm hearing lots of "What" here, but no "Why". This is the analysis I'm interested in.

    All that said, great article :)
