What's Next? ARM's Cortex A15

Comparing to Qualcomm's APQ8060A gives us a much better idea of how Atom fares in the modern world. Like Intel, Qualcomm appears to prioritize single threaded performance and builds its SoCs on a leading edge LP process. If this were the middle of 2012, the Qualcomm comparison is where we'd stop. However, this is a new year and there's a new kid in town: ARM's Cortex A15.

We've already looked at Cortex A15 performance and found it to be astounding. While Intel's 5-year old Atom core can still outperform most of the other ARM based designs on the market, the Cortex A15 easily outperforms it. But at what power cost?

To find out, we looked at a Google Nexus 10 featuring a Samsung Exynos 5250 SoC. The 5250 (aka Exynos 5 Dual) features two ARM Cortex A15s running at up to 1.7GHz, coupled with an ARM Mali-T604 GPU. The testing methodology remains identical.

Idle Power

As the Exynos 5250 isn't running Windows RT, we don't need to go through the same song and dance of waiting for live tiles to stop animating. The Android home screen is static to begin with, so all swings in power consumption at this point have more to do with WiFi:

At idle, the Nexus 10 platform uses more power than any of the other tablets. This shouldn't be too surprising as the display requires much more power, so I don't think we can draw any conclusions about the SoC just yet. But just to be sure, let's look at power delivery to the 5250's CPU and GPU blocks themselves:

Ah, the wonderful world of power gating. Despite having much more power hungry CPU cores, when they're doing nothing the ARM Cortex A15s look no different than Atom or even Krait.

Mali-T604 looks excellent here. With virtually nothing happening on the display, the GPU doesn't have a lot of work to do to begin with, but I believe we're also seeing some of the benefits of Samsung's 32nm LP (HK+MG) process.

Remove WiFi from the equation and things remain fairly similar: total platform power is high thanks to a more power hungry display, but at the SoC level idle power consumption is competitive. The GPU power consumption continues to be amazing, although it's possible that Samsung simply doesn't dangle as much off of the GPU power rail as its competitors do.
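The rail-level numbers above come from measuring power delivered to the SoC's individual supply rails. As a rough illustration, a minimal sketch of that arithmetic, assuming a hypothetical shunt-resistor setup (the rail voltage, shunt value, and readings below are illustrative, not the article's actual instrumentation):

```python
# Hedged sketch: converting sense-resistor readings into rail power.
# All values are made-up assumptions for illustration.

def rail_power_mw(bus_voltage_v, shunt_voltage_mv, shunt_ohms):
    """Power through one supply rail: I = V_shunt / R_shunt, P = V_bus * I."""
    current_a = (shunt_voltage_mv / 1000.0) / shunt_ohms
    return bus_voltage_v * current_a * 1000.0  # result in milliwatts

# Example: a 1.1 V CPU rail with 2 mV measured across a 20 mOhm shunt
cpu_mw = rail_power_mw(1.1, 2.0, 0.020)
print(round(cpu_mw, 1))  # 110.0 (1.1 V * 0.1 A)
```

Sampling a reading like this at a high rate and averaging over a benchmark run is what turns instantaneous rail power into the per-block charts shown here.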

140 Comments

  • metafor - Friday, January 4, 2013 - link

It matters to a degree. Look at the CPU power chart: the CPU is constantly being ramped from low to high frequencies and back.

    Tegra automatically switches the CPU to a low-leakage core at some frequency threshold. This helps in almost all situations except for workloads that constantly keep the CPU at above that threshold, which, if you look at the graph, isn't the case.

That said, it doesn't mean it'll be anywhere near enough to catch up to its Atom and Krait competitors.
  • jeffkro - Saturday, January 5, 2013 - link

The Tegra 3 is also not the most powerful ARM processor; Intel obviously chose it to make Atom look better.
  • npoe1 - Wednesday, January 9, 2013 - link

From one of Anand's articles: "NVIDIA recently revealed it was doing something similar to this with its upcoming Tegra 3 (Kal-El) SoC. NVIDIA outfitted its next-generation SoC with five CPU cores, although only a maximum of four are visible to the OS. If you’re running light tasks (background checking for email, SMS/MMS, twitter updates while your phone is locked) then a single low power Cortex A9 core services those needs while the higher performance A9s remain power gated. Request more of the OS (e.g. unlock your phone and load a webpage) and the low power A9 goes to sleep and the 4 high performance cores wake up."

    http://www.anandtech.com/show/4991/arms-cortex-a7-...
  • jeffkro - Saturday, January 5, 2013 - link

The A15 currently pulls too much power for smartphones, but it makes for a great tablet chip, as well as providing enough horsepower for basic laptops.
  • djgandy - Friday, January 4, 2013 - link

    The most obvious thing here is that PowerVR graphics are far superior to Nvidia graphics.
  • Wolfpup - Friday, January 4, 2013 - link

Actually no, that isn't obvious at all. Tegra 3 is a two year old design on a process two generations old. The fact that it's still competitive today is just because it was so good to begin with. It'll be necessary to look at the performance and power usage of upcoming NVIDIA chips on the same process to actually say anything "obvious" about them.
  • Death666Angel - Friday, January 4, 2013 - link

According to Wikipedia, the 545 is from January '10, so it's about 3 years old now. The only current-gen thing here is the Mali. The 225 is just a 220 with a higher clock, so it's about 1.5 to 2 years old.
  • djgandy - Friday, January 4, 2013 - link

And a 4-5 year old Atom and the 2-3+ year old SGX545 aren't old designs?

Look at the power usage of NVIDIA. It's way beyond what is acceptable for any SoC design. Phones from two years ago used far less power on older processes than the 40nm T3! Just look at the GLBenchmark battery life tests for the HTC One X and you'll see how poor the T3 GPU is. In fact, just take your NVIDIA goggles off and re-read this whole article.
  • Wolfpup - Friday, January 4, 2013 - link

    Atom's basic design is old, the manufacturing process is newer. Tegra 3 is by default at the biggest disadvantage here. You accuse me of bias when it appears you're actually biased.
  • Chloiber - Tuesday, January 8, 2013 - link

    First of all it's still 40nm.

Second of all: you mentioned the battery benchmarks yourself. Go look at the Nexus 4 review and see how the international version of the One X fares. Battery life on the T3 One X is very good if you take into account that it's built on 40nm, compared to the One XL's 28nm, and uses four cores.

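The companion-core scheme debated in the comments above boils down to a frequency-threshold decision: below some switch point, work runs on a single low-leakage core while the fast cores stay power gated, and above it the roles reverse. A minimal sketch of that logic, with an entirely made-up threshold and core names (NVIDIA's actual switch point and policy are not public in this article):

```python
# Illustrative sketch of threshold-based companion-core selection.
# The threshold value is a hypothetical assumption, not NVIDIA's figure.

COMPANION_MAX_KHZ = 500_000  # hypothetical switch point

def select_cluster(requested_khz):
    """Pick which core cluster services the requested CPU frequency."""
    if requested_khz <= COMPANION_MAX_KHZ:
        return "companion"    # low-leakage core; performance cores power gated
    return "performance"      # fast cores; companion core power gated

print(select_cluster(300_000))    # light load -> companion
print(select_cluster(1_300_000))  # heavy load -> performance
```

This is why the scheme helps workloads that frequently dip to low frequencies (as in the CPU power chart) but offers little for sustained high-frequency loads: the decision depends only on the requested operating point.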