Amongst the new iPad and Watch devices released today, Apple also made news in unveiling the new A14 SoC. Apple’s newest generation silicon design is noteworthy in that it is the industry’s first commercial chip to be manufactured on a 5nm process node, making it the first of a new generation of designs that are expected to significantly push the envelope in the semiconductor space.

Apple’s event disclosures this year were a bit confusing, as the company was comparing the new A14’s metrics against the A12, given that’s what the previous-generation iPad Air had been using until now – we’ll need to add some proper context to the figures to extrapolate what they mean.

On the CPU side of things, Apple is using new-generation large performance cores as well as new small power-efficient cores, but retains a 2+4 configuration. Apple claims a 40% performance boost for the CPUs, although the company doesn’t specify exactly what this metric refers to – is it single-threaded performance? Is it multi-threaded performance? Is it for the large or the small cores?

What we do know, though, is that it’s in reference to the A12 chipset, and the A13 had already claimed a 20% boost over that generation. Simple arithmetic thus dictates that the A14 would be roughly 16.7% faster than the A13, if Apple’s performance metric measurements are consistent between generations.

On the GPU side, we can make a similar calculation, as Apple claims a 30% performance boost compared to the A12 generation thanks to the new 4-core GPU in the A14. Given that the A13’s GPU was itself claimed to be 20% faster than the A12’s, normalising against the A13 would mean only an 8.3% generational boost, which is actually quite meagre.
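
As a quick sanity check on that arithmetic, here’s a minimal sketch (assuming Apple’s A12-relative claims are multiplicative and measured consistently between generations) that normalises the cumulative figures into generation-over-generation gains:

```python
# Normalise Apple's cumulative A12-relative performance claims into
# implied generation-over-generation gains. Assumes the claims are
# multiplicative and measured consistently across generations.

def gen_over_gen(a14_vs_a12: float, a13_vs_a12: float) -> float:
    """Return the implied A14-over-A13 gain, as a percentage."""
    return ((1 + a14_vs_a12) / (1 + a13_vs_a12) - 1) * 100

cpu_gain = gen_over_gen(0.40, 0.20)  # A14 CPU: +40% vs A12; A13 was +20%
gpu_gain = gen_over_gen(0.30, 0.20)  # A14 GPU: +30% vs A12; A13 was +20%

print(f"Implied A14 CPU gain over A13: {cpu_gain:.1f}%")  # ~16.7%
print(f"Implied A14 GPU gain over A13: {gpu_gain:.1f}%")  # ~8.3%
```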

In other areas, Apple is boasting more significant performance jumps, such as the new 16-core neural engine, which now sports up to 11 TOPS of inferencing throughput – more than double the 5 TOPS of the A12, and 83% more than the estimated 6 TOPS of the A13’s neural engine.

Apple does advertise a new image signal processor amongst the new features of the SoC, but otherwise the performance metrics (aside from the neural engine) seem rather conservative given that the new chip boasts 11.8 billion transistors, a 38.8% generational increase over the A13’s 8.5bn figure.
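
The remaining headline figures check out the same way; here’s a quick sketch using the quoted numbers (the A13’s 6 TOPS being an estimate rather than an official Apple figure):

```python
# Ratios behind the neural engine and transistor-count claims.
a14_tops, a12_tops, a13_tops_est = 11.0, 5.0, 6.0   # A13 TOPS is an estimate
a14_transistors, a13_transistors = 11.8e9, 8.5e9

print(f"A14 NPU vs A12: {a14_tops / a12_tops:.2f}x")                        # 2.20x
print(f"A14 NPU vs A13 (est.): +{a14_tops / a13_tops_est - 1:.0%}")         # +83%
print(f"Transistors vs A13: +{a14_transistors / a13_transistors - 1:.0%}")  # +39%
```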

The one theory I have is that Apple might have finally pulled back on the excessive peak power draw at the maximum performance states of the CPUs and GPUs, meaning peak performance wouldn’t have seen as large a jump this generation, in favour of more sustainable thermal figures.

Apple’s A12 and A13 chips were large performance upgrades on both the CPU and GPU side; however, one criticism I had made of the company’s designs is that they both increased power draw beyond what was usually sustainable within a mobile thermal envelope. This meant that while the designs had amazing peak performance figures, the chips were unable to sustain them for prolonged periods beyond 2-3 minutes. Even with that throttling, the devices settled at performance levels that were still ahead of the competition, leaving Apple in a leadership position in terms of efficiency.

What speaks against such a theory is that Apple made no mention at all of concrete power or power efficiency improvements this generation, which is rather unusual given that the company has traditionally always remarked on this aspect of its new A-series designs.

We’ll just have to wait and see whether this is indicative of the actual products not having improved in this regard, or whether it’s just an omission and side-effect of the event’s new, more streamlined presentation style.

Whatever the performance and efficiency figures turn out to be, what Apple can boast about is having the industry’s first 5nm silicon design. The new TSMC-fabricated A14 thus represents the cutting edge of semiconductor technology today, and Apple made sure to mention this during the presentation.

Comments

  • BedfordTim - Wednesday, September 16, 2020

    You are in many ways correct in that modern phones are a triumph of marketing over common sense.
    Where I think you may be wrong is that Apple has never marketed on absolute performance. They aren't really competing with Android phones, and so have, for example, got by with minimal RAM and flash for years. Given there is no marketing going on for the NPU itself, it must be there for some purpose that will increase sales or data harvesting.
  • BedfordTim - Wednesday, September 16, 2020

    As an extension, the obvious area of use is the camera. Phone cameras are heavily dependent on software image synthesis to improve apparent image quality, using AI to add in detail that was missing from the original image.
  • Spunjji - Wednesday, September 16, 2020

    @BedfordTim - they've never marketed on absolute performance per se, but they do regularly tout performance improvements over their own prior products, along with their general leadership.

    I'm not sure you're disagreeing with me, though - my point was very much that putative performance advantages in any area are irrelevant to the success of their products! :)
  • nico_mach - Thursday, September 17, 2020

    The ML units simply aren't high-profile enough for their inclusion to be about sales. And Apple in particular doesn't just add hardware for no reason - yes, there's AR, but notably that isn't on every device the way that machine learning has been. It's real hardware with real advantages; I'm not sure why you're picking this out.
  • octavus - Tuesday, September 15, 2020

    With a higher transistor budget they can add more and more fixed, or at least less flexible, circuitry for better power efficiency. All of the CPUs today have an immense number of fixed-function units for things like media or imaging, as no one has a better use for all these transistors – why have a separate sensor hub when you can just put it in the CPU?
  • Tams80 - Friday, September 18, 2020

    Very much this.

    Before, it was because the SoCs weren't computationally powerful enough to do some tasks without being brought to their knees (see the Nokia 808 PureView and its imaging DSP, compared to the Lumia 1020).

    The efficiency is now just used to reduce power draw, with the added benefit that dedicated circuitry can be added in at comparatively very little cost (in space, etc.).
  • linuxgeex - Tuesday, September 15, 2020

    Updating the neural inferencing capabilities at the edge is about reducing data transmitted back to the mothership for the same quality of data harvested. They're doing it for their own enrichment.
  • Spunjji - Wednesday, September 16, 2020

    I'm not sure this really applies in the case of Apple?
  • nico_mach - Thursday, September 17, 2020

    Of course it does. All their cloud subscription services run on AI in the cloud. Local AI can't do inferences without a huge dataset - voice and photos can be local to a greater extent; Netflix-type stuff can't. Take the new Fitness+: picking videos and creating content strategies is all stats- and AI-based, and requires collecting data and analytics on their side, not yours.

    They have talked about anonymizing what they collect, but they are the only ones who can see that, so it's entirely on the honor system.
  • dotjaz - Tuesday, September 15, 2020

    So you're never going to use the camera over the lifetime of your phone? Then why not start complaining about the sensor first?
