Closing Thoughts

Usually at the end of an article we’d be drawing a conclusion. However, as this piece is more of a hands-on, we'll limit our scope a bit: we're not really in a position to properly determine how Xavier stacks up in the robotics or automotive spaces, so that will have to remain a discussion for another time.

We had a quick, superficial look at what NVIDIA is able to offer industrial and automotive customers – in that regard Xavier certainly seems to offer a ton of flexibility as well as significant raw processing power. It’s clear that the one aspect in which NVIDIA promotes Xavier the most is its vision and machine learning capabilities, and here, although we lack any great comparison points, it does look like NVIDIA is able to provide an outstandingly robust platform.

For most AnandTech readers, the most interesting aspect of the Jetson AGX and Xavier will be the new Carmel CPU cores. Although a deeper microarchitectural analysis of the core was out of the scope of this article, what matters in the end is the resulting performance and power characteristics, which we did measure in detail. Here NVIDIA’s results landed in relatively modest territory, with Carmel coming in at around the performance level of an Arm Cortex-A75, or slightly above it.

Xavier's multi-threaded performance is great, although the rather odd CPU cluster configuration means there are scenarios in which not all eight cores are able to run at their peak performance. As Arm tries to enter the automotive sector with dedicated IP, I do wonder whether it will make sense for NVIDIA to continue on with its rather exotic CPU microarchitecture in the future.
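
For readers who want to poke at this behavior themselves, here is a minimal sketch: it uses the standard Linux affinity API to pin one identical busy-work thread to each of the eight cores, so per-core completion times can be compared as threads land on different Carmel clusters. The busy-work loop and the compare-timings-externally approach are illustrative assumptions on our part, not NVIDIA tooling.

```cpp
// Minimal sketch: pin one worker thread to each of Xavier's eight CPU cores
// via the standard Linux affinity API, then run identical busy work on each.
// Comparing per-core completion times hints at which cores/clusters can
// sustain peak clocks. Build with: g++ -O2 -pthread affinity.cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static void* worker(void* arg) {
    long core = reinterpret_cast<long>(arg);

    // Restrict this thread to a single core.
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(static_cast<int>(core), &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    // Identical, trivial busy work on every core; time the run externally.
    volatile double x = 1.0;
    for (long i = 0; i < 200000000L; ++i)
        x = x * 1.0000001 + 0.0000001;

    std::printf("core %ld finished (x=%f)\n", core, static_cast<double>(x));
    return nullptr;
}

int main() {
    const int kCores = 8;  // Xavier exposes eight Carmel cores
    pthread_t threads[kCores];
    for (long i = 0; i < kCores; ++i)
        pthread_create(&threads[i], nullptr, worker, reinterpret_cast<void*>(i));
    for (int i = 0; i < kCores; ++i)
        pthread_join(threads[i], nullptr);
    return 0;
}
```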

The challenge for NVIDIA, then, is how best to stay ahead of Arm and the army of licensees that will be implementing Arm's automotive-focused IP in the future. I think the one aspect in which NVIDIA can offer a lot more value than competing products is its strong software ecosystem and development toolkits, which allow customers to more easily enable their product use-cases.
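
To give a flavor of what that ecosystem means in practice: the Jetson software stack ships with the full CUDA toolkit, so code written against NVIDIA's standard GPU APIs runs on Xavier's integrated Volta GPU as-is. The short sketch below simply enumerates the device through the stock CUDA runtime API; the exact values reported will of course depend on the installed software version.

```cpp
// Minimal sketch: enumerate CUDA devices on a Jetson AGX Xavier using the
// standard CUDA runtime API -- the same calls used on desktop GPUs.
// Build with: nvcc devquery.cu -o devquery
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s (compute capability %d.%d)\n",
                    i, prop.name, prop.major, prop.minor);
        std::printf("  SMs: %d, integrated with host memory: %s\n",
                    prop.multiProcessorCount, prop.integrated ? "yes" : "no");
    }
    return 0;
}
```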

The Jetson AGX development kit costs a whopping $2500 (it can be had for $1299 if you are part of NVIDIA's developer program) – and even the individual modules are $1100 – making it non-viable for the hobbyist looking to use it as a server machine at home. But for companies looking to set up more complex systems requiring heavy vision processing, or to actually deploy the AGX module in autonomous applications for robotics or industrial uses, Xavier looks quite interesting and is definitely a more approachable and open platform than what competing products tend to offer.

Comments

  • syxbit - Friday, January 4, 2019

    I wish Nvidia hadn't abandoned the mobile space. They could have brought some much-needed competition :( :(.
  • Despoiler - Friday, January 4, 2019

    The only design that was competitive was the one selected by Google for one generation. 4 ARM cores + a 5th core for power management was a huge failure when everyone can do PM within the ARM SoC. In other words, it was only cost-competitive.
  • syxbit - Friday, January 4, 2019

    The Tegra X1 was a great chip when released.
    The Shield TV still uses it, and it's an excellent (though now old) chip.
  • Alistair - Friday, January 4, 2019

    And that's not a mobile device. Perf/W for Xavier is also really poor vs. the newest Huawei silicon.
  • BenSkywalker - Friday, January 4, 2019

    The Switch is mobile. When the X1 debuted *four* years ago it obliterated the best from Apple, roughly 50%-100% faster on the GPU side. So yes, if we give the other SoC manufacturers four years and a four-process-step advantage, they can edge out Tegra.

    Qualcomm's lawyers should take a bow for nVidia no longer being present in the mobile market – it certainly wasn't the laughable "competition" they had on the technology side.

    "Having a hard time seeing a path forward"... That was a cringe-worthy line. Why not benchmark DirectX on an iPhone and then say the same about the Ax line? Let's take a deep learning/AI platform and benchmark it using antiquated PC desktop applications and then act like there are fundamental design issues... ?
  • TheinsanegamerN - Friday, January 4, 2019

    The Tegra X1 doesn't run anywhere near full speed when the device is not plugged into a power source. The Switch also has a fan. It's pretty easy to "obliterate" the competition when you are using a different form factor. I mean, the Core i7 with Iris Pro 580 GPU obliterates the Tegra X1, so the X1 must not be very good, right?

    The X1 was WAY too power hungry to use in anything other than a dedicated gaming device with a dedicated cooling system. When restricted down to tablet TDPs, the X1's performance drops like a lead rock.

    So, yeah, maybe with another 4 years Nvidia could make the Tegra work in a proper laptop. Meanwhile, Apple has ALREADY done that with the A12 SoC, and that works in a passive tablet. Nvidia was never able to make their SoC work in a similar system.
  • Alistair - Saturday, January 5, 2019

    Are you replying to my comment? Xavier is new for 2018 and so is Huawei's Kirin 980. We are talking about Xavier, not the X1. And Apple's tablet GPU for 2015 equaled nVidia's in perf. The iPad Pro's A9X equaled the Tegra X1 in GPU performance while surpassing it in CPU performance, and at a lower power draw...
  • Alistair - Saturday, January 5, 2019

    I think you were conveniently comparing the 2014 iPads vs. the 2015 X1, instead of the 2015 iPad Pro vs. the X1.
  • Samus - Saturday, January 5, 2019

    ^^this
  • niva - Friday, January 4, 2019

    Why are there video ads automatically playing on each one of the Anandtech pages? I know you guys are trying to monetize but you've crossed lines that make it annoying for your users to keep visiting the site.
