Closing Thoughts

Usually at the end of an article we’d be coming to a conclusion. However, as this piece is more of a hands-on, we'll limit our scope a bit: we're not really in a position to properly determine how Xavier stacks up in the robotics or automotive spaces, so that discussion will have to wait for another time.

We had a quick, superficial look at what NVIDIA would be able to offer industrial and automotive customers – in that regard Xavier certainly seems to offer a ton of flexibility as well as significant raw processing power. It’s clear that the one aspect in which NVIDIA tries to promote Xavier the most is its vision and machine learning capabilities, and here, although we lack any great comparison points, it does look like NVIDIA is able to provide an outstandingly robust platform.

For most AnandTech readers, the most interesting aspect of the Jetson AGX and Xavier will be the new Carmel CPU cores. Although a deeper microarchitectural analysis of the core was out of the scope of this article, what matters in the end is the resulting performance and power characteristics, which we did measure in detail. Here NVIDIA’s results fell into relatively modest territory, with Carmel landing at around, or slightly above, the performance level of an Arm Cortex-A75.

Multi-threaded performance of Xavier is great, although the rather odd CPU cluster configuration can result in scenarios where not all eight cores are able to perform at their peak. As Arm tries to enter the automotive sector with dedicated IP, I do wonder whether it will make sense for NVIDIA to continue on with their rather exotic CPU microarchitecture in the future.

The challenge for NVIDIA then is how best to stay ahead of Arm and the army of licensees that will be implementing its automotive-focused IP in the future. I think the one aspect where NVIDIA can offer a lot more value than competing products is its strong software ecosystem and development toolkits, which allow customers to more easily enable their product use-cases.

The Jetson AGX development kit costs a whopping $2500 ($1299 for members of NVIDIA's developer programme) – and even the individual modules are $1100 – making it non-viable for the hobbyist looking to use it as a server machine at home. But for companies looking to set up more complex systems requiring heavy vision processing, or actually deploying the AGX module in autonomous applications for robotics or industrial uses, Xavier looks quite interesting, and it is definitely a more approachable and open platform than what tends to exist from competing products.


51 Comments


  • linuxgeex - Friday, November 8, 2019 - link

    Add this line to the appropriate hosts file for your OS:

    /etc/hosts (Linux/BSD) or C:\Windows\System32\drivers\etc\hosts (Windows)

    127.0.0.1 ads.servebom.com

    job done.
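A minimal shell sketch of the hosts-file edit described in the comment above. It works on a scratch copy of the file so it can run without root; pointing HOSTS_FILE at /etc/hosts (with sudo) would apply it for real. The HOSTS_FILE variable and the idempotency check are our additions, not part of the commenter's instructions.

```shell
# Sketch of the ad-blocking hosts entry from the comment above.
# Uses a scratch file so it runs without root; point HOSTS_FILE at
# /etc/hosts (with sudo) to apply the change for real.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
ENTRY="127.0.0.1 ads.servebom.com"
# Append only if the entry is not already present, so re-running is harmless.
grep -qxF "$ENTRY" "$HOSTS_FILE" || printf '%s\n' "$ENTRY" >> "$HOSTS_FILE"
grep "ads.servebom.com" "$HOSTS_FILE"
# → 127.0.0.1 ads.servebom.com
```

Because the append is guarded by the `grep -qxF` check, running the snippet twice still leaves exactly one entry in the file.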
  • TheinsanegamerN - Friday, January 4, 2019 - link

    auto video ads are hell incarnate.
  • Yojimbo - Friday, January 4, 2019 - link

    Regarding NVIDIA's future CPU core development, I think it's important to note that NVIDIA has developed all major IP blocks on the SoC. That probably allows them to work on integration sooner than if they relied on externally developed IP blocks. Also, they have the ability to tune their cores and fabric to their intended application, which is a narrow subset of what ARM is developing for. I'm guessing NVIDIA doesn't tune the performance of their CPU cores using specint or specfp. They probably look at much more specific and realistic benchmarks.

    And by the time the Cortex A76AE is available for NVIDIA to use they will probably have a next iteration of their CPU which perhaps will show up in Orin in early 2021 or even late 2020. It's not clear to me what delayed Xavier from NVIDIA's original schedule. It's possible they'll be able to get the next one out with less time between the launch of the underlying GPU architecture and the availability of the SoC. There was a lot of new stuff that went into Xavier other than the GPU architecture, such as the increased safety features, the DLA, and the PVA.
  • DeepLearner - Friday, January 4, 2019 - link

    I hope they'll send you a T4 soon! I'm dying for numbers on those.
  • eastcoast_pete - Friday, January 4, 2019 - link

    @Andrei: thanks for this review. I wonder if the recent loss of a larger client in the automotive sector (Audi/Volkswagen) to Samsung played a role in Nvidia's willingness to make samples available to you for review. As of model year 2021, Audi will stop using Tegra-based units and move to Samsung's Exynos Auto V9 SoC, which actually features eight A76 cores based on ARM's A76 AE design for automotive/vehicular use.
    While that specialized SoC is still awaiting mass production, I also wonder if Samsung's choice to use straight-up ARM A76 cores (yes, they are AE, so not standard A76) portends a sea change for the mainstream Exynos lines also? As you pointed out, Mongoose turned out to be quite disappointing, so is there a change coming? Would appreciate your insights and comment!
  • webdoctors - Friday, January 4, 2019 - link

    I was also confused by the news of Audi using Samsung chips. I don't think this changes the Audi/Nvidia relationship from googling: http://fortune.com/2017/01/05/audi-nvidia-2020/

    I think in the infotainment sector there's just a lot of competition for cheap chips and a low bar for entry. Any Mediatek or run of the mill cellphone chip should do. I doubt you'd care about ECC or safety in the HW playing your music or watching movies. My current car has an aftermarket unit that's 10 years old that can play DVD movies, has GPS maps and integrates a backup camera.

    I'm not sure how you'd program a beast of a chip here, or even what the right benchmarks are, since you wouldn't need it just to play movies, show maps, or run CPU benchmarks. With all the inferencing and visual processing, it'd be a waste of resources and money to use it for the traditional tasks done today in cars.

    I'm really curious how Anandtech evaluates these specialized products that aren't your run of the mill CPU/GPU/HDD.
  • unrulycow - Saturday, January 5, 2019 - link

    This is obviously overkill for the entertainment system. Its main purpose is for the semi-autonomous driving systems like Cadillac's SuperCruise or Tesla's Autopilot.
  • Andrei Frumusanu - Friday, January 4, 2019 - link

    As far as I know their mobile roadmap still uses custom cores, there's probably different requirements for automotive or they could have simply said that 8 A76s make a lot more sense than 8 custom cores.
  • eastcoast_pete - Saturday, January 5, 2019 - link

    Thanks Andrei! Yes, design requirements for automotive/vehicle-embedded are different in key areas (safety/security). However, I was/am struck by Samsung not adapting their own Mongoose design for AE use. Maybe their client (Audi) preferred the stock A76 AE design, and it wasn't economical to adapt Mongoose. However, this now means that the most powerful Samsung SoC design (A76 octacore) might be found in - Audi cars.
  • unrulycow - Saturday, January 5, 2019 - link

    They are also losing Tesla as a client. Tesla decided to create their own chip which will theoretically start going into cars in Q2. I would love to see a comparison between the two chips.
