Bosch and NVIDIA Team Up for Xavier-Based Self-Driving Systems for Mass Market Cars
by Anton Shilov on March 18, 2017 2:00 PM EST
Bosch and NVIDIA on Thursday announced plans to co-develop self-driving systems for mass-market vehicles. The systems will use NVIDIA's next-generation SoC, codenamed Xavier, along with the company's AI-related IP, while Bosch will contribute its expertise in automotive electronics and navigation.
Automakers typically discuss self-driving in the context of premium and commercial vehicles, but given the opportunity, the technology will likely become part of the vast majority of cars sold in the next decade and beyond. Bosch and NVIDIA are working on an autopilot platform for mass-market vehicles that is meant to be affordable enough to ship in high volumes. To build the systems, the two companies will use NVIDIA's upcoming Drive PX platform based on the Xavier system-on-chip, a next-gen Tegra processor set to be mass-produced sometime in 2018 or 2019.
Bosch and NVIDIA did not disclose many details about their upcoming self-driving systems, but they indicated that they are targeting Level 4 autonomous capability, in which a car can drive itself without human intervention under defined conditions. To enable Level 4 autonomy, NVIDIA will offer its Xavier SoC, featuring eight in-house-designed custom ARMv8-A cores, a GPU based on the Volta architecture with 512 CUDA cores, hardware encoders/decoders for video streams at resolutions up to 7680×4320, and various I/O capabilities.
From a performance point of view, Xavier is now expected to hit 30 Deep Learning Tera-Ops (DL TOPS), a metric for measuring 8-bit integer operations, roughly 25% more than the 24 DL TOPS of NVIDIA's Drive PX 2, the platform currently used by various automakers to build their autopilot systems (e.g., Tesla Motors uses the Drive PX 2 in its vehicles). NVIDIA's goal is to deliver this at 30 W, for an efficiency ratio of 1 DL TOPS per watt. That is a rather low level of power consumption given that the chip is expected to be produced using TSMC's 16 nm FinFET+ process technology, the same node used to make the Tegra (Parker) SoC of the Drive PX 2.
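To put those figures in perspective, here is a quick back-of-the-envelope comparison in Python. The Xavier numbers (30 DL TOPS at 30 W) come from the article above; the Drive PX 2 numbers (24 DL TOPS at 250 W, for the full two-SoC, two-discrete-GPU AutoChauffeur configuration) are commonly cited specs and should be treated as assumptions here, since neither company quoted them in this announcement.

```python
# Back-of-the-envelope efficiency comparison.
# Xavier figures are from the article; the Drive PX 2 figures are assumed
# (commonly cited specs for the AutoChauffeur configuration).

platforms = {
    "Xavier (target)":      {"dl_tops": 30, "watts": 30},
    "Drive PX 2 (assumed)": {"dl_tops": 24, "watts": 250},
}

for name, p in platforms.items():
    efficiency = p["dl_tops"] / p["watts"]  # DL TOPS per watt
    print(f"{name}: {p['dl_tops']} DL TOPS @ {p['watts']} W "
          f"-> {efficiency:.2f} DL TOPS/W")
```

If those assumptions hold, Xavier would deliver roughly ten times the per-watt deep-learning throughput of the Drive PX 2, which is the kind of jump that makes a single-chip automotive computer plausible.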
The developers say that the next-gen Xavier-based Drive PX will be able to fuse data from multiple sensors (cameras, lidar, radar, ultrasonic, etc.), and that its compute performance will be enough to run deep neural networks that sense the surroundings, understand the environment, predict the behavior and position of other objects, and help ensure the safety of the driver, all in real time. Given that the upcoming Drive PX will be more powerful than the Drive PX 2, it should better satisfy the demands of automakers; for a completely autonomous self-driving system, the more compute efficiency NVIDIA can extract from Xavier, the better.
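Sensor fusion can be implemented in many ways, and NVIDIA has not described its pipeline in this announcement. Purely as an illustration of the concept, the sketch below fuses independent distance estimates from several sensors by inverse-variance weighting, a textbook "late fusion" technique; every sensor name and number in it is invented.

```python
# Illustrative late sensor fusion: combine independent distance estimates
# (each with its own variance) into a single, tighter estimate using
# inverse-variance weighting. Not NVIDIA's actual pipeline.

def fuse_estimates(estimates):
    """estimates: list of (distance_m, variance_m2) tuples, one per sensor."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_dist = sum(d * w for (d, _), w in zip(estimates, weights)) / total
    fused_var = 1.0 / total  # always tighter than any single sensor
    return fused_dist, fused_var

# Hypothetical range-to-obstacle readings: radar is noisy, lidar is precise,
# the camera-derived estimate sits in between.
readings = [
    (41.8, 4.0),   # radar:  41.8 m, variance 4.0 m^2
    (40.2, 0.25),  # lidar:  40.2 m, variance 0.25 m^2
    (40.9, 1.0),   # camera: 40.9 m, variance 1.0 m^2
]

dist, var = fuse_estimates(readings)
print(f"fused distance: {dist:.2f} m (variance {var:.3f} m^2)")
```

The precise lidar reading dominates the fused result (about 40.4 m here), which is exactly why combining heterogeneous sensors beats relying on any single one.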
Speaking of the SoC, it is highly likely that the combination of its performance, power consumption, and level of integration is what attracted Bosch to the platform. A single chip with moderate power consumption means that Bosch engineers will be able to design relatively compact, reasonably priced self-driving systems and then help automakers integrate them into their vehicles.
Unfortunately, we do not know which car brands will use the autopilot systems co-developed by Bosch and NVIDIA. Bosch supplies automotive electronics to many carmakers, including PSA Group, which owns the Peugeot and Citroën brands and has agreed to acquire Opel.
Neither Bosch nor NVIDIA indicated when they expect actual cars featuring their autopilot systems to hit the road. But since NVIDIA plans to start sampling Xavier in late 2017 and mass-produce it in 2018 or 2019, it is logical to expect the first commercial applications based on the SoC to become available sometime in the 2020s, after the (extensive) validation and certification period required for automotive systems.
Source: NVIDIA
Comments
ddriver - Sunday, March 19, 2017 - link
You could program a machine to believe that killing is wrong; you could even program it to shed tears. But it will not be of its own accord, and it will not understand sorrow, remorse, regret, or pain.

I can make a machine that looks like a duck, swims like a duck, and quacks like a duck. If your logic is correct, then you should also be able to eat it like you would eat a duck. It would be fun to watch you eat a mechanical duck. Maybe then you will realize in which spectrum of the human intellect you are :)
ddriver - Sunday, March 19, 2017 - link
Furthermore, and not surprising at all, your concept of magic is just as much BS as your concept of science. You only know science from entertaining articles, and you only know magic from entertaining books and movies.

Magic is, and has always been, the things that fall outside of our understanding. It dates back to times when people didn't have the knowledge to explain what they saw. Magic was later picked up by religion, when it became the dominant form of establishment, to hide the science that religion wanted to keep to itself as an exclusive advantage for the purpose of population control. Science was labeled magic and witchcraft, and its practice was punishable by death. But that was not enough, for there still existed the possibility of people practicing it in secret, which is why it was also given the hoodoo-voodoo BS spin, sending pretty much every curious individual down some made-up nonsense dead end.
So yeah, magic is very much real, and what makes us tick is exactly it. And it will be magic until we understand it, which we may never really achieve, at least on this level of existence. If it is what makes us, then it is what encapsulates us: we cannot understand what we can't see from the outside, and we cannot see it from the outside because we do not and cannot exist outside of it.
Humans are as narcissistic as they are narrow-minded, and I don't mean individually, but as a species. We think we are the top of the pyramid: there was nothing, then atoms, molecules, proteins, single-cell organisms, complex life, animals, humans and... that's it. We cannot see past our noses, even though the progression and structure of reality are brutally obvious. And it is precisely because we believe we are top notch that we also believe there is nothing we cannot understand or achieve. And in a way this is true, but not only because there isn't any more than that, but because we lack the perception for it. There exists an infinite number of higher-order organizations of consciousness above us, of which we could be the forming cells or even atoms, but not with our current mindset. With that mindset we are stuck here on this rock; we will never really travel in space in the form of a consciousness bound to an axis of time and animated by falling uncontrollably through it. There are things we cannot grasp, much less achieve, at our current level, and what makes us tick is one of those things. We are about as aware of what's ahead of us as a microbe is aware of our human world. It just falls outside the threshold of perception.
name99 - Sunday, March 19, 2017 - link
ddriver, in this sequence of comments you and some others appear to be honestly trying to grapple with a real problem, so I'll take you at your word.

The basic problem, however, is that all of you are operating in a "systematic" mindset. This is the characteristic thinking of modernity (basically the West since ~1500), and its primary characteristic is an assumption that the world is legible. This is not exactly a belief that the world is rational, causal, and so on; it's the stronger belief that
- everything important about the world
- can be encapsulated in a few axioms
- from which everything else can be derived.
Look at this structure. It encompasses modern science, yes, but also modern law (as opposed to the more flexible way in which common law was handled), modern religion (the various fundamentalisms, starting with the Protestant Reformation), Kantian-style ethics, and various politico-economic theories.
The point I am trying to make is that this is not the ONLY way of thinking about the world --- Kantian-style ethical systems, for example, are not the only way to construct ethical systems. The fact that you all are discovering that cars constructed according to the "systematic" model cannot cope with the full variety of the world is a discovery about the "systematic" model, not about cars. The same would be just as true if humans were forced to live absolutely according to the supposed rationality of rules derived from a small set of axioms. (This lack of flexibility --- being forced to live according to actually poorly chosen, but supposedly universal and optimal, rules --- gets worse the weaker in society you are, hence why so much of life sucks for the poor, for school children, for the colonized, and so on.)
The solution is not to ditch rationality; it is to accept that the world is complex enough that it can't be captured in such a small cage. Mathematics learned this in the early 20th C (that's essentially what Gödel is all about), and likewise physics. But our legal and political systems, and our public discourse, are still operating according to this 19th C model. The best of us need to figure out how to move our society beyond this, not backwards to nihilism or selfishness, but to a rationality that understands its limits.
AI (or machine intelligence, or statistical learning --- if you think arguing about what "real" intelligence is is still valuable, even after reading the links below, I'm afraid you have nothing to contribute and need to sit at the kids' table while the adults have their conversation) is perhaps our best hope for society as a whole, not just a few smart individuals, to confront and then deal with these issues.
General framework:
https://vividness.live/2015/10/12/developing-ethic...
This is a slightly technical page that tries to show the problems with one very particular version of these "systematic" models of everything:
https://meaningness.com/probability-and-logic
You can work through the entire shebang here if you like:
https://meaningness.com
(I didn't write these pages and know nothing of their author. I have no interest in his [only occasional] asides about Buddhism. Overall they reflect the thinking of a very smart guy who spent the first part of his life at MIT trying to get a variety of AI schemes to work, and concluded from the experience that "our" [ie society's] thinking about these issues is much more broken than it needs to be.)
Meteor2 - Sunday, March 19, 2017 - link
Very interesting, name99. Personally I think humans over-complicate things. Look at animals: they perceive and they react in accordance with their innate purpose to survive and reproduce. We just added simulation and language (and thence co-operation and planning, though dogs do that too), and began to improve our survivability and offensive/defensive capabilities with sticks sharpened with stone tools, furs, and fire. Now we're here. I don't think the basics -- perception, purpose, simulation, execution -- will be too hard to instil in AI and robots. That's why I'm fascinated by process nodes -- it's one of the key enabling technologies.

Or maybe I've just watched Westworld too much.
name99 - Sunday, March 19, 2017 - link
If that sort of thing (how human thought is [perhaps] different from pre-human modes of thought) interests you, you'll want to read this:
http://www.ribbonfarm.com/2011/03/17/cognitive-arc...
Between that first stage of blooming, buzzing confusion à la William James, and today, we have an intermediate stage in Julian Jaynes' _The Origin of Consciousness in the Breakdown of the Bicameral Mind_, for which the Wikipedia article gives a summary:
https://en.wikipedia.org/wiki/Bicameralism_(psycho...
Meteor2 - Monday, March 20, 2017 - link
Thanks name99, I'd already come across bicameralism (not convinced), but that first link is fascinating. So is the article it links to -- how iron supplanted bronze. I've been reading about that too, just last week!

I found this interesting: http://www.todayifoundout.com/index.php/2010/07/ho...
It would seem that people who are profoundly deaf from birth, and not taught sign language, don't have an internal monologue, and show reduced cognition. I think that's telling.
name99 - Monday, March 20, 2017 - link
Thanks, that deafness link was interesting. I know I do all my thinking in natural language, and am amazed at the claims made by some that thinking happens in some sort of universal "pre-language" mentalese.

ddriver - Sunday, March 19, 2017 - link
The problem is our lack of understanding of the mechanics of our own reasoning. If we could model that, then it could be implemented in various ways: organic, analog electronics, digital electronics, quantum electronics. That's just a detail.

I can't seem to find people who can actually explain things in a reasonable way, even things about themselves. One notable example is preferences: why do people like what they like? The usual answer is "because I like it". They cannot give a logical explanation of the mechanics of how things influence them, and why they find something appealing or appalling.
Scientists are presented as those "very smart relative to the general population" individuals, but they aren't really any more intelligent; they are just more trained. Such scientists are very much narrow-minded: they are good at what they were trained to do and mediocre at everything else, and most of the time it is one single thing they are good at. It is very rare to see a scientist proportionally proficient in multiple disciplines.
So it comes as absolutely no surprise that scientists, relying on a set of knowledge pre-programmed into their heads to do a single thing, produce an "AI" that relies on a set of knowledge pre-programmed into its implementation to do a single thing.
It is just the best they can do. They do not understand the root mechanic of intellect, therefore they cannot model and reproduce it.
amdwilliam1985 - Sunday, March 19, 2017 - link
Your info is very outdated. Go to YouTube and watch some videos about how Google DeepMind works.

"It" learns by itself! No pre-programming needed.

You specify two things: input data and desired results. The AI ("machine learning") fills in the blanks (aka the black box, or self-programming) and connects the dots. Yes, it is very scary~
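For readers who want to see that idea in miniature: below is a toy Python sketch of supervised learning, fitting y = w*x + b by gradient descent. You supply inputs and desired outputs, and the machine fills in the parameters. DeepMind's systems are of course vastly more sophisticated; this only illustrates the principle the comment describes.

```python
# Toy supervised learning: given inputs and desired outputs, let gradient
# descent fill in the unknown parameters of y = w*x + b.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # the data secretly follows y = 2x + 1

w, b = 0.0, 0.0   # the "blanks" the machine fills in
lr = 0.01         # learning rate

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned: y = {w:.2f}*x + {b:.2f}")   # converges near y = 2x + 1
```

No rule "multiply by two and add one" was ever programmed; it emerges from the data, which is the point being made above.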
Meteor2 - Sunday, March 19, 2017 - link
Think computers can't learn?
http://www.zdnet.com/article/google-brain-simulato...
And that was five years ago. Computers can learn, can reason in novel situations, and can overcome challenges you might not expect.