The OpenGL stuff shouldn't be "impossible". Even the texture compression. I think developers can deal with that. Where Apple really shot itself in the foot is with the launch of the Metal API, though. Now they're stuck with Imagination for at least a few more years until they make it more abstract to work with multiple GPU architectures and not so..."metal". Or they can wait for OpenGL NG to appear, which will probably take just as much time.
How exactly did Apple "shoot itself in the foot" with Metal? They have a solution right now for mobile apps that rivals what is possible on other platforms. All the major game engines have already migrated to Metal. nVidia can show these generic OpenGL benchmarks all they want, but in practice, graphics-intensive apps on the A7 and A8 series chips are seeing far greater efficiency and performance improvements. OpenGL NG sounds great in concept, but it takes forever for a consortium like Khronos to develop new standards, and just as long for them to eventually be adopted. That is years away from becoming a reality. Yet Apple gets all of those benefits right now. From my perspective, this gives Apple a strong competitive advantage.
Well said, techconc. Not sure if you're into development, SoC design, or just a 'user', Krysto... but BOTH Apple's Metal and the Swift language were/are HUGE leaps forward, cutting through the peanut-butter layer over the GPU that is OpenGL ES, so developers have 'direct' access to the 'metal', AKA the GPU portion of the SoC. It's an amazing feat of 'software engineering' that took a huge load off the 'hardware engineering' side of the house, specifically because of this!
I own a Note 4 for my business and a 6+ as a personal driver. The former: quad core, ~2.7GHz, the Adreno 420, and 3GB of 'shared' SoC RAM. The latter: dual core, 1.5GHz, Imagination's solution for graphics, and 1GB of 'shared' SoC RAM. I love them both, for different reasons. BUT, play Asphalt 8 on both, then tell me 'more muscle, power, RAM, cores or core speed' is the reason I'm playing a smoother game on iOS vs. Android.
I'm ambidextrous and enjoy using both, same in the office or home environment. OS X is primary, but I've always had a Windows box since the big 'switch' a decade ago. Point being, software is damn near, and sometimes MORE, important than hardware to the end user's experience. No one outside of us dorks, geeks, and pocket-protector-wearing Homers has a clue what FinFET, latency, core clock speed, or hell... cores for that matter MEAN! They couldn't tell ya if they're rocking 1, 2, 3GB of RAM or NO RAM, lol.
The ultimate end experience is designed and defined by software and hardware working in synergy WITH a development community willing to step up and develop a million optimized apps for your system. If it's running iOS or Android, you're in luck; Windows is a bit tougher to 'win'. And if this SoC does indeed have the power/TDP numbers they're bragging about, well, Apple's never been one to change supply chains. There's a reason Tim is CEO, and that's the biggest: when you're dropping 100,000 products a year, you HAVE to have suppliers that can fulfill your orders and needs.
Are you honestly expecting a phone with a weaker GPU pushing 50% more pixels to outperform the other? Of course the iPhone is smoother in games; it's lower res than the Note 4.
Oh yeah, the Note 4 has to push more pixels than the 6+. However, a resolution that high is simply not necessary in the first place, and more importantly, over 30% of the pixel data the SoC has to process is nullified by the PenTile AMOLED. What a waste!
Metal has made it easier to access the GPU, and the reason Apple did this is the lack of power in their CPUs compared to Android devices. Yes, sure, GPUs can run apps with extra power, but so what? OpenGL has always been doing that! More developers know Java and OpenGL, which makes development easier, and every hardware vendor apart from Apple will optimise their hardware for it.
I would not want to compare Asphalt 8 between devices, as horsepower and muscle have nothing to do with it; it comes down to lazy work on the part of the game's creators.
Providing access to the metal will make a difference to apps, no doubt, but to some apps and not all. OpenGL already provides access to the GPU, so I'm not sure why it took Apple so long. I have a Nexus 9 and an iPad Air 2, and apart from Apple hype I can't see what the Air 2 has to offer in performance. The Nexus 9 outperforms the Air 2 in single-core, and so does its one-year-older GPU.
OpenGL is FAR outdated. It has way too many performance bottlenecks due to its aged design, and it doesn't scale very well with modern GPU/CPU architectures. Both MS and Apple recognized this, et voila, Metal and the upcoming DX12 are their answers.
Pity that Android can't keep up with this, stuck with the open-source mess.
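To make the "driver overhead" point concrete, here's roughly what the explicit model looks like in Metal: the expensive state validation happens once, when a pipeline object is built, and per-frame work is just encoding command buffers. This is only a minimal sketch using current Swift API names (the shader function names "vertexMain"/"fragmentMain" are made up for illustration), not code from any shipping engine:

```swift
import Metal
import QuartzCore

// Grab the GPU and a command queue once, up front.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

// Expensive validation happens HERE, once, when the pipeline state is built,
// not on every draw call as in classic OpenGL ES.
let desc = MTLRenderPipelineDescriptor()
let library = device.makeDefaultLibrary()                            // compiled .metal shaders in the app bundle
desc.vertexFunction   = library?.makeFunction(name: "vertexMain")    // assumed shader names
desc.fragmentFunction = library?.makeFunction(name: "fragmentMain")
desc.colorAttachments[0].pixelFormat = .bgra8Unorm
let pipeline = try! device.makeRenderPipelineState(descriptor: desc)

func drawFrame(passDescriptor: MTLRenderPassDescriptor, drawable: CAMetalDrawable) {
    // Per frame: just record commands and hand them to the GPU.
    guard let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) else { return }
    encoder.setRenderPipelineState(pipeline)
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    encoder.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()   // cheap submission; the CPU isn't re-validating driver state per draw
}
```

In classic OpenGL ES, by contrast, the driver has to re-validate the whole state machine behind every draw call, which is where a lot of the CPU-side overhead people complain about comes from.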
If OpenGL is FAR outdated and a mess, then why are they still supporting it? Why don't they try to contribute to it, to make it better? Wait, they can't do that; all they care about is money. Half of the internet runs on open source.
Everyone who thinks that the X1 is going to crush all the competition is going to get burned once more, just like with the K1. OEMs want an integrated solution. Yes, Nvidia has awesome GPU capabilities (duh), but that in and of itself isn't enough. It still doesn't have an integrated modem, so the smartphone market is dead on arrival.
Tablets have a better shot, but Nvidia is aiming for the car market for a reason. There may be a few one-off devices that they will aim for but overall, this isn't going to be a significant force in the high-end tablet space. S810 has the GPU capability to run 4K smoothly.
Apple's next GPU is going to be a lot better, just like every year. Can Nvidia compete on a 'total system basis'? The answer remains as it has always been: no.
Nvidia have Icera, their own 4G/LTE modem technology, so hopefully NVidia will have a version of the X1 with that integrated, or at least a version with a "glue-less" interconnect to their Icera module.
NVIDIA has not been pursuing smartphones for a couple years now. What advantage does the Snapdragon 810 have over the Tegra X1 in the high end tablet space? Qualcomm will have to compete based mostly on familiarity of the OEMs with their SOCs, price, and the fact that Samsung is unlikely to choose NVIDIA. If the 810 is available at an earlier date, that also might help them out during that time period.
The Snapdragon 810 doesn't even come close to matching the graphical prowess of the K1, even though it's just about a whole year newer.
The Snapdragon 805 saw little adoption. In tablets, there was the Kindle Fire HDX. That's... one whole design win for tablets. The K1 was in the Shield Tablet and the Nexus 9 (and the Xiaomi MiPad).
@kron I fail to see your point. Apple was able to put out a CPU/GPU that is both more powerful and more energy efficient than the nVidia based part. The original argument suggested that Apple should switch to nVidia. It's obvious from existing parts that there is no reason to do this. Your argument attempts to suggest otherwise, but fails to make the case. If we're basing the case off of future technology (like the X1) then you also need to discuss the Rogue 7 series chips that quite frankly look more impressive.
It is definitely more energy efficient and faster, but not significantly faster. They are on the same level, and there aren't games on iOS that even use that level of power right now.
ImgTec licenses a design, then anybody can do what they want with it. It's the same as ARM with reference designs and in-house designs (Cyclone, Denver, etc.). We are always talking about ARM. What the hell are you talking about?
Just because there is no GX6650 around doesn't mean it cannot compete. Speaking of speculation, maybe a higher-clocked GX6650 could compete, who knows. Apple did its own custom version, with a lower clock and more clusters. Either way, it's always PowerVR inside.
Nvidia is only catching up on process node, because what they've shown, when comparing apples to apples, is:
1) They have a much faster custom 64-bit CPU (the A8X needed 50% more CPU cores to edge Denver K1).
2) They have a much faster GPU architecture (the A8X also needed 50% more GPU cores to edge Denver K1, but gets destroyed by Tegra X1 on the same 20nm node).
As we can see, once it is an even playing field at 20nm, A8X isn't going to be competitive.
How do you get only 33% for A8X? A8 = 2 core, Denver K1 = 2 core, A8X = 3 core. 1/2 = 50% increase.
Same for A8X over A8. GPU cores went from 4 to 6, again, 2/4 = 50% increase. Total transistors went from 2Bn to 3Bn, again 50% increase.
In summary, Apple fully leveraged 20nm advantage to match Denver K1 GPU and edge in CPU (still losing in single-core) using a brute-force 50% increase in transistors and functional units.
Obviously they won't be able to pull the same rabbit out of the hat unless they go to FinFET early, which is certainly possible, but then again, it's not really a magic trick when you pay a hefty premium for early access to the best node, is it?
Bottom line is Nvidia is doing more on the same process node as Apple, simple as that, and that's nothing to be ashamed of from an engineering standpoint.
The A8X has 8 GPU clusters. And I still don't get your point: you think the A8X is worse because it's brute-force ~50% faster? Yeah, it is brute force, but I don't see how you can perceive that as a bad thing.
They will certainly try to push FinFET, and rather hard I think.
And how can you say that Nvidia is doing more on the same node while boasting just above about how Apple is the one doing more and how that's a bad thing?
Wow, the A8X is 8 clusters and doesn't even offer a 100% increase over the A8? Even worse than I thought; I guess I missed that update at some point over the holiday season.
The point is that in order to match the "disappointing" Denver K1, Apple had to basically redouble their efforts to produce a massive 3Bn transistor SoC while fully leveraging 20nm. You do understand that's really not much of an accomplishment when you are on a more advanced process node right?
Sure, Apple may push FinFET hard, but from everything I've read, FinFET will be more widely available for ramp compared to the problematic 20nm, which always had limited capacity outside of the premium allocation Apple pushed for (since they obviously needed it to distinguish their otherwise unremarkable SoCs).
It should be obvious why I am saying Nvidia is doing more on the same process node: when you compare apples to Apples, Nvidia's chip on the 28nm node is more than competitive with the 20nm Apple chips, and when both are on 20nm, it's going to be no contest in Nvidia's favor.
Logical conclusion = Nvidia is doing more on the same process node, i.e. outperforming their competition when the playing field is leveled.
Chizow, the more I read, the more I laugh. You compare clusters with cores; they are different technologies, and you still state this crap. Maybe it would be better to compare how capable each of them is in terms of GFLOPS at the same frequency? That is what counts. Regarding your absurd discussion of process node: since the Nvidia chip is so efficient, I look forward to seeing it in smartphones.
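For what it's worth, here is the back-of-envelope version of that comparison (a sketch only: the 32-ALU-per-cluster figure and the ~450MHz A8X GPU clock below are public estimates, not confirmed specs):

\[
\text{peak FP32 throughput} \approx N_{\text{ALU}} \times 2\ \text{(FMA)} \times f_{\text{clock}}
\]
\[
\text{Tegra K1: } 192 \times 2 \times 0.95\,\text{GHz} \approx 365\ \text{GFLOPS},\qquad
\text{A8X (8-cluster Rogue XT): } (8 \times 32) \times 2 \times {\sim}0.45\,\text{GHz} \approx 230\ \text{GFLOPS}
\]

At the same frequency the 8-cluster Rogue actually has more raw ALU throughput per clock (512 vs. 384 FLOPS per cycle); Nvidia makes the difference up with clock speed, and neither figure says anything about delivered performance per watt.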
The A8X isn't on any phones either. In fact, they left it out of both iPhones AND the iPad Mini.
And keep in mind, even the Qualcomm Snapdragon 805 had few design wins... only the Kindle Fire HDX for tablets. They scored two major phones (Nexus 6 and Note 4), but the other manufacturers haven't used it.
Core counts are irrelevant across GPU architectures, they're just different ways of doing something. If someone gets to the same power draw, performance, and die size with 100 cores as someone else does with 10, what does it matter?
Uh, the A8 is an actual product that exists and, wait for it, you can actually BUY a product with it in there. This is another mobile paper launch by Nvidia, with the consumer having no idea when or where it will actually show up. The only thing real enthusiasts should care about is the companies that can actually deliver parts people can actually use. Nvidia still has a loooong way to go in that department. Paper specs mean shit.
Careful, you do mean A8X, right? Because Denver K1 is an actual product that absolutely stomps the A8. Only after Apple somewhat unexpectedly "embiggened" their A8 by increasing transistors and functional units 50% did they manage to match the K1's GPU and edge its CPU in multi-core (by adding a 3rd core).
To say Denver K1 didn't deliver is a bit of a joke, since it is miles ahead of anything on the Android SoC front, and only marginally bested in CPU due to Apple's brute-force approach with the A8X while leveraging 20nm early. We see that once the playing field has been leveled at 20nm, it's no contest in favor of Tegra X1.
I mean a product that is widely available to CONSUMERS, dude. And please stop with the "stomping" stuff. It means nothing about its performance given its also vastly higher power consumption. The A8 can exist in a smartphone. What smartphones have the K1? Oh that's right, none, because you would get an hour of use before your battery was dead. Mobile is about performance and power. You can diss Apple all you want, but from an SoC perspective they do it better than anyone else right now.
@Jumangi try to comprehend what he is saying. Apple used a superior process and more transistors on its A8X just to edge the K1 in some CPU benchmarks, while core for core Nvidia's is actually more powerful. The GPU in the K1 also has near desktop feature parity, e.g. OpenGL 4.4; features like hardware tessellation are absent from the A8X.
That's great. It really is, but let's be honest: the A8X is faster than the K1.
And at the end of the day, that is sadly all that matters to the vaaaaaast majority of consumers.
Frankly, even that barely matters. What does, though, is that games run better on my tablet than they do on yours, so to speak. (Actually they likely run better on yours, since I'm still using a Nexus 10 xD)
But sure, the new paper launch from NV late this year or early next year will be great, and the 2.5 devices the X1 will appear in will be amazing, making sales in the hundreds of thousands.
The point is that the Tegra K1 Denver on 28nm beats the Apple A8 fairly comprehensively on 20nm with the same number of cores. Apple stuck on 50% more cores and 50% more transistors to allow the A8X on 20nm to have a slight edge over the Tegra K1 Denver. This means if Tegra K1 is put on 20nm, it will beat the 3 core Apple A8X with two cores, and the same thing will happen when both move to 16nm.
Oh. Really? Denver K1 is not even as fast as the A8X. Not to mention that it uses more than 2 times the energy. I really do not understand people like you going around saying how good Nvidia's shit is.
(1) I wouldn't rave too enthusiastically about Denver. You'll notice nV didn't... Regardless of WHY Denver isn't in this chip, the fact that it isn't is not a good sign. Spin it however you like, but it shows SOMETHING problematic. Maybe Denver is too complicated to shift processes easily? Maybe it burns too much power? Maybe it just doesn't perform as well as stock ARM in the real world (as opposed to carefully chosen benchmarks)?
(2) No-one gives a damn about "how many GPU cores" a SoC contains, given that "GPU core" is a basically meaningless concept that every vendor defines differently. The numbers that actually matter are things like performance and performance/watt.
(3) You do realize you're comparing a core that isn't yet shipping with one that's been shipping for three months? By the time X1 actually does ship, that gap will be anything from six to nine months. Hell, Apple probably has the A9/A9X in production TODAY at the same level of qualification as X1; they need a LONG manufacturing lead time to build up the volumes for those massive iPhone launches. You could argue that this doesn't matter since the chip won't be released until September, except that it is quite likely that the iPad Pro will be launched towards the end of Q1, and quite likely that it will be launched with an A9X, even before any Tegra X1 product ships.
1) Huh? Denver is still one of Nvidia's crowning achievements, and the results speak for themselves: fastest single-core ARM performance on the planet, even faster than Apple's lauded Cyclone. Why it isn't in this chip has already been covered; it's a time-to-market issue. Same reason Nvidia released the 32-bit ARM version of Tegra K1 early and the 64-bit Denver version late: time to market. Maybe, in the tight 6-month window they would have needed between shipping Denver and working on Erista, they simply didn't have enough time for another custom SoC? I'm not even an Apple fan, and I was impressed with Cyclone when it was first launched. But suddenly, fastest single-core, and a dual-core outperforming 4- and even 8-core SoC CPUs, is no longer an impressive feat! That's interesting!
2) Actually, anyone who is truly interested does care, because on paper, a 6-core Rogue XT was supposed to match the Tegra K1 in theoretical FLOPS performance. And everyone just assumed that's what the A8X was when Apple released the updated SoC that matched TK1 GPU performance. The fact that it took Apple a custom 8-core variant is actually interesting, because it shows Rogue is not as efficient as claimed, or conversely, that Tegra K1 was more efficient (not as likely, since real-world synthetics match its claimed FLOPS counts). So if 6 cores were supposed to match Tegra K1 but it took 8, Rogue XT is 33% less efficient than claimed.
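To spell out the arithmetic in that last sentence (a sketch, taking the "6 clusters should match K1" paper expectation at face value):

\[
\frac{8}{6} - 1 \approx 33\%\ \text{more clusters needed than expected},\qquad
\frac{6}{8} = 0.75 \;\Rightarrow\; \text{each cluster delivers roughly } 25\%\ \text{less than the paper spec implied.}
\]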
3) And you do realize only a simpleton would expect Nvidia to release a processor at the same performance level while claiming a nearly 2x increase in perf/W, right? There are live demos and benchmarks of their new X1 SoC for anyone at CES to test, but I am sure the same naysayers will claim the same as they did for the Tegra K1 a year ago, saying it would never fit into a tablet, it would never be as fast as claimed, yada yada yada.
Again, the A9/A9X may be ready later this year, but the X1 is just leveling the playing field at 20nm, and against the 20nm A8/X we see it is no contest. What trick is Apple going to pull out of its hat for A9/A9X since they can't play the 20nm card again? 16nm FinFET? Possible, but that doesn't change the fact Apple has to stay a half step ahead just to remain even with Nvidia in terms of performance.
1) He was saying: why didn't NV continue with the Denver design? If it's so efficient with only 2 cores, why not shift it to 20nm easily? Because they can't, and that's it. The rest is speculation.
2) You're still comparing apples (not Apple) with pears. Every vendor puts in its own proprietary technology according to its own market strategy; what matters is to figure out how many GFLOPS and texels each is capable of at the same frequency and wattage. You don't even know how an Img cluster is built, nobody does, and you still compare it with NV CUDA cores. The Rogue XT frequency is set around 200MHz, the Tegra K1 at 950MHz. Again, what the heck are you talking about?
3) It is still a prototype with a fan, and nobody could check the real frequencies, even though 1GHz seems reasonable. How dare you compare a tablet with a reference board?
Again, the A9/A9X already exist now as prototypes. Apple doesn't sell chips and doesn't do that sort of marketing. They need to ship their product on a one-year life cycle. You must live on another planet not to understand that.
>>He was saying: why didn't NV continue with the Denver design? If it's so efficient with only 2 cores, why not shift it to 20nm easily? Because they can't, and that's it. The rest is speculation.<<
There is a simple answer to that: Apple has booked all the 20nm production slots (at foundry TSMC) to meet demand for the A8. This has pushed 16nm production back to late 2015. That is the reason for the delay in Denver, which nVidia originally intended for the Tegra K1 Denver successor, the Pascal chip, and that is the reason for the delay in Pascal.
20nm is a risk-reducing interim technology which almost everybody is skipping. Apple originally wanted 14/16nm for the A8 and only used 20nm because they couldn't wait for 14/16nm to release the current iPhone. nVidia is only producing the Tegra X1 at 20nm because they are worried about the same problem happening at 16nm. With 20nm, they know that Apple will be moving off the node with the next iPhone, so there will definitely be spare production capacity.
Since you're such an expert on Img clusters, you can explain why some implementations of the G6230 (2 clusters, in Allwinner SoCs) are as fast as the A7's G6430 (4 clusters). Maybe because the former has a higher clock frequency than the latter? But they are pretty much the same in terms of performance/watt... there we go.
The cores aren't important, the performance is. That is the whole point. The responsiveness depends on single core performance and to a lesser extent two core performance, and on this point Denver beats the crap out of the Apple A8 and A8X. Therefore the fact that Apple added an extra core to the A8 to get A8X is about benchmark bragging rights, and the A8X real world performance (based on single and dual core performance) lags the Tegra K1 Denver even with A8X on 20nm and Denver K1 on 28nm - not good for Apple.
The K1 is four months or so older than the A8X. It crushed every chip very badly for four whole months. If anything, everyone else was/is playing catch-up. Not to mention that the Snapdragon 810, yet to be released, does not even come close to the K1 despite being a year newer.
> "The K1 is four months or so older than the A8X." How do you come up with that? A8X was in production way before K1. It is just you see A8X only on ipad, when NVidia is showing off all around the test board that does not actually in production.
@Mayuyu; I wouldn't be surprised if this is the final outcome of the Nvidia IP Patent lawsuits and why Apple was excluded from the original litigation. My bet is they (Apple) have already engaged in serious talks with Nvidia and they are both just awaiting a favorable outcome against Samsung/Qualcomm before moving forward.
They'll gain access to better GPU IP immediately; if they are going to pay a licensing fee anyway, they might as well pay (presumably) more for better IP, I suppose.
From a software support standpoint, they'd gain 100% compatibility and portability with all the latest OpenGL specs, so you'd have a shared graphics platform with their iOS platforms. Then they wouldn't have to muck around with Metal and the iOS API so much, which would make it easier for them to merge the two platforms as the rumors have suggested.
None of what you state is a material advantage to Apple.
The essence of your belief is that Nvidia will have a more powerful GPU solution than what Apple can come up with on their own or through ImgTec. A lot of people don't think that will be true.
On top of that, this Tegra X1 SoC does not appear suitable for phones, and moreover, Nvidia appears to be concentrating on embedded applications with higher TDP with the X1. This would pose a problem for Apple as the iPhone is currently their most important product.
Not having to support two different graphics APIs, one of which is being developed from scratch, isn't a material advantage to Apple?
And a lot of people don't think what Nvidia has is already better than what Imgtech can produce? Who are you referring to, exactly? Realists already know Nvidia's last-gen Kepler is better than PowerVR Rogue. Remember when PowerVR released roadmaps and those same people expected the 6-core PowerVR Rogue XT to match Tegra K1 based on paper-spec FLOP counts alone? What happened in reality? Apple needed to add ANOTHER 2 cores in a custom 8-core Rogue XT configuration to trade blows with the K1.
The whole point of licensing is that even if the Tegra SoC as a whole doesn't fit their needs, they can license the IP and integrate it however they like into their own SoC.....
He is talking about Apple licensing a GPU from NVIDIA instead of from IMG. The Tegra X1 is not a GPU, it is an SOC targeted at a specific market segment. The target of the Tegra X1 has been set by the market NVIDIA thinks it can penetrate more than by the abilities of the GPU it contains. NVIDIA was not able to penetrate the smartphone market because of a lack of a good modem option. They have since stopped trying, and this is well-known. Apple has access to a modem, so this is not a concern for them. All they need to consider is if licensing a GPU, or GPU IP, from NVIDIA helps them more than licensing from IMG. I think on the same process technology, NVIDIA's offering would be superior, no matter if used in a phone or tablet.
And they also generate roughly $266 million per year in revenue as a result of their cross-licensing agreement/settlement with Intel. That obviously makes Intel the most likely suitor as the 1st Nvidia SoC GPU IP licensee, since they already have Nvidia GPU IP in their desktop CPU IGPs but still license IMGtech IP for their mobile SoCs.
But my bet is on Apple being the 1st major licensee, largely dependent on the outcome of the Samsung/Qualcomm IP litigation.
Per Samsung's track record, the legal eagles will drag things on for years to come. Not sure what Qualcomm will do. If you think Apple is in negotiations with Nvidia, which I agree with, then they should be coming to an agreement sometime soon to give the GPU engineers Apple has hired time to work their mojo. I'm sure Apple would like nothing better than to leave Samsung in the dust by exploiting the advanced gaming market they are aspiring to, along with a rumored do-all living room console.
I'm not totally sure why all the NV and Apple back and forth. I see this as a chip to compete with Apple for Android tablets. Why would Apple jump the ImgTec ship? At release time they have had the best GPU in an SoC for their iPads. ImgTec has been good for Apple, iteration over iteration.
Might Apple be interested in licensing some IP from NV? Maybe. Apple does a lot of custom work and has a desire to remain in the lead on the mobile SoC front at device release.
Exactly. Denver has been a massive disappointment. It really needs to be on 20nm. At 28nm its performance is too inconsistent, and it throttles too much.
I just find it funny how arrogant Nvidia is. They're always boasting and boasting with these announcements, and yet by the time they ship, they're rarely leading (or in the case of the K1, they're leading, but in literally only 1 shipping device).
Denver needs to be at 16nm. And we might still see it at the end of the year or early next year, if Nvidia releases the X1 on 20nm and then an X2 (I really hope they don't release another "X1" like they did this year with the K1, which made things very confusing) with Denver on 16nm.
Unlikely that nV will release 16FF this year or early next year. Apple has likely booked all 16FF capacity for the next year or so, just like they did with 20nm. nV (and Qualcomm and everyone else) will get 16FF when Apple has satisfied the world's iPhone 6S and iPad 2015 demands...
Like in the presentation, boasting about how the Tegra K1 is still the best mobile chip (despite the A8X matching it at significantly lower power, which they don't have a graph for) despite being released a "year before", while the A8X was released just "now". (By nVidia's logic, the A8X was "released" just a few hours after the K1, because Imagination had announced the Series 6XT GPUs at CES 2014.)
Tegra K1 is a 28nm part and the A8X is a 20nm device. The Shield tablet did launch 3 months before the iPad Air 2. Apple has a huge advantage in time to market. They can leverage the latest manufacturing technologies, and they don't have to demonstrate a product and secure design wins. Even though the shield tablet is an NVIDIA design, I doubt they have such tight control of their suppliers as Apple has, and they can't leverage the same high-volume orders. Apple designs a chip and designs a known product around that chip while the chip is being designed, and they can count on it selling in high volume. So you are making an unfair comparison of what NVIDIA is able to do and of the strength of the underlying architecture. If you want to compare the Series 6XT GPU architecture with K1's GPU architecture I think it should be done on the same manufacturing technology.
Well said. Apple's advantage is parallel development and time to market. Their GPU architecture is not that much *better* than their competitors. In fact I'd say that Nvidia has had a significant advantage when it comes to feature set and performance per watt on a given process node since the K1.
Maybe an advantage in feature set, but performance per watt?
So if you want to compare: for example, the Xiaomi MiPad consumes around 7.9W when running the GFXBench battery life test, and that is with performance throttled down to around 30.4 fps on-screen. A very similar tablet, the iPad mini with Retina display and its A7 processor (actually a 28nm part!), consumes just 4.3W while running at 22.9 fps the whole time.
So I am asking where that "class leading" efficiency and "significant advantage when it comes to performance per watt" that Nvidia is claiming to achieve actually is, because I don't see anything like it.
Looking at the gfxbench website, under "long-term performance" I see 21.4 fps listed for the iPad Mini Retina and 30.4 fps listed for the Mi Pad, maybe this is what you are talking about. That is a roughly 40% advantage in performance for the Mi Pad. I can't find anything that says anything about throttling or the number of watts being drawn during this test. What I do see is another category listed immediately below that says "battery lifetime", where the iPad Mini Retina is listed at 303 minutes and the Mi Pad is listed at 193 minutes. The iPad Mini Retina has a 23.8 watt-hour battery and the Mi Pad has a 24.7 watt-hour battery. So this seems to imply that the iPad Mini Retina is drawing about 4.7 watts and the Mi Pad about 7.7 watts, which comes out to the Mi Pad using about 63% more power.
40% more performance for 63% more power is a much closer race than the numbers you quoted (yours come out to about a 33% increase in performance and an 84% increase in power consumption, which is very different), and one must remember the circumstances of the comparison. Firstly, it is a comparison at different performance levels (this part is fair, since juicytuna claimed that NVIDIA has had a performance-per-watt advantage); secondly, it is a long-term performance comparison for a particular testing methodology; and lastly and most importantly, it is a whole-system comparison, not a comparison of just the GPU power consumption or even the SOC power consumption.
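For anyone checking the arithmetic, this is the whole-system estimate I'm using (battery capacity divided by rundown time; again, it's system power, not SoC power):

\[
P_{\text{iPad mini Retina}} \approx \frac{23.8\ \text{Wh}}{303/60\ \text{h}} \approx 4.7\ \text{W},\qquad
P_{\text{Mi Pad}} \approx \frac{24.7\ \text{Wh}}{193/60\ \text{h}} \approx 7.7\ \text{W}
\]
\[
\frac{30.4\ \text{fps}}{21.4\ \text{fps}} \approx 1.42\ ({\sim}40\%\ \text{more performance}),\qquad
\frac{7.7\ \text{W}}{4.7\ \text{W}} \approx 1.63\ ({\sim}63\%\ \text{more power})
\]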
Yeah exactly, when you've got two similar platforms with different chips, I think it's safe to say that the Tegra pulls significantly more than the A7, because those ~3 additional watts (I don't know where you got your numbers; I know the Xiaomi has 25.46Wh, that the iPad lasts 330 minutes, and that A7 iPads also push out T-Rex at around 23 fps since the iOS 8 update) have to go somewhere. What I am trying to say is: imagine how low-powered the A7 is if the entire iPad mini at half brightness consumes 4.7W, and how huge those ~3W that more or less come from the SoC actually are. You increase the power draw of the entire tablet by over half, just to get 40% more performance out of your SoC. The Tegra K1 in the MiPad has a 5W TDP, more than the entire iPad mini! Yet it can't deliver performance that's competitive enough at that power. It's like you're a 140 lb man who can lift 100 pounds, and you train a lot until you put on 70 pounds of muscle (pump more power into the SoC) to weigh 210 or more, and you can still only lift about 140 pounds. What a disappointment!
What I see is a massive increase in power consumption with not-so-massive gains in performance, which is not typical of efficient architectures like Nvidia is claiming the Tegra K1 is. That's why I think Nvidia kind of failed to deliver on their promise of a "revolution" in mobile graphics.
I got my benchmark and battery life numbers from the gfxbench.com website, as I said in my reply. I got the iPad's battery capacity from the Apple website. I got the Mi Pad's battery capacity from a review page that I can't find again right now, but looking at other places it may have been wrong; WCCFtech lists 25.46 W-h like you did. I don't know where you got YOUR numbers. You cannot say they are "two similar platforms" and conclude that the comparison is a fair comparison of the underlying SOCs. Yes, the screen resolutions are the same, but just imagine that Apple managed to squeeze an extra .5 watts out of the display, memory, and all other parts of the system compared with what the "foolish Chinese manufacturers" were able to do. Adding this hypothetical .5 watts back would put the iPad Mini Retina at 5.2 watts, and the Mi Pad would then be operating at 40% more performance for 48% (or 52%, using the larger battery size you gave for the Mi Pad) more power usage. Since power usage does not scale linearly with performance, this could potentially be considered an excellent trade-off.
Your analogy, btw, is terrible. The Mi Pad does not have the same performance as the bulked-up man in your analogy; it has a whole 40% more. Your use of inexact words to exaggerate is also annoying: "I see a massive increase in power consumption with not-so-massive gains in performance" and "You increase the power draw by over half just to get 40% more performance". You increase the power by 60% to get 40% more performance. That has all the information. But the important point is that it is not an SOC-only measurement, and so the numbers are very non-conclusive from an analytical standpoint.
What I see from those numbers is the fact that Tegra is nowhere near 50% more efficient than the A7, like Nvidia is claiming.
When the GFXBench battery life test runs, the display and the SoC are the two major power draws, so I thought it was reasonable to treat the other power-consuming parts as negligible. So the entire iPad mini pulls 4.9W (I don't know why I should add another 0.5W if it doesn't pull that much) and the MiPad pulls 7.9W. Those are your numbers, which actually favor Nvidia a bit.
To show you that there is no way around that fact, I will lower the MiPad's consumption by a watt, just to favor Nvidia even more.
Now that we have 4.9 and 6.9W for the two tablets, I will subtract around 1.5W for display power, which should be more or less the same for both tablets.
So we have 3.4 and 5.4W for everything but the display, and most of that will be SoC power. And we get that the Tegra K1 uses roughly 60% more power than the A7 for 40% more performance, in a scenario that favors Nvidia so much it's extremely unfair.
And even if we take this absurd scenario and scale the Tegra K1's power consumption back down quadratically, 1.5*(1.4)^(-2), we still get that even at the A7's level of performance the K1 would consume over 75% of the A7's power for the same performance. That is a number that is way, way, way skewed in favor of Nvidia, and it still doesn't come close to the "50% more efficient" claim, which would require the K1 to consume just 2/3 of the power for the same performance.
So please tell me how increasing the power draw of the ENTIRE tablet by 60%, just to get 40% more GPU performance out of the SoC, which is a SINGLE part, just a subset of the total tablet power draw, can be interpreted as Nvidia's SoC being more efficient. Because however I spin it, I am not seeing 3x the performance and 50% more efficiency from K1 tablets compared to A7 tablets. I see that K1 tablets throttle to nowhere near 3x faster than A7 iPads, and they run down their batteries significantly faster. And if the same is true for the Tegra X1, I don't know why anybody should be excited about these chips.
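To spell out the scaling step above (a rough sketch; it assumes power scales roughly with the square of performance through voltage/frequency, which is only an approximation):

\[
P_{\text{K1 scaled to A7 perf}} \approx 1.5 \times (1.4)^{-2} \approx 0.77
\]

i.e. even after scaling down, the K1 would still draw roughly 77% of the A7's power at equal performance, while a "50% more efficient" part would need only about \(1/1.5 \approx 67\%\).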
You don't think it's possible to save power in any other component of the system than the SOC? I think that's a convenient and silly claim. You can't operate under the assumption that the rest of the two very different systems draw the exact same amount of power and so all power difference comes from the SOC. Obviously if you want to compare SOC power draw you look at SOC power draw. Anything else is prone to great error. You can do lots of very exact and careful calculations and you will probably be completely inaccurate.
That's comparing whole-SoC power consumption. There's no doubt Cyclone is a much more efficient architecture than the A15/A7. Do we know how much this test stresses the CPU? Can it run entirely on the A7s, or is it lighting up all 4 A15s? Not enough data.
Furthermore, the performance/watt curve on these chips is non-linear, so if the K1 were downclocked to match the performance of the iPad, I've no doubt its results would look much more favourable. I suspect that is why they compare the X1 to the A8X at the same FPS rather than at the same power consumption.
Not if one wants to compare architectures, no. There is no reason why in an alternate universe Apple doesn't use NVIDIA's GPU instead of IMG's. In this alternate universe, NVIDIA's K1 GPU would then benefit from Apple's advantages the same way the Series 6XT GPU benefits in the Apple A8X, and then the supposed point that GC2:CS is trying to make, that the K1 is inherently inferior, would, I think, not hold up.
Apple would never use Nvidia at the power consumption levels it brings. The power is pointless to them if it can't be put into a smartphone level device. Nvidia still doesn't get why nobody in the OEM market wants their tech for a phone.
But the NVIDIA SOCs are on a less advanced process node, so how can you know that? You seem to be missing the whole point. The point is not what Apple wants or doesn't want. The point is to compare NVIDIA's GPU architecture to the PowerVR series 6XT GPU. You cannot directly compare the merits of the underlying architecture by comparing performance and power efficiency when the implementations are using different sized transistors. And the question is not the level of performance and power efficiency Apple was looking for for their A8. The question is simply peak performance per watt for each architecture.
@Yojimbo The Shield was released with the Cortex A15-based Tegra K1, not the Denver-based K1. The former is not competitive with regards to CPU performance, the latter plays in the same league. AFAIK the first Denver-based K1 product was the Nexus 9. Does anyone know of any tablets which use the Denver-based K1?
Apple sells products that have a one-year life cycle; they don't sell chips, and therefore they don't need to do any marketing in advance the way NV punctually does at every CES.
It's going to 16nm FinFET later this year (Parker). As noted here, it's NOT in this chip due to time to market, and there's probably not as much to be gained by shrinking it to 20nm vs. going straight to 16nm FinFET anyway. Even Qcom went off-the-shelf for the S810, again for time to market.
Not sure how you get that Denver is a disappointment. It just came out... LOL. It's a drop-in replacement for anyone using the 32-bit K1 (pin-compatible), so I'm guessing we'll see many more devices pop up quicker than with the first rev, but even then it will have a short life due to X1 and what is coming in H2 with Denver yet again (or an improved version).
What do you mean the K1 is in ONE device? You're kidding, right? Jeez, just go to Amazon and punch Nvidia K1 into the search: Acer, HP, NV Shield, Lenovo, Jetson, Nexus 9, Xiaomi (the MiPad isn't sold on Amazon, but you get the point)... The first 4 SoCs were just to get us to a desktop GPU. The real competition is just starting.
Building the CPU wasn't just for mobile either. You can now go after desktops/higher-end notebooks etc. with NO WINTEL crap in them and all the regular PC trimmings (big PSU, huge fan/heatsink, HDDs, SSDs, discrete GPU if desired, 16-32GB of RAM, etc.). All of this is timed perfectly with 64-bit OSes getting polished up for MUCH more complicated apps. The same thing that happened to low-end notebooks with Chromebooks will now happen to low-end PCs at worst, and surely more later as apps advance on Android etc. and SoCs move further up the food chain in power and start running in desktop models at 4GHz with fans/heatsinks (with a choice of discrete GPU when desired). With no Wintel fee (a copy of Windows + Intel CPU pricing), they will be great for getting poor people into great gaming systems that do most of what they'd want otherwise (internet, email, docs, media consumption). I hope they move here ASAP, as AMD is no longer competition for Intel CPU-wise.
Bring on the ARM full-PC-like box! Denver was originally supposed to be x86 anyway, LOL. Clearly they want in on Intel/AMD CPU territory, and why not, at CPU vs. SoC pricing? NV could sell an amped-up SoC at 4GHz for $110/$150 vs. Intel's top-end i5/i7s ($229/$339). A very powerful machine for $200 less cash but roughly the same perf (and when you take out the Windows fee as well, you probably save another $200 or so). Most people in this group won't miss the Windows apps (many won't even know what Windows is, having grown up on a phone/tablet etc.). Developing nations will love these as apps like the Adobe suite (fully featured) etc. get ported, making these cheap boxes powerful content creators and potent gamers (duh, NV GPU in them). If they catch on in places like the USA as well, Wintel has an even bigger headache and will need to drop pricing to compete with ARM and all its ecosystem brings. Good times ahead in the next few years for consumers everywhere. This box could potentially run Android, Linux, SteamOS, and Chrome in a quad-boot, giving massive software options at a great price for the hardware. Software for 64-bit ARM will just keep growing yearly (games and advanced apps).
The K1 has shipped in three high-end Android tablets: the Nvidia Shield Tablet, the Xiaomi MiPad, and the Nexus 9.
Now, how many tablets got a Snapdragon 805? Exynos 5433?
The Tegra K1's market performance is simply the result of the fact that the high-end tablet market is taken up by Apple, and that it doesn't compete in the mid-range and low end.
It's the result of power consumption that's too high, which OEMs prefer to keep low.
That's why the Tegra K1 is only used by foolish Chinese manufacturers like Xiaomi (just like the Tegra 4 in a phone), by Google in desperate need of a non-Apple high-end 64-bit chip (to showcase how 64-bit it is), and by Nvidia themselves.
I think you're right that the K1 is geared more towards performance than other SOCs. The K1 does show good performance/watt, but it does so with higher performance, using more watts. And you're right that most OEMs have preferred a lower power usage. But it doesn't mean that the K1 is a poor SOC. NVIDIA is trying to work towards increasing the functionality of the platform by allowing it to be a gaming platform. That is their market strategy. It is probably partially their strategy because those are the tools they have available to them; that is their bread-and-butter. But presumably they also think mobile devices can really be made into a viable gaming platform. Thinking about it in the abstract it seems to be obvious... Mobile devices should at some point become gaming platforms. NVIDIA is trying to make this happen now.
Only one of the three devices you mention runs on Denver cores (Nexus 9) and performance reviews have been very uneven for that device, to say the least.
@jcwalla, I'm not sure there's "no fruit" from their investment, they are now on their 6th major iteration of Tegra (1-4, K1, X1) with a major variant in Denver K1 and while their marketshare and Tegra revenue won't reflect it, they are clearly the market leader in terms of performance for Android SoCs while going toe-to-toe with the monstrous Apple. Not bad, considering I am positive Apple is probably investing more than Nvidia's yearly revenue in keeping their SoC's relevant. ;)
Breaking into an established market and growing a business from scratch is hard, but Nvidia clearly sees this as an important battle that needs to be fought. As a shareholder and tech enthusiast, I agree, in 10 years there's no doubt I would want an Nvidia GPU in whatever handheld/thin device I am using to power my devices.
The problem is that Nvidia lacks the "killer app" that really distinguishes their SoC over others. Even Apple is beginning to understand this as there's nothing on iOS that remotely takes advantage of the A8X's overkill specs. Nvidia needs to grow the Android/mobile gaming market before they really distinguish themselves, and from what I have seen, THAT is their biggest problem right now.
Tegra is an important LOB for NVIDIA, but I'm more talking about how Denver has been received. When it was in the rumor stage, the scuttlebutt seemed to be about how they were going to marry ARMv8 CPU cores with discrete cards and take over the HPC world, etc. Then that got filtered down to "Yeah Denver is just a custom ARMv8 core for Tegra." (Which isn't earth-shattering; Qualcomm and Apple had been doing custom designs for a long time.) And now it doesn't seem like Denver is really anything special at all.
But did it not involve a lot of hype, money, and time over all those years?
Well, I think that an HPC embedded ARM core in a massive GPGPU is still a possibility, but again, you're looking at a very focused usage scenario, one which I think was pushed back by the process node delays at 20nm and now 16nm FinFET. We have seen since then that Nvidia's roadmaps have changed accordingly, with some of the features migrating vertically to new generation codenames.
But the important point is that Nvidia's investment in mobile makes these options and avenues possible, even if Tegra isn't lighting up the P&L statements every quarter.
NVIDIA seems to be marrying themselves to IBM in the HPC space, but maybe ARM HPC is a different segment than what PowerPC occupies? I don't know. But IBM has a lot of experience and expertise in the area. Maybe NVIDIA thought they were biting off more than they could chew, maybe the Denver CPU just wasn't performing well enough, or maybe the opportunity with IBM came along because IBM realized they could benefit from NVIDIA as they didn't have anything to compete with Intel's Xeon Phi, and NVIDIA jumped at it.
In fact, Denver IS very special: it's NOT a custom ARM design but an emulator, a reincarnation of Transmeta's Crusoe/Efficeon. The sad thing, however, is that it has TONS of inherent issues, just like the Crusoe/Efficeon. This time, nVidia made a wise choice by ditching this very questionable design and turning to a traditional native design.
They haven't ditched it. Per at least one top NVIDIA executive, Denver is expected to appear again in future products. Supposedly the reason why Denver is not appearing in the X1 is because it is not ready for the 20nm process shrink, and they want to bring the X1 out faster than Denver would allow. He said Denver is expected to be in 16nm products.
Nvidia hired most of the Transmeta engineers and implemented at least one similar innovative feature from Transmeta in Denver, called Dynamic Code Optimization, which optimizes frequently used software routines.
Why are you saying "breaking into" an established market? Nvidia was in that market back with the Tegra 2 but their BS claims fell flat when put into real products and device makers abandoned them. They lost their market and now have to win it back again.
Really? What major design wins did the Tegra 2 have at the time? They have always been playing catch up with the likes of Qualcomm, Samsung, even TI back in that time period.
At no time has Tegra ever been the market leader in mobile devices, so yeah, so much for that incorrect assertion, clearly they are trying to break into this market and looking at different ways of doing it.
You must have a short memory. The Tegra 2 was used in a number of phones because it was the first commercial quad-core SoC and companies bought into Nvidia's claims. Then reality came, OEMs abandoned them, and they have been trying to turn it around for years now.
Which phones? And still nothing even remotely close to the market share captured and retained by the likes of Qualcomm, even TI in that era.
As for short memory, again, I believe you are mistaken: the Tegra 2 was the first mobile "dual core"; perhaps you were thinking of the Tegra 3, which is probably still Nvidia's biggest commercial Tegra success, but still nothing even remotely close to capturing the market lead, as it was going up against the likes of Qualcomm's Snapdragon 400 series.
Also, perhaps the biggest boon of Nvidia's investment in mobile has been their amazing turnaround in terms of power efficiency, which is undoubtedly a result of their investment in mobile GPU designs and the emphasis on lowering TDP.
I would suggest that something like Pixelmator would be a good example of an app that leverages the power of the A8X. Though, I would agree that the A8X is overkill for most apps.
Seems the Denver core will take a back seat this year. Judging from the performance of the Nexus 9, Denver didn't really set the world on fire the way Nvidia previously made it out to. I think the K1 was relatively a letdown last year, with limited design wins and the spotty performance of the Denver architecture. I wonder when Denver will make a comeback? 2016?
I can imagine that NVIDIA might release a Denver-and-updated-Maxwell-powered SOC in 2016 and if Denver is successful then a Pascal-and-Denver-powered SOC in 2017. ??? Unless NVIDIA is able to improve their execution well enough to release a Pascal-powered SOC in time for next year. That last possibility seems a bit far-fetched considering their history in the segment, though.
Actually, the high-end SoC market won't be competitive since only Qualcomm has an integrated modem. Guess 4 Denver cores weren't doable on 20nm (die size or clocks), and that's disappointing; I was really looking forward to more big cores. If they can get the CPU perf they claim, it's not bad, but they might have a small window before 16nm shows up. Seems like another lost year in mobile for Nvidia, if they even care about it anymore; I'm not so sure they do. A quad Denver for the high end, a dual for midrange and glasses, of course both with integrated modems, and maybe they would have been relevant again.
The soft-modem thing didn't seem to work out the way they had hoped. They seem to have given up trying to compete with Qualcomm in the smartphone market. The OEMs don't like the soft-modem and don't like a separate modem chip. NVIDIA's SOCs just don't differentiate themselves significantly enough from Qualcomm's that the OEMs are willing to accept one of those two things. Plus Samsung controls most of the Android smartphone market and seems to be very comfortable with their supplier system. I bet frustration about that on the part of NVIDIA is probably partially what led to the patent lawsuit. In any case, I wonder what NVIDIA is doing with Icera currently... if they are trying to sell it, or what.
Not that I think Denver is great or terrible or anything, but modems are not very important on tablets, because the number of 4G tablets is a fraction of the number of WiFi ones.
Do you people finally see now just how PATHETIC Intel Core M is??
Its top-of-the-line chip, made on a far superior process, costs $270 and has a GPU that manages around 300 GFLOPS, while this 20nm chip that will sell for well under $100 reaches over 1 TERAFLOP!! And the yearly doubling of mobile GPU power continues.
Seems like in 2016 we could see small tablets that will be graphically more capable than the Xbox One.
No disagreement there. Broadwell is a dud (a weak update to Haswell) and Broadwell-Y/Core M is a scam that will trick users into buying expensive, low-performance chips.
"Seems like in 2016 we could see small tablets that will be graphically more capable than Xbox one" - I don't think that even Nvidia can make the SoC with roughly 3x more performance than Tegra X1 within one year. Maybe in 2017-2018?
3D stacked memory and technologies like NVLink, which are expected to arrive in 2016, will solve the memory bandwidth limitations. We might very well soon see a massive 1 TB/s of bandwidth on mobile SoCs. I don't think bandwidth is the hurdle, though, but rather the power wall, which we can only overcome by scaling the manufacturing process.
If you actually read the chart, the 1 TFLOPS number was reached with FP16 operations, not the FP32 operations that literally EVERYONE ELSE quotes. The quoted FP32 number is 0.5 TFLOPS, so it wouldn't be until 2017-18 that Tegra could actually reach Xbox One performance without cheating the numbers.
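For reference, the arithmetic behind those two figures (a sketch; the 256-core count and the ~1GHz clock are the numbers Nvidia quoted, and an FMA counts as 2 FLOPS):

\[
256\ \text{cores} \times 2\ \tfrac{\text{FLOPS}}{\text{core} \cdot \text{cycle}} \times 1\ \text{GHz} \approx 512\ \text{GFLOPS (FP32)}
\]
\[
512\ \text{GFLOPS} \times 2\ \text{(two packed FP16 ops per FP32 lane)} \approx 1\ \text{TFLOPS (FP16)}
\]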
It doesn't need more than that for the GPU. More GPU power in a Core M is wasted on the type of products it's used in. You build the chip that is balanced for the market you're selling to. Why is this so beyond people who always look at every chip on the same level?
As long as it's only on the unprofitable inconsistent disaster that is Android, it's completely useless to the end user. Not a single game will be optimised for it and every game on the Play Store will continue to run like crap and crash on half the devices.
They need to adopt a well managed OS like Windows Phone with proper drivers and release optimised apps on the Windows Store.
Is this impression first-hand? What device? Because my low-end Moto G never crashes, and the Play Store is completely smooth, more so than my iPad Mini in fact. This is a low-end Android device with only Cortex A7 cores and 1GB of memory backing them up.
Why do you guys write what are essentially PR statements by NV as if they were facts you had independently validated? I suppose you guys did not have time to test any of these claims.
So you end up writing contradictory paragraphs one after another. In the first, you say NVIDIA "embarked on a mobile first design for the first time." That statement in and of itself is not something one can prove or disprove, but in the very next paragraph you write,
"By going mobile-first NVIDIA has been able to reap a few benefits.. their desktop GPUs has resulted chart-topping efficiency, and these benefits are meant to cascade down to Tegra as well." (??)
I suggest you read that paragraph again. Maybe you missed something, or worse, the whole paragraph comes off as unintelligible.
Well the situation itself is confusing since NVIDIA might have designed Maxwell "mobile-first" but actually released it "desktop-first". Then came notebook chips and now we are finally seeing Tegra. So release-wise the power efficiency "cascades down", even though they presumably designed starting from the standpoint of doing well under smaller power envelopes.
But that is a tautology that is totally vacuous of meaning. One can say the opposite thing in exactly the same way: "We went desktop first, but released to mobile first, so the power efficiency we've learned 'cascaded up' to the desktops."
So the impression one gets from reading that explanation is that it does not matter whether it was mobile first or desktop first. It is wordplay that is devoid of meaningful information (but designed to sound like something, I guess).
Isn't that standard reviewing practice? "Company X says they did Y in their design, and it shows in Z." The reviewer doesn't have to plant a mole in the organization and verify if NVIDIA really did Y like they said. This is a review, not an interrogation. If the results don't show in Z, then the reviewer will question the effectiveness of Y or maybe whether Y was really done as claimed. Yes, the logical flow of the statement you quoted is a bit weak, but I think it just has to do with perhaps poor writing and not from being some sort of shill, like you imply. The fact is that result Z, power-efficiency, is there in this case and it has been demonstrated on previously-released desktop products.
As far as your statement that one could say the opposite thing and have the same meaning, I don't see it. Because going "mobile-first" means to focus on power-efficiency in the design of the architecture. It has nothing to do with the order of release of products. That is what the author means by "mobile-first," in any case. To say that NVIDIA was going "desktop-first" would presumably mean that raw performance, and not power-efficiency, was the primary design focus, and so the proper corresponding statement would be: "We went desktop-first, but released to mobile first, and the performance is meant to "cascade up" (is that a phrase? probably should be scale up, unless you live on a planet where the waterfalls fall upwards) to the desktops."
There are two important notes here. Firstly, one could not assume that desktop-first design should result in increased mobile performance just because mobile-first design results in increased desktop efficiency. Secondly and more importantly, you replaced "is meant to" with "so". "So" implies a causation, which directly introduces the logical problem you are complaining about. The article says "is meant to," which implies that NVIDIA had aforethought in the design of the chip, with this release in mind, even though the desktop parts launched first.
That pretty much describes the situation as NVIDIA tells it (And I don't see why you are so seemingly eager to disbelieve it. The claimed result, power-efficiency, is there, as I previously said.), and though maybe written confusingly, doesn't seem to have major logical flaws: "1. NVIDIA designed mobile-first, i.e., for power-efficiency. 2. We've seen evidence of this power-efficiency on previously-released desktop products. 3. NVIDIA always meant for this power-efficiency to similarly manifest itself in mobile products." The "cascade down" bit is just a color term.
I just want to note that I don't think the logical flow of the originally-written statement is as weak as I conceded to in my first paragraph. In your paraphrase-quote you left out the main clause and instead included a subordinate clause and treated it as the main clause. The author is drawing a parallel and citing evidence at the same time as making a logical statement and does so in a way that is a little confusing, but I don't think it really has weak logical flow.
Anyone who is familiar with the convergence of Tegra and GeForce/Tesla roadmaps and design strategy understands what the author(s) meant to convey there.
Originally, Nvidia's design was to build the biggest, fastest GPU they could with massive monolithic GPGPUs built primarily for intensive graphics and compute applications. This resulted in an untenable trend with increasingly bigger and hotter GPUs.
After the undeniably big, hot Fermi arch, Nvidia placed an emphasis on efficiency with Kepler, but on the mobile side of things, they were still focusing on merging and implementing their desktop GPU arch with their mobile, which they did beginning with Tegra K1. The major breakthrough for Nvidia here was bringing mobile GPU arch in-line with their established desktop line.
That has changed with Maxwell, where Nvidia has stated they took a mobile-first design strategy for all of their GPU designs and modularized it to scale to higher performance levels, rather than vice versa, and the results have been obvious in the desktop space. Since Maxwell is launching later in the mobile space, the authors are saying everyone expects the same benefits in terms of power saving from mobile Maxwell over mobile Kepler that we saw with desktop Maxwell parts over desktop Kepler parts (roughly 2x perf/W).
There's really no tautology if you took the time to understand the development and philosophy behind the convergence of the two roadmaps.
No, it's not unintelligible, for reasons that other people have already explained. If you understand the difference between what it was developed for and what was released first, you understand the distinction. And apparently you don't.
Perhaps you guys can carry a power bank of known quality to this type of demo and use it instead of whatever the demo unit is hooked up to? I was appalled to see a Nexus 9's battery percentage dropping while it was being charged at a local Microcenter. Granted, I do not know what kind of power supply it was hooked up to, but all it was running was a couple of Chrome tabs.
@chizow: Another fun fact: The article you reference was specifically addressing the state of cheating among Android OEMs. In fact, the article specifically states "With the exception of Apple and Motorola, literally every single OEM we’ve worked with ships (or has shipped) at least one device that runs this silly CPU optimization." Perhaps you're going to fall back on weasel words and claim that neither Motorola nor Apple are GPU/SoC vendors. If that's the case, then you should also note that this kind of cheating is done at the OEM level, not the SoC vendor level.
The link you gave doesn't contain anything related to the Denver core that cheats at the firmware level. Of course, it's called "optimization" by nVidia.
NVIDIA are claiming power savings compared to the A8X, at the same performance level.
And additionally, they can run the X1 GPU at ~1GHz to achieve greater performance than the A8X. However the A8X's lower GPU clock is just a design decision by Apple so they can guarantee battery life isn't sucky when playing games.
And yet, hardware-wise, the X1's GPU specification isn't that amazing when compared to the A8X's GPU.
Last up, how does a quad-A57 at 2+ GHz compare to a dual 1.5GHz Cyclone...
Isn't it always amazing how company A's future products compete so well against company B's current products? The X1 won't be competing with the A8X; it will be competing against the A9X. If you're familiar with the PowerVR Rogue Series7 GPUs, you wouldn't be terribly impressed with this recent nVidia announcement. It keeps them in the game as a competitor, but they will not be on top. Further, I'm quite certain that Apple's custom A9 chip will compare well to off-the-shelf reference designs such as the A57 in terms of performance, efficiency, or both. If there were no benefits to Apple's custom design, they would simply use the reference designs as nVidia has chosen to do.
Yes but how do you compare your product to something that isn't out yet? You can't test it against rumors. It must be compared with the best of what is out there and then one must judge if the margin of improvement over the existing product is impressive or not. The PowerVR Rogue 7 series is due to be in products when? I doubt it will be any time in 2015 (maybe I'm wrong). When I read the Anandtech article on the details of IMG's upcoming architecture a few months back I had a feeling they were trying to set themselves up as a takeover target. I don't remember exactly why but it just struck me that way. I wonder if anyone would want to risk taking them over while this NVIDIA patent suit is going on, however.
The Tegra X1 isn't out yet either! If you look at Apple's product cycle it's clear that in the summer Apple will release an A9 when they launch the new iPhone. And you can look at Apple's history to estimate the increase in CPU and GPU horsepower.
But NVIDIA HAS the Tegra X1. They are the ones making the comparisons, and the Tegra X1 is the product they are comparing! Apple seems to have been releasing their phones in the fall recently, but neither NVIDIA nor the rest of the world outside Apple and their partners has any idea what the A9 is like, so it can't be used for a comparison! It's the same for everyone. When Qualcomm announced the Snapdragon 810 in April of 2014 they couldn't have compared it to the Tegra X1, even though that's what it will end up competing with for much of its life cycle.
Perhaps those are the raw max-throughput numbers, but if it were that simple there would be no reason for benchmarks. Now let's see how they actually perform.
About the Intel chip, I have to say that it is a very good CPU (think about SSE and AVX) plus a small GPU, while the nvidia chip is a good GPU plus a reasonable CPU.
You can have Windows x86 on the Intel chip and run something like MATLAB (and Android as well), and you can have a good gaming experience with nvidia's.
Each of them has its use for certain users. It's not as though every program can use 1 TFLOPS of Tegra GPU, and it's not as though every user is "game crazy"; the Intel Core M has its own users.
And of course the Tegra chip runs very hot for mobiles, and it is a hard decision for engineers who design phones and tablets to migrate from a known chip like Snapdragon to an unknown, new chip like Tegra.
I think both nvidia and Intel are doing well and neither deserves blame, but it would be a good idea for nvidia to make a cooler chip for mobiles.
That is some impressive GPU performance per watt. However, I think LPDDR4 with double the bandwidth does help the X1's performance. But even with that difference accounted for, the A8X GPU still does not hold up against Maxwell, assuming Nvidia's benchmarks can be trusted. It should be noted that the A8X is partly a custom GPU from Apple. Since it doesn't come directly from IMG, it is likely not as power efficient as it could be.
This chip looks awesome, but so did all the Tegras before it.
Like the Tegra K1: a huge announcement supposed to bring a "revolution" to mobile graphics computing. That turned out to be a power hog, pulling so much power that it was absolutely unsuitable for any phone, and it's also throttling significantly.
This looks like the same story yet again: lots of marketing talk, lots of hype, no promise delivered.
Apple is expected to move to 14nm for the A9. That's just speculation, but given Apple's position in the supply chain as opposed to nVidia's, I would be surprised if nVidia were able to be on the same process. With regards to CPUs, since nVidia has regressed from the Denver core to the standard reference designs, I wouldn't expect nVidia to have any CPU advantage. Certainly not with single-threaded apps, anyway. As for the GPU, the Rogue Series7 appears to be more scalable, with up to 512 "cores". If the X1 chip ends up with any GPU advantage, it would not be for technical reasons; rather, it would be because Apple chose not to scale up to that level. Given that Apple has historically chosen rather beefy GPUs, I would again be surprised if they allowed the X1 to have a more powerful GPU. We'll see.
"it's also throotling significally." — Um, no. It has throttling under heavy load but it's about 20% in worst case. It was Snapdragon 800/801 and Exynos 5430 that "throotling significally".
The fact that the announcement for this chip was coordinated with an almost exclusive discussion of automotive applications -- and correct me if I'm wrong, but it does not appear they even discussed gaming or mobile applications, except for the demo -- could be a signal of which markets NVIDIA wants to focus Tegra on and which markets they're abandoning.
A couple years back Jen-Hsun said that Android was the future of gaming, but I wonder if he still believes that today?
I do think there is some truth to the idea that there is not much of a consumer market for high-end mobile graphics. Other than making for a great slide at a press event (Apple), there doesn't seem to be much of a use case for big graphics in a tablet. The kind of casual games people play there don't seem to align with nvidia's strengths.
Casual games will always be more popular than the more hardcore type of games. That said, there are plenty of mobile games that push the system hard. Try something like World of Tanks Blitz on your device. On an iPad Air, it's a smooth 60 fps. On a lark, I tried it on a Nexus 7 once it finally came out for Android. It was an unplayable 15 fps (max). The graphics aren't up to the PC level either. The point being, there is plenty of need for more powerful mobile gaming systems, and the average "budget" device just isn't up to par for such needs.
I don't think they are necessarily abandoning the gaming market. They could be giving a presentation for their investors to be excited about. Mobile gaming could still be a long-term plan, but they don't see significant growth there this next cycle such that it will give them significant profits. But these automotive initiatives are something new they can try to get people excited about.
How fair is the power comparison? The iPad Air 2 has a complex, intrusive rework (judging from the picture), while on the X1 platform the power numbers are dumped from the platform profiler. The rework itself could contribute to power overhead on the iPad Air 2. Also, the A8X uses tile-based rendering (heavy on on-chip memory access) while the X1 uses direct rendering and requires heavy DDR access. So a GPU-core power comparison that doesn't include DDR can be very misleading.
Just to confirm - they're actually running system memory at 3200 MHz, correct? The quoted 25.6 GB/s memory bandwidth does not factor in color compression?
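For what it's worth, 25.6 GB/s is consistent with the raw LPDDR4 number on an assumed 64-bit interface at 3200 MT/s; color compression wouldn't show up in that figure, since it reduces the data that has to be moved rather than raising the physical rate. A rough sanity check of that arithmetic (the 64-bit bus width is an assumption, not something stated above):

```python
# Raw DRAM bandwidth = transfer rate x bus width (bytes moved per transfer)
transfers_per_sec = 3200e6      # LPDDR4-3200: 3200 mega-transfers per second
bytes_per_transfer = 64 / 8     # assumed 64-bit memory interface
bandwidth_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(bandwidth_gb_s)           # 25.6 -> matches the quoted 25.6 GB/s
```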
To be honest, the K1 was pretty impressive and so is the X1. It's good to see Nvidia pushing the graphics limits in the Android camp. However, they are usually let down by higher power consumption, which makes them less suited for mobile phone usage. Looking forward to seeing one at least in a Shield tablet, or possibly a good Android box.
Does anyone know if this will support 64-bit operating systems? I know for sure that the K1 only had up to 32-bit. I'm thinking of buying a chromebook but am torn between buying one with a low-end intel processor for more productivity or NVIDIA processor with at least some graphics capability.
Rereading this article after the report that Nintendo's NX - their new flagship console - would be powered by NVIDIA's Tegra is so enlightening. It's like reading a whole new preview. Many things start making sense in this new context:
HDMI 2.0 and 4K60 support; 16 ROPs; Aggressive clockspeed; Conservative rasterization and MFAA.
To quote the article: "It seems obvious that this would be a great SoC to put in a gaming tablet and a variety of other mobile devices, but it remains to be seen whether NVIDIA can get the design wins necessary to make this happen."
What a conclusion! And what a gaming tablet it would be. You couldn't have known how those words would ring today - over a year later. Talk about a design win. Awesome.
P.S. Please, do an article on the Nintendo NX reports.
Mayuyu - Monday, January 5, 2015 - link
Apple should start licensing Nvidia GPUs instead of Imagination GPUs for next generation iDevices.
twotwotwo - Monday, January 5, 2015 - link
It might be hard (or impossible) for them to do that without breaking compatibility with existing iOS games written around the PowerVR's quirks.
Maxjonny55 - Saturday, June 20, 2015 - link
Metal has made it easier to access the GPU, and the reason Apple did this is the lack of power of their CPUs compared to Android devices. Yes, sure, GPUs can run apps with the extra power, but so what? OpenGL has always been doing that! More people know Java and OpenGL, which makes development easier, and every hardware vendor apart from Apple will optimise hardware for it. I would not want to compare Asphalt 8 between devices, as horsepower and muscle have nothing to do with it, but rather lazy work on the part of the game's creators.
Providing access to Metal will make a difference to some apps, no doubt, but not all. OpenGL provides access to the GPU too; I'm not sure why it took Apple so long. I have a Nexus 9 and an iPad Air 2 and, Apple hype aside, I can't see what the Air 2 has to offer in performance! The Nexus 9 outperforms the Air 2 in single core, and so does its one-year-older GPU.
Wolfpup - Wednesday, September 30, 2015 - link
"lack of power"? Apple's CPUs blow away any other ARM CPUs.Maleficum - Wednesday, January 21, 2015 - link
OpenGL is FAR outdated. It has way too many performance bottlenecks due to its aged design, and it doesn't scale very well with modern GPU/CPU architectures.
Both MS and Apple recognized this, et voila, Metal and the upcoming DX12 are their answers.
Pity that Android can't keep up with this, stuck with the opensource mess.
MrPoletski - Monday, January 26, 2015 - link
I can't help but think that the reason this all happened at all is AMD and Mantle.
maskofwraith - Monday, March 2, 2015 - link
If OpenGL is FAR outdated and a mess, then why are they still supporting it? Why don't they try to contribute to it, to make it better? Wait, they can't do that, all they need is money. Half of the internet runs on open source.
GC2:CS - Monday, January 5, 2015 - link
Why? They do incredibly well on the graphics front.
Mondozai - Monday, January 5, 2015 - link
Everyone who thinks that the X1 is going to crush all the competition is going to get burned once more, just like with the K1. OEMs want an integrated solution. Yes, Nvidia has awesome GPU capabilities (duh), but that in and of itself isn't enough. It still doesn't have an integrated modem, so the smartphone market is dead on arrival.
Tablets have a better shot, but Nvidia is aiming for the car market for a reason. There may be a few one-off devices that they will aim for, but overall this isn't going to be a significant force in the high-end tablet space. The S810 has the GPU capability to run 4K smoothly.
Apple's next GPU is going to be a lot better, just like every year. Can Nvidia compete on a 'total system basis'? The answer remains as it has always been: no.
speculatrix - Monday, January 5, 2015 - link
Nvidia have Icera, their own 4G/LTE modem technology, so hopefully NVidia will have a version of the X1 with that integrated, or at least a version with a "glue-less" interconnect to their Icera module.
Yojimbo - Tuesday, January 6, 2015 - link
The soft-modem is MIA. NVIDIA has not been going after the smartphone market.
Yojimbo - Tuesday, January 6, 2015 - link
NVIDIA has not been pursuing smartphones for a couple of years now. What advantage does the Snapdragon 810 have over the Tegra X1 in the high-end tablet space? Qualcomm will have to compete based mostly on the OEMs' familiarity with their SoCs, price, and the fact that Samsung is unlikely to choose NVIDIA. If the 810 is available at an earlier date, that also might help them out during that time period.
aenews - Saturday, January 24, 2015 - link
The Snapdragon 810 doesn't even come close to matching the graphical prowess of the K1, even though it's just about a whole year newer.
The Snapdragon 805 saw little adoption. In tablets, there was the Kindle Fire HDX. That's... one whole design win for tablets. The K1 was in the Shield Tablet and the Nexus 9 (and the Xiaomi MiPad).
harrybadass - Monday, January 5, 2015 - link
By the time the new Imagination Series7 GPUs and the A9X are released, this will already be obsolete. Nvidia is playing catch-up.
kron123456789 - Monday, January 5, 2015 - link
Yeah. Exactly the same was said about the Tegra K1 and GX6650.
GC2:CS - Monday, January 5, 2015 - link
And what happened?
kron123456789 - Monday, January 5, 2015 - link
What happened? Only the GXA6850, custom-made by Apple, can compete with the K1, and there are no devices with the GX6650. That's what happened.
GC2:CS - Monday, January 5, 2015 - link
The Tegra K1 is also custom-made. And the A8X can not only compete but, more importantly, compete at much lower power, which matters more than pure performance alone.
kron123456789 - Monday, January 5, 2015 - link
I mean that the GXA6850 was made by Apple; it's not an ImgTec design (that was the GX6650). And the A8X is a 20nm SoC, which gives it an efficiency advantage.
GC2:CS - Monday, January 5, 2015 - link
Yeah, the GXA6850 is a semi-custom design. But the efficiency advantage offered by the 28 to 20 nm transition is only about 25%; the actual difference between the K1 and the A8X is much bigger.
techconc - Monday, January 5, 2015 - link
@kron: I fail to see your point. Apple was able to put out a CPU/GPU that is both more powerful and more energy efficient than the nVidia-based part. The original argument suggested that Apple should switch to nVidia. It's obvious from existing parts that there is no reason to do this. Your argument attempts to suggest otherwise, but fails to make the case. If we're basing the case on future technology (like the X1), then you also need to discuss the Rogue Series7 chips, which quite frankly look more impressive.
aenews - Saturday, January 24, 2015 - link
It is definitely more energy efficient and faster, but not significantly faster. They are on the same level, and there aren't games on iOS that even use that level of power right now.
lucam - Tuesday, January 6, 2015 - link
Imgtec licenses designs, and then anybody can do what they want with them. It's the same as ARM with reference designs and in-house designs (Cyclone, Denver, etc.). We are always talking about ARM. What the hell are you talking about?
lucam - Tuesday, January 6, 2015 - link
Just because there is no GX6650 around doesn't mean it cannot compete. Talking about speculation, maybe a higher-clocked GX6650 could compete, who knows. Apple did its custom version: lower clock and more clusters. Either way, it's always PowerVR inside.
kron123456789 - Monday, January 5, 2015 - link
And only compete. Not "blow away," like somebody was saying.
chizow - Monday, January 5, 2015 - link
Nvidia is only catching up on process node, because what they've shown, when comparing apples to apples, is:
1) They have a much faster custom 64-bit CPU (the A8X needed 50% more CPU to edge the Denver K1)
2) They have a much faster GPU architecture (the A8X also needed 50% more GPU cores to edge the Denver K1, but gets destroyed by the Tegra X1 on the same 20nm node).
As we can see, once it is an even playing field at 20nm, A8X isn't going to be competitive.
GC2:CS - Monday, January 5, 2015 - link
They just postponed their "much faster custom 64-bit CPU" in favor of an off-the-shelf design that, compared to the A8X, is much higher clocked.
The A8X has just 33% more "cores" than the K1, and again, the GXA6850 GPU is probably miles under the ~1GHz clockspeed that nvidia targets.
And what's wrong with using a wider CPU/GPU?
And yeah, the Tegra X1 is up to 2x faster than the A8X, but considering it also runs at the same power as the K1, it is not a lot more efficient.
chizow - Monday, January 5, 2015 - link
How do you get only 33% for the A8X? A8 = 2 cores, Denver K1 = 2 cores, A8X = 3 cores. 1/2 = 50% increase.
Same for the A8X over the A8. GPU cores went from 4 to 6; again, 2/4 = 50% increase. Total transistors went from 2Bn to 3Bn, again a 50% increase.
In summary, Apple fully leveraged the 20nm advantage to match the Denver K1's GPU and edge it in CPU (still losing in single-core) using a brute-force 50% increase in transistors and functional units.
Obviously they won't be able to pull the same rabbit out of the hat unless they go to FinFET early, which is certainly possible, but then again, it's not really a magic trick when you pay a hefty premium for early access to the best node, is it?
The bottom line is that Nvidia is doing more on the same process node as Apple, simple as that, and that's nothing to be ashamed of from an engineering standpoint.
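For anyone skimming the back-and-forth over percentages, the arithmetic both sides are using is just the relative increase, (new - old) / old; the counts below are the ones quoted in this thread, not official figures.

```python
def relative_increase(old, new):
    """Percent increase going from old to new."""
    return (new - old) / old * 100

# Figures as quoted by the commenters above
print(relative_increase(2, 3))      # CPU cores, 2 -> 3: 50.0%
print(relative_increase(4, 6))      # GPU cores, 4 -> 6: 50.0%
print(relative_increase(2e9, 3e9))  # transistors, 2Bn -> 3Bn: 50.0%
```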
GC2:CS - Monday, January 5, 2015 - link
The A8X got 8 GPU clusters. And I still can't get your idea: you think that the A8X is worse because it's brute force, ~50% faster? Yeah, it is brute force, but I don't know how you can perceive that as a bad thing.
They will certainly try to push FinFET, and rather hard, I think.
And how can you say that nvidia is doing more on the same node while boasting just above about how Apple is the one who is doing more, and how that's a bad thing?
chizow - Monday, January 5, 2015 - link
Wow, the A8X is 8 clusters and doesn't even offer a 100% increase over the A8? Even worse than I thought; I guess I missed that update at some point over the holiday season.
The point is that in order to match the "disappointing" Denver K1, Apple had to basically redouble their efforts to produce a massive 3Bn transistor SoC while fully leveraging 20nm. You do understand that's really not much of an accomplishment when you are on a more advanced process node, right?
Sure Apple may push FinFET hard, but from everything I've read, FinFET will be more widely available for ramp compared to the problematic 20nm, which was always limited capacity outside of the premium allocation Apple pushed for (since they obviously needed it to distinguish their otherwise unremarkable SoCs).
It should be obvious why I am saying Nvidia is doing more on the same process node, because when you compare apple to Apples, Nvidia's chip on the 28nm node is more than competitive with the 20nm Apple chips, and when both are on 20nm, its going to be no contest in Nvidia's favor.
Logical conclusion = Nvidia is doing more on the same process node, ie. outperforming their competition when the playing field is leveled.
lucam - Tuesday, January 6, 2015 - link
Chizow, the more I read, the more I laugh. You compare clusters with cores; they are different technologies, and you still state this crap. Maybe it would be better to compare how much both of them are capable of in terms of GFLOPS at the same frequency? That is what counts. Regarding your absurd discussion of process node: since the Nvidia chip is so efficient, I look forward to seeing it in smartphones.
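A rough way to do the normalization lucam is asking for is peak FP32 throughput: ALU count x 2 FLOPs per clock (for a fused multiply-add) x clock. A sketch of that arithmetic; the per-cluster ALU counts and the clocks are assumptions based on commonly cited figures, not numbers confirmed anywhere in this thread:

```python
def peak_gflops(alus, clock_ghz, flops_per_alu_per_clock=2):
    """Theoretical peak FP32 throughput; ignores real-world bottlenecks."""
    return alus * flops_per_alu_per_clock * clock_ghz

# Tegra K1: 192 CUDA cores at an assumed ~950 MHz
print(peak_gflops(192, 0.95))       # ~365 GFLOPS

# 8-cluster Rogue (A8X-style): assuming 32 FP32 ALUs per cluster at ~450 MHz
print(peak_gflops(8 * 32, 0.45))    # ~230 GFLOPS
```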
aenews - Saturday, January 24, 2015 - link
The A8X isn't in any phones either. In fact, they left it out of both iPhones AND the iPad Mini.
And keep in mind, even the Qualcomm Snapdragon 805 had few design wins... only the Kindle Fire HDX for tablets. They scored two major phones (Nexus 6 and Note 4), but the other manufacturers haven't used it.
squngy - Monday, January 5, 2015 - link
He did not say it is worse; his whole point is that Apple most likely will not be able to do the same thing again.
tipoo - Tuesday, May 17, 2016 - link
Core counts are irrelevant across GPU architectures; they're just different ways of doing something.
If someone gets to the same power draw, performance, and die size with 100 cores as someone else does with 10, what does it matter?
Jumangi - Monday, January 5, 2015 - link
Uh, the A8 is an actual product that exists and, wait for it, you can actually BUY a product with it in there. This is another mobile paper launch by Nvidia, with consumers having no idea when or where it will actually show up. The only thing real enthusiasts should care about is the companies that can actually deliver parts people can actually use. Nvidia still has a loooong way to go in that department. Paper specs mean shit.
chizow - Monday, January 5, 2015 - link
Careful, you do mean A8X right? Because Denver K1 is an actual product that absolutely stomps A8, only after Apple somewhat unexpectedly "EnBiggened" their A8 by increasing transistors and functional units 50%, did they manage to match K1's GPU and edge the CPU in multi-core (by adding a 3rd core).To say Denver K1 didn't deliver is a bit of a joke, since it is miles ahead of anything on the Android SoC front, and only marginally bested in CPU due to Apple's brute-force approach with A8X while leveraging 20nm early. We see that once the playing field has been leveled 20nm, its no contest in favor of Tegra X1.
Jumangi - Monday, January 5, 2015 - link
I mean a product that is widely available to CONSUMERS, dude. And please stop with the "stomping" stuff. It means nothing about its performance given its also vastly higher power consumption. The A8 can exist in a smartphone. What smartphones have the K1? Oh, that's right, none, because you would get an hour of use before your battery was dead. Mobile is about performance and speed. You can diss Apple all you want, but from an SoC perspective they do it better than anyone else right now.
pSupaNova - Tuesday, January 6, 2015 - link
@Jumangi, try to comprehend what he is saying.
Apple used a superior process and more transistors on its A8X to just edge the K1 in some CPU benchmarks, while core for core, Nvidia's is actually more powerful.
The GPU in the K1 also has near desktop parity, e.g. OpenGL 4.4. Features like hardware tessellation are absent from the A8X.
Alexey291 - Tuesday, January 13, 2015 - link
That's great. It really is. But let's be honest: the A8X is faster than the K1.
And at the end of the day, that is sadly all that matters to the vaaaaaast majority of consumers.
Frankly even that barely matters. What does though is that games run better on my tablet than they do on yours so to speak. (Actually likely they run better on yours since I'm still using a nexus 10 xD)
But sure, the new paper launch from nv late this year or early next year will be great, and the 2.5 devices that the X1 will appear in will be amazing. Making sales in the hundreds of thousands.
SM123456 - Sunday, February 1, 2015 - link
The point is that the Tegra K1 Denver on 28nm beats the Apple A8 fairly comprehensively on 20nm with the same number of cores. Apple stuck on 50% more cores and 50% more transistors to allow the A8X on 20nm to have a slight edge over the Tegra K1 Denver. This means if the Tegra K1 is put on 20nm, it will beat the 3-core Apple A8X with two cores, and the same thing will happen when both move to 16nm.
utferris - Monday, April 13, 2015 - link
Oh. Really? The Denver K1 is not even as fast as the A8X. Not to mention that it uses more than 2 times the energy. I really do not understand people like you going around saying how good nvidia's shit is.
eanazag - Wednesday, January 7, 2015 - link
It'll likely be in the next Shield.
name99 - Monday, January 5, 2015 - link
(1) I wouldn't rave too enthusiastically about Denver. You'll notice nV didn't...
Regardless of WHY Denver isn't on this core, the fact that it isn't is not a good sign. Spin it however you like, but it shows SOMETHING problematic. Maybe Denver is too complicated to shift processes easily? Maybe it burns too much power? Maybe it just doesn't perform as well as ARM in the real world (as opposed to carefully chosen benchmarks)?
(2) No-one gives a damn about "how many GPU cores" a SoC contains, given that "GPU core" is a basically meaningless concept that every vendor defines differently. The numbers that actually matter are things like performance and performance/watt.
(3) You do realize you're comparing a core that isn't yet shipping with one that's been shipping for three months? By the time X1 actually does ship, that gap will be anything from six to nine months. Hell, Apple probably have the A9/A9X in production TODAY at the same level of qualification as X1 --- they need a LONG manufacturing lead time to build up the volumes for those massive iPhone launches. You could argue that this doesn't matter since the chip won't be released until September except that it is quite likely that the iPad Pro will be launched towards the end of Q1, and quite likely that it will be launched with an A9X, even before any Tegra X1 product ships.
chizow - Tuesday, January 6, 2015 - link
@Name99
1) Huh? Denver is still one of Nvidia's crowning achievements and the results speak for themselves: fastest single-core ARM performance on the planet, even faster than Apple's lauded Cyclone. Why it isn't in this chip has already been covered; it's a time to market issue. Same reason Nvidia released a 32-bit ARM version of the Tegra K1 early and the 64-bit Denver version late: time to market. Maybe, in the tight 6-month window they would have needed between bringing up Denver and working on Erista, they simply didn't have enough time for another custom SoC? I'm not even an Apple fan and I was impressed with Cyclone when it was first launched. But suddenly, the fastest single-core and a dual-core outperforming 4- and even 8-core SoC CPUs is no longer an impressive feat! That's interesting!
2) Actually, anyone who is truly interested does care, because on paper, a 6-core Rogue XT was supposed to match the Tegra K1 in theoretical FLOPs performance. And everyone just assumed that's what the A8X was when Apple released the updated SoC that matched TK1 GPU performance. The fact it took Apple a custom 8-core variant is actually interesting, because it shows Rogue is not as efficient as claimed, or conversely, Tegra K1 was more efficient (not as likely since real world synthetics match their claimed FLOPs counts). So if 6 core was supposed to match Tegra K1 but it took 8 cores, Rogue XT is 33% less efficient than claimed.
3) And you do realize, only a simpleton would expect Nvidia to release a processor at the same performance level while claiming a nearly 2x increase in perf/w right? There's live demos and benchmarks of their new X1 SoC for anyone at CES to test, but I am sure the same naysayers will claim the same as they did for the Tegra K1 a year ago, saying it would never fit into a tablet, it would never be as fast as claimed yada yada yada.
Again, the A9/A9X may be ready later this year, but the X1 is just leveling the playing field at 20nm, and against the 20nm A8/X we see it is no contest. What trick is Apple going to pull out of its hat for A9/A9X since they can't play the 20nm card again? 16nm FinFET? Possible, but that doesn't change the fact Apple has to stay a half step ahead just to remain even with Nvidia in terms of performance.
lucam - Wednesday, January 7, 2015 - link
1) He was saying: why didn't NV continue with the Denver design? If it's so efficient with only 2 cores, why not shift it to 20nm easily? Because they can't, and that's it. The other things are speculation.
2) You are still comparing apples (not Apple) with pears. Every vendor puts its own proprietary technology in with its own market strategy; what matters is to figure out how many GFLOPS and texels each is capable of at the same frequency and wattage. You don't even know how an Img cluster is built (nobody does), and you still compare it with NV CUDA cores. The Rogue XT frequency is set at 200MHz, the Tegra K1 at 950MHz. Again, what the heck are you talking about?
3) It is still a prototype with a fan, and nobody could check the real frequencies, even though 1GHz seems reasonable. How dare you compare a tablet with a reference board?
Again, the A9/A9X already exist now as prototypes. Apple doesn't sell chips and doesn't do that sort of marketing. They need to sell their product over a one-year life cycle. You'd have to live on another planet not to understand that.
SM123456 - Sunday, February 1, 2015 - link
>>He was saying: why didn't NV continue with the Denver design? If it's so efficient with only 2 cores, why not shift it to 20nm easily? Because they can't, and that's it. The other things are speculation.<<
There is a simple answer to that - Apple has booked all the production slots for 20nm (made by foundry TSMC) to meet demand for the A8. This has pushed back production of 16nm to late 2015. That is the reason for the delay in Denver, which nVidia originally intended for the Tegra K1 Denver successor, the Pascal chip. That is the reason for the delay in Pascal.
20nm is a risk reducing interim technology which almost everybody is skipping. Apple originally wanted 14/16nm for the A8, only used 20nm because they couldn't wait to release the current iPhone on 14/16nm. nVidia is only producing the Tegra X1 at 20nm because they are worried about the same problem happening at 16nm. With 20nm, they know that Apple will be moving off 20nm with the next iPhone, so there will definitely be spare production capacity.
utferris - Monday, April 13, 2015 - link
I could not agree with you more. These aliens cannot be reasonable.
GC2:CS - Wednesday, January 7, 2015 - link
2) I could make a 4-cluster A7 GPU faster than the Tegra K1, and I could make a 16-cluster Series7XT GPU that's slower than the Tegra K1.
So tell me, how the heck does the number of clusters or "cores" relate to efficiency???
lucam - Wednesday, January 7, 2015 - link
Since you're such an expert on Img clusters, you can explain why some models of the G6230 - 2 clusters (Allwinner SoCs) - are as fast as the A7's G6430 - 4 clusters. Maybe because the former has a higher clock frequency than the latter? But they are pretty much the same in terms of performance/watt... there we go..
SM123456 - Sunday, February 1, 2015 - link
Errr.. same performance and price per watt for Apple on 20nm as nVidia at 28nm? That is damning.
SM123456 - Sunday, February 1, 2015 - link
The cores aren't important, the performance is. That is the whole point. The responsiveness depends on single core performance and, to a lesser extent, two core performance, and on this point Denver beats the crap out of the Apple A8 and A8X. Therefore the fact that Apple added an extra core to the A8 to get the A8X is about benchmark bragging rights, and the A8X's real world performance (based on single and dual core performance) lags the Tegra K1 Denver even with the A8X on 20nm and the Denver K1 on 28nm - not good for Apple.
lucam - Tuesday, January 6, 2015 - link
The Tegra speculations of chizow are priceless!
aenews - Saturday, January 24, 2015 - link
The K1 is four months or so older than the A8X. It crushed every chip very badly for four whole months. If anything, everyone else was/is playing catchup. And not to mention the Snapdragon 810, yet to be released, does not even come close to the K1 despite being a year newer.
utferris - Monday, April 13, 2015 - link
> "The K1 is four months or so older than the A8X."How do you come up with that?
A8X was in production way before K1.
It is just that you see the A8X only in the iPad, while NVidia is showing off a test board all around that is not actually in production.
chizow - Monday, January 5, 2015 - link
@Mayuyu: I wouldn't be surprised if this is the final outcome of the Nvidia IP Patent lawsuits and why Apple was excluded from the original litigation. My bet is they (Apple) have already engaged in serious talks with Nvidia and they are both just awaiting a favorable outcome against Samsung/Qualcomm before moving forward.
Aenean144 - Monday, January 5, 2015 - link
Honestly, why? How does doing that really benefit Apple?
chizow - Monday, January 5, 2015 - link
They'll gain access to better GPU IP immediately. If they are going to pay a licensing fee, they might as well pay (presumably) more for better IP, I suppose.
From a software support standpoint, they'd gain 100% compatibility and portability with all the latest OpenGL specs, so you'd have a shared graphics platform there with their iOS platforms. Then they wouldn't have to muck around with Metal and the iOS API so much, which would make it easier for them to merge the two platforms as the rumors have suggested.
Aenean144 - Monday, January 5, 2015 - link
None of what you state is a material advantage to Apple.
The essence of your belief is that Nvidia will have a more powerful GPU solution than what Apple can come up with on their own or through ImgTec. A lot of people don't think that will be true.
On top of that, this Tegra X1 SoC does not appear suitable for phones, and moreover, Nvidia appears to be concentrating on embedded applications with higher TDP with the X1. This would pose a problem for Apple as the iPhone is currently their most important product.
chizow - Monday, January 5, 2015 - link
Not having to support two different graphics APIs, one of which is being developed from scratch, isn't a material advantage to Apple?
And a lot of people don't think what Nvidia has is already better than what Imgtech can produce? Who are you referring to exactly? Realists already know Nvidia's last-gen Kepler is already better than PowerVR Rogue. Remember when PowerVR released roadmaps and those same people expected the PowerVR Rogue XT 6-core to match the Tegra K1 based on paper-spec FLOP counts alone? What happened in reality? Apple needed to add ANOTHER 2 cores in a custom 8-core Rogue XT configuration to trade blows with the K1.
The whole point of licensing is that even if the Tegra SoC as a whole doesn't fit their needs, they can license the IP and integrate it however they like into their own SoC.....
Yojimbo - Monday, January 5, 2015 - link
He is talking about Apple licensing a GPU from NVIDIA instead of from IMG. The Tegra X1 is not a GPU, it is an SOC targeted at a specific market segment. The target of the Tegra X1 has been set by the market NVIDIA thinks it can penetrate more than by the abilities of the GPU it contains. NVIDIA was not able to penetrate the smartphone market because of a lack of a good modem option. They have since stopped trying, and this is well-known. Apple has access to a modem, so this is not a concern for them. All they need to consider is if licensing a GPU, or GPU IP, from NVIDIA helps them more than licensing from IMG. I think on the same process technology, NVIDIA's offering would be superior, no matter if used in a phone or tablet.lucam - Wednesday, January 7, 2015 - link
Really? Then why doesn't NV do any product with its GPU in a phone anymore?
name99 - Monday, January 5, 2015 - link
Who says nV is WILLING to license? Do they license to anyone else?
pSupaNova - Tuesday, January 6, 2015 - link
http://www.anandtech.com/show/7083/nvidia-to-licen...
chizow - Tuesday, January 6, 2015 - link
And they also generate roughly $266 million per year in revenue as a result of their cross-licensing agreement/settlement with Intel. That obviously makes Intel the most likely suitor as the 1st Nvidia SoC GPU IP licensee, since they already have Nvidia GPU IP in their desktop CPU IGPs but still license IMGtech IP for their mobile SoCs.
But my bet is on Apple being the 1st major licensee, largely dependent on the outcome of the Samsung/Qualcomm IP litigation.
chitownbarber - Tuesday, January 6, 2015 - link
Per Samsung's track record, the legal eagles will drag things on for years to come. Not sure what Qualcomm will do. If you think Apple is in negotiations with Nvidia, which I agree with, then they should be coming to an agreement sometime soon so the GPU engineers Apple has hired can do their mojo. I'm sure Apple would like nothing better than to leave Samsung in the dust by exploiting the advanced gaming market they are aspiring to, along with a rumored do-all living room console.
eanazag - Wednesday, January 7, 2015 - link
I'm not totally sure why all the NV and Apple back and forth. I see this as an Apple-competitive chip for Android tablets. Why would Apple jump the Imag. Tech ship? At release time they have had the best GPU in an SoC for their iPads. ImgTech has been good for Apple iteration over iteration.
Might Apple be interested in licensing some IP from NV? Maybe. Apple does a lot of custom work and has a desire to remain in the lead on the mobile SoC front at device release.
lucam - Wednesday, January 7, 2015 - link
Old news; after 2 years nobody knows what happened since then.
mpeniak - Thursday, January 8, 2015 - link
Totally!!!
jwcalla - Monday, January 5, 2015 - link
It seems like Denver was a huge investment that has produced virtually no fruit so far.
Time to market seemed to be NVIDIA's problem with Tegra in the past, so it does make sense to get Maxwell out the door ASAP.
syxbit - Monday, January 5, 2015 - link
Exactly. Denver has been a massive disappointment. It really needs to be on 20nm. At 28nm, performance is too inconsistent and it throttles too much.
I just find it funny how arrogant Nvidia is. They're always boasting and boasting with these announcements, and yet by the time they ship, they're rarely leading (or, in the case of the K1, they're leading, but in literally only 1 shipping device).
djboxbaba - Monday, January 5, 2015 - link
Yes! How can they keep going on like this on a yearly basis? Disappointment after disappointment from NVidia in the mobile sector.
Krysto - Monday, January 5, 2015 - link
Denver needs to be at 16nm. And we might still see it, at the end of the year/early next year, if Nvidia releases the X1 on 20nm and then an X2 (I really hope they don't release another "X1", like they did this year with the K1, making things very confusing) with Denver and on 16nm.
name99 - Monday, January 5, 2015 - link
Unlikely that nV will release 16FF this year or early next year. Apple has likely booked all 16FF capacity for the next year or so, just like they did with 20nm. nV (and Qualcomm and everyone else) will get 16FF when Apple has satisfied the world's iPhone 6S and iPad 2015 demands...
GC2:CS - Monday, January 5, 2015 - link
Yep.
Like in the presentation, boasting about how the Tegra K1 is still the best mobile chip (despite the A8X matching it at significantly lower power, which they don't have a graph for) despite being released a "year before", while the A8X was released just "now". (By nVidia's logic, the A8X was "released" just a few hours after the K1, because Imagination had announced the Series 6XT GPUs at CES 2014.)
Yojimbo - Monday, January 5, 2015 - link
Tegra K1 is a 28nm part and the A8X is a 20nm device. The Shield tablet did launch 3 months before the iPad Air 2. Apple has a huge advantage in time to market. They can leverage the latest manufacturing technologies, and they don't have to demonstrate a product and secure design wins. Even though the shield tablet is an NVIDIA design, I doubt they have such tight control of their suppliers as Apple has, and they can't leverage the same high-volume orders. Apple designs a chip and designs a known product around that chip while the chip is being designed, and they can count on it selling in high volume. So you are making an unfair comparison of what NVIDIA is able to do and of the strength of the underlying architecture. If you want to compare the Series 6XT GPU architecture with K1's GPU architecture I think it should be done on the same manufacturing technology.juicytuna - Monday, January 5, 2015 - link
Well said. Apple's advantage is parallel development and time to market. Their GPU architecture is not that much *better* than their competitors'. In fact, I'd say that Nvidia has had a significant advantage when it comes to feature set and performance per watt on a given process node since the K1.
GC2:CS - Monday, January 5, 2015 - link
Maybe an advantage in feature set, but performance per watt?
So if you want to compare: the Xiaomi MiPad, for example, consumes around 7.9W when running the GFXBench battery life test, and that is with performance throttled down to around 30.4 fps on-screen. A very similar tablet, the iPad mini with Retina display and its A7 processor (actually a 28nm part!), consumes just 4.3W, and that is while running at 22.9 fps the whole time.
So I am asking where that "class leading" efficiency and "significant advantage when it comes to performance per watt" that nvidia is claiming to achieve actually is, because I don't see anything like that.
Yojimbo - Monday, January 5, 2015 - link
Looking at the gfxbench website, under "long-term performance" I see 21.4 fps listed for the iPad Mini Retina and 30.4 fps listed for the Mi Pad, maybe this is what you are talking about. That is a roughly 40% advantage in performance for the Mi Pad. I can't find anything that says about throttling or the number of Watts being drawn during this test. What I do see is another category listed immediately below that says "battery lifetime" where the iPad Mini Retina is listed at 303 minutes and the Mi Pad is listed at 193 minutes. The iPad Mini Retina has a 23.8 watt-hour battery and the Mi Pad has a 24.7 watt-hour battery. So this seems to imply that the iPad Mini Retina is drawing about 4.7 watts and the Mi Pad is drawing about 7.7 watts, and it comes out to the Mi Pad using about a 63% more power. 40% more performance for 63% more power is a much closer race than the numbers you quoted (Yours come out to about a 33% increase in performance and an 84% increase in power consumption, which is very different.), and one must remember the circumstances of the comparison. Firstly, it is a comparison at different performance levels (this part is fair, since juicytuna claimed that NVIDIA has had a performance per watt advantage), secondly, it is a long-term performance comparison for a particularly testing methodology, and lastly and most importantly, it is a whole-system comparison, not just comparing the GPU power consumption or even the SOC power consumption.GC2:CS - Monday, January 5, 2015 - link
Yeah, exactly. When you've got two similar platforms with different chips, I think it's safe to say that Tegra pulls significantly more than the A7, because those ~3 additional watts (I don't know where you got your numbers; I know Xiaomi quotes 25.46Wh, that the iPad lasts 330 minutes, and that A7 iPads also push out T-Rex at around 23 fps since the iOS 8 update) have to go somewhere. What I am trying to say is: imagine how low-powered the A7 is if the entire iPad mini at half brightness consumes 4.7W, and how huge those 3W that more or less come from the SoC actually are.
You will increase the power draw of the entire tablet by over half, just to get 40% more performance out of your SoC. The Tegra K1 in the MiPad has a 5W TDP, or more than the entire iPad mini! Yet it can't deliver performance that's competitive enough at that power.
It's like you are a 140 lb man who can lift 100 pounds, but you train a lot until you put on 70 pounds of muscle (pump more power into the SoC) to weigh 210 or more, and you can still only lift something like 140 pounds. What a disappointment!
What I see is a massive increase in power consumption with not-so-massive gains in performance, which is not typical of efficient architectures like nvidia claims the Tegra K1 is.
That's why I think nvidia just kind of failed to deliver on their promise of a "revolution" in mobile graphics.
Yojimbo - Monday, January 5, 2015 - link
I got my benchmark and battery life numbers from the gfxbench.com website as I said in my reply. I got the iPad's battery capacity from the Apple website. I got the Mi Pad's battery capacity from a review page that I can't find again right now, but looking from other places it may have been wrong. WCCFtech lists 25.46 W-h like you did. I don't know where you got YOUR numbers. You cannot say they are "two similar platforms" and conclude that the comparison is a fair comparison of the underlying SOCs. Yes the screen resolutions are the same, but just imagine that Apple managed to squeeze an extra .5 watts from the display, memory, and all other parts of the system than the "foolish chinesse manufacteurs (sic)" were able to do. Adding this hypothetical .5 watts back would put the iPad Mini Retina at 5.2 watts, and the Mi Pad would then be operating at 40% more performance for 48% (or 52%, using the larger battery size you gave for the MiPad) more power usage . Since power usage does not scale linearly with performance this could potentially be considered an excellent trade-off.Your analogy, btw, is terrible. The Mi Pad does not have the same performance as does the bulked-up man in your analogy, it has a whole 40% more. Your use of inexact words to exaggerate is also annoying: "I see massive increases in power compustion, with not-so massive gains in performace"and "You increase the power draw by over half just to get 40% more performance". You increase the power by 60% to get 40% more performance. That has all the information. But the important point is that it is not an SOC-only measurement and so the numbers are very non-conclusive from an analytical standpoint.
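For anyone who wants to check the wattage figures being thrown around in this exchange, they are just battery capacity divided by runtime, with the capacities and GFXBench results taken as quoted above rather than independently verified. A quick sketch:

```python
def avg_power_watts(battery_wh, runtime_min):
    """Average system draw implied by draining a full battery in the given time."""
    return battery_wh / (runtime_min / 60)

ipad_mini = avg_power_watts(23.8, 303)   # ~4.7 W
mi_pad = avg_power_watts(24.7, 193)      # ~7.7 W
print(ipad_mini, mi_pad)
print(mi_pad / ipad_mini - 1)            # ~0.63 -> about 63% more whole-system power
print(30.4 / 21.4 - 1)                   # ~0.42 -> about 40% more performance
```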
GC2:CS - Tuesday, January 6, 2015 - link
What I see from those numbers is that Tegra is nowhere near 50% more efficient than the A7 like nvidia is claiming.
When the GFXBench battery life test runs, the display and the SoC are the two major power draws, so I thought it was reasonable to treat the other power-using parts as negligible.
So the entire iPad mini pulls 4.9W (I don't know why I should add another 0.5W if it doesn't pull that much) and the MiPad pulls 7.9W. Those are your numbers, which actually favor nvidia a bit.
To show you that there is no way around that fact, I will lower the consumption of the MiPad by a watt, just to favor nvidia even more.
Now that we've got 4.9 and 6.9W for the two tablets, I will subtract around 1.5W for display power, which should be more or less the same for both tablets.
So we've got 3.4 and 5.4W for everything but the display, and most of this will be SoC power. And we get that the Tegra K1 uses more or less 50% more power than the A7 for 40% more performance, in a scenario that favors nvidia so much it's extremely unfair.
And even if we take this absurd scenario and scale the power consumption of the Tegra K1 back down quadratically, 1.5*(1.4)^(-2), we still get that, at the A7's level of performance, the K1 will consume over 75% of the A7's power for the same performance.
That is a number that is way, way, way skewed in favor of nvidia, and yet it still doesn't come close to the "50% more efficient" claim, which would require the K1 to consume just 2/3 the power for the same performance.
So please tell me how you can assume that increasing the power draw of the ENTIRE tablet by 60%, just to get 40% more GPU performance out of your SoC, which is a SINGLE part, just a subset of total tablet power draw, can be interpreted as nvidia's SoC being more efficient. Because however I spin it, I am not seeing 3x performance and 50% more efficiency from K1 tablets compared to A7 tablets. I see that K1 tablets throttle to nowhere near 3x faster than A7 iPads, and they run down their battery significantly faster. And if the same is true for the Tegra X1, I don't know why anybody should be excited about these chips.
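The step worth making explicit in the calculation above is the scaling assumption: the commenter takes an estimated full-tilt power ratio of about 1.5x, assumes power falls off roughly with the square of clock/performance when a chip is clocked down, and so arrives at 1.5 x 1.4^-2 at matched performance. A sketch of that model; both the input ratios and the quadratic assumption are the commenter's, not measured values:

```python
power_ratio_full = 1.5   # estimated K1 SoC power vs A7 at full performance (commenter's figure)
perf_ratio = 1.4         # estimated K1 performance advantage over A7 (commenter's figure)

# Assumed model: power scales roughly quadratically with clock/performance when down-clocking
iso_perf_power_ratio = power_ratio_full * perf_ratio ** -2
print(iso_perf_power_ratio)   # ~0.77 -> at A7-level performance the K1 would still use ~77% of the A7's power
```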
Yojimbo - Tuesday, January 6, 2015 - link
You don't think it's possible to save power in any other component of the system than the SOC? I think that's a convenient and silly claim. You can't operate under the assumption that the rest of the two very different systems draw the exact same amount of power and so all power difference comes from the SOC. Obviously if you want to compare SOC power draw you look at SOC power draw. Anything else is prone to great error. You can do lots of very exact and careful calculations and you will probably be completely inaccurate.
juicytuna - Monday, January 5, 2015 - link
That's comparing whole SOC power consumption. There's no doubt Cyclone is a much more efficient architecture than A15/A7. Do we know how much this test stresses the CPU? Can it run entirely on the A7s, or is it lighting up all 4 A15s? Not enough data.
Furthermore, the performance/watt curve on these chips is non-linear, so if the K1 were downclocked to match the performance of the iPad, I've no doubt its results would look much more favourable. I suspect that is why they compare the X1 to the A8X at the same FPS rather than at the same power consumption.
Jumangi - Monday, January 5, 2015 - link
No, it should be done on the actual real-world products people can buy. That's the only thing that should ever matter.
Yojimbo - Monday, January 5, 2015 - link
Not if one wants to compare architectures, no. There is no reason why, in an alternate universe, Apple doesn't use NVIDIA's GPU instead of IMG's. In this alternate universe, NVIDIA's K1 GPU would then benefit from Apple's advantages the same way the Series 6XT GPU benefits in the Apple A8X, and then the supposed point that GC2:CS is trying to make, that the K1 is inherently inferior, would, I think, not hold up.
Jumangi - Monday, January 5, 2015 - link
Apple would never use Nvidia at the power consumption levels it brings. The power is pointless to them if it can't be put into a smartphone-level device. Nvidia still doesn't get why nobody in the OEM market wants their tech for a phone.
Yojimbo - Monday, January 5, 2015 - link
But the NVIDIA SOCs are on a less advanced process node, so how can you know that? You seem to be missing the whole point. The point is not what Apple wants or doesn't want. The point is to compare NVIDIA's GPU architecture to the PowerVR series 6XT GPU. You cannot directly compare the merits of the underlying architecture by comparing performance and power efficiency when the implementations are using different sized transistors. And the question is not the level of performance and power efficiency Apple was looking for for their A8. The question is simply peak performance per watt for each architecture.OreoCookie - Tuesday, January 6, 2015 - link
@YojimboThe Shield was released with the Cortex A15-based Tegra K1, not the Denver-based K1. The former is not competitive with regards to CPU performance, the latter plays in the same league. AFAIK the first Denver-based K1 product was the Nexus 9. Does anyone know of any tablets which use the Denver-based K1?
lucam - Wednesday, January 7, 2015 - link
Apple sells products with a one-year life cycle; they don't sell chips, so they don't need to do any marketing in advance the way NV punctually does at every CES.
TheJian - Monday, January 5, 2015 - link
It's going 16nm FinFET later this year (Parker). As noted here, it's NOT in this chip due to time to market, and there is probably not as much to be gained by shrinking to 20nm vs. going straight to 16nm FinFET anyway. Even Qcom went off the shelf for the S810 again for time to market. Not sure how you get that Denver is a disappointment. It just came out... LOL. It's a drop-in replacement for anyone using the 32-bit K1 (pin compatible), so I'm guessing we'll see more devices pop up quicker than with the first rev, but even then it will have a short life due to X1 and whatever comes in H2 with Denver again (or an improved version).
What do you mean K1 is in ONE device? You're kidding, right? Jeez, just go to Amazon and punch Nvidia K1 into the search. Acer, HP, NV Shield, Lenovo, Jetson, Nexus 9, Xiaomi (MiPad, not sold on Amazon, but you get the point)... The first four SoCs were just to get us to a desktop GPU architecture. The real competition is just starting.
Building the CPU wasn't just for mobile either. You can now go after desktops and higher-end notebooks with NO WINTEL crap in them and all the regular PC trimmings (big PSU, huge fan/heatsink, HDDs, SSDs, discrete GPU if desired, 16-32GB of RAM, etc). All of this is timed perfectly with 64-bit OSes getting polished up for MUCH more complicated apps. The same thing that happened to low-end notebooks with Chromebooks will now happen to low-end PCs at worst, and surely more later as apps advance on Android and SoCs move further up the food chain in power and start running in desktop models at 4GHz with fans/heatsinks (with a choice of discrete GPU when desired). With no Wintel fee (a copy of Windows plus Intel CPU pricing), they will be great for getting poor people into capable gaming systems that do most of what they'd want otherwise (internet, email, docs, media consumption). I hope they move here ASAP, as AMD is no longer competition for Intel CPU-wise.
Bring on the ARM full-PC-like box! Denver was originally supposed to be x86 anyway, LOL. Clearly they want in on Intel/AMD CPU territory, and why not at SoC vs. CPU pricing? NV could sell an amped-up SoC at 4GHz for $110/$150 vs. Intel's top-end i5/i7s ($229/$339). A very powerful machine for $200 less cash at roughly the same performance (and probably another ~$200 saved once you take out the Windows fee). Most people in this group won't miss the Windows apps (many won't even know what Windows is, having grown up on a phone or tablet). Developing nations will love these as apps like a fully featured Adobe suite get ported, making these cheap boxes powerful content creators and potent gamers (duh, NV GPU in them). If they catch on in places like the USA too, Wintel has an even bigger headache and will need to drop pricing to compete with ARM and all its ecosystem brings. Good times ahead in the next few years for consumers everywhere. This box could potentially quad-boot Android, Linux, SteamOS and Chrome, giving massive software options at a great price for the hardware. Software for 64-bit ARM will just keep growing yearly (games and advanced apps).
pSupaNova - Tuesday, January 6, 2015 - link
Agree totally with your post. Nvidia did try to put good mobile chips in netbooks with the ION and ION2, and Intel blocked them. Good to see that they have stuck at the job and are now in a position to start eating Intel's lunch.
darkich - Monday, January 5, 2015 - link
That's just not true. The K1 has shipped in three high-end Android tablets: the Nvidia Shield, Xiaomi MiPad, and Nexus 9.
Now, how many tablets got a Snapdragon 805?
Exynos 5433?
Tegra K1 market performance is simply the result of the fact that high end tablet market is taken up by Apple, and that it doesn't compete in mod range and low end.
darkich - Monday, January 5, 2015 - link
*mid range
GC2:CS - Monday, January 5, 2015 - link
It's the result of power consumption that's too high, which OEMs prefer to keep low. That's why the Tegra K1 is used only by foolhardy Chinese manufacturers like Xiaomi (like Tegra 4 in a phone), by Google in desperate need of a non-Apple high-end 64-bit chip (to showcase how 64-bit it is), and by Nvidia themselves.
Yojimbo - Monday, January 5, 2015 - link
I think you're right that the K1 is geared more towards performance than other SoCs. The K1 does show good performance/watt, but it does so with higher performance, using more watts. And you're right that most OEMs have preferred lower power usage. But that doesn't mean the K1 is a poor SoC. NVIDIA is trying to increase the functionality of the platform by making it a gaming platform. That is their market strategy. It is probably partly their strategy because those are the tools available to them; that is their bread and butter. But presumably they also think mobile devices can really be made into a viable gaming platform. Thinking about it in the abstract, it seems obvious... mobile devices should at some point become gaming platforms. NVIDIA is trying to make this happen now.
esterhasz - Monday, January 5, 2015 - link
Only one of the three devices you mention runs on Denver cores (the Nexus 9), and performance reviews have been very uneven for that device, to say the least.
PC Perv - Monday, January 5, 2015 - link
Oh, I don't know, man. All I know is that every Galaxy tablet has either an Exynos or a Snapdragon in it. OK, maybe not all of them, but I do not think Tegra is in any of them.
kron123456789 - Monday, January 5, 2015 - link
Yeah, but it's either the Exynos 5420 or the Snapdragon 800/801.
darkich - Monday, January 5, 2015 - link
Well, you don't know much then. The Tegra K1 got to market alongside the Snapdragon 805 and Exynos 5433.
Out of those three, the K1 took the most design wins.
Don't compare the K1 with other Snapdragon and Exynos chips, or with the sea of MTK, Rockchip, Allwinner and Intel Atom chips.
It is an entirely different market.
darkich - Monday, January 5, 2015 - link
Clarification - by "most design wins" I was referring to the tablet market, of course.
lucam - Wednesday, January 7, 2015 - link
Let's say two, since one is Nvidia's own reference tablet, and of course it always wins.
chizow - Monday, January 5, 2015 - link
@jwcalla, I'm not sure there's "no fruit" from their investment. They are now on their 6th major iteration of Tegra (1-4, K1, X1) with a major variant in Denver K1, and while their market share and Tegra revenue won't reflect it, they are clearly the market leader in terms of performance for Android SoCs while going toe-to-toe with the monstrous Apple. Not bad, considering I am positive Apple is probably investing more than Nvidia's yearly revenue in keeping their SoCs relevant. ;) Breaking into an established market and growing a business from scratch is hard, but Nvidia clearly sees this as an important battle that needs to be fought. As a shareholder and tech enthusiast, I agree: in 10 years there's no doubt I would want an Nvidia GPU in whatever handheld/thin device I am using.
The problem is that Nvidia lacks the "killer app" that really distinguishes their SoC over others. Even Apple is beginning to understand this as there's nothing on iOS that remotely takes advantage of the A8X's overkill specs. Nvidia needs to grow the Android/mobile gaming market before they really distinguish themselves, and from what I have seen, THAT is their biggest problem right now.
jwcalla - Monday, January 5, 2015 - link
Tegra is an important LOB for NVIDIA, but I'm more talking about how Denver has been received. When it was in the rumor stage, the scuttlebutt seemed to be about how they were going to marry ARMv8 CPU cores with discrete cards and take over the HPC world, etc. Then that got filtered down to "Yeah, Denver is just a custom ARMv8 core for Tegra." (Which isn't earth-shattering; Qualcomm and Apple had been doing custom designs for a long time.) And now it doesn't seem like Denver is really anything special at all. But did it not involve a lot of hype, money, and time over all those years?
chizow - Monday, January 5, 2015 - link
Well, I think that an HPC-embedded ARM core in a massive GPGPU is still a possibility, but again, you're looking at a very focused usage scenario, one which I think was pushed back by the process node delays at 20nm and now 16nm FinFET. We have seen since then that Nvidia's roadmaps have changed accordingly, with some of the features migrating vertically to new generation codenames. But the important point is that Nvidia's investment in mobile makes these options and avenues possible, even if Tegra isn't lighting up the P&L statements every quarter.
Yojimbo - Monday, January 5, 2015 - link
NVIDIA seems to be marrying themselves to IBM in the HPC space, but maybe ARM HPC is a different segment than what PowerPC occupies? I don't know. But IBM has a lot of experience and expertise in the area. Maybe NVIDIA thought they were biting off more than they could chew, maybe the Denver CPU just wasn't performing well enough, or maybe the opportunity with IBM came along because IBM realized they could benefit from NVIDIA, since they didn't have anything to compete with Intel's Xeon Phi, and NVIDIA jumped at it.
Maleficum - Tuesday, January 6, 2015 - link
In fact, Denver IS very special: it's NOT a custom ARM design but an emulator, a reincarnation of Transmeta's Crusoe/Efficeon. The sad thing, however, is that it has TONS of inherent issues, just like the Crusoe/Efficeon.
This time, nVidia made a wise choice by ditching this very questionable design and turning to a traditional native design.
Yojimbo - Tuesday, January 6, 2015 - link
They haven't ditched it. Per at least one top NVIDIA executive, Denver is expected to appear again in future products. Supposedly the reason Denver is not in the X1 is that it wasn't ready for the 20nm process shrink, and they wanted to bring the X1 out faster than Denver would allow. He said Denver is expected to be in 16nm products.
chitownbarber - Tuesday, January 6, 2015 - link
Nvidia hired most of the Transmeta engineers and has implemented at least one similar innovative feature from Transmeta into Denver, called Dynamic Code Optimization, which optimizes frequently used software routines.
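As a toy illustration of what "optimizing frequently used routines" means in a Transmeta-style design (purely conceptual; the block names and threshold here are made up, and this is not how Denver's firmware actually works): profile code as it runs, and once a block proves hot, swap in a cached, optimized translation.

# Conceptual sketch of dynamic code optimization: interpret cold code, translate hot code once.
HOT_THRESHOLD = 3
translation_cache = {}   # block id -> cached "optimized" callable
exec_counts = {}

def run_block(block_id, interpret, translated):
    if block_id in translation_cache:            # hot path: reuse the optimized form
        return translation_cache[block_id]()
    exec_counts[block_id] = exec_counts.get(block_id, 0) + 1
    if exec_counts[block_id] >= HOT_THRESHOLD:   # block turned out to be hot: cache it
        translation_cache[block_id] = translated
    return interpret()                           # cold path: plain interpretation

# e.g. run_block("loop_body", lambda: "interpreted", lambda: "optimized")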
Jumangi - Monday, January 5, 2015 - link
Why are you saying "breaking into" an established market? Nvidia was in that market back with the Tegra 2, but their BS claims fell flat when put into real products and device makers abandoned them. They lost their market and now have to win it back again.
chizow - Monday, January 5, 2015 - link
Really? What major design wins did the Tegra 2 have at the time? They have always been playing catch-up with the likes of Qualcomm, Samsung, even TI back in that time period. At no time has Tegra ever been the market leader in mobile devices, so yeah, so much for that incorrect assertion. Clearly they are trying to break into this market and looking at different ways of doing it.
Jumangi - Monday, January 5, 2015 - link
You must have a short memory. Tegra 2 was used in a number of phones because it was the first commercial quad core SoC, and companies bought into Nvidia's claims. Then reality came, OEMs abandoned them, and they have been trying to turn it around for years now.
chizow - Tuesday, January 6, 2015 - link
Which phones? And still nothing even remotely close to the market share captured and retained by the likes of Qualcomm, or even TI, in that era. As for short memory, again, I believe you are mistaken: Tegra 2 was the first mobile "dual core". Perhaps you were thinking of Tegra 3, which is probably still Nvidia's biggest commercial Tegra success, but still nothing remotely close to capturing the market lead, as it was going up against the likes of Qualcomm's Snapdragon 400 series.
http://www.nvidia.com/object/tegra-superchip.html
chizow - Monday, January 5, 2015 - link
Also, perhaps the biggest boon of Nvidia's investment in mobile has been their amazing turnaround in terms of power efficiency, which is undoubtedly a result of their investment in mobile GPU designs and the emphasis on lowering TDP.
techconc - Monday, January 5, 2015 - link
I would suggest that something like Pixelmator would be a good example of an app that leverages the power of the A8X. Though, I would agree that the A8X is overkill for most apps.
DanD85 - Monday, January 5, 2015 - link
It seems the Denver core will take a back seat this year. Judging from the performance of the Nexus 9, Denver didn't really set the world on fire as Nvidia previously made it out to. I think the K1 was relatively a letdown last year, with limited design wins and spotty performance from the Denver architecture. I wonder when Denver will make a comeback. 2016?
Yojimbo - Monday, January 5, 2015 - link
I can imagine NVIDIA releasing a Denver-plus-updated-Maxwell SoC in 2016 and, if Denver is successful, a Pascal-plus-Denver SoC in 2017. ??? Unless NVIDIA is able to improve their execution enough to release a Pascal-powered SoC in time for next year. That last possibility seems a bit far-fetched considering their history in the segment, though.
jjj - Monday, January 5, 2015 - link
Actually, the high-end SoC market won't be competitive, since only Qualcomm has an integrated modem. Guess four Denver cores wasn't doable on 20nm (die size or clocks), and that's disappointing; I was really looking forward to more big cores. If they can get the CPU perf they claim, it's not bad, but they might have a small window before 16nm shows up.
Seems like another lost year in mobile for Nvidia, if they even care about it anymore; I'm not so sure they do.
A quad Denver at the high end, a dual for midrange and glasses, of course both with integrated modems, and maybe they would have been relevant again.
Krysto - Monday, January 5, 2015 - link
Strange that Nvidia still hasn't made big strides with its "soft-modem" that was supposed to easily support multiple bands at once.
Yojimbo - Monday, January 5, 2015 - link
The soft-modem thing didn't seem to work out the way they had hoped. They seem to have given up trying to compete with Qualcomm in the smartphone market. The OEMs don't like the soft-modem and don't like a separate modem chip. NVIDIA's SoCs just don't differentiate themselves significantly enough from Qualcomm's for the OEMs to accept one of those two things. Plus, Samsung controls most of the Android smartphone market and seems very comfortable with their supplier system. I bet frustration about that on NVIDIA's part is probably partially what led to the patent lawsuit. In any case, I wonder what NVIDIA is doing with Icera currently... if they are trying to sell it, or what.
PC Perv - Monday, January 5, 2015 - link
Not that I think Denver is great or terrible or anything, but modems are not very important on tablets, because the number of 4G tablets is a fraction of the WiFi ones.
darkich - Monday, January 5, 2015 - link
Do you people finally see now just how PATHETIC Intel Core M is?? Its top-of-the-line chip, made on a far superior process, costs $270 and has a GPU that manages around 300 GFLOPS, while this 20nm chip that will sell for well under $100 reaches over 1 TERAFLOP!!
And the yearly doubling of the mobile GPU power continues.
Seems like in 2016 we could see small tablets that will be graphically more capable than Xbox one
Krysto - Monday, January 5, 2015 - link
No disagreement there. Broadwell is a dud (a weak update to Haswell), and Broadwell-Y/Core M is a scam that will trick users into buying low-performance, expensive chips.
kron123456789 - Monday, January 5, 2015 - link
"Seems like in 2016 we could see small tablets that will be graphically more capable than Xbox one" - I don't think that even Nvidia can make an SoC with roughly 3x the performance of the Tegra X1 within one year. Maybe in 2017-2018?
darkich - Monday, January 5, 2015 - link
Well, according to raw output, the X1 is already close to the Xbone (1TFLOPS vs 1.35TFLOPS). Assuming Nvidia doubles it again next year, even the PS4 could be within reach.
TheFlyingSquirrel - Monday, January 5, 2015 - link
The 1 TFLOPS of the X1 is for FP16. The 1.35 of the Xbox One is FP32. The FP32 performance of the X1, as stated in the article, is 512 GFLOPS.
stacey94 - Monday, January 5, 2015 - link
There is a massive memory bandwidth deficiency to overcome. It might have the raw processing power, but it won't perform anywhere near as well.
kron123456789 - Monday, January 5, 2015 - link
Well, memory bandwidth is a different story)))
texasti89 - Saturday, January 10, 2015 - link
3D stacked memory and technologies like NVLink, which are expected to arrive in 2016, will solve the memory bandwidth limitations. We might very well soon see a massive 1 TB/s of bandwidth on mobile SoCs. I don't think bandwidth is the hurdle, but rather the power wall, which we can overcome by scaling the manufacturing process.
Someguyperson - Monday, January 5, 2015 - link
If you actually read the chart, the 1 TFLOP number was reached with FP16 operations and not FP32 operations, like literally EVERYONE ELSE uses. The quoted FP32 number is 0.5 TFLOPs, so it wouldn't be until 2017-18 that Tegra could actually reach Xbox One performance without cheating the numbers.
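For anyone wondering where those headline figures come from, the usual back-of-the-envelope formula is peak FLOPS = ALUs x 2 (a fused multiply-add counts as two ops) x clock, with FP16 doubled when two half-precision ops can be packed per lane. A rough sketch, assuming the 256 FP32 lanes and ~1 GHz GPU clock discussed in this thread:

# Peak-throughput arithmetic (256 lanes and a ~1.0 GHz clock are assumptions from the thread).
def peak_gflops(alus, clock_ghz, ops_per_alu_per_clock=2):
    return alus * ops_per_alu_per_clock * clock_ghz

x1_fp32 = peak_gflops(256, 1.0)   # ~512 GFLOPS FP32
x1_fp16 = x1_fp32 * 2             # ~1024 GFLOPS when two FP16 ops pack per lane
print(x1_fp32, x1_fp16)           # vs. the Xbox One's ~1.35 TFLOPS, which is an FP32 figure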
Jumangi - Monday, January 5, 2015 - link
It doesn't need more than that for the GPU. More GPU power in a Core M is wasted for the type of products it's used in. You build the chip that is balanced for the market you're selling to. Why is this so beyond people who always look at every chip on the same level?
LocutusEstBorg - Monday, January 5, 2015 - link
As long as it's only on the unprofitable, inconsistent disaster that is Android, it's completely useless to the end user. Not a single game will be optimised for it, and every game on the Play Store will continue to run like crap and crash on half the devices. They need to adopt a well-managed OS like Windows Phone with proper drivers and release optimised apps on the Windows Store.
kron123456789 - Monday, January 5, 2015 - link
If they could get an x86 license it would be much better.
darkich - Monday, January 5, 2015 - link
Lol, what an Android-hating troll.
pSupaNova - Monday, January 5, 2015 - link
If games run well on half of Android devices, that's still 20x the installed user base of Windows Phone devices. Also, how many OEMs ever made a profit using Windows Phone?
tipoo - Monday, January 5, 2015 - link
Is this impression first-hand? What device? Because my low-end Moto G never crashes, and the Play Store is completely smooth, more so than my iPad Mini in fact. This is a low-end Android device with only Cortex A7 cores and 1GB of memory backing them up.
tipoo - Monday, January 5, 2015 - link
Oh, I read that wrong, you meant the games, not the Play Store. Still, games almost never crash on this either.
PC Perv - Monday, January 5, 2015 - link
Why do you guys write what essentially are PR statements by NV as if they were facts you had independently validated yourselves? I suppose you guys did not have time to test any of these claims. So you end up writing contradictory paragraphs one after another. In the first, you say NVIDIA "embarked on a mobile first design for the first time." That statement in and of itself is not something one can prove or disprove, but in the very next paragraph you write,
"By going mobile-first NVIDIA has been able to reap a few benefits.. their desktop GPUs has resulted chart-topping efficiency, and these benefits are meant to cascade down to Tegra as well." (??)
I suggest you read that paragraph again. Maybe you missed something, or worse, the whole paragraph comes off as unintelligible.
ABR - Monday, January 5, 2015 - link
Well, the situation itself is confusing, since NVIDIA might have designed Maxwell "mobile-first" but actually released it "desktop-first". Then came notebook chips, and now we are finally seeing Tegra. So release-wise the power efficiency "cascades down", even though they presumably designed starting from the standpoint of doing well under smaller power envelopes.
PC Perv - Monday, January 5, 2015 - link
But that is a tautology that is totally vacuous of meaning. One could say the opposite thing in the exact same way: "We went with desktop first, but released to mobile first, so the power efficiency we learned cascaded up to the desktops." So the impression one gets from reading that explanation is that it does not matter whether it was mobile-first or desktop-first. It is wordplay that is void of meaningful information (but designed to sound like something, I guess).
Yojimbo - Monday, January 5, 2015 - link
Isn't that standard reviewing practice? "Company X says they did Y in their design, and it shows in Z." The reviewer doesn't have to plant a mole in the organization and verify whether NVIDIA really did Y as they said. This is a review, not an interrogation. If the results don't show in Z, then the reviewer will question the effectiveness of Y, or maybe whether Y was really done as claimed. Yes, the logical flow of the statement you quoted is a bit weak, but I think that has to do with perhaps poor writing and not with being some sort of shill, as you imply. The fact is that result Z, power efficiency, is there in this case, and it has been demonstrated on previously released desktop products. As for your statement that one could say the opposite thing and have the same meaning, I don't see it. Going "mobile-first" means focusing on power efficiency in the design of the architecture; it has nothing to do with the order in which products are released. That is what the author means by "mobile-first," in any case. To say that NVIDIA went "desktop-first" would presumably mean that raw performance, not power efficiency, was the primary design focus, and so the proper corresponding statement would be: "We went desktop-first, but released to mobile first, and the performance is meant to cascade up (is that a phrase? it probably should be scale up, unless you live on a planet where waterfalls fall upwards) to the desktops." There are two important notes here. Firstly, one could not assume that a desktop-first design should result in increased mobile performance just because a mobile-first design results in increased desktop efficiency. Secondly, and more importantly, you replaced "is meant to" with "so". "So" implies causation, which directly introduces the logical problem you are complaining about. The article says "is meant to," which implies that NVIDIA had forethought in the design of the chip, with this release in mind, even though the desktop parts launched first. That pretty much describes the situation as NVIDIA tells it (and I don't see why you are so seemingly eager to disbelieve it; the claimed result, power efficiency, is there, as I previously said), and though maybe written confusingly, it doesn't have major logical flaws: "1. NVIDIA designed mobile-first, i.e., for power efficiency. 2. We've seen evidence of this power efficiency in previously released desktop products. 3. NVIDIA always meant for this power efficiency to similarly manifest itself in mobile products." The "cascade down" bit is just a color term.
Yojimbo - Monday, January 5, 2015 - link
I just want to note that I don't think the logical flow of the originally written statement is as weak as I conceded in my first paragraph. In your paraphrase-quote you left out the main clause and instead included a subordinate clause and treated it as the main clause. The author is drawing a parallel and citing evidence at the same time as making a logical statement, and does so in a way that is a little confusing, but I don't think it really has weak logical flow.
chizow - Monday, January 5, 2015 - link
Anyone who is familiar with the convergence of the Tegra and GeForce/Tesla roadmaps and design strategy understands what the author(s) meant to convey there. Originally, Nvidia's design goal was to build the biggest, fastest GPU they could: massive monolithic GPGPUs built primarily for intensive graphics and compute applications. This resulted in an untenable trend of increasingly bigger and hotter GPUs.
After the undeniably big, hot Fermi arch, Nvidia placed an emphasis on efficiency with Kepler, but on the mobile side of things, they were still focusing on merging and implementing their desktop GPU arch with their mobile, which they did beginning with Tegra K1. The major breakthrough for Nvidia here was bringing mobile GPU arch in-line with their established desktop line.
That has changed with Maxwell, where Nvidia has stated, they took a mobile-first design strategy for all of their GPU designs and modularized it to scale to higher performance levels, rather than vice-versa, and the results have been obvious on the desktop space. Since Maxwell is launching later in the mobile space, the authors are saying everyone expects the same benefits in terms of power saving from mobile Maxwell over mobile Kepler that we saw with desktop Maxwell parts over desktop Kepler parts (roughly 2x perf/w).
There's really no tautology if you take the time to understand the development and philosophy behind the convergence of the two roadmaps.
Mondozai - Monday, January 5, 2015 - link
Mondozai - Monday, January 5, 2015 - link
No, it's not unintelligible, for reasons that other people have already explained. If you understand the difference between what it was developed for and what was released first, you understand the statement. And apparently you don't.
OBLAMA2009 - Monday, January 5, 2015 - link
man nvidia is such a joke
MasterTactician - Monday, January 5, 2015 - link
512 GFLOPS... an 8800GTX in a phone, anyone? Impressive.
kron123456789 - Monday, January 5, 2015 - link
Not exactly. The 8800GTX has far more TMUs and much faster memory.
GC2:CS - Monday, January 5, 2015 - link
Not exactly in a phone... rather in a tablet or a notebook.
PC Perv - Monday, January 5, 2015 - link
Perhaps you guys can carry a power bank of known quality to this type of demo and use it instead of whatever the demo unit is hooked up to? I was appalled to see a Nexus 9's battery percentage dropping while it was being charged at a local Microcenter. Granted, I do not know what kind of power supply it was hooked up to, but all it was running was a couple of Chrome tabs.
Maleficum - Monday, January 5, 2015 - link
I simply cannot trust anything nVidia says. The K1 Denver is such a benchmark cheater.
ajangada - Monday, January 5, 2015 - link
Umm... What?
chizow - Monday, January 5, 2015 - link
Fun fact: Nvidia was the only GPU/SoC vendor that *DIDN'T* cheat in AnandTech's recent benchmark cheating investigations. ;)
http://www.anandtech.com/show/7384/state-of-cheati...
techconc - Monday, January 5, 2015 - link
@chizow: Another fun fact: the article you reference was specifically addressing the state of cheating among Android OEMs. In fact, the article specifically states "With the exception of Apple and Motorola, literally every single OEM we’ve worked with ships (or has shipped) at least one device that runs this silly CPU optimization." Perhaps you're going to fall back on weasel words and claim that neither Motorola nor Apple are GPU/SoC vendors. If that's the case, then you should also note that this kind of cheating is done at the OEM level, not the SoC vendor level.
chizow - Monday, January 5, 2015 - link
It was actually a simple oversight; I thought I had mentioned Android SoC/GPU vendor, but it may be because I saw it in the link instead.
Maleficum - Tuesday, January 6, 2015 - link
The link you gave doesn't contain anything related to the Denver core, which cheats at the firmware level. Of course, it's called "optimization" by nVidia.
chizow - Tuesday, January 6, 2015 - link
Proof of such cheats would be awesome; otherwise I guess we can just file it under FUD.
harrybadass - Monday, January 5, 2015 - link
Nvidia X1 is somehow already obsolete when compared to the A8X.
GXA6850
Clusters 8
FP32 ALUs 256
FP32 FLOPs/Clock 512
FP16 FLOPs/Clock 1024
Pixels/Clock (ROPs) 16
Texels/Clock 16
psychobriggsy - Monday, January 5, 2015 - link
NVIDIA is claiming power savings compared to the A8X at the same performance level. And additionally, they can run the X1 GPU at ~1GHz to achieve greater performance than the A8X. However, the A8X's lower GPU clock is just a design decision by Apple so they can guarantee battery life isn't sucky when playing games.
And yet, hardware-wise the X1's GPU specification isn't that amazing when compared to the A8X's GPU.
Last up, how does a quad-A57 at 2+ GHz compare to a dual 1.5GHz Cyclone...
techconc - Monday, January 5, 2015 - link
Isn't it always amazing how company A's future products compete so well against company B's current products? The X1 won't be competing with the A8X; it will be competing against the A9X. If you're familiar with the PowerVR Rogue 7 series GPUs, you wouldn't be terribly impressed with this recent nVidia announcement. It keeps them in the game as a competitor, but they will not be on top. Further, I'm quite certain that Apple's custom A9 chip will compare well to the off-the-shelf A57 reference designs in terms of performance, efficiency, or both. If there were no benefits to Apple's custom design, they would simply use the reference designs, as nVidia has chosen to do.
Yojimbo - Monday, January 5, 2015 - link
Yes, but how do you compare your product to something that isn't out yet? You can't test it against rumors. It must be compared with the best of what is out there, and then one must judge whether the margin of improvement over the existing product is impressive or not. The PowerVR Rogue 7 series is due to be in products when? I doubt it will be any time in 2015 (maybe I'm wrong). When I read the Anandtech article on the details of IMG's upcoming architecture a few months back, I had a feeling they were trying to set themselves up as a takeover target. I don't remember exactly why, but it just struck me that way. I wonder if anyone would want to risk taking them over while this NVIDIA patent suit is going on, however.
OreoCookie - Tuesday, January 6, 2015 - link
The Tegra X1 isn't out yet either! If you look at Apple's product cycle, it's clear that in the summer Apple will release an A9 when they launch the new iPhone. And you can look at Apple's history to estimate the increase in CPU and GPU horsepower.
Yojimbo - Tuesday, January 6, 2015 - link
But NVIDIA HAS the Tegra X1. They are the ones making the comparisons, and the Tegra X1 is the product being compared! Apple seems to have been releasing their phones in the fall recently, and neither NVIDIA nor the rest of the world outside Apple and their partners has any idea what the A9 is like, so it can't be used for a comparison! It's the same for everyone. When Qualcomm announced the Snapdragon 810 in April of 2014, they couldn't have compared it to the Tegra X1, even though that's what it will end up competing with for much of its life cycle.
Yojimbo - Monday, January 5, 2015 - link
Perhaps those are the raw max-throughput numbers, but if it were that simple there would be no reason for benchmarks. Now let's see how they actually perform.
edzieba - Monday, January 5, 2015 - link
12 cameras at 720p120?! VERY interested in DRIVE PX, even if it'd never end up near a car.
ihakh - Monday, January 5, 2015 - link
About the Intel chip, I have to say that it is a very good CPU (think about SSE and AVX) plus a little GPU, while the Nvidia chip is a good GPU plus a reasonable CPU.
You can have x86 Windows on the Intel chip and run something like MATLAB (also Android),
and you can have a good gaming experience with Nvidia's.
Each of them has its use for certain users.
It's not as if every program can use 1 TFLOPS of Tegra GPU,
and it's not as if every user is "game crazy".
Intel's Core M has its own users.
And of course the Tegra chip runs very hot for mobiles, so it is a hard decision for the engineers who design phones and tablets to migrate from a known chip like Snapdragon to an unknown, new chip like Tegra.
I think both Nvidia and Intel are doing well and neither deserves blame,
but it would be a good idea for Nvidia to make a cooler chip for mobiles.
Morawka - Monday, January 5, 2015 - link
So compared to the K1 it's twice as fast, and it also uses half the energy. Does that mean it will still be a ~7W SoC, albeit twice as fast?
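A "2x performance per watt" claim is a ratio, so it can be cashed out in two different ways; a quick sketch (treating the ~7W figure above as an assumption, not a published spec):

perf_per_watt_gain = 2.0
k1_power_w = 7.0                                      # assumed K1 envelope from the question above

same_power_speedup = perf_per_watt_gain               # ~2x the performance in the same ~7 W budget
same_perf_power_w = k1_power_w / perf_per_watt_gain   # or the same performance at ~3.5 W
print(same_power_speedup, same_perf_power_w)
# It is not "twice as fast AND half the power" at a single operating point.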
Morawka - Monday, January 5, 2015 - link
omg, I'm gonna go buy some Nvidia stock now. Not because of the X1, but because of the automotive platforms.
iwod - Monday, January 5, 2015 - link
That is some impressive GPU performance per watt. However, I think LPDDR4, with double the bandwidth, does help the X1's performance. But even accounting for that difference, the A8X GPU still does not hold up against Maxwell, assuming Nvidia's benchmarks can be trusted. It should be noted that the A8X is partly a custom GPU from Apple; since it doesn't come directly off IMG, it is likely not as power efficient as it could be.
junky77 - Monday, January 5, 2015 - link
Where's AMD in all of this?
chizow - Monday, January 5, 2015 - link
They're not in the discussion; blame Dirk "Not Interested in Netbooks" Meyer for that one.
junky77 - Monday, January 5, 2015 - link
:( But what about all the other stuff shown here, vehicles and such (not that I think there will be good AI in 2017, but still)?
GC2:CS - Monday, January 5, 2015 - link
This chip looks awesome, but so did all the Tegras before it. Like the Tegra K1: a huge announcement supposed to bring a "revolution" to mobile graphics computing, which turned out to be a power hog, pulling so much power it was absolutely unsuitable for any phone, and it's also throttling significantly.
This looks like the same story yet again: lots of marketing talk, lots of hype, no promise delivered.
pSupaNova - Monday, January 5, 2015 - link
Nothing wrong with the Tegra K1 in either form; I have a Shield Tablet and a Nexus 9. I have a program that I modified so I can run https://www.shadertoy.com/ shaders natively, and both tablets are impressively fast.
Nvidia just needs to make sure they are on the same process as Apple and they will have the fastest SoC, CPU- and GPU-wise.
techconc - Tuesday, January 6, 2015 - link
Apple is expected to move to 14nm for the A9. That's just speculation, but given Apple's position in the supply chain as opposed to nVidia's, I would be surprised if nVidia were able to be on the same process. With regards to CPUs, since nVidia has regressed from the Denver core to the standard reference designs, I wouldn't expect nVidia to have any CPU advantage, certainly not with single-threaded apps anyway. As for the GPU, the Rogue 7 series appears to be more scalable, with up to 512 "cores". If the X1 chip ends up with any GPU advantage, it would not be for technical reasons, but simply because of how far Apple chose to scale theirs. Given that Apple has historically chosen rather beefy GPUs, I would again be surprised if they allowed the X1 to have a more powerful GPU. We'll see.
kron123456789 - Monday, January 5, 2015 - link
"it's also throttling significantly" — Um, no. It throttles under heavy load, but by about 20% in the worst case. It was the Snapdragon 800/801 and Exynos 5430 that throttled significantly.
jwcalla - Monday, January 5, 2015 - link
The fact that the announcement for this chip was coordinated with an almost exclusive discussion of automotive applications -- and correct me if I'm wrong, but it does not appear they even discussed gaming or mobile applications, except for the demo -- could be a signal indicating which markets NVIDIA wants to focus Tegra on and which markets they're abandoning. A couple of years back Jen-Hsun said that Android was the future of gaming, but I wonder if he still believes that today?
I do think there is some truth to the idea that there is not much of a consumer market for high-end mobile graphics. Other than making for a great slide at a press event (Apple), there doesn't seem to be much of a use case for big graphics in a tablet, and the kind of casual games people play there doesn't seem to align with Nvidia's strengths.
kron123456789 - Monday, January 5, 2015 - link
If he still believes in Android gaming, Nvidia will announce a new Shield Portable at CES or MWC.
techconc - Monday, January 5, 2015 - link
Casual games will always be more popular than the more hardcore type of games. That said, there are plenty of mobile games that push the system hard. Try something like World of Tanks Blitz on your device. On an iPad Air, it's a smooth 60 fps. On a lark, I tried it on a Nexus 7 once it finally came out for Android. It was an unplayable 15 fps (max), and the graphics aren't up to the PC level either. The point being, there is plenty of need for more powerful mobile gaming systems, and the average "budget" device just isn't up to par for such needs.
Yojimbo - Monday, January 5, 2015 - link
I don't think they are necessarily abandoning the gaming market. They could be giving a presentation for their investors to get excited about. Mobile gaming could still be a long-term plan, but they may not see enough growth there this next cycle to give them significant profits, whereas these automotive initiatives are something new they can try to get people excited about.
ramabg - Monday, January 5, 2015 - link
Intel should start licensing Nvidia's GPUs instead of using its slow in-house GPU.
ab303 - Monday, January 5, 2015 - link
How fair is the power comparison? The iPad Air 2 has complex, intrusive rework (from the picture), while on the X1 platform the profiler-reported power is used. The rework itself could contribute power overhead on the iPad Air 2. Also, the A8X is a tile-based renderer (heavy on-chip memory access) while the X1 is a direct renderer that requires heavy DDR access, so a GPU-core power comparison that excludes DDR can be very misleading.
lucam - Friday, January 9, 2015 - link
Indeed, it's a very good point, and you're the only one to have noticed it.
yhselp - Tuesday, January 6, 2015 - link
Just to confirm: they're actually running system memory at 3200 MHz, correct? And the quoted 25.6 GB/s memory bandwidth does not factor in color compression?
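For reference, the quoted figure is just the raw interface arithmetic; compression doesn't add bandwidth, it only reduces how much data has to cross the bus. A quick sketch, assuming the 64-bit LPDDR4 interface at 3200 MT/s being discussed:

# Raw DRAM bandwidth = bus width in bytes * transfer rate (assumed 64-bit LPDDR4 at 3200 MT/s).
bus_width_bytes = 64 // 8      # 8 bytes per transfer
transfer_rate_gtps = 3.2       # giga-transfers per second
print(bus_width_bytes * transfer_rate_gtps, "GB/s")   # 25.6 GB/s before any compression savings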
Rock1m1 - Tuesday, January 6, 2015 - link
I hope the X1 ends up in the next Shield tablet. If Nvidia really wants to impress, however: Shield Portable 2.
Johnflo - Wednesday, January 7, 2015 - link
It could be used in Nvidia Drive car computers. Read at http://www.mobileinhand.com/tegra-x1-chip-nvidia-s...
watzupken - Wednesday, January 7, 2015 - link
To be honest, the K1 was pretty impressive and so is the X1. It's good to see Nvidia pushing the graphics limit in the Android camp. However, they are usually let down by higher power consumption, which makes them less suited for mobile phone usage. Looking forward to seeing one at least in a Shield tablet, or possibly a good Android box.
KateC - Thursday, January 8, 2015 - link
Regarding the comment on AMD having FP16 support in GCN 1.2: is this full-featured support, e.g., FP16 at double the FP32 rate?
Parablooper - Thursday, January 22, 2015 - link
Does anyone know if this will support 64-bit operating systems? I know for sure that the K1 only went up to 32-bit. I'm thinking of buying a Chromebook but am torn between one with a low-end Intel processor for more productivity and one with an NVIDIA processor for at least some graphics capability.
Keermalec - Friday, April 17, 2015 - link
Nvidia should make a phone with an underclocked X1.
yhselp - Thursday, July 28, 2016 - link
Rereading this article after the report that Nintendo's NX - their new flagship console - would be powered by NVIDIA's Tegra is so enlightening. It's like reading a whole new preview. Many things start making sense in this new context:
HDMI 2.0 and 4K60 support;
16 ROPs;
Aggressive clockspeed;
Conservative rasterization and MFAA.
To quote the article: "It seems obvious that this would be a great SoC to put in a gaming tablet and a variety of other mobile devices, but it remains to be seen whether NVIDIA can get the design wins necessary to make this happen."
What a conclusion! And what a gaming tablet it would be. You couldn't have known how those words would ring today - over a year later. Talk about a design win. Awesome.
P.S. Please, do an article on the Nintendo NX reports.