As someone who analyzes GPUs for a living, one of the more vexing things in my life has been NVIDIA’s Maxwell architecture. The company’s 28nm refresh offered a huge performance-per-watt increase for only a modest die size increase, essentially allowing NVIDIA to offer a full generation’s performance improvement without a corresponding manufacturing improvement. We’ve had architectural updates on the same node before, but never anything quite like Maxwell.

The vexing aspect to me has been that while NVIDIA shared some details about how they improved Maxwell’s efficiency over Kepler, they have never disclosed all of the major improvements under the hood. We know, for example, that Maxwell implemented a significantly altered SM structure that was easier to reach peak utilization on, and thanks to its partitioning wasted much less power on interconnects. We also know that NVIDIA significantly increased the L2 cache size and did a number of low-level (transistor level) optimizations to the design. But NVIDIA has also held back information – the technical advantages that are their secret sauce – so I’ve never had a complete picture of how Maxwell compares to Kepler.

For a while now, a number of people have suspected that one of the ingredients of that secret sauce was that NVIDIA had applied some mobile power efficiency technologies to Maxwell. It was, after all, their first mobile-first GPU architecture, and now we have some data to back that up. Friend of AnandTech and all-around tech guru David Kanter of Real World Tech has gone digging through Maxwell/Pascal, and in an article & video published this morning, he outlines how he has uncovered very convincing evidence that NVIDIA implemented a tile based rendering system with Maxwell.

In short, by playing around with some DirectX code specifically designed to look at triangle rasterization, he has come up with some solid evidence that NVIDIA’s handling of triangles has significantly changed since Kepler, and that their current method of triangle handling is consistent with a tile based renderer.
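The core idea behind such a probe is simple: if you record the order in which the pixels of a single large triangle get shaded, a conventional scanline (immediate mode) rasterizer and a tiled rasterizer produce visibly different patterns. The following is a minimal illustrative sketch of that idea in Python, not Kanter’s actual DirectX test; the functions and the tile size are purely hypothetical:

```python
# Illustrative sketch: compare the pixel visit order of a scanline
# rasterizer with that of a tiled rasterizer. Coloring each pixel by its
# visit order is what makes the tiling pattern visible on real hardware.

def scanline_order(width, height):
    """Immediate-mode style: visit pixels row by row across the frame."""
    return [(x, y) for y in range(height) for x in range(width)]

def tiled_order(width, height, tile=4):
    """Tiled style: visit every pixel of one tile before moving on."""
    order = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            for y in range(ty, min(ty + tile, height)):
                for x in range(tx, min(tx + tile, width)):
                    order.append((x, y))
    return order

# Both orders cover the same pixels, but diverge as soon as a row
# crosses a tile boundary -- which is exactly what the probe looks for.
```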


NVIDIA Maxwell Architecture Rasterization Tiling Pattern (Image Courtesy: Real World Tech)

Tile based rendering is something we’ve seen for some time in the mobile space, with both Imagination’s PowerVR and ARM’s Mali GPUs implementing it. The significance of tiling is that by splitting a scene up into tiles, the GPU can rasterize it piece by piece almost entirely on die, as opposed to the more memory (and power) intensive process of rasterizing the entire frame at once via immediate mode rendering. The trade-off with tiling, and why it’s a bit surprising to see it here, is that the PC legacy is immediate mode rendering, and this is still how most applications expect PC GPUs to work. So to implement tile based rasterization on Maxwell means that NVIDIA has found a practical means to overcome the drawbacks of the method and the potential compatibility issues.
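To make the on-die angle concrete, here is a minimal sketch of the binning pass that tile-based schemes rely on. This is not NVIDIA’s implementation (those details remain undisclosed); the tile size and function names are assumptions for illustration. Triangles are first sorted into the screen tiles their bounding boxes overlap, so that each tile can later be rasterized against only its own short list, with color and depth traffic staying in on-chip memory until the tile is finished:

```python
# Sketch of tile binning: pass 1 of a tile-based renderer. Each triangle
# is recorded against every screen tile its bounding box touches.

TILE = 16  # hypothetical tile size in pixels

def bounding_tiles(tri, width, height):
    """Yield the (tx, ty) tiles overlapped by a triangle's bounding box."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    x0 = max(0, int(min(xs)) // TILE)
    x1 = min((width - 1) // TILE, int(max(xs)) // TILE)
    y0 = max(0, int(min(ys)) // TILE)
    y1 = min((height - 1) // TILE, int(max(ys)) // TILE)
    for ty in range(y0, y1 + 1):
        for tx in range(x0, x1 + 1):
            yield (tx, ty)

def bin_triangles(triangles, width, height):
    """Build a per-tile list of the triangle indices that may touch it."""
    bins = {}
    for i, tri in enumerate(triangles):
        for tile in bounding_tiles(tri, width, height):
            bins.setdefault(tile, []).append(i)
    return bins

# Pass 2 (not shown) would rasterize each tile's list independently,
# flushing the small on-chip tile buffer to DRAM only once per tile.
tris = [((2, 2), (30, 4), (5, 28)),      # spans four 16x16 tiles
        ((40, 40), (44, 40), (42, 44))]  # fits entirely in one tile
bins = bin_triangles(tris, 64, 64)
```

The power saving falls out of pass 2: because a tile’s working set fits on die, the framebuffer round trips that dominate immediate mode rendering’s memory traffic largely disappear.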

In any case, Real World Tech’s article goes into greater detail about what’s going on, so I won’t spoil it further. But with this information in hand, we now have a more complete picture of how Maxwell (and Pascal) work, and consequently how NVIDIA was able to improve over Kepler by so much. Finally, at this point in time Real World Tech believes that NVIDIA is the only PC GPU manufacturer to use tile based rasterization, which also helps to explain some of NVIDIA’s current advantages over Intel’s and AMD’s GPU architectures, and gives us an idea of what we may see them do in the future.

Source: Real World Tech

Comments

  • JiggeryPokery - Saturday, August 6, 2016 - link

    You don't seem to have the slightest idea what you're talking about. This is just about tile based rendering; there's nothing dodgy or underhanded about it, it's just another rendering technique with its own benefits and drawbacks. TBR has been around for many years and is still used today by numerous mobile graphics chips from the likes of ARM and PowerVR.
  • versesuvius - Saturday, August 6, 2016 - link

    Oh, yes. Now I remember. Thank you.
  • Scali - Saturday, August 6, 2016 - link

    "You say Nvidia has found a way that makes tiling possible without the programmer knowing anything about it."

    As JiggeryPokery already said, tile-based rendering has been around for ages. Intel has done it, ARM does it, Imagination Tech (PowerVR) does it, and even AMD has some form of tile-rendering.

    "Where is it? In the hardware?"

    Yes, it is in the hardware, which you could have seen if you bothered to check out the patent linked above: https://www.google.com/patents/US20140118366

    "Does it cooperate with other parts of the system? You don't know."

    Erm, what kind of questions are these, even? Maxwell has been on the market for about 2 years, and runs all the Direct3D, OpenGL and Vulkan software you throw at it, from Windows or Linux. Obviously it works just fine; people didn't even notice anything unusual going on because, as already said, the implementation is transparent to the API and applications (and so to 'other parts of the system').

    "Your attitude only helps Nvidia to get away with what is clearly another of its unsavory practices."

    How exactly is this even unsavoury? To the end user it works the same as older nVidia GPUs or AMD GPUs, it just makes it a bit more efficient, yielding higher performance and lower power consumption. Sounds like a win-win to me.

    "But Nvidia will have to answer for it sooner or later."

    For what? Making GPUs with the highest performance and best performance-per-watt?
  • versesuvius - Saturday, August 6, 2016 - link

    Because while all those companies have been using it or working on it and have not been quiet about it, Nvidia has kept quiet. If it is a trade secret, why has everybody else talked openly about it and applied for patents on the technology while Nvidia has not? Is it a matter of not naming the devil, so its competitors will not know, try it themselves, and remove the advantage Nvidia has over them? Hardly. As you say, a lot of people have known about this and have used the technique, even Google, again a software company. I also wonder why no one has put the question to Nvidia after this discovery. This is intriguing stuff, but not only Nvidia but all the tech sites have forgotten about it. It is transparent to the API, but that does not mean programmers cannot make good use of it purposefully. Why is Nvidia quiet about it? Is there shame in applying for such a valuable patent? Is there shame in announcing their prowess and how far ahead of the game they are? Of course not. The thing is that there are people like you who find honor in what Nvidia is doing, and Nvidia has perhaps always counted on that, and thus developed into what it is now.

    As for the Google patent, it sounds like bullshit to me. More like something filed by a patent troll.
  • Scali - Saturday, August 6, 2016 - link

    "As for the Google patent, it sounds like bullshit to me. More like something filed by a patent troll."

    Google just provides a service to search and view patents.
    If you bother to look, you see that it links directly to the US Patent Office. It's the real thing.
    I think I know who is doing the trolling here...
  • versesuvius - Sunday, August 7, 2016 - link

    If you say so. The problem was and is that I am posting from Iran, and since we are under so many American sanctions, including those Google has imposed at the behest of the US government, the page does not load completely. Different parts of the page are fetched from different servers, and some of those servers may be shut off to traffic to and from Iran, while others are filtered out by Google due to sanctions. The nastiest case is when the style sheet is missing. However, the abstract came through, and it still sounds like bullshit, or, given the American patent system, maybe a defense against other patents.

    On the other hand, you, who are obviously under no such sanctions, have not come up with any explanation as to why Nvidia is so quiet about this technology while everybody else has been quite clear about it. And while you are at it (at the risk of offending the righteous, mighty, whatever American government), who did file that patent?
  • Scali - Sunday, August 7, 2016 - link

    "Problems was and is, that I am posting from Iran"

    You're funny. Apparently you cannot even access all the information that is around on the internet, yet you post with an arrogant and all-knowing attitude, and sling all sorts of accusations around, which you can't possibly base on proper information, since you cannot access this information.
    A normal person would not hold and defend strong opinions about things they know too little of. In fact, they'd say upfront that they do not have access to this information, or have not researched the topic in-depth.

    "On the other hand, you who are obviously under no such sanctions have not come up with any explanations as to why Nvidia is so quiet about this technology, while everybody else has been quite clear about it."

    You're turning things around. There is no reason why NVidia needs to disclose every detail of their implementation. No vendor ever does.
    Some vendors may focus more on tile-based rendering, because especially in the case of Imagination Tech, it is their primary strength. They target the mobile market, and their approach can significantly improve power efficiency.
    For desktop cards it is not that relevant. And since Imagination Tech flopped on the desktop and gave tile-based rendering a bad reputation there, you'd first have to prove that it actually is a valid approach, before you can promote it as a feature.
    I think nVidia has done exactly that: They have been using it for a few years now, in GPUs that were very successful.

    "who did file that patent?"

    Inventors: HAKURA; Ziyad S.; (Gilroy, CA) ; DIMITROV; Rouslan; (San Carlos, CA)
    Applicant: NVIDIA Corporation, Santa Clara, CA, US
    Assignee: NVIDIA Corporation, Santa Clara, CA
  • versesuvius - Sunday, August 7, 2016 - link

    I am not turning anything around. I just told you why I do not have the details of the patent you linked to.

    So that is what Nvidia has been doing for some years (date of the patent?), in fact for two generations of its GPUs (and, according to you, "proving that it is a valid approach", LOL!), and all the while AMD has been sitting on its bottom doing nothing about it, although there was an Nvidia patent for it all along and AMD knew very clearly the general direction it had to take, yet did not move in that direction. Next thing you know, you will be saying that what Mr. Hakura and Mr. Dimitrov achieved is the only way in the universe to do tiling on modern PC GPUs, and that it would be a hopeless waste of resources for AMD and also Intel, dumbass engineers that they are, to implement tiling on their GPU systems and reap the enormous benefits it brings.

    Still, I think that the basic ideas put in that patent are bullshit, just a play with some technical terms, and that they have never materialized in any Nvidia chip, and I still think that Nvidia has to at least make a general statement about tiling on its GPUs. Of course, for now everybody seems happy not to ask, and it will have to wait for another day. But in the meantime there will be closer looks at Nvidia GPUs and the systems they operate with.
  • Scali - Sunday, August 7, 2016 - link

    "I am not turning anything around. I just told you why I do not have the details of the patent you linked to."

    The patents were already linked by someone else earlier in the thread. Besides, you kept attacking these patents, and only now admitted you can't actually see the details.

    "all the while AMD has been sitting on its bottom and doing nothing about it"

    AMD has enough problems, there's a reason why their cards have been little more than rehashes and rebadges of GCN for years now, why they were late even with basic features such as HDMI 2.0, and why they are the only player on the desktop that still has no FL12_1 support.
    They are probably moving in that direction, they are just not moving as quickly as you think.

    "Still, I think that the basic ideas put in that patent is bullshit and just a play with some technical terms and that it has never materialized in any Nvidia chip"

    This is what is known as 'conjecture'. I don't know why you even bother to post this sort of stuff.

    "and I still think that Nvidia has to at least make a general statement about tiling on its GPUs."

    You never gave any valid reason why though. NVidia, or any other vendor for that matter, has no obligation to share every detail of their products with the general public.

    "But in the meantime there will be closer looks at Nvidia GPUs and the systems that they operate with."

    Why? What reason could you possibly think of that warrants a "closer look" than what people have normally been doing whenever a new GPU arrives?
    I mean, you're talking as if NVidia's GPUs are fundamentally broken or whatever, while in reality they have been doing this for 2 years before people wrote a test application and discovered it.
    If they had never discovered it, would that have changed anything? No it wouldn't. All games that people have tried to run on these GPUs over the past 2 years have worked fine.
    So really, why are you acting like this?
  • versesuvius - Monday, August 8, 2016 - link

    "The patents were already linked by someone else earlier ..."

    Now, who is conjecturing there? Or are you just feeling punctilious? Do you read everything, including comment pages that can run to 50 pages at times? If so, good on you. I wish you joy of it.

    Anyway, you must know by now what the argument is about. I'll repeat it for you; maybe you will finally understand. There is nothing in the Nvidia hardware that is put in there to enable the tiling technique the Nvidia graphics system uses. Nothing in the hardware. It is either the Nvidia drivers alone, or their drivers cooperating with the Windows drivers, that gives it its advantage in games on Windows, and only in games on Windows. Microsoft has installed back doors in various parts of Windows before, so why not install one for Nvidia? It would not even have to be a back door, just a pinhole. I do not have a silicon map of an Nvidia GPU to show you, and nobody else does either. That is THE trade secret. However, there is nothing in the world that should have kept Nvidia from making a point about this, especially something that a lot of people have worked on before and is no special idea to begin with. To say that a company worked on it over a decade ago and could not make it work on the Windows desktop, and so Nvidia is right in not talking about it now, is quite a stretch. What could go wrong if Nvidia had made it clear that their GPU systems use tiling and any programmer working on games could utilize this wonderful technique to speed up their games? Then it would not have to muddy its name with "Nvidia, That is how ..." on some games' opening screens, while every programmer would code for Nvidia cards from the beginning. That is not how it happened. The only conclusion is that Nvidia is not honest about this and has been doing something wrongful. Now, I cannot prove it. Yet everything is there for anyone who wants to follow the money.
