Meet The New Future of Gaming: Different Than The Old One

Up until last month, NVIDIA had been pushing a different, more conventional future for gaming and video cards, perhaps best exemplified by their recent launch of 27-inch 4K G-Sync HDR monitors, courtesy of Asus and Acer. Those specifications and displays represented – and still represent – the aspirational capabilities of PC gaming graphics: 4K resolution, a 144 Hz refresh rate with G-Sync variable refresh, and high-quality HDR. The future was maxing out graphics settings in a visually demanding game, enabling HDR, and rendering at 4K with triple-digit average framerates on a large screen. That target is not achievable with current performance, certainly not by single-GPU cards. In the past, multi-GPU configurations were a stronger option provided that stuttering was not an issue, but recent years have seen both AMD and NVIDIA take a step back from CrossFireX and SLI, respectively.

Particularly with HDR, NVIDIA pitched a qualitative rather than quantitative enhancement to the gaming experience. Faster framerates and higher resolutions were better-known quantities, easily demoed and with more intuitive benefits – though in the past there was the perception of 30fps as cinematic, and 1080p still remains stubbornly popular today. Higher resolutions mean more possible detail, while higher and more even framerates mean smoother gameplay and video. Variable refresh rate technology soon followed, resolving the screen-tearing/V-Sync input lag dilemma, though it too took time to catch on to where it is now: nigh mandatory for a higher-end gaming monitor.

For gaming displays, HDR was substantively different from adding graphical detail or allowing smoother gameplay and playback, because it brought a new dimension to gaming: more possible colors, brighter whites, and darker blacks. And because HDR capability required support from the entire graphics chain, as well as a high-quality HDR monitor and HDR content to take full advantage of, it was harder to showcase. Added to the other aspects of high-end gaming graphics, and pending the further development of VR, this was the future on the horizon for GPUs.

But today NVIDIA is switching gears, going after the fundamental way computer graphics are modelled in games. Among the more realistic rendering approaches, light can be simulated as rays emitted from their respective sources, but computing even a subset of those rays and their interactions (reflection, refraction, etc.) in a bounded space is so intensive that real time rendering has been impossible. To get the performance needed to render in real time, rasterization instead boils 3D objects down into 2D representations to simplify the computations, significantly faking the behavior of light.
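As a purely illustrative aside, the sketch below (in Python, with hypothetical names like intersect_sphere and trace; this is not NVIDIA's or any engine's code) shows the core operation ray tracing repeats endlessly: testing a ray against scene geometry and shading the hit point.

    import math

    # Illustrative single-ray example: one sphere, one directional light.
    # A full ray tracer repeats this kind of test for millions of rays per
    # frame, plus secondary rays for reflections, refractions, and shadows.

    def intersect_sphere(origin, direction, center, radius):
        """Return the distance to the nearest hit along the ray, or None."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c  # assumes direction is normalized (a = 1)
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

    def trace(origin, direction, sphere, light_dir):
        """Shade one primary ray with simple Lambertian (diffuse) lighting."""
        t = intersect_sphere(origin, direction, sphere["center"], sphere["radius"])
        if t is None:
            return 0.0  # ray missed the sphere: background
        hit = [o + t * d for o, d in zip(origin, direction)]
        normal = [(h - c) / sphere["radius"] for h, c in zip(hit, sphere["center"])]
        # Brightness falls off with the angle between the surface and the light.
        return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

    sphere = {"center": (0.0, 0.0, -3.0), "radius": 1.0}
    light = (0.0, 0.7071, 0.7071)  # unit vector pointing toward the light
    print(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), sphere, light))  # ~0.7071

A rasterizer, by contrast, projects each triangle onto the screen and shades whichever pixels it covers without following light paths between objects, which is why effects like reflections and shadows are normally approximated with tricks such as screen-space reflections and shadow maps.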

It’s on real time ray tracing that NVIDIA is staking its claim with GeForce RTX and Turing’s RT Cores. Covered more in-depth in our architecture article, NVIDIA’s real time ray tracing implementation takes all the shortcuts it can get, incorporating select real time ray tracing effects with significant denoising while keeping rasterization for everything else. Unfortunately, this hybrid rendering isn’t orthogonal to the previous goals; it competes with them for the same performance budget. Now the ultimate experience would be hybrid-rendered 4K with HDR support at high, steady, and variable framerates – a point GPUs couldn’t reach even under traditional rasterization.

There’s still a performance cost incurred by real time ray tracing effects, except that right now only NVIDIA and developers have a clear idea of what it is. What we can say is that utilizing real time ray tracing effects in games may require sacrificing some or all of high resolution, ultra-high framerates, and HDR. HDR is limited by game support more than anything else. But the first two arguably have minimum performance standards when it comes to modern high-end gaming on PC – anything under 1080p is completely unpalatable, and anything under 30fps, or more realistically 45 to 60fps, hurts playability. Variable refresh rate can mitigate the latter, and framedrops are temporary, but low resolution is forever.

Ultimately, real time ray tracing support needs to be implemented by developers via a supporting API like DXR – and many have been working hard on doing so – but currently there is no public timeline of application support for real time ray tracing, Tensor Core accelerated AI features, or Turing advanced shading. The list of games with support for Turing features – collectively called the RTX platform – will be available and updated on NVIDIA's site.

Comments

  • godrilla - Wednesday, September 19, 2018 - link

    Conclusions*
  • BurntMyBacon - Thursday, September 20, 2018 - link

    I'm going to take an (admittedly small) leap of faith and suggest that nVidia most likely is not intentionally limiting the performance of Turing cards. Given the amount of hardware dedicated to tasks that don't benefit rasterization, it just doesn't seem like they could have left that much performance on the table. It is much more likely that they've simply set prices high with the intent of dropping them once high-end Pascal inventory clears out. Of course, after the mining push, they've seen how much the market is willing to bear. They may be trying to establish a new pricing structure that gives the extra profits to them rather than to retailers.
  • mapesdhs - Wednesday, September 26, 2018 - link

    Given the amount of deceitful PR being used for this launch, I don't think your leap is justified.
  • tamalero - Wednesday, September 19, 2018 - link

    I honestly believe the endgame of Nvidia is simple. They want to increase their margin, and the only way to do that is to sell the WHOLE full chip, tensor cores and all, to gamers, while still charging pros top dollar.

    This would lead Nvidia to make FEWER variants, saving the cost of designing multiple versions when they can't scale the design down or cut it.
  • PopinFRESH007 - Wednesday, September 19, 2018 - link

    Your argument is invalidated by the evidence of this product launch: all three cards are on different chips.
  • eva02langley - Thursday, September 20, 2018 - link

    Are you kidding me? This is exactly it. They made an all-around chip to tackle pros, gamers, and compute. Vega has the same issue: it was aimed at everything from an iGPU to a dGPU. It does extremely well at low power, but not as a dGPU.

    They save cost and standardize their manufacturing process, nothing else.
  • Bensam123 - Wednesday, September 19, 2018 - link

    Going to go in a weird direction with this. I believe cards are going to start diverging from one another in terms of what gamers are looking for. Hardcore gamers that are after the professional scene and absolute performance always turn graphics down; they drive high-refresh-rate monitors with low response times and high frame rates to absolutely limit the amount of variance (spiking) present in their gaming experience.

    Nvidia went for absolute fidelity, where they believe the best gaming experience will come from picture-perfect representations of an environment, such as with ray tracing. I see ray tracing as a gamer and I go 'Welp, that's something I'll turn off'. Hardware review websites are only looking at gaming from a cinematic standpoint, where the best gaming always has everything maxed out running at 8K. Cards do perform differently at different resolutions, and especially with different amounts of eye candy turned on. I really think Anand should look into testing cards at 1080p on lowest settings with everything uncapped – not as the only datapoint, but as another one. Professional gamers, or anyone who takes gaming seriously, will be running that route.

    Which runs into another scenario where card makers are going to diverge. AMD's upcoming 7nm version of Vega, for instance, may continue down Vega's current path, which means they'll be focusing on current-day performance (although they mentioned concentrating more on compute, we can assume the two will intersect). That means while a 2080 Ti might be faster running 4K@ultra, especially with rays if that ever takes off, it may lose out completely at 1080p@low (but not eye-cancer settings, such as turning down resolution or textures).

    For testing at absolute bleeding speeds, that 1% that is removed in 99th-percentile testing really starts to matter. Mainly because the spikes, the micro-stutters, and the extra-long hiccups get you killed, and that pisses off gamers who aim for the pinnacle. Those might seem like outliers, but even if they happen infrequently, they are part of a distribution and shouldn't be removed. When aiming for bleeding speeds, they actually matter a lot more.

    So thus is born the esports gaming card and the cinematic gaming card. Please test both.
  • PopinFRESH007 - Wednesday, September 19, 2018 - link

    So they should include the horizontal line of a completely CPU-bound test? Also, I'm not understanding the statistical suggestions; they make no sense. Using the 99th percentile is very high already, and the minuscule number of outliers being dropped are often not due to the GPU. As long as they are using the same metric for all tests in the data set, it is irrelevant.
  • Bensam123 - Thursday, September 20, 2018 - link

    Not all games are CPU bound; furthermore, it wouldn't be completely horizontal – that would imply zero outliers, which never happens. In that case, instead of looking at the 99th-percentile frame time you would focus on the other end – the slowest 1% of frame times, i.e. all the stutters and microstutters. You can have confidence in the tail ends of a distribution if there are enough data points.

    Also, having played tons of games on low settings, you are 100% incorrect about it being a flat line. Go play something like Overwatch or Fortnite on low – you don't automagically end up at the CPU cap.
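To make the metrics in this exchange concrete, here is a minimal sketch, in Python, of how average framerate, 99th-percentile frametime, and "1% low" figures are commonly computed from a benchmark capture. It is an illustrative example assuming a simple list of per-frame times; it is not AnandTech's actual methodology, and frametime_metrics is a hypothetical name.

    # Minimal sketch of common frametime metrics; illustrative only, not
    # AnandTech's exact methodology. Input: per-frame render times in ms.

    def frametime_metrics(frametimes_ms):
        n = len(frametimes_ms)
        avg_fps = 1000.0 * n / sum(frametimes_ms)

        # 99th-percentile frametime: 99% of frames rendered at least this fast.
        ordered = sorted(frametimes_ms)
        p99_ms = ordered[min(n - 1, int(0.99 * n))]

        # "1% low" framerate: average FPS over the slowest 1% of frames,
        # i.e. the spikes and micro-stutters discussed in the thread above.
        worst = ordered[-max(1, n // 100):]
        low_1pct_fps = 1000.0 * len(worst) / sum(worst)

        return avg_fps, 1000.0 / p99_ms, low_1pct_fps

    # Example: a mostly smooth 7 ms run with an occasional 40 ms hitch.
    # The average looks great (~136 fps) while both tail metrics drop to 25 fps.
    sample = [7.0] * 990 + [40.0] * 10
    print(frametime_metrics(sample))

The example at the bottom shows why the dispute matters: a run dominated by smooth 7 ms frames averages around 136 fps, yet the occasional 40 ms hitch pulls both tail metrics down to 25 fps.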
  • V900 - Thursday, September 20, 2018 - link

    Despite all the (AMD) fanboy rage about higher prices, here’s what will happen:

    Early adopters and anyone who wants the fastest card on the market will get an RTX 2080/2070, Nvidia will make a tidy profit, and in 6-12 months prices will have dropped and cheaper Turing cards will hit the market.

    That’s when people with a graphics card budget smaller than $600 will get one. (AMD fanboys will keep raging though, prob about something else that’s also Nvidia related.)

    That’s how it always works out when a new generation of graphics cards hits the market.

    But everyone who’s salty about “only” getting a 40% faster card for a higher price won’t enjoy the rest of the decade.

    There won’t be any more GPUs that deliver a 70-80% performance increase. Not from AMD, and not from Nvidia.

    We’re hitting the limits of Moore’s law now, so from here on out, a performance increase of 30% or less on a new GPU will be the standard.
