NVIDIA this week staged a presentation at Gamescom, and everyone expected the company to showcase its latest professional and gaming graphics cards, targeting both the developers attending Gamescom and the public. And show them off they did – NVIDIA revealed the GeForce RTX family consisting of the RTX 2080 Ti, RTX 2080, and RTX 2070, as well as the Quadro RTX 8000 series. But this has to go down in history as NVIDIA’s most underwhelming presentation of the decade. See why for yourself after the jump.
For the first time, NVIDIA is launching a new GPU family with its full high-end stack at once. With pre-orders opening for the Founders Edition versions of the RTX family this week, the company is set to have these cards in stock at retailers in just a month’s time. In the past NVIDIA has held back the top-end card, pushing consumers who wanted the best possible GPU to either wait potentially months or double-dip by buying the consumer GPU first and the high-end enthusiast card later.
|  | RTX 2080 Ti | RTX 2080 | RTX 2070 |
| --- | --- | --- | --- |
| Memory clock | 14Gbps GDDR6 | 14Gbps GDDR6 | 14Gbps GDDR6 |
| Ray performance | 10 GRays/s | 8 GRays/s | 6 GRays/s |
| Launch price | $999 | $699 | $499 |
The cards all see increases in core count compared to the previous generation, but what’s interesting is that average clock speeds have not increased. The GTX 1080 was rated for 1.7GHz on average and could be overclocked past 2.0GHz, but the RTX 2080 sits at the same levels. Memory capacities also remain the same – 11GB for the RTX 2080 Ti, and 8GB for the other two cards.
There are also two new product specifications to take note of – Ray performance and RTX Operations. Ray performance is a measure of how many individual light rays the GPU can trace in a second, an operation carried out by an RT core, a dedicated unit that only ever does ray tracing. A gigaray is one billion rays per second, so the RTX 2080 Ti can trace ten billion rays every second.
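To put that headline figure in perspective, here’s a rough back-of-envelope sketch (assuming, generously, that the full rated throughput is usable, which real workloads won’t achieve) of what 10 GRays/s works out to per pixel at 1080p and 60fps:

```python
# Back-of-envelope: rays per pixel per frame from the headline figure.
# Assumes the full rated throughput is usable - real workloads won't manage this.
rays_per_second = 10e9          # RTX 2080 Ti: 10 GRays/s
target_fps = 60                 # frames per second
pixels_1080p = 1920 * 1080      # 2,073,600 pixels

rays_per_pixel_per_frame = rays_per_second / target_fps / pixels_1080p
print(f"~{rays_per_pixel_per_frame:.0f} rays per pixel per frame")  # ~80
```

Around 80 rays per pixel per frame sounds like a lot, but full path tracing can need hundreds or thousands of samples per pixel to converge – which is exactly why the de-noising step below matters.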
RTX operations measures how many calculations per second NVIDIA’s Tensor cores can perform when de-noising a ray-traced image. Because only a handful of rays can be cast per pixel in real time, ray tracing produces a noisy, partially rendered image, and letting the render fully converge would take far too long – certainly not how you want to be playing your games. NVIDIA uses the Tensor cores to clean up the noise rather than having the GPU spend extra time casting more rays, and the machine learning involved also gives you nearly free anti-aliasing on the shadows.
Combining the two specifications gives you an idea of the ray tracing performance of any given GPU – it needs to be able to create a lot of light rays quickly, but it also needs to be able to fully use the Tensor cores to de-noise the output of the ray tracing algorithm and produce a clear image.
There’s a new design language for the reference cooler. It’s now a much more premium design, kitted out with a full-coverage black heatsink, dual 80mm fans with lots of blades to push more air through the denser heatsink, a new metal shroud, and a full-coverage backplate. This isn’t the first time NVIDIA has used a dual-fan design for a reference card, but this is the first GPU family to do away with the traditional blower design for the Founders Edition. There will be cheaper blower designs available from third parties in the future, and there will also be a smorgasbord of aftermarket designs to choose from, but this is NVIDIA’s custom design and it looks very good indeed.
Note how the port outputs have also changed – DVI-D is no more, replaced with a third DisplayPort 1.4 port and a lonely little USB-C VirtualLink port at the rear for powering a VR headset. There is no front-facing HDMI port, which is a pity given how many people will want to use ports at the front to connect up their HMD gear. Note also the power configuration – there is one six-pin and one eight-pin auxiliary power connector on the RTX 2080 design, and two eight-pin connectors on the RTX 2080 Ti. These cards will consume a lot of power when overclocked.
Now let’s get on to the other parts of NVIDIA’s presentation.
Games shipping with ray tracing
NVIDIA’s Jen-Hsun Huang announced on stage that the company was working with several large game studios and indie developers to implement ray tracing in their engines and games. Ray tracing is technically part of the DirectX 12 API through the DirectX Raytracing (DXR) extension, but NVIDIA’s implementation is called RTX for a reason – it is somewhat unique to NVIDIA and included in their GameWorks libraries. While ray tracing can be done by modern GPUs, NVIDIA makes use of their Tensor cores to de-noise images and give them essentially free AA on the shadows.
The first games to come out with RTX compatibility will be Battlefield 5, Shadow of the Tomb Raider, and Metro Exodus. In the near future, we can expect more games to launch or get major updates to support RTX, including:
- Ark: Survival Evolved
- Assetto Corsa Competizione
- Atomic Heart (2019)
- In Death
- Final Fantasy XV
- The Forge Arena
- Fractured Lands
- Hitman 2
- Mechwarrior V: Mercenaries
- PlayerUnknown’s BattleGrounds
- Remnant from the Ashes (2019)
- Serious Sam 4: Planet Badass
- We Happy Few
NVIDIA showed off demos of some of these games, and surprisingly they were all run at 1080p. Framerates for Shadow of the Tomb Raider were expected to be between 30fps and 60fps, and to their credit NVIDIA bravely showed a framerate counter during the demo. There were some synthetic tests that showed the theoretical performance of the RTX implementation in a much better light at UltraHD 4K, but given that this is aimed at gamers, you would expect NVIDIA not to be satisfied with such low performance for titles that should be a showcase for ray tracing in games. This has created a bit of a stir in tech-related forums and subreddits, and most of /r/NVIDIA is filled with comments from people who are unimpressed.
Performance tradeoffs are everywhere
The reality is that there’s not a lot of time between rendering and displaying a frame for NVIDIA’s de-noising engine to work. The RTX 2080 Ti, for example, has 4352 CUDA cores at its disposal, so it should be devouring 1080p workloads like a light snack! However, almost a third of the die space is dedicated to Tensor cores and RT cores, which means that there isn’t enough space on the chip to fit more CUDA cores. Because of the low ratio of Tensor cores to CUDA cores, there’s a bottleneck in how much work can be done in a single pass, as well as power constraints while doing that work. As a result, the best you can hope for is a near real-time experience at somewhere between 30fps and 60fps, because that’s how much ray tracing and de-noising these cards can fit into the 16.7ms to 33.3ms frame budget those framerates allow. If NVIDIA wanted to go faster, the only options are higher clock speeds and more Tensor cores, neither of which is possible on the 12 nanometer process that TSMC is using to make these chips.
It’s commendable that NVIDIA is playing the long game here. NVIDIA first accelerated ray tracing on its GPUs back in the Fermi era, and back then the company was proud enough to get up on stage and show concept demos of how the technology would work in games. Game engines were far less complex at the time, so near real-time ray tracing would have been plausible – the GTX 480 would have been capable of ray tracing at 1080p at 15 frames per second.
AMD does have a similar engine available, called Radeon Rays 2.0, for developers to implement, but it doesn’t have the same marketing push behind it. Radeon Rays might also be just about as fast as NVIDIA’s RTX – AMD quotes around 17ms in their benchmark for an easy render with ray tracing and a second pass for shadows, and 27ms for a more difficult scene. That corresponds to average framerates of roughly 37fps to 59fps, in the same 30-60fps window as the Shadow of the Tomb Raider demo. One can only hope that AMD is paying as much attention to this as NVIDIA so that they are competitive in the near future.
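Those framerate figures follow directly from the quoted frame times – a quick sketch of the conversion:

```python
# Converting per-frame render times (in milliseconds) into average framerates.
def frame_time_to_fps(frame_time_ms: float) -> float:
    # 1000ms in a second, divided by how long one frame takes
    return 1000.0 / frame_time_ms

print(f"easy scene:   {frame_time_to_fps(17):.1f} fps")   # ~58.8 fps
print(f"harder scene: {frame_time_to_fps(27):.1f} fps")   # ~37.0 fps
```

The same arithmetic gives you the frame budgets mentioned earlier: 60fps leaves 16.7ms per frame, and 30fps leaves 33.3ms.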
There’s another catch in that the RTX 2080 and RTX 2070 cards are much, much slower than the RTX 2080 Ti. Neither card will be capable of outputting upwards of 60fps at 1080p with ray tracing, so compromises will have to be made in other areas such as resolution to keep performance at a playable level. Ray tracing also doesn’t bring tangible benefits to other parts of the game, like special effects. It is possible to use this technology along with NVIDIA’s work in subsurface scattering to produce more realistic textures and animations for characters in cutscenes, but the most noticeable change will be in how much better shadows look and how much more sharply things are defined.
Watch the embedded video of Battlefield 5 to get a sense of what I’m talking about. There are no benefits for the fire effects, which look really bad close-up. There’s some weird shimmering happening with the water, which makes it look like a puddle of mercury instead. There’s also a piece of paper in the video that visibly loses its edges while flapping about. The lighting benefits aren’t that pronounced either, because we don’t have a side-by-side comparison to visualise the changes, and most NAGlings reading this don’t have HDR-capable displays anyway. All of these things are teething issues with first-generation technology, and it’s always going to be unsightly when viewed up-close.
The mainstream appeal of RTX, then, will be for playing games with lighter settings and ray tracing for a few minutes to see what the fuss is about, cranking things up to take 8K screenshots through NVIDIA Ansel, using the GPU’s capabilities to do some professional work with the RTX algorithms in software, or putting the RT and Tensor cores to work for machine learning purposes in games and other software. NVIDIA doesn’t expect this to take off significantly because performance drops are a big hurdle to making the experience worthwhile, and only enthusiasts who can afford $999 for a graphics card will be able to enjoy these games at playable framerates while using ray tracing.
Pricing is a concern
This is perhaps also why NVIDIA didn’t show any performance comparisons between Turing and the outgoing Pascal family – the older cards can’t handle real-time ray tracing. The new family will probably end up being not that much faster than Pascal in apples-to-apples comparisons, which means that many people might opt for a cheaper GTX 1080 Ti instead of the RTX 2080 Ti to save money while they wait for the technology to mature. Dedicating a third of the transistor budget to Tensor cores and RT cores is a big risk for NVIDIA.
The price increases are also significant. The GTX 1080 Ti launched at $699 last year, and was a massive gain in performance over the previous generation. The RTX 2080 Ti carries on that tradition, but hikes the price up a further $300, now sitting comfortably in the same price range as the original GTX Titan. The RTX 2080 and RTX 2070 are also much more expensive than the previous generation. The RTX 2070 might be just as good as the GTX 1080, but is the inclusion of RTX technology that important to you? Probably not, at least not at this stage.
If this is what NVIDIA was planning all along, then the Turing family serves its purpose as a launch platform for ray tracing as a whole, but it is not intended to kickstart any sort of revolution in games or game engines just yet. Think of it as a pipe cleaner, a sign of things to come. You can still disable ray tracing in-game and enjoy high levels of performance with the older lighting engines, which still look great with HDR.