Right, the first thing we need to do is talk about the price of this graphics card. At R19,000, it’s far from affordable. How one could ever justify that price for what is essentially GTX 1080 Ti-level performance is beyond me – especially when you consider that you can buy said GTX 1080 Ti for as little as R11,599 brand new, and for even less on the second-hand market. In fact, the ROG STRIX GTX 1080 Ti 11G (which comes bundled with Call of Duty: Black Ops 4, at least at the time of this writing) is currently retailing for just under R17,000.

What would possibly compel you to spend an additional R2,000 on this particular graphics card, which comes with no bundled games or anything else of the sort? Honestly, I can’t really answer that question for anyone.

Technical specifications

GPU / node: TU104 / TSMC 12nm

CUDA cores / Tensor cores / RT cores: 2,944 / 368 / 46

Memory type / speed / bandwidth: 8GB Micron GDDR6 / 14Gbps effective / 448GB/s

Output interface: 2x DP 1.4 / 2x HDMI 2.0b / 1x USB Type-C

Base clock / boost clock: 1,515MHz / 1,890MHz (OC Mode – GPU Tweak)

Dimensions: 299.7 x 130.4 x 51.1mm / 2.7-slot width

API support: OpenGL 4.5 / Vulkan 1.1+ / DirectX 12 (SM6.0+)

RGB lighting: Yes (full software control via AURA)

Benchmark scores and general performance

Testing configuration:

Intel Core i9-9900K (5GHz OC @ 1.275V)

ASUS ROG MAXIMUS X APEX (BIOS 1704)

Corsair Dominator Platinum SE Contrast DDR4 3,466MHz

Intel 760P 512GB M.2 SSD

Seasonic Prime Platinum 850W PSU

Windows 10 x64

Price and supplier information
Supplier: ASUS
Website: www.asus.com
RRP: R18,999

Of course, I’m not oblivious to the market realities surrounding these new 2000-series GeForce cards. That said, you have to understand that this pricing has little to do with ASUS, or any other AIC partner for that matter. This is something NVIDIA dictates, in ways we’re unlikely to ever wrap our minds around. Even if we could, it’s ultimately of little consequence, because the price isn’t going to change.

With that out of the way, let’s deal with this imposing electronic component as a piece of technology, and set aside its value proposition to the average gamer. What follows is a brief overview of this specific RTX 2080 and the performance it offers. Hopefully it’ll help you decide for or against it, just in case you have R19,000 burning a hole in your wallet (although I’ve yet to see a wallet which can hold R19,000 in cash).

This is without a doubt one of the finest examples of an air-cooled RTX 2080 you’re likely to ever see. This may be the first RTX 2080 review you’re reading from NAG, and without another card to compare it against, such a statement might seem premature. Let me explain why I believe it to be true anyway. At the very least, whichever version of the RTX 2080 you’re considering, it won’t be objectively better than this one. That’s down to a number of reasons, the most important of which is the heatsink-fan assembly.

If you think back to earlier iterations of the STRIX cards, you’ll recall that they weren’t particularly quiet. Cooling-wise, they were adequate and in line with equivalent SKUs from competing vendors – but the same didn’t necessarily hold true for the acoustics.

Now, in late 2018, ASUS (or rather ROG, to be more specific) has done a stellar job at making what’s likely an unbeatable cooling solution. This part is important, as it has everything to do with how the RTX 2080 GPU (TU104) deals with clock frequencies, power consumption, and temperatures – all of which ultimately determine performance characteristics.

Do keep in mind that the performance figures you see here are dependent on my specific testing environment, so your own numbers will vary in accordance with your system. This has always been the case, but with how Turing handles boost clocks it’s more relevant than ever, which is why it’s worth stating clearly.

Before I get ahead of myself, let me tell you briefly why the STRIX RTX 2080’s cooler is so incredible. I’ll skip the stuff you can easily research for yourself on the product page, and say in short that the ROG engineers have redesigned this cooling solution to refine and improve not only the heatsink, but the fans as well.

For starters, the heatsink fin stack has a claimed 20% larger surface area, which is partially responsible for the 2.7-slot design (on that note, make sure your case and motherboard offer enough clearance). Next up is a reworked contact plate (the part that’s in “direct” contact with the GPU core), which is significantly smoother thanks to a better machining process. Finally, there’s the use and careful placement of axial fans. In essence, these fans generate greater static pressure, pushing more air through the fins while minimizing noise. Yes, this is a claim you’ll have seen attached to every so-called “new” fan technology, but these axial fans are genuinely designed around some complex maths and physics, which is what achieves the increased air pressure without sacrificing acoustics (they have a better performance-to-noise ratio than traditional fan designs). It isn’t just marketing drivel either, as these fans are significantly quieter than the ones on previous STRIX cards.

As for the heatsink, the first thing you’ll notice (aside from the hefty dimensions) is the sheer mass of this cooler. It’s staggering. This is easily one of the heaviest (if not the heaviest) graphics cards I’ve ever come across. It’s quite possibly the reason the ROG engineers have had to fit this card with a solid metal brace – which, as you’d expect, minimizes flexing of the PCB. The card may still show slight signs of “sagging” when mounted inside your case, but you won’t have to worry about the PCB bending.

As you can imagine, through sheer mass, fin density and size alone, this heatsink can deal with plenty of heat. In some scenarios it’s actually possible to play games with little to no fan rotation at all. Of course, given the target market and price, it’s unlikely that Dota 2 is the sort of game you’ll be playing with this graphics card. What this all means in a practical sense is that, for the most part, the fans will remain at low rotation speeds, even when playing some of today’s most demanding titles.
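
To make the fan behaviour more concrete, here’s a minimal Python sketch of a fan curve with a zero-RPM zone, which is the general mechanism at work here. The temperature thresholds and duty-cycle figures are assumptions of mine for illustration, not values pulled from the STRIX card’s firmware.

# Illustrative fan curve with a zero-RPM ("0dB") zone.
# Thresholds are hypothetical, not the STRIX card's real firmware values.

def fan_duty(gpu_temp_c: float) -> float:
    """Return fan duty cycle (0.0 to 1.0) for a given GPU temperature."""
    if gpu_temp_c < 55:          # below this, the fans stop entirely
        return 0.0
    if gpu_temp_c >= 80:         # thermal headroom exhausted: full speed
        return 1.0
    # Ramp linearly between the passive threshold and full speed.
    return 0.3 + 0.7 * (gpu_temp_c - 55) / (80 - 55)

for t in (40, 60, 63, 75):
    print(f"{t} °C -> {fan_duty(t):.0%} duty")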

The reason I’ve written so much about the cooling assembly is that it’s good – exceedingly so. Under full game load, at no point did I see temperatures above 63 °C, and that’s after running several benchmarks back to back. It’s thoroughly impressive and easily the highlight outside of the GPU’s actual performance.

It’s the cooling solution that allows the STRIX RTX 2080 to deliver impressively consistent performance, as it does just enough to keep the TU104 GPU operating between 1,930MHz and 2,010MHz. That may not seem like much of a spread, but 80MHz is a lot of clock variation – and NVIDIA’s Boost technology has changed yet again.

This new (or at least newer) boost technology works off an even more advanced telemetry system than what was on the Pascal GPUs. It not only monitors absolute values, but the rate of change for those values as well. All of this is used as input data which ultimately determines the boost clock. It’s not just temperature or power, but the dynamic relationship between the two. The general rule of thumb here is: keep the temperature low, and the clocks will stay high.
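
As a rough mental model only (NVIDIA doesn’t publish the actual algorithm), here’s a Python sketch of a controller that reacts to both the absolute temperature and its rate of change. Every threshold and step value in it is an assumption for illustration.

# Toy model of a Boost-style controller: it considers both the absolute
# temperature and its rate of change. All values are illustrative only.

BOOST_BIN_MHZ = 15  # Boost adjusts clocks in roughly 15MHz steps

def boost_clock(max_clock_mhz, temp_c, temp_rate_c_per_s):
    bins_lost = 0
    if temp_c > 55:                 # absolute value: shed bins as temp rises
        bins_lost += int((temp_c - 55) / 1.5)
    if temp_rate_c_per_s > 0.5:     # rate of change: back off pre-emptively
        bins_lost += 2              # if the card is heating up quickly
    return max_clock_mhz - bins_lost * BOOST_BIN_MHZ

print(boost_clock(2100, 50, 0.0))   # cool and stable: full boost
print(boost_clock(2100, 63, 0.0))   # warm but steady
print(boost_clock(2100, 63, 1.0))   # warm AND climbing: sheds more bins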

This is where that gargantuan cooling solution pays off. The TU104 GPU is hyper-sensitive to temperatures (more so than any previous GPU, it seems), and maintaining high clock frequencies demands as low a temperature as possible. A 10 °C temperature change can drop a 2,115MHz clock to 2,010MHz. When you’re overclocking any Turing GPU, you’re essentially setting an upper limit, not a target clock. This is perhaps why – for the first time – you’re actually overclocking the boost clock and not the base clock, despite what you may be reading in GPU-Z or any other application.
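
Working from the figures above, a 105MHz drop over 10 °C comes out to roughly 10.5MHz per degree. Here’s a minimal Python sketch of why an offset overclock acts as a ceiling rather than a target – the slope comes from those numbers, while the reference temperature and clock values are assumptions:

# Why a Turing overclock sets a ceiling, not a target.
# Slope derived from the figures above (105MHz lost over 10 °C);
# the reference points are otherwise illustrative.

MHZ_PER_DEGREE = 10.5

def effective_clock(ceiling_mhz, temp_c, ref_temp_c=40):
    """Clock falls away from the ceiling as temperature rises."""
    if temp_c <= ref_temp_c:
        return ceiling_mhz
    return ceiling_mhz - (temp_c - ref_temp_c) * MHZ_PER_DEGREE

stock_ceiling = 2010
oc_ceiling = stock_ceiling + 105   # a +105MHz offset raises the ceiling...

for temp in (40, 50, 60):
    print(temp, effective_clock(stock_ceiling, temp),
          effective_clock(oc_ceiling, temp))
# ...but the actual clock still slides with temperature; the offset
# shifts the whole curve up rather than pinning a fixed frequency.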

Electronically, there’s really not much to say, but plenty to appreciate. ROG cards have always been pretty solid in this regard (and they’re often over-engineered on the power side), and this model is no different. It makes use of a 12-phase VRM, controlled via a uPI Semiconductor buck controller. I’m unsure of the exact model, but I suspect it might be the same uP9511 eight-phase controller used on the STRIX GeForce GTX 1080 Ti OC. Obviously some phases are doubled (there are likely six true phases in use), as there’s no 12-phase buck controller that I’m aware of on any graphics card.

The MOSFETs are the familiar and hugely competent Infineon IR3555s, and these are matched with SAP II ferrite-core coils. On the memory side we have another uPI Semiconductor controller, a uP013 two-phase PWM part, along with what I suspect to be Fairchild FDPC5018SG MOSFETs. As you’d expect, both these controllers are software-addressable – but you’ll have limited success in attempting to alter the GPU voltage.

If none of the above matters to you, all you need to know is that this card, despite being the successor to the STRIX 1080, actually has a beefier power delivery system than even the mighty STRIX GTX 1080 Ti. It’s more than enough juice, even for those interested in extreme overclocking.

Now let’s talk game and benchmark performance. As stated at the start of this review, this graphics card performs pretty much on par with any factory overclocked GeForce GTX 1080 Ti graphics card. It’ll lose in a few games (Grand Theft Auto V for example) while coming out ahead in others, and overall it’s a marginally faster card. What really matters is that computationally, this GPU is significantly more capable than any Pascal-based GPU out there, and that includes the TITAN line. The addition of Tensor and Ray Tracing cores puts this GPU leaps and bounds ahead of anything else, especially when it comes to the holy grail of image rendering: ray tracing, and more specifically, real-time ray tracing.

NVIDIA’s Ray Tracing cores deal specifically with that, and do so quite well. That’s not to say they can’t be leveraged for other tasks that need that sort of compute capability. The Tensor cores, true to their origins, can be exploited for “free” anti-aliasing – via DLSS (Deep Learning Super Sampling), for instance. These Tensor cores are essentially AI cores, which in the context of DLSS analyse frame/scene data to deliver anti-aliasing comparable to high levels of multi-sampling, but without the expected performance penalty. This is possible because dedicated die space handles the work, meaning DLSS doesn’t place any additional load on the shader cores rendering the rest of the frame. The anti-aliasing is literally free of a performance hit, or at least as free as we’ve ever seen it. It’s a topic well worth exploring on its own, but that’s for another time.
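
To make that pipeline concrete, here’s a heavily simplified Python sketch of the DLSS idea: render at a reduced internal resolution, then have a trained network reconstruct an anti-aliased full-resolution frame on dedicated hardware. The “network” below is a crude stand-in (pixel repetition via numpy), and none of the function names correspond to NVIDIA’s actual API.

import numpy as np

# Simplified sketch of the DLSS idea. The "network" here is a stand-in
# (naive upscale); the real thing is a trained neural network running
# on the Tensor cores. Function names are illustrative, not NVIDIA API.

def render_frame(height, width):
    """Pretend renderer: returns an RGB frame at reduced resolution."""
    return np.random.rand(height, width, 3).astype(np.float32)

def reconstruction_network(low_res, scale=2):
    """Stand-in for the trained model: upscale by pixel repetition.
    DLSS instead infers detail and smooth edges from training data."""
    return low_res.repeat(scale, axis=0).repeat(scale, axis=1)

# Render at half resolution: the shader cores do far less work...
low = render_frame(1080, 1920)          # internal resolution
# ...then reconstruct to the target resolution on dedicated silicon,
# so the anti-aliasing costs the rest of the pipeline (almost) nothing.
final = reconstruction_network(low)     # 2160 x 3840 output
print(final.shape)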

In closing, the RTX 2080 will not only get faster throughout its lifespan, but its unique features will also become more prevalent and more commonly used in future titles. The day when all games can exploit its extra compute power isn’t today, but it’s practically inevitable that things will go this way, especially once AMD has its own DXR (DirectX Raytracing)-capable hardware. If you look at it from a longevity perspective, the RTX 2080 and its breathtaking price make it the graphics card of choice for tomorrow’s games. It’ll future-proof your rig while offering the same (or slightly better) performance as the fastest gaming GPU there was until recently: the GTX 1080 Ti. Whether or not that makes it worth the asking price is up to you to decide.

From a purely technological perspective, the TU104 is an amazing piece of silicon, and what ASUS has subsequently built around it is nothing short of incredible. If you’re going to be spending this kind of money on a graphics card, then this may just be the one to get, because they won’t come much better than this (if at all).

9

For all intents and purposes, this graphics card should get a perfect score – but the price is so prohibitive that it undoes a lot of the appeal. This is an NVIDIA issue though, and not in any way related to ASUS. Aside from that, this card is a marvel. From a technical perspective, it’s the most impressive graphics card I’ve tested to date.
