If you’re a BMW fan or owner, you’re familiar with upgrade envy: you’re driving last year’s 3-series when you suddenly spot the new facelifted version with slightly more power and a much improved asking price (improved meaning higher). You’re not entirely sure whether revisiting the finance so you can trade in for the newcomer is worth it, or whether you’re ready to sacrifice a few dinners to afford the maintenance once the warranty runs out. It’s faster and more refined, and it carries the bragging rights of owning this year’s model with all the bells and whistles. It’s an improvement, but not enough of an improvement to require an upgrade, especially if you picked up the older one at a bargain price.

In many ways, the “new” Radeon HD7970 is similar to that bi-yearly facelifted and revamped example of German engineering and design, because it’s only slightly tweaked, only slightly improved. It’s different, and if you were buying one you’d obviously go for the newer version, unless the old card was on your wishlist and suddenly dropped by R500. The new HD7970 is aiming for the top spot as the world’s fastest single graphics card, and it really, really wants the GTX680 to go away.

Six months ago AMD’s brand-spanking-new HD7970 claimed the top spot as the world’s fastest single-GPU graphics card, beating out even the old guard like the dual-GPU Geforce GTX590 and HD6990. Featuring the new GCN architecture (which replaced the older VLIW4 design), it was king for a solid three months until Nvidia’s GTX680 stole the show. The GTX680 launched at a much lower price point, used less power, dumped less heat into the chassis and performed better as well. Blitzkrieg’d on all fronts, AMD’s engineers went back to the drawing board and came up with a revamped model, possibly showing that the original HD7970 never shipped at its full potential.

A note, though, to those reading this: the GTX670 is still AMD’s biggest threat. It uses even less power than the GTX680 and delivers almost the same performance, matching its scores outright once overclocked slightly. It’s undoubtedly the better buy for gamers and enthusiasts, chopping just over R1000 off the price while sacrificing less than 8% in performance. If you were looking for a great card, that’s it. This HD7970 is still a marvel to behold, and it definitely makes up for the staggering price with better performance. But let’s not get ahead of ourselves. Looking at the graph above you’ll see there’s a lot of mimicry going on here, with the new HD7970 shipping with almost identical clock and memory speeds. The GTX680’s boost actually goes up to 1065MHz, but the card has weaker bandwidth and texture fillrate numbers. Ordinarily that would be a problem, but thanks to the improvements made to Kepler with bindless textures, it’s no longer an issue.

Starting off with the improvements: AMD’s driver team worked with the engineering group to introduce their own version of GPU Boost, called PowerTune with Boost. I know, that’s confusing. If you’re a Radeon HD6900-series owner you’re already familiar with PowerTune, which lets certain settings in the drivers dynamically overclock the card based on power draw. The key difference between PowerTune, released in 2010, and Nvidia’s GPU Boost, released this year, is that PowerTune only calculated the card’s clock rates based on power draw, whereas GPU Boost raises both clock rates and voltages based on a number of criteria, power draw included. With PowerTune, you’d dial in your regular overclock and the software would keep the card permanently in its overclocked state, even under a 50% load.

This meant that if you were silly and set the card to a mode it couldn’t handle, and subsequently burnt it out, you landed up with a dead piece of silicon and a voided warranty. While PowerTune is an AMD-developed application, it’s still overclocking, and most warranties don’t cover that. That’s the fundamental difference between it and GPU Boost: PowerTune doesn’t realise when an overclock is too far out of reach and just sort of runs away with itself. Like lemmings off a cliff, there’s no barrier or voice of reason to tell it to back down if you dial in ridiculous settings, which GPU Boost does have. The card would either crash or overheat and switch itself off.

PowerTune with Boost is AMD’s way of preventing sad stories like that from ever happening. Lemmings will still die, but the new HD7970 now dynamically overclocks and downclocks, overvolts and undervolts based on a similar set of criteria to the ones GPU Boost looks at: in particular TDP, the application running, how much power is currently being drawn and what voltages are required. The older PowerTune software only looked at power draw, frustrating users of cards that were already near their power-draw limits, and it didn’t touch voltages at all. Today with the new HD7970, adjusting the maximum frequency in Overdrive only raises the headroom for Boost to scale up to as long as there’s available power, the same as GPU Boost.
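To make the idea concrete, here’s a deliberately simplified sketch of how a headroom-based boost controller behaves. All names, clocks and power figures below are illustrative assumptions for the sake of the example, not AMD’s actual PowerTune with Boost algorithm:

```python
# Hypothetical sketch of a boost-style controller, loosely modelled on the
# behaviour described above. The numbers are made up for illustration and
# do not reflect AMD's real firmware logic.

def boost_clock(base_mhz, max_boost_mhz, tdp_watts, power_draw_watts):
    """Scale the core clock towards the boost ceiling while power headroom remains."""
    # Fraction of the TDP budget still unused; clamped so exceeding TDP
    # simply drops the card back to its base clock.
    headroom = max(0.0, (tdp_watts - power_draw_watts) / tdp_watts)
    return base_mhz + (max_boost_mhz - base_mhz) * headroom

# Light load: plenty of headroom, so the card runs close to its boost ceiling.
print(boost_clock(1000, 1050, 250, 150))  # 1020.0
# Heavy load right at TDP: no headroom, so the clock falls back to base.
print(boost_clock(1000, 1050, 250, 250))  # 1000.0
```

The point of the clamp is exactly the “voice of reason” the old PowerTune lacked: no matter how high you raise the ceiling, the controller only spends clock speed it has power budget for.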

However, Tom’s Hardware’s tests revealed that the same caveats apply to PowerTune today as they did in late 2010. Dialling up the maximum boost by just 1MHz left their card running permanently at the maximum clocks they set, continuing the upward trend in heat generated and increasing voltages for no reason at all. Only synthetic benchmarks are able to trigger this scenario, which is weird since the regular benchmarks in the first graph still push the card over 90% usage. Check out their second graph as well, a measurement of power consumption over time: you’ll notice that certain apps, games included, don’t maximise the power draw at all. Only the overclocked Furmark result stays near the card’s maximum TDP, with the rest showing the spikes and dips I’d associate with regular use. One good piece of news is that even with this new version, you’d never need anything more than a good 550W power supply.

I would like to point out, though, that current HD7970 owners aren’t actually missing out on anything. Dial in a 1050MHz core clock on your card and you’d enjoy the same performance, albeit without the voltage savings the GHz edition offers at the same overclock. It’s debatable whether anyone would actually want the GHz edition, because when you’re gaming with the regular card at the same 1050MHz it actually does a hell of a lot better on power draw. It goes to show, I guess, that in the course of challenging the GTX680, AMD has compromised the remarkably economical power consumption the HD7970 boasted at launch. Even at stock speeds the differences are pronounced, with the GHz edition using far more power for the same workload. When both cards are stressed to the max in benchmarks, though, they consume the same amount of power at the same clock speeds.
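For a rough sense of what that 1050MHz dial-in buys an existing owner, here’s the back-of-the-envelope arithmetic, assuming performance tracks core clock roughly linearly (it rarely does perfectly) and taking the original HD7970’s 925MHz stock clock as the baseline:

```python
# Rough clock-scaling estimate; assumes near-linear scaling with core clock,
# which is an approximation rather than a guaranteed result.

stock_mhz = 925   # original HD7970 core clock at launch
boost_mhz = 1050  # GHz edition boost ceiling, matchable by overclocking

uplift = (boost_mhz - stock_mhz) / stock_mhz
print(f"{uplift:.1%}")  # ~13.5% clock headroom over the stock card
```

In practice real-world gains land below that ceiling, which is why the benchmark gap between the two cards stays under 10%.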

Skipping all the boring synthetic benchmarks, we jump straight to the games, beginning with Battlefield 3. You’ll notice that on average the regular HD7970 trails by less than 10%, the same margin that separates the GTX680 and GTX670, with the Radeon GHz edition actually costing more than the regular card. Being a heavily shader-based game, Battlefield favours Nvidia cards, with the GTX670 squeezing in front of a card more than R1500 more expensive at stock speeds. Let me repeat that: stock speeds. Moving on to Crysis, the same results appear, with the GTX670 beating the stock card and the GHz edition sidling up right next to the GTX680 at both resolutions, beating it outright in DX9 mode.

Thanks to a few driver improvements in Catalyst 12.7, both versions of the HD7970 edge past all the Geforce models in The Elder Scrolls V: Skyrim, even allowing the HD7950 to play with the stronger cards. There’s actually a noticeable graphics bottleneck here, suggesting that with some driver improvements of their own, Nvidia could take the lead as well. Note that the GHz edition never brings more than 10fps over the regular version, with both cards showing negligible performance hits from running the game in Ultra with FXAA applied. DiRT3 shows a similar finish, with the GHz edition beating the GTX680 by some margin with AA applied. Note the regular HD7970 once again in second place.

Finishing off with World of Warcraft and Metro 2033: WoW again displays a greater love for Nvidia, doing far better at 1080p for reasons we’ll never discern. The game just doesn’t like AMD cards, full stop. Things even out at the native 30-inch resolution, with the GHz edition placing third but still ahead of its more thrifty clone running standard speeds and voltages. Metro 2033 once again shows the HD7970 in the lead, placing both cards ahead of the competing Geforce duo. The margin between the GTX680 and the GTX670 is so tiny with MSAA added that it hits home the trend Nvidia is seeing, with far more people buying the GTX670 even when they have the money for the bigger card.

In the end, with the power consumption tests, we see who the real winner in this high-performance market is: the GTX670. No, seriously, just look at that power consumption. On average it draws about 70W less than the HD7970, it costs less, it’s smaller, it creates less heat, and once overclocked it’ll perform better in just about every game and benchmark you can think of. If you were set on buying the Radeon card, at least have the sense to get the regular version and save the odd R500 you’d be paying for the GHz edition, which you can easily match with a little overclocking. The only place I’d rather stick to the GTX680 or the HD7970 is a multi-monitor setup, where the extra 1GB of RAM in the HD7970’s 3GB framebuffer allows a little more visual fidelity at those extreme resolutions.
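That 70W gap also adds up on the electricity bill. A quick illustrative estimate, where the gaming hours and the tariff are made-up assumptions purely for the sake of the arithmetic:

```python
# Hypothetical running-cost estimate for the ~70W average gap measured in the
# power tests. Hours and tariff below are assumptions, not measured figures.

watts_saved = 70          # average draw difference from the power tests
hours_per_day = 3         # assumed daily gaming time
tariff_r_per_kwh = 1.50   # assumed electricity tariff, Rand per kWh

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.1f} kWh, about R{kwh_per_year * tariff_r_per_kwh:.0f} per year")
```

Not a fortune, but combined with the lower purchase price it strengthens the GTX670’s value argument.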

If you’re already an HD7970 owner, keep your card at stock speeds and don’t sell it yet. As more driver improvements come your way, the card will start paying for itself. If you’ve managed to get one of the voltage-adjustable ASUS DirectCU II models, it might even be worth your while to undervolt it and save even more electricity. If you don’t have the money for an HD7970, don’t worry: either the HD7950 or the GTX670 will get you to roughly the same level.

Source: Tom’s Hardware

Discuss this in the forums: Linky