So Nvidia’s been on a paper-launch roll since the release of Kepler with its flagship, the GTX680. It was a mind-bogglingly good card with astounding value for money for the month that it was the only one available. Those of you who have one already and are reading this are no doubt smiling – it probably gives you that same fizzing sensation James May keeps admitting builds up in his crotch behind his penis (for reference, yours is called a nerdgasm). Yes, you get the same feeling Citroen DS4 Racing owners do when you roll up in a French car people actually like.

Cue those "Winter is Coming" puns. Yes, I know you want to.

So now, for those of you who don’t want to toy with a wannabe flagship of the gaming graphics card industry, there’s the GTX690 to consider. Go on, click that “More” button! You’re going to want to know what’s inside this analysis…

Ah, you’re here! You were expecting something interesting? Well, in a sense the GTX690 is extremely interesting, but I’ll first have you read the GTX680 Analysis I wrote in three parts, looking at Kepler, how it functions and how it’s going to blow socks off. The architecture is both complex and simple to understand at the same time, and even the launch prices for Nvidia’s GTX680 and GTX670 have undercut the competing offers from AMD.

Had Nvidia sorted out their issues with TSMC and its 28nm process before this launch, AMD would have a lot more to worry about today. The GTX680 is the competitor for the HD7970, but in reality it punches above that line and very often goes straight up to insult the HD6990 and GTX590 in the face. However, it’s not such a rosy situation for Nvidia – AMD finished with their 28nm issues a while ago and has had the market to themselves for the last five months. Price cuts to popular sellers like the GTX560 Ti haven’t helped. Winter was coming, and Nvidia had to do something.

The company recently posted a 55% drop in profit. Note that they still made a profit, but they really weren’t selling anything towards the end of Fermi’s production run. The main reasons were the lack of 2012-era Tegra design wins with third-party manufacturers like ASUS and the financial setback that Kepler created. I don’t know why Nvidia didn’t turn to other fabrication-capable companies like Samsung or Intel earlier on, especially considering the latter is actively renting out fab space in its production lines to other companies who need 28nm dies made. Regardless, they’ve decided to tackle the situation with paper launches and hope to hell that their fans will hold out while things return to normal.

It’s an incredibly handsome card, IMO.

And the second of their paper launches was the GTX690. The teaser on Nvidia’s Facebook and Geforce.com pages showed a cropped image of the dual-slot, all-aluminium shroud that covers the dual-GPU monster. There’s plexiglass on both sides of the fan to show off the heatsinks and the LEDs while the card is operating. It’s a great way to get that euphoric feeling out when you’re holding an incredibly expensive piece of kit in your hands. The card employs a central fan that exhausts heat out both ends of the card, and uses a combination of heatsinks and vapour chambers to move heat around and out of the shroud. Enthusiasts who buy this, however, are going to have a headache getting rid of the heat dumped into their chassis, as roughly all of the heat from one GK104 chip ends up blown into the case.

It’s not the most ideal solution, if we’re being honest. Nvidia could have gone the same route as Intel and AMD did with their processors, shipping a high-end, top-of-the-range graphics solution with a closed-loop, low-maintenance water-cooling kit; but then that would have created headaches for quad-SLI setups. How about custom-designing a reservoir, pump and fan that would fit in unused hard drive bays? A good idea, but the majority of chassis that gamers use have their drive cages mounted sideways, so that idea goes out the window. You only really see something like that in a dedicated water-cooling setup, and you’d actually want the reservoir to be higher than the thing it’s cooling. 5.25″ bays, then? Too much clutter.

But regardless, the important thing is that it looks like a high-end piece of kit. That first impression is everything, and Nvidia even went through the trouble of packaging review units of the cards in boxes that needed a crowbar to open them. It’s worlds away from the feeling you get with the plastic shrouds on the HD7970, for instance. I’d like to see how AMD gets the HD7990 to look better than this (and they will have to, make no mistake). Moving around to the back of the card, we see three dual-link DVI outputs and one mini-DisplayPort connector. (Be sure to tune in on Wednesday, I have something to say about this.)

Going under the heatsinks, we finally see the two GK104 chips sitting alongside each other, with a small green chip in the middle. That there is the driving force behind the card’s ability to scale performance so well with the two chips in SLI. It’s made by PLX Technologies and it’s a 48-lane PCI-Express 3.0 switch manufactured on the 40nm process. Most motherboards can’t afford the kind of PCI-Express resources this chip can, and we’ll see later that it was a wise decision. SLI on most modern motherboards splits the main port’s sixteen lanes into eight per GPU, with card-to-card communication running over those lanes and the SLI bridge on top. Nvidia’s GTX590 used the ageing nForce 200 chip, which didn’t provide any significant benefits to running two GTX580s in SLI. The PLX PEX 8747 provides 16 lanes to each GPU and 16 lanes upstream for use in PCI-Express 2.0 or 3.0 slots. All of the benefits of SLI, almost none of the drawbacks.
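
To put those lane counts in perspective, here’s a quick back-of-the-envelope sketch – not figures from this review, just the commonly quoted effective throughput of roughly 985 MB/s per PCIe 3.0 lane and 500 MB/s per PCIe 2.0 lane:

```python
# Rough per-GPU PCI-Express bandwidth, assuming the commonly quoted
# effective figures of ~985 MB/s per PCIe 3.0 lane and ~500 MB/s per
# PCIe 2.0 lane (after encoding overhead).
PER_LANE_MBPS = {"PCIe 2.0": 500, "PCIe 3.0": 985}

def per_gpu_bandwidth_gbps(gen: str, lanes: int) -> float:
    """One-directional bandwidth in GB/s for a given generation and lane count."""
    return PER_LANE_MBPS[gen] * lanes / 1000

# Typical x8/x8 SLI split on a mainstream board:
print(per_gpu_bandwidth_gbps("PCIe 3.0", 8))    # ~7.9 GB/s per GPU
# What the PLX PEX 8747 hands each GK104 behind the switch:
print(per_gpu_bandwidth_gbps("PCIe 3.0", 16))   # ~15.8 GB/s per GPU
# And the host link if the card sits in an older PCIe 2.0 x16 slot:
print(per_gpu_bandwidth_gbps("PCIe 2.0", 16))   # ~8.0 GB/s shared upstream
```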

Moving right along, we see the two 8-pin PCI-Express power connectors used to power the beast that lies before you. While the GTX590 required two as well, it was a full-fat implementation of Fermi and sat so close to its power limit of 375 watts that overclocking it was a big no-no. The GTX690 sits comfortably at 300 watts thanks to Kepler’s redesigned architecture and lower power requirements, giving owners more headroom for overclocking in future and possibly allowing companies like ASUS to really go full throttle and give their DirectCU II lineup a dual-GPU option. The card achieves such a low TDP thanks also to its lower core clock of 915MHz, paired with the same 6GHz memory speed found on the GTX680.
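
For the curious, the power budget arithmetic works out roughly like this – a sketch using the standard PCI-Express ratings of 75 watts from the slot and 150 watts per 8-pin plug, plus the GTX590’s official 365 watt TDP, none of which are figures taken from this review:

```python
# Board power budget for a dual 8-pin card, using the standard
# PCI-Express ratings: 75 W from the slot plus 150 W per 8-pin plug.
SLOT_W = 75
EIGHT_PIN_W = 150

budget_w = SLOT_W + 2 * EIGHT_PIN_W   # 375 W available to the board

gtx590_tdp_w = 365                    # Fermi dual-GPU, right against the ceiling
gtx690_tdp_w = 300                    # Kepler dual-GPU, as quoted above

print(budget_w - gtx590_tdp_w)        # ~10 W of headroom -> overclocking is a no-no
print(budget_w - gtx690_tdp_w)        # ~75 W of headroom for boost and overclocks
```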

I would also like you to stop for one moment on the image that shows the naked front of the GTX690. Use your hand to cover roughly half of the card on the left-hand side. What’s left on the right, sans the PLX chip, is essentially the same layout as the redesigned circuit board found on the GTX670, flipped horizontally. It might not be exactly the same, but the maximum TDP of 141 watts points to the very same thing. The fact that the core clocks are also the same might be a clue as well. With the disabled shader cluster, two GTX670s in SLI would fall just under the GTX690, consume about the same amount of power and have the same amount of overclocking headroom.

But I’ve babbled on enough already, let’s see some tests! In Battlefield 3 at the native 30″ resolution we see the card drawing up next to the GTX680s in SLI, but not quite beating them. That’s thanks to the higher clocks on the flagship, and there’s an even smaller gap when you’re looking at a multi-monitor setup. When I said in my analysis of the GTX670 that the disabled shader cluster was possibly not being used that much, the results here surprise me further – about 100MHz separates the clocks of the GTX690 and GTX680, yet there’s very little difference between them performance-wise. Take note of the HD7970s in Crossfire; that’s roughly what you can expect from the HD7990 as well. Not a bad showing from AMD, all things considered. Crysis 2 shows the same results, but it’s a texture-heavy game, showing off Nvidia’s improvements with the bindless textures I talked about in the GTX680 Analysis.
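
As a rough sanity check on that clock gap – assuming the GTX680’s stock base clock of 1006MHz, which isn’t quoted in this review, against the 915MHz mentioned earlier – the deficit per GPU is only about 9%:

```python
# Clock deficit the GTX690 gives up per GPU versus a stock GTX680
# (915 MHz is quoted above; 1006 MHz is the GTX680's stock base clock).
gtx680_base_mhz = 1006
gtx690_base_mhz = 915

deficit = (gtx680_base_mhz - gtx690_base_mhz) / gtx680_base_mhz
print(f"{deficit:.1%}")   # roughly 9% lower base clocks per GPU
```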

Skyrim shows the two SLI options battling it out again, both pushing neck-and-neck for the performance crown. The HD7970s in Crossfire still land up in third place, but aren’t far off in the multi-monitor test. FXAA performance at the native 30″ resolution needs to be looked at, though, and I’m assuming that’s down to driver issues more than anything else. DiRT 3 shows the same three podium finishers, with the slightly higher clock speed of the GTX680s matching the dual-GPU king every time.

WoW: Cataclysm has proven itself to be mostly processor-limited and delivers a very mixed ending for all contenders here. Again showing a lack of GCN optimisation, it allowed the GTX590 to finish in third place for once at the native 30″ resolution, while the single GTX680 got to hang with the big boys when no AA was applied. Crossfire scaling in the game is horrible and definitely one for the driver team’s to-do list for Catalyst 12.5. The Radeon team finally takes the lead in Metro 2033, showing Kepler’s single biggest flaw – low Compute performance.

Speaking of Compute performance, what’s really going to bake your noodle later on is the results Anandtech posted with their review – two GTX580s in SLI come close to matching or beating the GTX690 in any game that relies heavily on Tessellation and Compute performance. Games like that, Civilization V in particular, favour the raw power that Fermi offered to gamers and are the main reason why professionals who couldn’t afford the high prices of Quadro cards ended up buying GTX580s instead. AMD’s GCN architecture is miles ahead of what Fermi and Kepler can deliver here, but as long as Nvidia has a stranglehold on the market with Quadro and Tesla, I don’t see many enterprises switching to AMD anytime soon. If you still play Civilization V, Aliens vs Predator or Crysis 2 with the high-res texture and Tessellation pack, you don’t need to upgrade yet.

Temperature-wise the card holds its own, idling under 50 degrees on the Windows desktop. Windows 7’s desktop is inherently a 2D interface and the card is likely to stay at that level while you’re interacting with the regular environment. At load, Nvidia’s power enhancements keep temperatures under 80 degrees and the card actually ends up cooler than the GTX680. It’s interesting, because the GTX680 spends nearly all of its time in boosted mode and one would assume the GTX690 does the same, staying mostly at its max boost of 1045MHz for the duration of your game.

Idle power consumption on the desktop is a little higher than the single GTX680, signalling that perhaps one of the chips is completely powered down in idle mode. AMD does much the same with its ZeroCore technology, powering down one or more graphics processors when they’re not in use. You can see this working in the idle test with the screen turned off – Crossfired HD7970s consume 20 watts less while Nvidia’s cards only shed three, suggesting that their power-saving technology isn’t being put to good use by the drivers yet. That certainly seems to be the case with the load test, where the GTX590 consumes about 50 watts more on average than the GTX690. It’s also telling that Nvidia’s GPU Boost settings are very restrictive by default – all the cards power down to the same level in exactly the same spot, but the GTX690 returns to full load power very quickly. By turning up the boost power limit by 20%, we might see a bit more flexibility in how the GPU responds to alternating workloads.

So in conclusion, what do we know? We know that, given the 28nm yield and supply issues and TSMC’s public announcement that it will concentrate on getting Nvidia’s products sorted out, we can only expect general availability of this beast in about two months. Delays for the GTX680 should be about the same, while the GTX670’s worldwide supply should be sorted in about a month’s time. Nvidia expects to sell the GTX690 in far greater numbers than GTX680 SLI setups. The dual-GPU solution has clearly shown it’s more than up to the task of replacing a complicated setup while consuming less power and producing less heat.

In fact, anyone still running a single- or dual-GPU setup that’s outdated by at least two generations should really have a look at this, especially if they’re looking at triple-monitor gaming. For one, it’s kinder to your power supply, and those of you still on Nehalem processors, even those stuck on LGA1156, shouldn’t have many bottleneck issues. It’s a good bargain and the first time ever that a dual-GPU card has been a far better choice than two of its lower-class brethren together. The only distinct disadvantages are heat generation and the price, as well as the requirement for a high-end Intel chip to get the most out of the card. As of this writing, it’s near impossible to get one in the country, as they’re all sold out the moment they land in supplier warehouses. Since yields are so low and a lot of people want one, expect the retail price to bump up by about R1000 or so while stock levels even out.

For those of you still running two GTX580s, a GTX590 or two HD6990s, it’s a little less clear-cut whether you should upgrade or not. Fermi’s not too outdated and will hold its own for the rest of the year, at least until the Geforce 700 series arrives. Considering the power savings seen in this card, you can definitely expect a Ti version of this baby, especially if AMD’s HD7990 ends up being a better card and competitor (and I expect it won’t, given that the HD7970’s 250 watt TDP would have to be reined in to make a dual-GPU card work).

All in all, a great showing by Nvidia and something you can definitely expect to land in the NAG Dream Machine next month. Who’ll be buying one? (That’s a joke, in case you didn’t know.)

Source: Tom’s Hardware, Anandtech, Guru3D, TechPowerUp!

Discuss this in the forums: Linky