Earlier last week I had a good peek at Nvidia’s GTX680, which is due for release two days from now. NDAs should lift on Wednesday morning, and we’ll get a more in-depth look at how the card performs against its competitors. But before that, two more features have since cropped up, prompting me to take a second look.
Firstly, AMD has all but cornered the multi-monitor market. All you need is a single Radeon 5000, 6000 or 7000 series card, three monitors of roughly the same resolution and size, and one DisplayPort adapter to convert to your input of choice. Once you enable the monitors as an Eyefinity group you can do things such as set the desktop to the center screen, enable multi-audio output and get things comfortable enough that you forget about those nasty bezels.
Nvidia has offered the same tech since Eyefinity’s launch, but only as a software feature coded into later Forceware releases. Driving three monitors from a single Nvidia card has never been possible due to bandwidth limitations, and the only way to have both 3D and a multi-monitor setup is to run two cards in SLI, with the third screen on the second card. This opened a whole new can of worms, however.
If you’re familiar with micro-stuttering on multi-GPU setups, you’ll know that problems inherent to alternate-frame rendering prevent most setups from overcoming the slight micro-lag that annoys gamers. Some people manage to find ways around it, and it’s even been hinted that disabling the GPU’s on-board sound eliminates the problem. On a multi-monitor setup the middle monitor should always be driven by a single card, with the two flanking displays powered by the second card. The prevailing theory is that micro-stutter should then be mostly limited to the flanking displays, and since they sit largely at the edge of your peripheral vision it shouldn’t bother you as much (a hugely expensive F1 simulator actually does this for its machines, with the middle displays running at 300fps).
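To see why micro-stutter annoys people even when the fps counter looks healthy, here’s a quick illustrative sketch (not anything from Nvidia’s or AMD’s drivers; the frame-time numbers are made up). Two traces average the same ~60fps, but the alternate-frame-rendering-style trace alternates between fast and slow frames, which is exactly the jitter your eye picks up:

```python
# Micro-stutter shows up as uneven frame times, not a low average fps.
# Both traces below average ~60fps, but one has heavy frame-to-frame jitter.

def avg_fps(frame_times_ms):
    """Average fps implied by a list of per-frame render times (in ms)."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def stutter_index(frame_times_ms):
    """Mean absolute difference between consecutive frame times (ms).
    Higher means more perceptible micro-stutter."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(deltas) / len(deltas)

smooth = [16.7] * 6               # steady pacing, ~60fps
afr = [8.0, 25.4] * 3             # alternating AFR-style pacing, same average

print(avg_fps(smooth), stutter_index(smooth))   # ~60fps, 0ms of jitter
print(avg_fps(afr), stutter_index(afr))         # ~60fps, ~17ms of jitter
```

The point: both setups report the same framerate, so the stutter only shows up if you look at frame-to-frame consistency rather than the average.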
Running a multi-monitor setup on a single card brings simplicity and elegance to Nvidia fans, and eliminates many of the hassles of SLI. A single GTX680 should be able to power up to four 1080p displays, albeit with a few settings turned down. Remember that F1 thingamabob I mentioned earlier? Multi-monitor setups on a single Kepler card limit the framerate of the flanking displays and keep the center one steady to the eye. But here’s the next bit that makes me take back all the skepticism I threw at Kepler: TXAA.
See the above graph? I and many others assumed that the scores for the Battlefield 3 run were skewed: there’s no way enabling 4x MSAA would result in nearly 1.5x the performance of the HD7970. No way in hell, as anti-aliasing is the most taxing setting you can enable on your machine, and it brought many a high-end rig to its knees in the early Crysis days. Not so with TXAA.
Turns out, TXAA comes with two settings. TXAA1 has a performance penalty in the same range as 2x MSAA, but with the image quality of 16x MSAA. TXAA2 ups the ante, with a performance penalty equivalent to 4x MSAA but greater clarity and image quality than 16x MSAA can provide. Yes folks, that graph is probably accurate as all hell. The jaws of AMD fans are going to hit the floor come Wednesday.
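The cost-vs-quality equivalences above are the whole trick, so here’s a back-of-the-envelope sketch of what they imply. The TXAA1-costs-like-2x and TXAA2-costs-like-4x pairings come from the claims above; the baseline framerate and the per-mode MSAA costs are made-up numbers purely for illustration:

```python
# Hypothetical numbers only: a 100fps no-AA baseline and assumed fractional
# performance costs for each MSAA level. The claim is that TXAA pays a
# low-MSAA cost while delivering high-MSAA image quality.

BASELINE_FPS = 100.0

MSAA_COST = {"2x": 0.10, "4x": 0.20, "8x": 0.35, "16x": 0.50}

TXAA = {
    "TXAA1": {"cost_like": "2x", "quality_like": "16x MSAA"},
    "TXAA2": {"cost_like": "4x", "quality_like": "better than 16x MSAA"},
}

for mode, info in TXAA.items():
    # Pay the cheap MSAA mode's performance cost...
    fps = BASELINE_FPS * (1.0 - MSAA_COST[info["cost_like"]])
    # ...but get the expensive mode's image quality.
    print(f"{mode}: ~{fps:.0f} fps, quality comparable to {info['quality_like']}")
```

Under these made-up costs, TXAA1 would run at ~90fps where true 16x MSAA would drop you to ~50fps, which is why a graph showing a big lead with AA enabled suddenly looks plausible.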
In further leaked benchmarks you’ll see that TXAA isn’t enabled, allowing the HD7970 to edge up nicely to the GTX680. In general, performance without TXAA is near-equal. TWIMTBP games will naturally run better on Nvidia hardware, but for the most part the differences are negligible. Nvidia is thinking creatively here, and so far I’m astounded that this is even possible. I’ll be giving a full overview of the GTX680 later this week or next – don’t miss it!
P.S. I wrote earlier in my overview of the Radeon HD7950 that one of its new Crossfire features was that AMD’s Catalyst Control Center downclocks idle cards and stops their fans completely. Nvidia probably has the same technology, if you look further down the list in the new control panel.