Anandtech’s writer, Ryan Smith, ended his review of the GTX650 Ti with what some may claim is a trite saying: that “there’s no such thing as a bad product, only a bad price.” The GTX650 Ti is by no means a bad product. Priced at $150, it’s designed to win over the mass of fans who can’t get the GTX560 anymore, or who don’t want the Radeon HD7850 for whatever reason (which used to be, primarily, the price). However, as before, Nvidia has intentionally crippled performance in a big way to get the card down to this price point – occasionally to its own detriment. If you’re looking for a good, low-budget product, this analysis in particular should be interesting.

The GTX650 Ti is another sibling in the Kepler family, based on a combination of the GTX660 and the GTX650 – two cards at opposite ends of the spectrum, each with its own mannerisms and shortcomings. It’s a card designed to fit into a 110W envelope, allowing it to slide into a range of use-cases and setups that either require low power draw or a smaller physical profile to fit a tighter chassis. It’s priced according to its position in the market, being Nvidia’s only answer to the HD7770 and the HD7850 1GB from AMD, both of which are eating up market share by the KFC bucketload.

IT’S WHAT’S INSIDE THAT COUNTS

Right off the bat, you can see how Nvidia positioned the card in the middle of the market with its specifications. It boasts extra shader cores, double the texture units and the same number of ROPs as the vanilla GTX650, landing it somewhere around 25-30% faster. Leaving the ROP count unchanged means the card should do very well in games that don’t lean on the GPU much, and it’s especially telling of the card’s target market – budget gamers on 20″ or smaller screens. Only the memory clocks show little improvement, boosting throughput by just 6.4GB/s. That should tell you the card will suffer when AA is applied, as the limited memory bandwidth and narrow 128-bit bus will work against it.
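That 6.4GB/s figure falls straight out of the bandwidth arithmetic. Here’s a minimal sketch, assuming the reference effective memory clocks of 5.0GT/s for the GTX650 and 5.4GT/s for the GTX650 Ti, both on a 128-bit bus:

```python
def bandwidth_gbs(bus_width_bits: int, effective_clock_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return (bus_width_bits / 8) * effective_clock_gtps

gtx650    = bandwidth_gbs(128, 5.0)   # 80.0 GB/s
gtx650_ti = bandwidth_gbs(128, 5.4)   # 86.4 GB/s
print(gtx650_ti - gtx650)             # 6.4 GB/s - the modest bump noted above
```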

Rather curiously, the design and layout of the GK106 chip at the heart of it all can yield two different outcomes. See, by default GK106 has five SMX units spread across three GPCs, as evidenced in the GTX660 where you get the full-monty GK106 chip. In the GTX650 Ti, one of those SMXes is disabled to land at 768 CUDA cores, 64 texture units and 16 ROPs stretched across a full complement of L2 cache. But which SMX is disabled depends on which of the modules has the fault, or which is disabled by design. The variation on the GK106 theme therefore includes either two GPCs or three GPCs, like so:
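In lieu of a block diagram, here’s a rough sketch of the two possible configurations. The SMX arithmetic is straightforward (4 × 192 = 768 cores), but the per-GPC ROP pairings follow the speculation discussed below and are not something Nvidia has confirmed:

```python
# Hypothetical GTX650 Ti layouts: 4 active SMXes x 192 cores = 768 CUDA cores,
# 16 ROPs total. The ROP split per GPC is speculative, not a confirmed spec.
two_gpc_layout = {
    "GPC0": {"smx": 2, "rops": 8},
    "GPC1": {"smx": 2, "rops": 8},
    # the GPC containing GK106's lone fifth SMX is fused off entirely
}

three_gpc_layout = {
    "GPC0": {"smx": 2, "rops": 8},  # untouched two-SMX GPC
    "GPC1": {"smx": 1, "rops": 4},  # one SMX disabled here
    "GPC2": {"smx": 1, "rops": 4},  # GK106's natural single-SMX GPC, intact
}
```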

Now Nvidia hasn’t said whether or not this variation will be any faster. One GPC gets access to eight ROPs while the other two GPCs have access to only four. Compared to the two-GPC design, this is a little more symmetrical and logically laid out. The L2 cache is spread a little more evenly as well, with each GPC receiving the same allocation, as opposed to one GPC accessing more L2 cache in the first design. Nvidia hasn’t confirmed or denied that this is how the variation is laid out, but it’s the most logical assignment of the two. It certainly won’t be any faster in games, because most benchmarks will strangle the card’s performance by exploiting its limited memory bandwidth, but I expect some enthusiasts will find that the three-GPC layout, should they be able to find one, will be the better performer in calculations and CUDA-compatible applications, due to their parallel nature.

DYNAMITE IN SMALL PACKAGES?

The card itself is diminutive, sharing the same footprint as the GT640, GTX650 and GTX660, measuring 5.75 inches long. While reference models are 4 inches tall, you can expect some low-profile cards on the market pretty soon, along with “Green” editions that eschew the power connector in favour of a lower TDP, slower clocks and slower RAM modules. Nvidia again uses 2Gb GDDR5 memory modules, up to four of them for a total of 1GB of RAM. As partners fit the card with even more RAM, you’ll see those modules appear on the empty solder pads surrounding the GPU on the underside of the card.
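Mind the units there: the modules are rated in gigabits, so the capacity works out as follows (a quick sanity check, nothing more):

```python
# 2Gb (gigabit) GDDR5 modules: divide by 8 to get bytes.
module_capacity_gb = 2 / 8        # 0.25 GB (256MB) per module
print(4 * module_capacity_gb)     # 1.0 GB - the reference 1GB card
print(8 * module_capacity_gb)     # 2.0 GB - partner cards with the extra pads filled
```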

A really small 75mm fan covers the GPU and provides just enough cooling to dissipate heat effectively. While the card shouldn’t need one, board partners may choose to fit the tiny critter with a dual-slot cooler to improve thermals and acoustics. The low height of the cooler on the reference design means it should fit really well into ITX chassis, and may even see some passive love in the future. The low 110W TDP means we’ll only really see this card hitting 60°C at stock clocks, with overclocking probably raising that by another ten degrees or so. At the end of the day, it’s a cheap cooler, but one that does the job pretty well.

One thing that does noticeably change is the display outputs at the rear. Generally we see at least one VGA port on mainstream models, with some variations featuring a DisplayPort connection. Here we have one dual-link digital-only DVI port, one single-link DVI port with analog compatibility and one mini-HDMI port – an odd inclusion considering the target market (casual gamers and HTPC enthusiasts) rarely owns mini-HDMI cables or devices. Having said that, this is the reference model, so other brands may include a better layout than what we see here. At the very least, the card lives up to Nvidia’s claim of driving up to four screens from one source, although you’ll need a DVI cable without the extra four analog pins if you’re using more modern screens; VGA/D-Sub ones will require an adapter.

GRAPHICAL PERFORMANCE IS SO-SO

Moving on to the performance benchmarks, you’ll notice that in the barrage of reviews online, most will decry the card’s performance because of its cut-down memory bus. Putting it through the gauntlet most sites use for their graphics reviews means adding in AA anyway, so don’t be disappointed if you see poor performance at 1080p with 4x MSAA – it’s not the kind of workload the card is designed for. The best place for it is ultra-high settings at 720p, or 1680 x 1050 with high settings and 2x AA in most titles.

BATMAN: ARKHAM CITY, BATTLEFIELD 3 AND CRYSIS 2

In Batman: Arkham City, most of Nvidia’s lineup does really well, throwing a curve ball at AMD’s efforts because this is a game tailored to run better on Nvidia GPUs. Having said that, the GTX650 Ti is arguably better than its previous-gen competition – the HD6870 and the GTX560 – thanks to the extra CUDA cores and better texturing performance. Note how doubling the number of texture units over the GTX650 helps the card maintain almost twice the minimum framerate of its smaller brother. The only reason maximum framerates don’t go any higher is memory bandwidth – there’s simply not enough of it to stretch things further.

Battlefield 3 paints a less pretty picture, forcing the GTX650 Ti down to the same performance as its previous-generation competition, the GTX560. Then again, it’s also drawing equal with the HD6870, so it’s not all bad. Crysis 2 is rather texture-heavy and the card performs well in it, but drops behind the GTX560 and the HD6870 due to their superior memory bandwidth. Note that in all three of these benchmarks, the card the GTX650 Ti aims to replace – the GTX460, which draws considerably more power for the same job – still hangs in there without much of a struggle. Toxxyc, you won’t have to upgrade until next year, at least.

DiRT SHOWDOWN, MAX PAYNE 3 AND METRO 2033

DiRT Showdown is light enough for the card to have 4x MSAA applied without any serious performance hit. Because the game doesn’t tax memory bandwidth, it’s down to raw computational power to steer things along. However, this is a title that runs better on Radeon cards, and we see the HD7770 pull ahead for the first time. Max Payne 3 is a CPU-dependent game, and you can see this in the pattern the results take, climbing only slightly as more powerful cards are added to the chart. AMD’s cards pitch in with sterling average framerates, while the GTX650 Ti merely slots in between the HD6850 and the GTX560, as expected. The results are good if you’re looking to play this or similar games, but nothing to shout about, considering the now-ancient GTX460 still keeps up with ease.

I decided to show two entries for Metro 2033 to demonstrate how much performance the card loses with AA enabled. With 4x MSAA the card struggles to keep up at medium settings and 1080p. At those settings the Radeon HD7770 matches it and even the HD6850 gets a chance to shine. However, once MSAA is switched off, the card’s performance shoots up to an average of 39fps and a maximum of 59fps – easily besting the HD6850. Then again, at these same settings the GTX460 puts in a good performance. It looks more and more like GTX460 owners will have to wait until next year for their replacement!

THE ELDER SCROLLS: SKYRIM, WORLD OF WARCRAFT AND CIVILISATION V

The Elder Scrolls: Skyrim is no longer a CPU-bound game, thanks to performance improvements in the latest patches to the year-old title. Even with MSAA the GTX650 Ti hangs in there reasonably well, drawing level with the HD7770 but not quite offering the kind of performance you’d expect from a $150 card – not when the HD7770 is now pegged at a $119 price point. Here again both the GTX460 and GTX560 finish strongly, giving owners of these cards few reasons to upgrade. World of Warcraft shows the same finishing order, with the GTX460 falling under the 30fps playability threshold for the first time, necessitating lighter settings for better performance. Again, the GTX650 Ti finishes ahead of the HD6850 but doesn’t quite match the GTX560 in a rather texture-heavy title that has traditionally favoured Nvidia cards.

One area where the card will perform well is GPU acceleration, using apps that exploit Nvidia’s CUDA API to put the GPU’s muscle behind parallel workloads. In a decompression benchmark run by Anandtech using Civilisation V, everything is taxed to the max – memory bandwidth, computational throughput, texture performance and so on get a real-world workout – so it ends up being a realistic look at where the card places against other cards capable of GPU acceleration. Unsurprisingly, it’s a little behind the HD7770 and picks up some performance with an extra 1GB of RAM. If you’re using Photoshop or any other CUDA-compatible application for productivity purposes, a 2GB card will be worth the money. If you’re looking for a cheap card to accelerate your PhysX calculations, however, a GT630 would be a better choice.
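To give a flavour of the kind of embarrassingly parallel work CUDA hands to those hundreds of cores, here’s a minimal sketch using Python’s numba library – purely illustrative, and not how any of the applications above actually drive the GPU:

```python
# Minimal CUDA sketch with numba: double every element of a large array
# in parallel, one GPU thread per element. Illustrative only.
import numpy as np
from numba import cuda

@cuda.jit
def scale(data, factor):
    i = cuda.grid(1)          # this thread's global index
    if i < data.size:         # guard against the ragged final block
        data[i] *= factor

data = np.arange(1_000_000, dtype=np.float32)
d_data = cuda.to_device(data)             # copy to GPU memory
threads_per_block = 256
blocks = (data.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](d_data, 2.0)
result = d_data.copy_to_host()            # copy back once the kernel finishes
```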

IN CLOSING…

In the end, we reach two conclusions. The first is that the GTX650 Ti isn’t a bad card – in fact, it performs exactly as expected for its price range. The target market is gamers who own 20″ or smaller screens and who don’t need all the bells and whistles enabled. However, at 1680 x 1050 this card makes a strong case for itself even with 4x AA enabled, and those of you who play on large TVs at 720p will be especially happy – better-than-console performance and far better visuals are in store for you, especially if this card can fit into your HTPC and you invest in an Xbox wireless controller.

The second conclusion is that the upgrade path for mid-range gamers has finally settled. What I mean is that we’ve got hardware so capable and so fast that it’s easily lasting years longer than its designers and engineers planned. Look at the benchmarks, where a two-year-old card approaching its third birthday is still relevant today. It’s the same story for owners of the ageing Core 2 Quad Q9300 and higher, or the Core i7 920, or the Phenom X4 945, or the Phenom X6 and so on. The regular regime of upgrading every year doesn’t work anymore because better performance isn’t easily realised – software just doesn’t make that big a stride in the course of a year these days.

BEEN LOOKING FORWARD TO THE FUTURE, BUT MY EYESIGHT IS GOING BAD

What about a two-year upgrade path? That might work, but you’d still be replacing hardware that could perform perfectly well for another year. Hell, my Athlon X3 and HD6870 combo will still be good for games and general productivity for at least another year. What we’re going to see now is a migration to a three-year upgrade path for graphics cards and a two-year path for motherboards and CPU sockets. In fact, we’re already seeing that with LGA 1155, LGA 2011, AM3+ and FM2 – all of these sockets have a projected eighteen-month lifespan when you look at both processor companies’ roadmaps. Both Nvidia and AMD now plug their products as upgrades for owners of hardware from two generations ago – and with AMD cutting the HD4000 and earlier families from its mainstream drivers earlier this year, the company made it clear that it’s time for gamers to move on and up.

Thanks to the plateau software has reached while we wait for the industry to work out better parallel computing, hardware remains relevant for far longer. I’m even planning on loading Windows 8 onto a much older machine in the house – with a Celeron E2120, 4GB of DDR2 RAM and a GT240, it’ll handle everyday tasks just about as well as my current gaming machine.

And that’s where the buck stops – I won’t be spending any more money on that PC because it will remain relevant for far longer than I expected. The era where component upgrades netted you a 50% jump in performance is over – it’s been replaced by an ecosystem war, and we can only bide our time and save up for that next major upgrade as time goes on.

Discuss this in the forums: Linky