It’s a bit strange to think that only two months or so ago, NVIDIA launched the GeForce Pascal family with much fanfare. They flew journalists out to a venue, armed themselves with PowerPoint slides, numbers, and marketing one-liners, and made much ado about the new features coming to the GeForce family, like simultaneous multi-projection and Ansel, the firm’s in-game photography tool. In stark contrast, the company last weekend revealed the GeForce GTX Titan X Pascal at a very low-key event aimed at young researchers at Stanford University in the US. This is the full-fat GP102 chip, a derivative of GP100, and it’s exactly as expensive as the name implies.
So there are two important things to note here: the price, and general availability. NVIDIA lists the RRP for the GTX Titan X Pascal at $1199. It’s the first single-GPU, consumer-bound card the company has priced at this level, and that’s rather scary. The Titan branding has moved past the $1000 price point that everyone drools over, which makes the card much less interesting than it otherwise would have been. Perhaps the high cost of building this particular GPU is what pushed the price up.
The other thing is general availability, and NVIDIA expects this card to be available, and selling, on 2 August 2016. That’s a lot sooner than anyone had expected, and both the early launch and the pricing are aggressive moves. Either NVIDIA knows it has no competition for the rest of the year, hence the early launch, or it’s getting in while the getting’s good before AMD drops Vega (which isn’t confirmed for a 2016 launch, but anything’s possible). While the Titan X Pascal is certainly geared towards professional use, it will also play games with ease.
Based on TSMC’s 16-nanometer FinFET process, the GeForce GTX Titan X Pascal (the “Pascal” tacked on in case you confuse it with the Maxwell version) is a monster of a chip, boasting 3584 CUDA cores, the same number enabled on GP100. It’s possible the die is the same size as the one in the NVIDIA Tesla P100 accelerator, but because this chip lacks HBM2 support, it may end up smaller thanks to having fewer double-precision units. It also sports 224 texture units, 96 ROPs, and 12GB of GDDR5X memory on a 384-bit memory bus. Memory bandwidth is a massive 480GB/s, with the chip seeing boost clocks of 1531MHz under ideal temperatures.
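That 480GB/s figure falls straight out of the bus width and the per-pin data rate. As a rough sanity check (assuming GDDR5X running at an effective 10Gbps per pin, which is what a 384-bit bus needs to hit 480GB/s), the arithmetic looks like this:

```python
# Back-of-the-envelope memory bandwidth check.
# Assumed: GDDR5X at an effective 10 Gb/s per pin (not stated in the article).
bus_width_bits = 384       # memory bus width
data_rate_gbps = 10        # effective per-pin transfer rate

# Bandwidth = bus width (bits) * per-pin rate (Gb/s) / 8 bits per byte
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gb_s)      # 480.0 GB/s, matching NVIDIA's quoted figure
```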
All told, NVIDIA says it’s about 60% faster than the previous GTX Titan X, and as much as three times more powerful than the original Titan. However, it’s only about 25% ahead of a GTX 1080 in theoretical compute performance, and the gap in games could be much narrower than that. Custom GTX 1080 cards may well be faster than this card even under ideal conditions.
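The roughly 25% theoretical gap can be reproduced from the published clock and core counts. A quick sketch, taking the GTX 1080’s reference specs (2560 CUDA cores, 1733MHz boost) as assumptions since the article doesn’t state them:

```python
# Single-precision peak throughput: cores * 2 ops/clock (FMA) * clock.
def sp_tflops(cuda_cores, boost_mhz):
    return cuda_cores * 2 * boost_mhz / 1e6  # MHz -> TFLOPS

titan_x_pascal = sp_tflops(3584, 1531)  # from the specs above
gtx_1080 = sp_tflops(2560, 1733)        # assumed reference-card figures

print(round(titan_x_pascal, 2))         # ~10.97 TFLOPS
print(round(gtx_1080, 2))               # ~8.87 TFLOPS
print(round(titan_x_pascal / gtx_1080, 2))  # ~1.24, i.e. roughly 25% ahead
```

Theoretical TFLOPS rarely translates one-to-one into frame rates, which is why the in-game gap could well be narrower still.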
The only unknown at this point is how well the card handles double-precision compute workloads. If it’s fully unthrottled, as a comparable Quadro card might be, it could be worth picking up if you intend to use it for professional purposes like machine learning and content creation in software that supports CUDA. The original Titan was a beast at double-precision math, and NVIDIA opted not to gimp the chip’s capabilities, only its drivers. This could be the same scenario with more oomph, and I’m eager to see how it performs and, more importantly, how AMD responds to the news.