
It's not every day that a validated GPU-Z entry shows up just before Nvidia is due to make a major announcement! TechPowerUp noticed a validation submitted to its database by an anonymous user with a GPU no one has ever seen before – an Nvidia Quadro M6000. Looking at the entry overall, it seems like an absolute monster and is very likely the successor to the GTX Titan. If you're wondering why this is suddenly all exciting, it's because Nvidia is holding a GeForce conference on 8 January 2015! Hit the jump for the juicy details.

TPU GeForce Maxwell GPU-Z validation

The GPU-Z submission is interesting for a couple of reasons. One is that this card has 12GB of GDDR5 memory, which tallies with the memory spec of the Quadro K6000, the business end of the same chip found inside the GTX Titan. This probably isn't an engineering sample – the clock speeds are higher than what ES cards typically run at, and the memory is running at 1653MHz (6.6GHz effective). There also hasn't been any previous leak or listing of a card with 96 ROPs (but seriously, 96 ROPs, goddamn!), and the driver string matches the one currently available (ForceWare 347.09).
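For anyone wondering where that 6.6GHz figure comes from: GDDR5 transfers data four times per memory clock, so GPU-Z's 1653MHz reading works out like this (a quick back-of-the-envelope sketch in Python):

```python
# GDDR5 is quad-pumped: four data transfers per memory clock tick.
memory_clock_mhz = 1653                      # as shown in the GPU-Z validation
effective_rate_mhz = memory_clock_mhz * 4    # effective data rate
print(f"{effective_rate_mhz / 1000:.1f} GHz effective")  # prints "6.6 GHz effective"
```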

Let's quickly compare things to the GTX Titan.

GTX Titan GPU-Z validation

The ROP count doubles from the Titan's 48 to 96, which means better performance at higher resolutions, like UltraHD 4K for example. Multiplying the default clock rate by the TMU count gives the texture fillrate, which on the GTX Titan is 187.5GTexels/s. Working that backwards for the Quadro M6000, we end up with about 256 TMUs, which is a decent boost. Clock speeds are up as well, and a chip this large running at higher clocks suggests that GM200 Maxwell is probably, but not definitely, on a 20nm production process.
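To show the arithmetic behind that TMU estimate, here's a rough sketch. The GTX Titan numbers are the known ones; the Quadro M6000 core clock and fillrate used below are illustrative placeholders, chosen only to demonstrate how you'd work backwards to roughly 256 TMUs:

```python
# GTX Titan: 224 TMUs at an 837MHz default clock.
titan_clock_mhz = 837
titan_tmus = 224
titan_fillrate = titan_clock_mhz * titan_tmus / 1000   # in GTexels/s
print(f"GTX Titan fillrate: {titan_fillrate:.1f} GTexels/s")  # ~187.5

# Quadro M6000: divide the texture fillrate by the core clock to
# estimate the TMU count. Both inputs here are hypothetical placeholders,
# not figures taken from the validation entry.
m6000_clock_mhz = 988          # assumed core clock
m6000_fillrate = 253.0         # assumed GTexels/s reading
estimated_tmus = m6000_fillrate * 1000 / m6000_clock_mhz
print(f"Estimated TMUs: {estimated_tmus:.0f}")  # ~256
```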

Given that memory also consumes a chunk of power on a workstation card, it's very likely that the GPU isn't running at its maximum TDP. On a consumer card with 6GB of GDDR5, one could expect anywhere from a 10-20% boost in the default clock speed.

The biggest performance boost not mentioned is how much raw memory bandwidth becomes available once Nvidia's colour compression algorithms are applied. Going by Nvidia's own figures for how much bandwidth these techniques save, the real-world effective bandwidth could be around 20% higher than the raw number, depending on the game. This would only really benefit anyone running one of those 21:9 3440 x 1440 monitors or an UltraHD 4K or 5K display, but it's a nifty trick for saving memory bandwidth in those really intensive workloads.
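As a rough illustration of what that could mean in practice, assuming the card keeps the same 384-bit memory bus as the GTX Titan and Quadro K6000 (the bus width is my assumption, not something confirmed by the validation):

```python
# Raw bandwidth = bus width (in bytes) x effective memory data rate.
bus_width_bytes = 384 / 8                 # assumed 384-bit bus
effective_rate_mhz = 1653 * 4             # GDDR5 quad-pumped, from the validation
raw_gb_s = bus_width_bytes * effective_rate_mhz / 1000
effective_gb_s = raw_gb_s * 1.20          # the ballpark +20% from colour compression
print(f"Raw: {raw_gb_s:.0f} GB/s, with compression: ~{effective_gb_s:.0f} GB/s")
# Raw: 317 GB/s, with compression: ~381 GB/s
```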

What will be revealed on 8 January, I wonder. Stay tuned to NAG Online to find out!

Source: TechPowerUp