AMD and NVIDIA both have a new year to look forward to, and both companies are planning to reveal or launch new products this year that finally break away from the aging 28-nanometer production process. We’re moving to FinFET in 2016, and the two major fabricators for GPUs, TSMC and Global Foundries, are both working on getting their FinFET processes up to standard for the next generation of graphics cards and processors. To drum up consumer interest, AMD has bravely decided to stick its neck out and reveal some details about its next generation of graphics cards, leaving NVIDIA to sit back and enjoy the show while it gets its own hype train ready to roll. The new architecture is called Polaris, and it’s designed for efficiency, not raw power.
— Raja Koduri (@GFXChipTweeter) November 26, 2015
The efficiency aspect of Polaris was hinted at by AMD’s Radeon Technologies Group vice president, Raja Koduri, in late November 2015. Polaris is a star system located in the Ursa Minor constellation. Astronomers have noted it in records dating back almost 2,000 years, and it has long been used in star maps for navigation and for mapping other regions of the sky, owing to its location and its designation as the “North Star”. Polaris is also moving towards us at a rate of about 17 kilometers per second, and its observed brightness has slowly increased over time.
With planned availability for Q2-Q3 2016, AMD is being really aggressive with its switch to this new architecture and production process. Polaris-based GPUs will drive AMD’s adoption of the HDMI 2.0 and DisplayPort 1.3 standards, and every GPU based on the architecture will include working H.265 video acceleration. Currently this is only available in Tonga-based GPUs like the Radeon R9 285, R9 380 and R9 380X, while on the APU side it’s available in AMD’s Carrizo APUs. NVIDIA has only recently started supporting H.265 acceleration in its latest Maxwell GPUs, so both companies are essentially ready for the rollout of H.265 media for use on 4K displays.
Other than those points, AMD isn’t talking up the finer details at all. “Historic leaps” in performance-per-watt ratios do sound promising, though. AMD last said something similar when it was tinkering with the then-new 40-nanometer production process, releasing the Radeon HD 4770 to see how the process behaved in real-world conditions. NVIDIA made similar claims when moving from the Fermi architecture to Kepler. That should give you an idea of how big a leap we can look forward to.
In terms of die space and transistor density, the current FinFET offerings from TSMC and Global Foundries (and possibly Samsung as well, if they decide to enter the fray) are both 14nm-class processes, with feature sizes roughly half those of current 28nm parts. This means that while much of the chip will pack transistors together at 14-nanometer spacing, some parts might be spaced out more for better efficiency or to reduce heat output. Intel has been doing this for a while with its 22nm tri-gate process, which was technically 22nm-class when you worked out the math, but some areas were closer to 26nm.
TSMC’s process is therefore advertised as a 16nm process, while Global Foundries’ product is labeled as 14nm. It may be the case that AMD uses both foundries for different products – making use of GloFo’s process for lower-end GPUs, and using TSMC for the higher-performing parts where higher clock speeds are needed.
As far as the chip layout is concerned, AMD showed off an overall picture of what’s going on, but didn’t get into any of the details. For the first time, they’re also calling the architecture “4th generation GCN”, which means they’re trying to separate this chip from their current offerings as much as possible, while still maintaining links to previous changes in the architecture’s history. It also seems that this builds on changes to GCN 2.0, the Bonaire and Hawaii family, rather than GCN 3.0, which covers the Tonga/Antigua and Fiji parts.
One of the big changes is the addition of dedicated logic tasked with discarding geometry that doesn’t need to be loaded or will never be displayed. Again, that’s a move to improve the efficiency of the chip, and one which should also free up the shader cores to do actual work related to what’s being displayed to the user. There are also changes to the memory controller that build on the improvements made in GCN 3.0, which used colour compression to free up memory bandwidth. Polaris might ship with either GDDR5 or High-Bandwidth Memory (HBM), so this will be useful in either application.
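To give a feel for what that discard logic is doing, here’s a minimal software sketch of the general idea, not AMD’s actual hardware implementation: triangles that are degenerate (zero area) or back-facing can never contribute pixels, so rejecting them early means the shader cores never see them. The function name and the triangle coordinates below are entirely made up for illustration.

```python
# Conceptual sketch of early primitive discard (not AMD's hardware logic):
# reject triangles that can never produce pixels before shading begins.

def should_discard(tri):
    """Discard zero-area (degenerate) and back-facing triangles.

    tri is ((ax, ay), (bx, by), (cx, cy)) in screen space, with
    counter-clockwise winding assumed for front faces.
    """
    (ax, ay), (bx, by), (cx, cy) = tri
    # Twice the signed area of the triangle; the sign encodes winding.
    area2 = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    if area2 == 0:
        return True   # degenerate: covers no pixels at all
    if area2 < 0:
        return True   # clockwise winding: back-facing, culled
    return False

triangles = [
    ((0, 0), (4, 0), (0, 3)),   # counter-clockwise: kept
    ((0, 0), (2, 2), (4, 4)),   # collinear points, zero area: discarded
    ((0, 0), (0, 3), (4, 0)),   # clockwise winding: discarded
]
visible = [t for t in triangles if not should_discard(t)]
print(len(visible))  # 1
```

Doing this in fixed-function hardware, before the geometry reaches the shader array, is cheaper than letting programmable units burn cycles on triangles that were never going to be seen.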
Something else that AMD mentioned to other journalists who got to sit in on the briefing for Polaris is that they’re working on improving workload efficiency. One of the problems with Fiji-based products like the Radeon R9 Fury and Fury X is that their hardware resources couldn’t be fully utilised, and even when they were running at full capacity the work wasn’t being distributed efficiently. AMD has been improving the performance of the R9 Fury family through driver updates since its launch in mid-2015, but Polaris gave them the chance to make those efficiency improvements at a hardware level instead. I’m not sure we’ll see massive jumps in raw performance, but we should see dramatic improvements in workload efficiency when comparing the R9 Fury X to its upcoming successor.
In terms of actual efficiency gains, AMD showcased a demo of Star Wars Battlefront running on two systems that were largely identical except for the GPUs – one system was running a pre-production Polaris-based GPU of unknown specification, while the other was running an NVIDIA GeForce GTX 950. Both systems ran the benchmark at 1920 x 1080 at medium settings, with a frame rate cap of 60fps.
Observing power draw at the wall outlet, the Polaris-based system matched the rival GPU’s performance while using almost half as much power. It’s interesting that things are far enough along that AMD is willing to pit an engineering sample GPU against a shipping product from the competition – it shows one of the cards in their hand a bit early.
In my opinion, though, this showcase is a bit misleading. Setting a framerate cap does lower the power usage of the system, but it also hides the fact that the two GPUs have different performance limits; in other words, the GTX 950 might not have been breaking a sweat running Battlefront in this manner while the Polaris-based GPU was close to its actual performance limit, or vice versa. It’s also clearly not meant to wow desktop enthusiasts, for a very good reason…
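A toy model makes the point concrete. All the numbers below are hypothetical, not measured figures from the demo: assume board power scales roughly linearly with utilisation between idle and full load, and compare two imaginary cards held at the same 60fps cap, one with barely any headroom and one that could run far faster.

```python
# Toy model (hypothetical numbers): why a frame rate cap hides
# each GPU's real performance ceiling in a wall-power comparison.

def capped_power(max_fps, idle_w, load_w, cap_fps=60):
    """Approximate board power under a frame cap, assuming power
    scales linearly with utilisation between idle and full load."""
    utilisation = min(1.0, cap_fps / max_fps)
    return idle_w + utilisation * (load_w - idle_w)

# Two imaginary cards rendering the exact same capped 60fps:
gpu_a = capped_power(max_fps=65, idle_w=15, load_w=90)   # near its limit
gpu_b = capped_power(max_fps=130, idle_w=15, load_w=90)  # coasting at ~46% load

print(round(gpu_a, 1), round(gpu_b, 1))  # 84.2 49.6
```

Both cards deliver an identical 60fps on screen, yet their power draw differs by a wide margin purely because of unused headroom – which is exactly the variable the capped demo doesn’t let you see.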
The reason is that AMD isn’t gunning for Polaris to be a desktop-first launch. They’re concentrating on efficiency and performance-per-watt, which makes sense when you consider that they want to get back into the notebook market with products that rival NVIDIA’s discrete chips. In fact, demonstrating Polaris’ performance with an fps cap while running a game strongly suggests that they’re looking at the notebook segment first, and they may already have some products lined up if they’re revealing these power consumption numbers this early.
If you watch the video embedded at the end of this article, you’ll see a bunch of fine print at the 2:26 mark. The Core i7-4790K was running at 80% of its power limit, set in the power options menu inside Windows 10. The engineering sample GPU was running at 0.8375V, which is rather low for a desktop GPU but not unusual for a notebook part. That’s not a desktop system they’re simulating – that’s a performance profile for a mid-range 17-inch laptop with an Intel quad-core processor and a discrete GPU.
It’s hard to get excited about a new mobile GPU, I know, but it’s something that I constantly lament not having in my Laptop Buyer’s Guide, and I’ve grown weary of seeing so many OEMs reach for Intel and NVIDIA purely because of how long they’ve had to make their synergy work properly. This is a good thing for consumers and a step in the right direction for AMD. I hope that they continue to be aggressive in their rollout of this GPU, and that they have it available in notebooks from various vendors on launch day so that people can actually buy them.