Last night, AMD hosted their Capsaicin & Cream event at the Game Developers Conference in San Francisco. A lot of tension had been building in the run-up to this event, and AMD’s Ryzen processor launch takes place very close to it. With everyone expecting some new information about Vega, AMD’s upcoming next-generation GPU, the livestream was packed with viewers. While AMD didn’t reveal Vega itself, there was some useful information in this event that we’ll go through together in this recap.
Vega is now the Radeon RX Vega
I won’t ask anyone to read all the way to the end of this article to get the bit that AMD made us all wait for. Near the end of the event, Raja Koduri, senior vice president and chief architect of the Radeon Technologies Group, revealed that AMD thought the public’s reaction to the Vega codename was pretty good, so they adopted Vega as the product name as well.
This means little for anyone at this point because we have no more information about Radeon RX Vega itself, but this could be the brand name that effectively replaces the Fury lineup. There might be two or three Vega GPUs that will launch later this year, and they’ll be branded separately from AMD’s regular lineup because it’s a brand new architecture. Hmmm.
Vega includes what AMD is calling a high bandwidth cache controller (HBCC), which is really a high-performance memory controller that interfaces with a wide range of memory and storage media. It’s able to drive GDDR5/X memory as well as HBM, and can even extend VRAM with remote pools of memory on slower storage like a solid state drive, or networked storage. AMD said that the HBCC was implemented because most games don’t manage video memory efficiently, and making better use of available VRAM was a primary goal in the design of Vega. They showed off a brief demo running the benchmark tool from Deus Ex: Mankind Divided, with HBCC effectively turned off on the left-hand display, and turned on in the example on the right.
Because the HBCC can’t actually be disabled, AMD approximated the off state by limiting the demo to only 2GB of VRAM. With the HBCC managing memory, the average framerate was 50% higher, and the minimum framerate doubled. I’ll have to test this in a future review of Vega to see how things really work, and whether I can reproduce these results in a benchmarking routine.
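The HBCC’s behaviour can be pictured as a classic cache: a small, fast pool of local memory fronting a much larger, slower one, with the least recently used pages evicted to make room for new ones. This is purely a conceptual analogy of my own, not AMD’s actual implementation – a minimal least-recently-used paging sketch in Python:

```python
from collections import OrderedDict

class PageCache:
    """Toy model of a cache over a larger, slower memory pool: a small,
    fast pool (the "VRAM") fronts a big backing store, evicting the
    least recently used page when it fills up."""

    def __init__(self, capacity_pages, backing_store):
        self.capacity = capacity_pages
        self.backing = backing_store          # large, slow storage
        self.cache = OrderedDict()            # page id -> data
        self.misses = 0

    def read(self, page_id):
        if page_id in self.cache:
            self.cache.move_to_end(page_id)   # mark as recently used
            return self.cache[page_id]
        # Cache miss: fetch from the slow pool, evicting if full.
        self.misses += 1
        data = self.backing[page_id]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)    # drop least recently used
        self.cache[page_id] = data
        return data

# A 4-page "VRAM" fronting a 16-page backing store.
store = {i: f"page-{i}" for i in range(16)}
vram = PageCache(4, store)
for page in [0, 1, 2, 3, 0, 1, 4, 0]:
    vram.read(page)
print(vram.misses)  # 5: four cold misses, then page 4 evicts a page
```

The win in AMD’s demo comes from exactly this effect: if the working set of a frame fits in the fast pool, the slow storage behind it barely matters.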
AMD also showed the benefits of a new feature available on Vega called Rapid Packed Math (RPM), demonstrated by simulating a full head of hair with their TressFX hair renderer. RPM lets the GPU run floating point calculations faster in 16-bit (half-precision) mode, with a theoretical speed-up of 2x over 32-bit (full-precision) calculations. For workloads like TressFX, half-precision math is ideal because it doesn’t have to be 100% accurate (it’s not being used for industrial purposes). This means one of two things for Vega: games can either run the TressFX simulation twice as fast, boosting framerates, or render twice the number of hair strands for the same performance hit.
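The 2x speed-up comes from packing two 16-bit operations into each 32-bit lane, which a CPU-side sketch can’t reproduce – but the precision trade-off that makes this acceptable for hair rendering is easy to demonstrate. Here’s a small Python illustration of mine (not AMD’s code) using the standard library’s IEEE 754 half-precision format:

```python
import struct

def to_half(x):
    """Round a Python float to IEEE 754 half precision (16 bits) and
    back, showing what a half-precision ALU would actually work with."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# A hair-strand-style position value: half precision keeps only about
# three decimal digits, plenty for cosmetic effects but not for science.
full = 0.123456789
half = to_half(full)
print(full, half)                # 0.123456789 vs roughly 0.12347
print(abs(full - half) < 1e-3)   # True: error well below visual thresholds
```

With roughly three significant digits per value, a strand of hair lands within a fraction of a pixel of where full precision would put it – which is why spending the saved cycles on more strands is the better trade.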
AMD also briefly brought up Ian McLoughlin, CEO of game streaming service LiquidSky, to talk about their service and how Vega plays into it. McLoughlin said that Vega was a natural choice for them, seeing as they could run game virtualisation for as many as four players on one GPU. A new Vega feature, called Radeon Virtualised Encode, can encode game streams run on a virtual machine; until now, the biggest issue for remote game streaming has been relying on CPU-based H.264 encoding for higher quality streams. AMD says the virtualised encoder delivers much higher quality for a very small performance hit, which should make the performance drop from multiple users sharing a single system a thing of the past.
Unrelated to Vega, AMD also revealed new features that affect their entire lineup based on Graphics Core Next (GCN). The first is support for forward rendering in Unreal Engine 4.15, which is well suited to VR games. In VR, anti-aliasing is a hard requirement, but it’s difficult to run efficiently because most modern game engines use deferred rendering. In a nutshell, a forward renderer shades objects as their geometry is rasterised, so geometry information is available throughout the frame. That makes multi-sampled anti-aliasing (MSAA) comparatively cheap, because the renderer still knows exactly where polygon edges – the source of jagged lines – are formed.
In deferred rendering, geometry is rasterised into screen-space buffers in a first pass and shading happens afterwards on those buffers, so the geometry itself is no longer available by the time anti-aliasing runs. There’s no cheap way to apply MSAA or SSAA at that point, so many engines fall back on post-process techniques like FXAA, which detect likely jagged edges in the finished image and smooth them – approximating MSAA’s result without its precision. For VR games, if the engine supports forward rendering, per-pixel anti-aliasing becomes possible without a significant performance hit, and it can end up faster than deferred rendering with MSAA bolted on top. AMD says that VR games built on Unreal Engine 4.15’s forward renderer see up to a 30% performance boost compared to older versions using deferred rendering, making it easier to hit framerate targets on weaker hardware.
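The reason geometry matters for anti-aliasing can be shown with a toy coverage test – my own illustration of the sampling idea behind MSAA/SSAA, not any engine’s code. When the shape is still available, each pixel’s coverage of an edge can be measured directly by testing sub-pixel sample points:

```python
def coverage(px, py, samples_per_axis, inside):
    """Estimate how much of pixel (px, py) lies inside a shape by
    testing a grid of sub-pixel sample points -- the core idea behind
    MSAA/SSAA: with geometry available, edge coverage is measurable."""
    n = samples_per_axis
    hits = 0
    for i in range(n):
        for j in range(n):
            # Sample point at the centre of each sub-cell.
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if inside(x, y):
                hits += 1
    return hits / (n * n)

# A diagonal edge: everything below the line y = x is "inside".
edge = lambda x, y: y < x

print(coverage(0, 0, 1, edge))   # 0.0: one sample, hard jagged edge
print(coverage(0, 0, 4, edge))   # 0.375: fractional, smooth gradient
```

A post-process filter like FXAA never sees the `inside` function – only the finished pixels – which is why it can only guess where the edges were.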
AMD also announced support for SteamVR’s asynchronous reprojection. NVIDIA already has a similar feature on Pascal, implemented in its drivers and ready ahead of Valve’s feature announcement at GDC 2016. Asynchronous reprojection kicks in when a VR game misses a frame deadline: instead of simply repeating the last frame – which causes visible hiccups when you turn your head and the new data isn’t ready yet – the compositor warps the last rendered frame using the latest head-tracking data. Pretty neat.
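In one dimension, the trick reduces to shifting the old image by however far your head has moved since it was rendered. This is a deliberately simplified sketch of my own – real compositors warp in 3D using the full head pose – but it shows why a reprojected frame tracks your head even though no new rendering happened:

```python
def reproject(frame, dx):
    """Toy 1-D reprojection: when a new frame misses its deadline,
    shift the previous frame by the head movement (dx pixels) instead
    of repeating it unchanged. Edge pixels revealed by the shift are
    filled with a placeholder (real compositors leave them dark)."""
    w = len(frame)
    out = []
    for x in range(w):
        src = x + dx                      # where this pixel came from
        out.append(frame[src] if 0 <= src < w else '.')
    return out

last_frame = list("ABCDEFGH")
# Head turned right by 2 pixels since last_frame was rendered:
print(''.join(reproject(last_frame, 2)))   # CDEFGH..
```

The placeholder dots at the trailing edge are the cost of the trick: a sliver of the world you just turned towards hasn’t been rendered yet, but for one late frame that’s far less noticeable than a stutter.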
An unexpected partnership!
Lastly, and this came before all the VR announcements that I basically slept through, AMD is partnering up with Bethesda Softworks to optimise their games for Radeon graphics cards and AMD processors, as well as to implement support for the Vulkan renderer across the board. That’s a massive bomb dropped right there, guys.
This is quite an unprecedented partnership, and the announcement was only five minutes long. Bethesda first implemented Vulkan as a test in DOOM, and the game runs amazingly well on it. When using Vulkan, the engine powering DOOM can utilise advanced rendering techniques like asynchronous compute shaders, TSSAA (8TX) anti-aliasing with very little performance hit, and better multi-core CPU utilisation. Bethesda’s next game using Vulkan will be Prey (2017), shipping in May 2017. Not only is this fantastic news for everyone who was worried that Bethesda would adopt DirectX 12 for all their future titles, it also means that Windows 7, 8.1, and even Linux gamers will be able to enjoy Bethesda’s games with the full power of the Vulkan API behind them.
This is good news, everyone. We all benefit when companies support open standards, and open-source standards are even better.
That’s all the important highlights from the AMD Capsaicin & Cream event at GDC 2017. Stay tuned for our future coverage on AMD Ryzen, coming later this week!