[Image: NVIDIA ANSEL Pascal intro]

NVIDIA’s push into the consumer market with Pascal might give us monstrously powerful cards, but it also heralds new ways of doing things in games that were previously too expensive to do in real-time, or that lacked working software implementations. Along with the GTX 1080 and GTX 1070, NVIDIA last week announced some pretty cool technologies for games, and while one isn’t a new idea at all, the others are pretty damn nifty. First up is NVIDIA ANSEL, the company’s answer to improving the capabilities of in-game photography.

ANSEL for game photography

Named after the famous photographer Ansel Adams, who spent his life documenting humanity’s changing influence on the environment, ANSEL is sort of a black-box feature that game developers can add support for. It’s exposed in the GPU drivers, and allows unfettered access to the game’s world so you can find your most picturesque moment. Adding support to a game is quite easy and requires very little effort from developers; it’s probably about as easy as supporting the freemode camera options on the PlayStation 4 and Xbox One.

ANSEL’s free camera is similar to those freemode cameras, and comes with a bunch of capabilities like tilt and zoom effects, and you can adjust the focus point and depth-of-field effects quite easily. Adding to the camera’s capabilities are preset filters for you to choose from. Luke from LinusTechTips described it as “Instagram for video games”, which is fairly accurate, I guess. A lot of this is done in the in-game menu, but you might also be able to add these filters after the fact, and share the results through GeForce Experience. I think that would be cool.
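
To give you an idea of what a preset filter actually does under the hood, here’s a minimal sketch in Python of a classic sepia grade applied to a screenshot. This is not ANSEL’s actual implementation, just the general idea; the file names are placeholders, and it assumes numpy and Pillow are installed.

```python
import numpy as np
from PIL import Image

# Load a screenshot and apply the classic sepia colour matrix.
# Each output channel is a weighted mix of the input R, G and B.
img = np.asarray(Image.open("screenshot.png").convert("RGB"), dtype=np.float32)
sepia = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])
out = np.clip(img @ sepia.T, 0, 255).astype(np.uint8)
Image.fromarray(out).save("screenshot_sepia.png")
```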

[Image: NVIDIA ANSEL in-game menu]

EXR is a high-dynamic-range picture format developed by Industrial Light & Magic (as OpenEXR) that stores the rasterised image generated by the GPU with far more precision than an ordinary screenshot, so you can either edit it in a manner similar to a RAW file, or use it as a stage in the process of creating an animation. The in-game ANSEL tools allow you to export a screenshot to the EXR format by selecting the EXR option under “Capture type”, and then use Adobe Photoshop or GIMP (version 2.9.2 will support this feature) to edit the image the same way a professional would edit a RAW photo.
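
If you want to see what “editing like a RAW file” means in practice, here’s a rough sketch using the OpenEXR Python bindings: it loads a float-precision capture, pushes the exposure up 1.5 stops, and tone-maps the result down to an ordinary PNG. The file name and channel layout (plain R, G, B channels) are assumptions for illustration, not something NVIDIA has specified.

```python
import OpenEXR
import Imath
import numpy as np
from PIL import Image

# Open an EXR capture (file name is a placeholder).
exr = OpenEXR.InputFile("ansel_capture.exr")
dw = exr.header()["dataWindow"]
w, h = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1

# Read the float RGB channels into one HDR array.
pt = Imath.PixelType(Imath.PixelType.FLOAT)
rgb = np.stack([np.frombuffer(exr.channel(c, pt), dtype=np.float32).reshape(h, w)
                for c in "RGB"], axis=-1)

rgb = rgb * 2.0 ** 1.5                      # push exposure up 1.5 stops
srgb = np.clip(rgb, 0.0, 1.0) ** (1 / 2.2)  # crude gamma tone-map
Image.fromarray((srgb * 255).astype(np.uint8)).save("ansel_capture.png")
```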

Super Rez is a bit of a clunky name, but this is a beefier version of the hacks that modders and game photographers currently use to take high-resolution images in-game. Super Rez makes this easier – inside the ANSEL tools is a slider to change the resolution of the captured image. Scale it all the way to 32x and ANSEL will generate an image that is 32 times your monitor’s resolution in each dimension. For the NVIDIA demo, the projector was set to 1080p, so the default 4x mode would result in a 33.2-megapixel image. With the 32x scaling, that would render out a super, super-high resolution screenshot at 2.1 GIGAPIXELS! Yes, you read that right. Over two billion pixels. That’s a higher resolution than some of the world’s most advanced still cameras.
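
The arithmetic checks out, by the way. The scaling factor applies per axis, so the pixel count grows with the square of the slider value:

```python
# Sanity-checking NVIDIA's numbers: the Super Rez factor applies per
# axis, so pixel count grows with the square of the slider value.
base_w, base_h = 1920, 1080  # the demo projector's 1080p resolution

for factor in (4, 32):
    w, h = base_w * factor, base_h * factor
    print(f"{factor}x: {w} x {h} = {w * h / 1e6:,.1f} megapixels")
# 4x:  7680 x 4320   = 33.2 megapixels
# 32x: 61440 x 34560 = 2,123.4 megapixels, i.e. ~2.1 gigapixels
```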

[Image: NVIDIA ANSEL game support]

If that’s not jaw-dropping enough, there’s also 360° stereo image capture, so you can look at a game screenshot on a head-mounted VR display or through a Google Cardboard-style viewer. That’ll be great for anyone wanting to preview how a game looks and behaves in VR before playing it with a headset.
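
For the curious, the basic maths behind viewing such a panorama is simple: an equirectangular image maps each pixel to a direction on a sphere, and a stereo capture just stores one panorama per eye. The resolution and axis conventions below are assumptions for illustration.

```python
import numpy as np

# Map a pixel in an equirectangular 360 capture to a view direction.
W, H = 8192, 4096  # assumed panorama resolution

def pixel_to_direction(x, y):
    lon = (x / W - 0.5) * 2.0 * np.pi   # -pi .. pi, left to right
    lat = (0.5 - y / H) * np.pi         # pi/2 .. -pi/2, top to bottom
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     -np.cos(lat) * np.cos(lon)])

print(pixel_to_direction(W // 2, H // 2))  # centre pixel: straight ahead
```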

ANSEL support is coming soon to several games, some still to launch and some already out. Adding the code to support it is easy, and given NVIDIA’s history of making their special features a black-box sort of affair, it’s quite likely that AMD will have to come up with their own version of this, if they haven’t done so already.

VRWorks Audio for ray-casted audio

[Image: NVIDIA VRWorks Audio]

Here’s the part where NVIDIA’s demo was a bit odd to watch. I felt I had seen this before. Perhaps in an A3D press release? Nah, it couldn’t have been that. It was AMD’s TrueAudio demonstration! When AMD announced TrueAudio, some of the discussion during the demos centred on how current surround-sound technologies fall short. Convolution reverb, which applies a recorded template of how sound bounces around a particular environment, only works convincingly in the select few instances and games where the environment you’re hearing the sound in matches the one the template was recorded in.
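
For context, convolution reverb really is that simple, which is both its appeal and its limitation. A minimal sketch with SciPy, assuming mono WAV files and placeholder file names:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names: a "dry" sound effect, and an impulse response
# recorded in the space whose acoustics you want to borrow.
rate, dry = wavfile.read("footsteps_dry.wav")
_, ir = wavfile.read("cathedral_ir.wav")

# Convolving the dry signal with the impulse response stamps the room's
# reflections onto it. That is all convolution reverb does.
wet = fftconvolve(dry.astype(np.float32), ir.astype(np.float32))
wet /= np.abs(wet).max()  # normalise to avoid clipping
wavfile.write("footsteps_cathedral.wav", rate, wet.astype(np.float32))
```

The result always sounds like the cathedral the impulse response was recorded in, no matter what your in-game room actually looks like, which is exactly the problem both AMD and NVIDIA are trying to solve.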

AMD’s TrueAudio DSP is supposedly capable of ray-casting audio, but that is a really, really expensive way to simulate sound if you don’t have dedicated hardware, and the games industry has been making do with hacks or EAX since the death of A3D. Instead of simulating propagation, games layer in multiple channels of audio sources within the virtual environment, each with an area of effect that grows more audible the closer you get to its source.

[Image: NVIDIA VRWorks Audio waves]

TrueAudio might have come close with its methods of creating and applying positional sound to environments in a game, but VRWorks Audio takes this a step further. NVIDIA CEO Jen-Hsun Huang likes to use the term “physically-based” for a lot of things done with NVIDIA’s PhysX technology, and VRWorks Audio is a physically-based ray-casting simulation for audio. Similar to how you might use radio waves to map the coverage of a Wi-Fi network in a house, ray-cast audio starts from a point in the simulation and travels through the environment, bouncing around the room in much the same way that light does (in this model, sound is treated as rays travelling in straight lines).

NVIDIA’s algorithm models the sound as a spherical compression wave expanding from its source, and then lets it bounce around the environment in real-time. Because the ray model treats sound much like light, NVIDIA reuses the simulations from its Iray lighting technology to work out how sound should bounce or reverberate off particular objects. Iray itself mimics how light bounces off certain objects, and it can even tint an object’s colour based on what kind of light you shine on it.
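
To make the idea concrete, here’s a toy two-dimensional version of audio ray casting, under heavy assumptions: a bare rectangular room, perfectly reflective walls, and a single ray. Each bounce yields an arrival delay and a rough distance-based amplitude, which is the raw material for building an environment-specific reverb. None of this is NVIDIA’s actual code, just the principle.

```python
import numpy as np

ROOM_W, ROOM_H = 10.0, 6.0  # a bare rectangular room, in metres
SPEED_OF_SOUND = 343.0      # m/s

def trace_ray(origin, direction, bounces=5):
    """Bounce one audio ray off the walls, logging each reflection."""
    pos = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    travelled = 0.0
    events = []
    for _ in range(bounces):
        # Distance along d to the wall we'd hit on each axis.
        tx = ((ROOM_W if d[0] > 0 else 0.0) - pos[0]) / d[0] if d[0] else np.inf
        ty = ((ROOM_H if d[1] > 0 else 0.0) - pos[1]) / d[1] if d[1] else np.inf
        t = min(tx, ty)
        pos = pos + t * d
        travelled += t
        # Mirror the direction on whichever wall was hit first.
        if tx < ty:
            d[0] = -d[0]
        else:
            d[1] = -d[1]
        # Arrival delay and a crude 1/distance amplitude falloff.
        events.append((travelled / SPEED_OF_SOUND, 1.0 / travelled))
    return events

for delay, amp in trace_ray(origin=(2.0, 3.0), direction=(1.0, 0.4)):
    print(f"echo after {delay * 1000:6.2f} ms, amplitude ~{amp:.3f}")
```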

Is this a purely Pascal feature? I think it is. NVIDIA prefers to solve these problems with CUDA compute instead of building in a separate DSP chip like AMD did, and VRWorks Audio basically builds on the functionality of Iray, which is also done in real-time. Will it work on older cards? Sure, I think it would. But Pascal will do it much better, and probably with less “lag” in the simulation.

Huang was also careful to throw the “VR” buzzword around a lot when talking about the audio improvements. Sure, this would work really well for regular gamers with headphones, but the technology is specifically designed for VR use. In fact, most of the technology developed for the VRWorks suite is built entirely for VR purposes; NVIDIA is thinking that far ahead.

Simultaneous multi-projection finally fixes multi-monitor setups

[Image: NVIDIA simultaneous multi-projection 2D diagram]

This is the big-ticket item. If you asked me for one single reason to consider buying a GeForce GTX 1080 or similar, this would be it. If you’re getting an Oculus Rift or HTC Vive, Pascal is probably the best bet for you (barring AMD introducing similar features). In traditional monitor setups, a single viewport works for pretty much every game on the market. You can increase the resolution and widen the field of view to give yourself a better overview of the game, but that’s about all you can do.

The problem with multi-monitor setups, from the perspective of games designed for a single projection, is that the scene is rendered as a single image on a flat plane, and driver software then tunes the angle of the projection to make the monitors in your peripheral vision look mostly accurate. The issue is that the distortion created by that angle tuning warps the display, and the output on the side monitors is stretched and ugly. There’s a proposed solution for this, of course, thanks to Pascal.
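
You can actually put numbers on that stretching. With a single flat projection, a point’s screen position grows with the tangent of its angle from the view axis, so equal slices of the scene cover wildly unequal amounts of screen at the edges of a very wide field of view. A quick illustration:

```python
import numpy as np

# With one flat projection, screen position grows with tan(angle), so a
# one-degree slice of the scene near the edge of a very wide field of
# view covers far more screen space than the same slice at the centre.
step = np.radians(1.0)
for angle in (0, 20, 40, 60, 80):
    a = np.radians(angle)
    stretch = (np.tan(a + step) - np.tan(a)) / np.tan(step)
    print(f"{angle:2d} degrees off-centre: 1 degree of scene covers "
          f"{stretch:5.1f}x the screen space it does at the centre")
```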

[Image: NVIDIA simultaneous multi-projection demo]

With Pascal, NVIDIA can split the projection used by the game into sixteen individual viewports, all rendered in a single pass. Theoretically, this means a sixteen-monitor GeForce Surround array is possible, though that’s probably only achievable with four GPUs at this point. On older architectures like Maxwell, a single pass for this kind of trick isn’t possible; Pascal has had hardware changes made to accommodate it. Because it’s transparent to the game and to the monitors themselves, it’ll work on just about any setup, including mismatched multi-monitor configurations and PLP (portrait-landscape-portrait) arrangements.

So, depending on your monitor setup, the game will render however many viewports you need (up to 16) and wrap them around you as required. The GPU does this whether or not you actually use multi-projection, which is a good thing: in VR, if your game stalls or stutters for whatever reason and the framerate drops dramatically, you can still look around smoothly and quickly because the other viewports are still rendered, giving the GPU time to catch up. The Rift needs specific support in games to do this natively, but NVIDIA’s workaround is at the driver level, and incurs a very low performance hit.
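
Here’s a rough sketch of the geometry the driver is automating, with made-up angles for a three-monitor surround layout: each viewport gets its own camera rotation instead of a slice of one ultra-wide flat projection. A real implementation would derive the angles from the physical monitor arrangement; SMP’s trick is rendering all of these viewports in one pass.

```python
import numpy as np

def yaw_matrix(degrees):
    """Rotation about the vertical axis, for angling a viewport's camera."""
    a = np.radians(degrees)
    return np.array([[np.cos(a),  0.0, np.sin(a)],
                     [0.0,        1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Assumed layout: three panels, each covering ~50 degrees of the scene.
PANEL_FOV = 50.0
viewports = {"left": yaw_matrix(-PANEL_FOV),
             "centre": yaw_matrix(0.0),
             "right": yaw_matrix(PANEL_FOV)}

for name, rotation in viewports.items():
    forward = rotation @ np.array([0.0, 0.0, -1.0])  # rotated view direction
    print(f"{name:>6} viewport looks along {np.round(forward, 3)}")
```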

The result is quite stunning. With the multi-projection fixes in place, the geometry in the game is no longer stretched, and objects appear in their expected places with the right dimensions. If NVIDIA adds it to the drivers, it’s possible to create a 360° wraparound monitor setup with the viewports in their correct locations, enabling you to glance behind you in a game without taking your crosshairs off the enemy in front of you. Multi-projection doesn’t require any special monitor technology or connector, and will work with G-Sync displays and with 3D enabled as well.

Interestingly, Huang mentioned that it’s currently possible to do this with older architectures, but you have to dedicate one GPU per display and hack the game to get it to stitch the displays together while maintaining the correct dimensions. With Pascal, however, this is possible with a single card, and it happens in a single render pass, so there’s no real performance difference compared to driving a curved 3440 x 1440 display. On a side note, curved displays also benefit from this technology, but the drivers use two stitched viewports instead of a single one to simplify things. You may also have to edit the settings to match the curve of your display, but that should be simple.

Simultaneous multi-projection for VR

[Image: NVIDIA multi-projection VR rendering, fixed]

It gets better for VR. With an HMD from either Oculus or HTC, the drivers switch into a 2×2 mode for each eye. Four viewports are set at angles matched to your eye’s focal point, and the resulting image projected through the lens is flat and free of distortion. If you’re a regular reader of NAG Online, you might remember this piece that I wrote about the PlayStation VR improvements rumoured for the PS4K console. In that column, this is what I wrote:

Because the lenses in a VR headset aren’t straight, some distortion is created by them when you’re viewing the game’s output, called a pincushion distortion. To counter this, the game engine creates a barrel distortion when it’s rendering the frame, which is later submitted to the VR headset. The opposite distortions result in a normal-looking image, and objects don’t get stretched out of proportion. As a side-note, the inclusion of the barrel distortion is why gradient masking works so well, because the outer edges of the frame are distorted more, and are left to be rendered later.

Now, as a result of rendering angled viewports directly, instead of rendering a flat image and then warping it into a barrel-distorted frame that the lens projection more or less corrects, NVIDIA’s solution does away with the extra pass the GPU performs to create the distortion effect, and the resulting performance dip is lower than before. This is a pretty big leap in efficiency, and it’s likely that some of these tricks will eventually trickle down to Sony and Microsoft for their respective VR technologies.
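
For reference, the counter-distortion described in the quote above is typically modelled as a polynomial radial warp. The sketch below uses the common r' = r(1 + k1·r² + k2·r⁴) form with made-up coefficients; real HMD SDKs ship calibrated values for each lens, and the sign convention flips depending on whether you warp geometry or texture coordinates.

```python
# The common polynomial radial model: r' = r * (1 + k1*r^2 + k2*r^4).
# These coefficients are made up for illustration; real HMD SDKs supply
# calibrated values for each lens.
K1, K2 = 0.22, 0.24

def barrel(u, v):
    """Warp a point in normalised lens coordinates (centre at 0, 0)."""
    r2 = u * u + v * v
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return u * scale, v * scale

for r in (0.0, 0.25, 0.5, 0.75, 1.0):
    u, _ = barrel(r, 0.0)
    print(f"r = {r:.2f} -> r' = {u:.3f}")  # points further out move more
```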

[Image: NVIDIA single-pass stereo multi-projection]

Turning off the enhancements results in a 72 fps average in NVIDIA’s in-house Unreal Engine tests, and turning them on puts the framerate straight into playable territory at 96 fps, a 33% improvement. What currently takes two GPUs and a stack of hacks in most systems requires only one GPU with Pascal. And that applies to the GTX 1070 as well, a $379 monster of a graphics card that NVIDIA claims is easily twice as fast as a GeForce GTX 970 SLI setup in these VR workloads. If AMD has no comeback to this feature, NVIDIA’s complete domination of the graphics card market will soon extend into the VR space as well.

GeForce GTX 1080 can only do two-way SLI

[Image: NVIDIA GeForce GTX 1080 SLI]

Beyond all the advancements in VR, projection, and how games are rendered on a Pascal GPU, the bigger change in terms of performance is that the GTX 1080 only does two-way SLI by design. You can’t attach three cards together using a triple-card SLI bridge, and while you can use older bridges if you don’t want to buy the new ones NVIDIA and its partners will sell, triple and even quad SLI are a no-go. NVIDIA hasn’t spoken about the reasons behind this decision yet, and the only confirmation we have comes from insiders at NVIDIA’s board and retail partners, but as near as I can tell, SLI beyond two cards is dead.

The likely reasons for this decision benefit both gamers and developers. Developers spend huge amounts of time implementing SLI or CrossFire in a game, and supporting a game engine that must split its workload into three or four lots, one for each GPU, is not a simple task. The number of games that actually benefit from three- and four-way SLI setups can be counted on two hands, and the returns in the rest are not so fantastic. Even tactics like using split-frame rendering instead of alternate-frame rendering don’t work, because there’s simply not enough work to go around.
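
A toy scheduler showing the difference between the two strategies, with illustrative numbers: AFR hands whole frames to GPUs in round-robin fashion, while SFR slices each frame into bands. Neither divides cleanly across three or four cards when the frames themselves are cheap.

```python
def afr(frames, n_gpus):
    """Alternate-frame rendering: whole frames round-robin across GPUs."""
    return {frame: frame % n_gpus for frame in range(frames)}

def sfr(frame_height, n_gpus):
    """Split-frame rendering: one frame sliced into horizontal bands."""
    band = frame_height // n_gpus
    return {gpu: (gpu * band, (gpu + 1) * band) for gpu in range(n_gpus)}

print(afr(frames=6, n_gpus=2))          # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
print(sfr(frame_height=1080, n_gpus=2)) # {0: (0, 540), 1: (540, 1080)}
```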

By the time you’re thinking about running a game at 8K, you have to step back and evaluate what you’re doing, because that’s a workload too large even for high-end GPUs.

Moving forward, games made for DirectX 12, Vulkan, or OpenGL should all support SLI up to two cards, and SLI will be available from day one, as opposed to the current situation where a driver has to be released and a multi-GPU profile created for each game. As we get closer to launch, my expectation is that someone will crack open those new SLI bridges and find a PLX chip in there, made to facilitate much faster data transfers through NVLink. Of course, you can still stick four cards into a system, but only two will be used for graphics rendering. Another one could probably be dedicated to PhysX acceleration, and the fourth could be for… I dunno, Folding@home purposes?

AMD’s Raja Koduri has talked in the past about how CrossFire might look in the future, and he mentioned that being limited to only two cards would be much easier to accommodate, and better suited to VR games. NVIDIA adopting this policy on the GTX 1080 just cements for me how small the multi-GPU market is, and how much effort and energy these companies pour into it for very little performance gain. Both companies would rather abandon these initiatives than waste all that creative energy on getting extreme systems to work.

From now on things should be easier, and it starts with Pascal. And that, dear readers, is what makes it special for gamers.