
For the longest time, in an industry dominated by Direct3D, the graphics component of Microsoft's DirectX, we've had a peculiar arrangement with multi-GPU graphics systems that requires the memory pool of one GPU to be cloned across the others so that they can all work on the same data at the same time. With DirectX and the traditional alternate-frame rendering (AFR) used by CrossFire and SLI, it's been a cheap way to get results without mucking about with the code too much. With AMD's Mantle and Microsoft's DirectX 12, however, that may be changing.

AMD's Robert Hallock posted on Twitter a screencap of a blog post he had written up detailing how AMD's Mantle API handles memory access and multiple memory pools. The post was a response to the many reviewers and bloggers decrying dual-GPU systems that don't pool their available VRAM, arguing that a dual-GPU card advertised with twice the actually usable amount of memory was tantamount to false advertising.

Robert Hallock on Twitter talking about Mantle.

The text itself is pretty easy to follow, but it's the last two paragraphs that really sum things up. One of Mantle's abilities is allowing for asymmetric loads across two GPUs working together, which is currently possible in Hybrid CrossFire with an APU paired with a compatible GPU from the Radeon R5 or R7 series. Since each GPU has its own VRAM pool (dedicated memory on the discrete card, a carve-out of system RAM for the APU), there's no resource sharing between them at the moment. Through Mantle, however, a developer gets fine-grained control over what goes where and can have the two pools hold different data for different workloads, which hasn't been possible before.
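
To make that idea a little more concrete, here's a minimal sketch of what explicit placement looks like from the application's side. The types and functions below (GpuPool, place, and so on) are hypothetical stand-ins, not Mantle's actual entry points; the point is simply that the developer, rather than the driver, decides which physical pool each resource lives in, so nothing has to be mirrored.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-ins for an explicit API's memory objects.
// In a real API these would be device handles and heap descriptors.
struct GpuPool {
    std::string name;       // e.g. "discrete VRAM" or "APU carve-out of system RAM"
    uint64_t    capacity;   // bytes available in this physical pool
    uint64_t    used = 0;   // bytes the application has placed here
};

struct Allocation {
    std::string resource;
    GpuPool*    pool;
};

// The application chooses the pool; nothing is cloned automatically.
Allocation place(GpuPool& pool, const std::string& resource, uint64_t bytes) {
    pool.used += bytes;
    std::cout << resource << " -> " << pool.name << " ("
              << bytes / (1024 * 1024) << " MiB)\n";
    return {resource, &pool};
}

int main() {
    constexpr uint64_t MiB = 1024ull * 1024ull;

    GpuPool discrete{"discrete GPU VRAM", 2048 * MiB};
    GpuPool apu{"APU carve-out of system RAM", 1024 * MiB};

    // Asymmetric split: heavyweight render targets live on the discrete card,
    // while the APU's pool holds a different workload entirely.
    std::vector<Allocation> scene;
    scene.push_back(place(discrete, "G-buffer render targets", 512 * MiB));
    scene.push_back(place(discrete, "high-resolution textures", 1024 * MiB));
    scene.push_back(place(apu, "shadow map atlas", 256 * MiB));
    scene.push_back(place(apu, "post-processing intermediates", 128 * MiB));

    // Under AFR-style mirroring, every pool would hold the same ~1.9 GiB of data;
    // here the two pools hold different data, so nothing is duplicated.
    std::cout << "Discrete pool used: " << discrete.used / MiB << " MiB\n";
    std::cout << "APU pool used:      " << apu.used / MiB << " MiB\n";
}
```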

Where AMD and Nvidia currently put their foot in their mouths is with their dual-GPU offerings: they label a Radeon R9 295X2 as having 8GB of VRAM, or a GeForce GTX 690 as having 4GB, but in reality the pool is split between the two GPUs on the PCB and mirrored. It gets worse when you have two GTX 690s in a system but still, effectively, only 2GB of usable VRAM – playing at 4K is not a pretty experience.
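The arithmetic behind that complaint is easy to sketch. The figures below are illustrative, assuming the 2GB-per-GPU, quad-SLI case above and that AFR mirrors the full working set across every GPU.

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Illustrative figures: a dual-GPU board with 2 GiB per GPU,
    // and two such boards installed (four GPUs in total).
    const uint64_t perGpuVramGiB = 2;
    const int      gpuCount      = 4;

    // Marketing adds the pools together...
    const uint64_t advertisedGiB = perGpuVramGiB * gpuCount;

    // ...but under AFR every GPU mirrors the same working set, so the usable
    // frame buffer is only as large as a single GPU's pool.
    const uint64_t effectiveGiB = perGpuVramGiB;

    std::cout << "Advertised: " << advertisedGiB << " GiB\n"; // 8 GiB on the box
    std::cout << "Effective:  " << effectiveGiB  << " GiB\n"; // 2 GiB in practice
}
```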

However, if it becomes possible in the future for developers to manage both VRAM pools independently and easily, then having a 2GB or 3GB frame buffer per GPU might not be such a bad thing after all. Will DirectX 12 do it the same way? Probably, but there's no telling how things will turn out until a few games support both DirectX 12 and Mantle so the two can be tested side by side.
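For reference, DirectX 12's explicit multi-adapter model does head in this direction: each GPU in a CrossFire/SLI-style link is exposed as a "node", and the application picks which node a resource is physically created on through node masks. The sketch below shows roughly what such an allocation looks like; it's an illustration of the mechanism rather than anything confirmed about Mantle or a shipping game, the buffer sizes are arbitrary, and error handling is trimmed.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

// Creates a buffer that physically lives in the VRAM of one specific GPU (node)
// of a linked adapter, instead of being mirrored across both.
static ComPtr<ID3D12Resource> CreateBufferOnNode(ID3D12Device* device,
                                                 UINT nodeIndex,
                                                 UINT64 sizeInBytes) {
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type                 = D3D12_HEAP_TYPE_DEFAULT;          // GPU-local memory
    heapProps.CPUPageProperty      = D3D12_CPU_PAGE_PROPERTY_UNKNOWN;
    heapProps.MemoryPoolPreference = D3D12_MEMORY_POOL_UNKNOWN;
    heapProps.CreationNodeMask     = 1u << nodeIndex;  // which GPU owns the memory
    heapProps.VisibleNodeMask      = 1u << nodeIndex;  // which GPUs may access it

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = sizeInBytes;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COMMON, nullptr,
                                    IID_PPV_ARGS(&buffer));
    return buffer;
}

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    const UINT nodes = device->GetNodeCount();  // >1 only on a linked multi-GPU setup
    std::printf("Linked GPU nodes: %u\n", nodes);

    // Put different data on each GPU: nothing here is cloned automatically.
    ComPtr<ID3D12Resource> bufferOnGpu0 =
        CreateBufferOnNode(device.Get(), 0, 64ull * 1024 * 1024);
    ComPtr<ID3D12Resource> bufferOnGpu1;
    if (nodes > 1) {
        bufferOnGpu1 = CreateBufferOnNode(device.Get(), 1, 64ull * 1024 * 1024);
    }

    std::printf("Buffers created on %s.\n",
                bufferOnGpu1.Get() ? "two separate GPU memory pools" : "a single GPU");
    return 0;
}
```

Whether developers actually take on that bookkeeping is another matter, which is why waiting for real games on both APIs is the only honest test.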

In any case, it's more of a stop-gap measure that can somewhat fix the performance issues we're facing today while more advanced memory standards are worked into next-generation GPUs. It'll save AMD and Nvidia money because they can delay the adoption of those ultra-expensive technologies a little longer, but it isn't going to solve our memory issues for good until higher-density chips come onto the market.

Source: Twitter