Rejoice, all you gamers in the PC Master Race, because some very good things are coming to the monitor market. AMD’s submission to the VESA standards body has been successful, and the organisation will add a new feature to the DisplayPort 1.2a standard. Monitor manufacturers will be able to roll it out in future units that ship with modern scalers and a DisplayPort connection.
Just what is that feature? Why, it’s variable refresh rates, of course! Nvidia’s proprietary solution, G-Sync, requires a custom scaler chip and the use of specific monitors and GeForce graphics cards. AMD’s version, now called Adaptive-Sync, will be available on most monitors with a DisplayPort connection shipping in late 2014, provided the manufacturers enable the feature and don’t use older scaler hardware.
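To see why this matters, here’s a toy Python simulation of the difference between a fixed-refresh display with vsync and a variable-refresh display. It’s purely illustrative, not any real driver API, and the 30–144Hz window is a made-up example range:

```python
# Illustrative sketch: when do frames actually appear on screen under
# fixed 60 Hz vsync versus a variable-refresh panel that can fire a
# refresh anywhere inside a hypothetical 30-144 Hz window?

def vsync_present(render_times, refresh_hz=60.0):
    """Each finished frame waits for the next fixed refresh tick."""
    tick = 1.0 / refresh_hz
    shown, t = [], 0.0
    for rt in render_times:
        t += rt                        # frame finishes rendering at time t
        ticks_needed = -(-t // tick)   # ceil: wait for the next vblank
        shown.append(ticks_needed * tick)
    return shown

def adaptive_present(render_times, min_hz=30.0, max_hz=144.0):
    """The panel refreshes as soon as a frame is ready, clamped to the
    refresh window the panel supports."""
    min_gap, max_gap = 1.0 / max_hz, 1.0 / min_hz
    shown, t, last = [], 0.0, 0.0
    for rt in render_times:
        t += rt
        gap = min(max(t - last, min_gap), max_gap)
        last += gap                    # refresh fires when the frame is ready
        shown.append(last)
    return shown

# A GPU delivering uneven ~45 fps frame times (seconds per frame):
frames = [0.020, 0.025, 0.021, 0.024, 0.022]
print(vsync_present(frames))     # snapped to 16.7 ms multiples -> judder
print(adaptive_present(frames))  # tracks actual frame delivery smoothly
```

With vsync, every frame gets quantised to the next 16.7ms tick, so uneven render times turn into visible judder; the adaptive panel simply shows each frame the moment it is done.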
Not only is the feature going to be reasonably easy to implement on newer scaler hardware, it’s also going to be a lot more flexible than Nvidia’s G-Sync alternative. VESA says it expects actual implementations to begin six to twelve months from today, with the earliest wave of monitors supporting Adaptive-Sync arriving late in 2014, and many manufacturers possibly showing off their designs at CES 2015. Monitors supporting A-Sync with the right kind of panels, backlighting and hardware will be able to display any refresh rate within one of several supported ranges.
AMD says that not all of its graphics cards will be able to use Adaptive-Sync when it launches. Currently, the only desktop GPUs capable of working with it are the Radeon R9 290X and R9 290 and the R7 260X and R7 260, which means that only the newer GCN architectures like Bonaire and Hawaii have this capability built in. Although AMD has previously said that any of its discrete GPUs with a DisplayPort connection, from the Radeon HD 5000 series and up, can run a monitor with Adaptive-Sync, it appears the company is only concerned with getting its latest products working properly for now. Support for the rest will come through driver updates.
It will be pretty interesting to see how monitor manufacturers decide on their implementations. For example, a 9-60Hz range would be ideal for all-round use, with the monitor scaling down to whatever refresh rate the currently running application dictates. So if you’re on a laptop and just browsing the web, the refresh rate could drop to as low as 20Hz for static content, saving on power draw. For the other ranges, the lowest possible frequency is either under or slightly above the minimums that G-Sync imposes on itself. Nvidia’s solution is far more flexible in what it can do with the technology thanks to vertical hardware and software integration, but Adaptive-Sync will work with a wider range of hardware and has less specific requirements.
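The clamping behaviour described above can be sketched in a few lines. The function name and the 9-60Hz laptop panel are illustrative assumptions on my part, not a real driver interface:

```python
# Hypothetical helper: the refresh rate an Adaptive-Sync panel would
# actually run at for a given content frame rate, clamped to the
# variable-refresh window the panel advertises.

def effective_refresh(content_fps, panel_min_hz, panel_max_hz):
    """Clamp the requested rate into the panel's supported range."""
    return min(max(content_fps, panel_min_hz), panel_max_hz)

# A hypothetical 9-60 Hz laptop panel:
print(effective_refresh(20, 9, 60))   # within range -> used as-is (20)
print(effective_refresh(5, 9, 60))    # below the floor -> held at 9 Hz
print(effective_refresh(90, 9, 60))   # above the ceiling -> capped at 60 Hz
```

Anything the content asks for outside the panel’s window simply gets pinned to the nearest supported rate, which is why the floor of each range matters so much for low-frame-rate gaming.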
Looking to the future…
I’m pretty excited about this for a few reasons. One is that it makes Eyefinity a more viable solution for more people. Instead of overextending your budget to aim for something like a Radeon R9 280X and three monitors, it’ll be easier to play with something like the Radeon R9 270. Sure, you’ll still need to dial down a few settings to improve frame rates, but you won’t get mismatched frames and the game won’t appear to stutter in more demanding firefights.
Additionally, we’re hitting a bit of a wall in terms of graphics horsepower. Not only is the jump to 20-nanometer production processes going to take a while longer for AMD and Nvidia to get to grips with, the leap to UltraHD 4K monitors requires at the very least a Radeon R9 290 to run games at good detail settings with acceptable frame rates. Using Adaptive-Sync, you could compensate for the lack of hardware powerful enough to drive the game properly, grabbing the monitor you want now while you wait for graphics hardware to play catch-up.
Lastly, there’s already been some use of embedded DisplayPort (eDP) in laptops in the past, but never to the extent that we can push it today. If applications could begin to dictate to the hardware and the OS what refresh rate they desire, developers could find ways to make their applications use fewer GPU and CPU cycles and also drop power draw dramatically.
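A sketch of that idea, with everything here invented for illustration (no OS actually exposes this exact call today): an application requests a lower refresh rate whenever its content hasn’t changed, and the full rate only while animating.

```python
# Illustrative only: an application picking the refresh rate it wants
# based on whether its content changed this cycle. The rates are
# example values, not anything mandated by Adaptive-Sync.

def choose_refresh(content_dirty, active_hz=60, idle_hz=20):
    """Static content can idle at a low rate; animation gets full rate."""
    return active_hz if content_dirty else idle_hz

print(choose_refresh(True))   # scrolling or video playing -> 60
print(choose_refresh(False))  # static web page -> 20
```

Every refresh the panel skips is a scan-out the GPU never has to feed, which is where the power savings on laptop eDP panels would come from.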