EA is working on a new, experimental game engine

Codenamed “Halcyon”, the engine was introduced at the Khronos 2018 event in Munich by Graham Wihlidal, a senior rendering engineer at EA’s Search for Extraordinary Experiences Division (SEED) tech lab. EA has had plenty of time to think about a successor to Frostbite, and it’s approaching the point where Frostbite may need radically new paradigms to work well on future hardware.

Halcyon has nothing in common with Frostbite, and even bucks the trend for how developers inside EA’s studios might think about graphics. Its main purpose is to allow rapid experimentation with different ideas about rendering 3D graphics in modern games, and its clean-sheet design lets SEED work unencumbered by old ideas from existing engines. For now, it’s an engine for EA to play around with – to see what works and what doesn’t.

It currently supports Windows, macOS, and Linux. SEED revealed that it wasn’t designed with iOS or Metal 2 on iOS in mind; the team’s main concern was the Metal API on macOS for desktop applications. Some websites that picked up the news about Halcyon have claimed that it’s going to replace Frostbite, and that’s not the case as things stand today.

But it could. A lot of the work going into Halcyon can, and probably will, become the basis of the next iteration of Frostbite.

SEED had clear goals in mind when figuring out how their future rendering engine should work. First, they wanted to cut down on a lot of the unnecessary work that goes on in the game dev industry today – things like figuring out how 3D meshes interact with the virtual environment, right down to how collision detection is handled, should be as hands-off as humanly possible. This lets developers rapidly work on new ideas or projects, and cuts down on time spent waiting for separate teams, each handling an individual piece, to finish their part.

The clean-sheet design also means that EA SEED doesn’t need to worry about old APIs and old ways of rendering graphics. Halcyon only targets modern low-level graphics APIs as render back-ends, and has a few tricks that come close to making EA’s projects a write-once-run-anywhere sort of deal. Most of this wouldn’t be possible without the Vulkan API, so EA owes a lot to the Khronos Group for allowing this kind of thing to exist: a chimera of code that supports almost every platform you can think of in a neat, standardised fashion.

Multi-GPU setups are also a focus of the engine from the start, relying on explicit heterogeneous multi-GPU support to do their magic. Discrete graphics adapters are handled and assigned work generically, with little distinction between them in the API. This is how you could have a GeForce GTX card and a Radeon RX card in a multi-GPU setup without losing access to CUDA, accelerated PhysX, and the like. For future AAA titles, this means EA is looking to support explicit multi-adapter modes instead of relying on CrossFire or SLI, which also lets them distribute workloads across different display adapters.
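To make the idea concrete, here is a minimal Python sketch of explicit multi-adapter work distribution, with the application rather than the driver dividing work among mismatched cards. The device names and relative-performance weights are invented for illustration; a real engine would enumerate adapters through Vulkan or DirectX 12 and weight them by measured throughput:

```python
# Simplified sketch of explicit multi-adapter work distribution.
# Device names and performance weights are invented for illustration;
# a real engine would query the graphics API for adapter properties.

def split_workload(devices, total_items):
    """Assign work items to each device in proportion to its weight."""
    total_weight = sum(weight for _, weight in devices)
    shares = []
    assigned = 0
    for i, (name, weight) in enumerate(devices):
        if i == len(devices) - 1:
            count = total_items - assigned  # last device takes the remainder
        else:
            count = round(total_items * weight / total_weight)
        shares.append((name, count))
        assigned += count
    return shares

# A mixed-vendor setup, as described above:
devices = [("GeForce GTX", 3.0), ("Radeon RX", 2.0), ("Integrated", 1.0)]
print(split_workload(devices, 600))
# -> [('GeForce GTX', 300), ('Radeon RX', 200), ('Integrated', 100)]
```

Nothing here is specific to any vendor: the engine sees a flat list of devices and hands out work generically, which is exactly what makes mixing a GeForce and a Radeon possible.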

Interestingly, the presentation barely mentions why they decided to do away with linked GPUs, but there is a clue: they’re not developing games with Halcyon with alternate frame rendering (AFR) in mind, primarily because AFR introduces issues with some anti-aliasing modes and with things like variable refresh rates.

Halcyon has some interesting goals in terms of the design and look of the games being developed with it. Games built on Halcyon – or a future derivative – can have assets streamed in from a local or remote source. One of the things developers are toying with is texture and asset streaming, where certain expensive intermediate results are pre-computed and downloaded from a remote location on the internet, letting the GPU spend less time calculating something that is otherwise quite taxing. Real-world examples of this are difficult to come by, but Valve does something in this spirit with Steam’s shader pre-caching on Linux: compiled shader caches from other users running a game on a particular GPU are shared, giving those players a small performance boost.

Another goal with Halcyon was to allow for different rendering approaches to lighting. The final title can light and shade areas and objects in various ways, mixing regular rasterisation with ray tracing – or using hybrid rendering, which combines the two for particular objects or workloads – to produce the right image on screen. This means EA’s developers can target the performance profile they want, depending on the target platform and the hardware the game is expected to run on.

The presentation gets gradually more technical, but another interesting highlight is the concept of render handles. In Halcyon, a render handle represents a workload that is assigned to a display adapter or device. You can have one render handle loaded simultaneously on all devices, each with an instruction to render an object from a different point of view, or you can have all available devices work on the same thing to complete the workload more quickly.
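The core of the idea is an indirection: one abstract handle, resolved to a concrete device-local representation on whichever adapters it is loaded on. A rough Python sketch of that indirection follows – the class and method names are my own invention for illustration, not Halcyon’s actual API:

```python
# Hypothetical sketch of the render-handle indirection: one handle,
# many device-local representations. Names are invented, not Halcyon's.

class RenderHandle:
    def __init__(self, name):
        self.name = name
        self._per_device = {}  # device id -> device-local resource

    def load_on(self, device, resource):
        """Create (or reload) this handle's representation on a device."""
        self._per_device[device] = resource

    def evict(self, device):
        """Drop the representation, e.g. when the GPU is removed."""
        self._per_device.pop(device, None)

    def resolve(self, device):
        """Look up the device-local resource behind the handle."""
        return self._per_device.get(device)

# The same handle, loaded on every device at once:
mesh = RenderHandle("hero_mesh")
for dev in ("gpu0", "gpu1"):
    mesh.load_on(dev, f"{dev}:vertex_buffer(hero_mesh)")

print(mesh.resolve("gpu1"))  # -> gpu1:vertex_buffer(hero_mesh)
mesh.evict("gpu1")           # GPU unplugged mid-run
print(mesh.resolve("gpu1"))  # -> None
```

Because the rest of the engine only ever holds the handle, never the device resource, loading, reloading, or migrating the backing representation doesn’t disturb any code that references it.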

The representation of a render handle’s output on any device can be reloaded on the fly, and an existing handle can be moved safely to another device if you physically remove a GPU from the machine while the application is running. The aim is a dynamic approach to rendering where render devices can be added and removed on the fly depending on the workload.

This also means that different devices can run render handles backed by different render back-ends. You can have one GPU rendering the application using DirectX 12, another using Vulkan, a third using yet another renderer, and so on (in the above slides, Proxy is a custom renderer that SEED is working on). As mentioned before, Halcyon doesn’t use AFR because of rendering issues that become a big problem in combination with things like SLI, G-Sync, temporal anti-aliasing, and so on.

You can get around that by using split-frame rendering (SFR), which is how GPUs originally worked in tandem to render games. The display is divided up among the GPUs, and each one renders its portion of the display area. Visual issues like microstutter and judder, and the performance drops caused by metering the output of frames, go away with SFR.
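The partitioning itself is simple to picture. The toy Python function below splits a frame into equal vertical strips, one per GPU – a real SFR implementation would also balance the split by scene cost, but the principle is the same:

```python
# Toy split-frame rendering partition: divide a frame into vertical
# strips, one per GPU. Each GPU rasterises only its own strip, and the
# strips are composited into a single output frame.

def sfr_regions(width, height, gpu_count):
    """Return (x, y, w, h) scissor rectangles, one strip per GPU."""
    regions = []
    x = 0
    for i in range(gpu_count):
        # Integer-divide the width, giving leftover pixels to early strips.
        w = width // gpu_count + (1 if i < width % gpu_count else 0)
        regions.append((x, 0, w, height))
        x += w
    return regions

print(sfr_regions(1920, 1080, 3))
# -> [(0, 0, 640, 1080), (640, 0, 640, 1080), (1280, 0, 640, 1080)]
```

Since every GPU contributes to the same frame rather than alternating whole frames, there is no frame metering to get wrong, which is where AFR’s microstutter comes from.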

Because Halcyon supports SFR, you get all the bells and whistles, including advanced AA methods that don’t work with AFR on multi-GPU setups. SFR returned as an option for applications in DirectX 12 and Vulkan, and the only other modern implementation has been in Civilization: Beyond Earth when using AMD’s Mantle API. EA’s first-party studio DICE has also implemented SFR experimentally in the Frostbite engine when using Mantle, as seen in this set of slides where they talk about the engine’s capabilities. That capability was never exposed to the public, but the plumbing was still there.

This means that running future games on the Halcyon engine may require a different thought process. Instead of trying to render a frame as quickly as possible to hand things over to another GPU, or to make time for the next frame, the focus shifts to efficient rendering of objects, ray tracing for lighting and sound, and other post-process elements. Rendering the game with SFR across multiple GPUs means the output can be synchronised for consistent frame delivery, while retaining temporal effects and the smoothness you’d expect from a single-GPU system.

There’s so much more in the 93-slide presentation that covering it all would take more words than you’d probably care to read. It’s a lot of information to process, and those of our readers working in game development will no doubt find it interesting. But there are a couple more things to talk about before we close out the discussion.

Firstly, Halcyon introduces the idea of render graphs as workloads. A render graph, much like a render handle, can be run across multiple GPUs, and many render graphs can run on separate GPUs. The kicker is that this work can be doled out on the fly, and it doesn’t matter if a render device suddenly becomes unavailable – the work simply moves to a different GPU. This bodes well for remote rendering of complex objects: SEED’s slides mention that a render graph can be sent to a server cluster, which performs the computation and returns an output you didn’t have to calculate locally.
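That failover behaviour – passes migrating when their device disappears – can be sketched as a small scheduling loop. This is a deliberately simplified model (pass and device names are invented, and a real render graph also tracks resource dependencies between passes):

```python
# Simplified sketch of render-graph failover: each pass is assigned a
# device, and if that device vanishes the pass is moved to a device
# that is still available. Pass and device names are illustrative.

def schedule(passes, assignment, available):
    """Return a pass->device map, reassigning passes whose device is gone."""
    result = {}
    for p in passes:
        dev = assignment.get(p)
        if dev not in available:
            dev = next(iter(available))  # fall back to a surviving device
        result[p] = dev
    return result

passes = ["shadow", "gbuffer", "lighting", "post"]
assignment = {"shadow": "gpu1", "gbuffer": "gpu0",
              "lighting": "gpu1", "post": "gpu0"}

# gpu1 is removed mid-run; its passes migrate to gpu0.
print(schedule(passes, assignment, available={"gpu0"}))
# -> {'shadow': 'gpu0', 'gbuffer': 'gpu0', 'lighting': 'gpu0', 'post': 'gpu0'}
```

The same mechanism generalises to remote devices: a “device” in the available set could just as well be a server cluster that returns the computed output over the network.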

At the same time, this changes how memory is managed on GPUs and in system memory, and SEED felt it necessary not to target current-gen consoles or PCs with this technology. The performance hit is around 5%, SEED claims, with drivers as they currently stand. There’s also talk of automatically putting compute workloads into a queue. On devices like the PlayStation 4, the GNM and GNMX APIs both expect developers to manage compute scheduling manually, which means there’s only a small window of time in which work can be done on compute shaders (programs that run general-purpose computation on the GPU). Halcyon does this automatically, removing the need for developers to spend time profiling compute workloads and deciding when and how they should be executed for performance reasons.
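The automatic part can be pictured as a simple classification step: the engine inspects each piece of work and routes compute-only passes to the async compute queue, with everything else going to the graphics queue. A toy Python version, with invented workload tags:

```python
# Toy version of automatic queue assignment: compute-only workloads go
# to the async compute queue, everything else to the graphics queue.
# Workload names and kind tags are invented for illustration.

def assign_queues(workloads):
    """Sort (name, kind) pairs into graphics and compute queues."""
    queues = {"graphics": [], "compute": []}
    for name, kind in workloads:
        queue = "compute" if kind == "compute" else "graphics"
        queues[queue].append(name)
    return queues

frame = [("gbuffer", "raster"), ("ssao", "compute"),
         ("lighting", "raster"), ("bloom", "compute")]
print(assign_queues(frame))
# -> {'graphics': ['gbuffer', 'lighting'], 'compute': ['ssao', 'bloom']}
```

A real scheduler would also weigh overlap and contention, but even this trivial routing is work that GNM/GNMX leave entirely to the developer.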

And finally, Halcyon seems to be the first game engine that follows the ideal Vulkan approach to managing shaders. The Khronos Group knew long ago that DirectX’s market dominance discourages ports, because developers had to take the shaders written for a DirectX title and create entirely new ones for each other platform or API, like OpenGL. To address this, Khronos developed SPIR-V, an open intermediate representation for shaders: shader code is compiled once into SPIR-V, and can then be translated into the shader language each API expects. There is also tooling built around SPIR-V – shaders written in HLSL, the language Microsoft uses, can be compiled into SPIR-V, and SPIR-V shaders can in turn be translated into shaders for other platforms.

You can check out the full Halcyon presentation here if you’re interested in reading all the other technical bits. EA likely plans for this to feature in games coming in 2019 and later, and it will be interesting to see how those titles turn out from a technical standpoint.
