Recent PS4K rumors make no sense to me at all, technically


It’s been well-established through early leaks that Sony is considering making a new PS4 console – the much-talked-about PS4.5, or PS4K – to fit into their new media lineup, which includes high-definition 4K content. It makes sense that this is in the works because of the imminent arrival of HDCP 2.2-protected content, currently being trialled by Netflix and Amazon Video. HDCP 2.2 basically necessitates an upgrade of your entire home theatre system if your hardware isn’t compatible with it.

But the rumors just kept on building up the hype, with the latest ones promising a three-fold increase in raw power, and alleging that Sony is telling devs to develop their games for two systems. I’d like to share some thoughts about these rumors, and outline why I don’t think they’ll come to pass.

The new rumors, or leaks as some people like to call them now, came from Giant Bomb, which claims to have documents outlining Sony’s plans for the console, now codenamed NEO, as well as documents pertaining to the hardware specifications. On the table is a CPU clock speed bump from 1.6GHz to 2.1GHz, a 42GB/s increase in memory bandwidth from 176GB/s to 218GB/s, as well as a doubling of the shader cores in the GPU, which Giant Bomb claims is based on AMD’s Polaris architecture. It sounds like an amazing machine – and it would be even more of a unicorn at a rumored launch price of $399 – but it doesn’t fit Sony’s standard M.O. when it comes to iterating on their consoles and saving money.
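For perspective, here’s a back-of-the-envelope sketch of the theoretical shader throughput those numbers imply. The PS4’s 18 compute units at 800MHz are well documented; the NEO figures used here (36 CUs, and a 911MHz GPU clock attributed to the leaked documents) are unconfirmed rumor:

```python
# Theoretical GCN shader throughput: each compute unit (CU) has 64
# shader lanes, and a fused multiply-add counts as 2 FLOPs per cycle.
def gcn_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

ps4 = gcn_tflops(18, 0.8)    # the current console's documented specs
neo = gcn_tflops(36, 0.911)  # rumored: doubled CUs at a higher clock
print(f"PS4: {ps4:.2f} TFLOPS, NEO: {neo:.2f} TFLOPS ({neo / ps4:.1f}x)")
```

Taking the rumors at face value, that works out to roughly 2.3x the shader throughput – the “three-fold increase in raw power” presumably folds in the CPU and memory bandwidth bumps as well.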

Even the rumored restriction on developers not to make any content exclusive to the NEO doesn’t make sense. With two consoles out in the wild, developers would have to build and optimise their games for two platforms – and that’s before taking into account developers creating multi-platform titles. There are significant differences between AMD’s current version of the Graphics Core Next architecture inside the PS4 and the upcoming Polaris-based GPUs, enough to make the same game behave differently on each.

There will be more optimal ways of doing things on Polaris that perform worse on the older GCN architecture. Any clock speed increase, memory bandwidth increase, or other performance improvement will change how an engine behaves on the console. Developers would have to make sure their physics engines can handle the higher framerates, they’d have to include additional higher-resolution textures for output to a 4K display, and the game would consume more storage space on the Blu-ray disc as a result.
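To illustrate why framerate changes ripple into physics: engines typically solve this by stepping the simulation at a fixed rate, decoupled from how fast frames are rendered. A minimal sketch with a toy falling body and illustrative timestep values:

```python
# Fixed-timestep simulation: physics advances in constant 60Hz increments,
# accumulated per rendered frame, so a 30fps and a 60fps run of the same
# game produce the same simulation result.
def simulate(fps: int, seconds: float = 1.0, physics_dt: float = 1 / 60):
    frame_dt = 1.0 / fps
    y, vy, accumulator = 0.0, 0.0, 0.0
    for _ in range(int(seconds * fps)):   # one iteration per rendered frame
        accumulator += frame_dt
        while accumulator >= physics_dt:  # run 0..n physics steps this frame
            vy += -9.81 * physics_dt      # constant gravity
            y += vy * physics_dt
            accumulator -= physics_dt
    return y

print(simulate(30), simulate(60))  # same simulated result at both framerates
```

An engine that instead ties its physics step to the frame time would behave differently on a faster console – which is exactly the kind of per-platform work two SKUs force on developers.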

Any performance optimisation for the faster console would come at the expense of degradation on the older one. If you want proof of this, look no further than NVIDIA’s recent driver performance on Kepler-based graphics cards: performance continues to drop in newer games on Kepler, while Maxwell GPUs keep getting faster.

The days of Crazy Ken spending almost a billion dollars on the Cell processor inside the PS3 are long behind us, and Sony wouldn’t want a repeat of that. They don’t need the performance crown because they’ve already won against the Xbox One – they’re outselling Microsoft’s console two-to-one across the globe. A new PlayStation 4 that ends up being more expensive to manufacture makes no sense.

Making a new PS4 based on Polaris won’t be cheap

[Image: FinFET cost estimations]

The table above is an estimate of die costs from Handel Jones, founder and CEO of International Business Strategies. It was part of a slide Jones showed in a keynote at the SOI Consortium in Shanghai in 2014. It’s not accurate to the dollar, because prices of bulk die orders are usually kept under wraps by the various foundries, who stay competitive by not announcing their pricing to the public. It is instead a prediction of prices, and the best estimate we have at this point.

AMD’s Polaris is going to use 14nm FinFET production, and rumors thus far have pointed to the company offering only two GPUs based on Polaris, with cut-down dies filling in some of the cheaper price points. Doing only two designs seems quite sensible when you look at the predicted wafer costs. AMD’s GPUs and APUs for both consumers and the two console giants are currently made at 28nm, and they skipped 20nm because the expected performance benefits weren’t enough to justify the cost. But 14nm FinFET is a huge jump in price – it’s expected to be almost twice as expensive as the 28nm process.

Now, according to documents and interviews that have been public for a while, AMD disables parts of the dies on the PS4 and Xbox One APUs to improve yields, as they’re wont to do on their GPU products – it’s how they get as many usable dies out of the process as possible. It bothers me, then, that some rumors claim they’ll double the core count and use the die shrink to make a chip that’s effectively the same size as the current 28nm design. There’s a performance benefit because the chip has more execution units, sure, but you’re not getting any more dies out of each wafer, and with the wafers costing roughly twice as much, your APU suddenly costs twice as much to manufacture.

Sony’s looking to reduce or manage costs, not increase them. It makes much more sense instead to use the current design at 14nm, and use the space savings to fit twice the number of dies available on the wafer. Production costs then drop slightly, Sony benefits from the smaller process which reduces heat and required voltage, and the consumer gets a cooler and quieter console as a result.
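The arithmetic can be sketched roughly. The wafer price and yield below are illustrative placeholders – real foundry quotes are confidential – and the die sizes are in the ballpark of the ~350mm² PS4 APU:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Standard approximation: wafer area over die area, minus the
    partial dies lost around the wafer's circular edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, die_area_mm2: float,
                      yield_rate: float) -> float:
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

# Doubling the core count at 14nm keeps the die at the same ~350mm2, so
# a wafer that costs twice as much yields no more chips per wafer...
doubled = cost_per_good_die(wafer_cost=10000, die_area_mm2=350, yield_rate=0.7)
# ...while a straight shrink halves the area and roughly doubles the yield.
shrunk = cost_per_good_die(wafer_cost=10000, die_area_mm2=175, yield_rate=0.7)
print(f"same-size die: ${doubled:.0f}/chip, shrunk die: ${shrunk:.0f}/chip")
```

With the same hypothetical 14nm wafer price, the shrunk design comes out at well under half the per-chip cost of the doubled one – only the shrink keeps the APU’s cost near where it is today.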

A power increase isn’t necessary for PlayStation VR

[Image: Sony LSSDK engine slide 1]

A secret presentation held by Sony’s London Studio at GDC 2016 demoed the firm’s in-house VR engine, which they hope independent studios will one day use. It’s called LSSDK, and it’s a multi-platform game engine for creating games for the PS VR and PS4 – it runs on Windows purely to make development easier. It has some amazingly complex stuff going on, like real-time lighting, global illumination, and tiled rendering.

One of the key problems LSSDK addresses is how to properly render a VR game without requiring monstrous amounts of computing horsepower. After all, you now have the added problem of rendering the game at a higher resolution – effectively twice, once for each eye – with any shortfall in GPU memory bandwidth and computing power creating stutters as you pan the viewport around a complex scene. So you have to find a way of reducing the power required to render a frame in VR, and it’s surprisingly not hard to do with modern game engines.
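The raw arithmetic shows the scale of the problem. With illustrative numbers – actual PS VR render-target sizes and refresh rates vary per title – the required pixel throughput looks something like:

```python
# Raw pixel throughput: a stereo VR target rendered at a high refresh
# rate vs. a conventional flat 1080p30 game. Numbers are illustrative.
def pixels_per_second(width: int, height: int, fps: int, eyes: int = 1) -> int:
    return width * height * fps * eyes

flat = pixels_per_second(1920, 1080, 30)
vr = pixels_per_second(1920, 1080, 90, eyes=2)  # two eye buffers at 90Hz
print(f"VR needs {vr / flat:.0f}x the pixel throughput of flat 1080p30")
```

A multiplier like that is exactly why engines chase rendering shortcuts rather than waiting for faster silicon.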

[Image: Sony LSSDK engine slide 2]

Sony’s engine supports something called resolution gradient masking. In a nutshell, a set of complex algorithms determines the resolution at which pixels visible in the viewport are rendered. The example above, with four gradient layers, is quite extreme, but it’s quite logical. At the center of your field of view, the image is rendered at native, high-definition resolution. As the image moves further from the center, it drops in quality: the first step drops to 3/4 of native resolution, then to 1/2, and finally to 1/4. This is a massive saving on bandwidth, and on low-end systems it results in a framerate boost of between 15 and 25%.

The trick to this is that you’re not rendering the full object in the layers that fall outside your focal point. You’re only rendering a representation of what it would look like at 1/4, 1/2, or 3/4 resolution, in much the same way that NVIDIA’s delta colour compression stores a compact representation of what the colours of particular pixels in a group might be.
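A rough sketch of how such a mask cuts shading work. The band radii and per-band scales here are made up for illustration – Sony’s actual gradient parameters aren’t public:

```python
import numpy as np

W, H = 1920, 1080
y, x = np.mgrid[0:H, 0:W]
# Radial distance from the frame's centre, normalised so 1.0 = corner.
r = np.hypot(x - W / 2, y - H / 2) / np.hypot(W / 2, H / 2)

bands = [0.4, 0.6, 0.8, 1.01]    # outer radius of each gradient band
scales = [1.0, 0.75, 0.5, 0.25]  # shading resolution within that band
cost = np.zeros_like(r)          # per-pixel fraction of full-res work
inner = 0.0
for outer, scale in zip(bands, scales):
    cost[(r >= inner) & (r < outer)] = scale
    inner = outer

saving = 1.0 - cost.mean()
print(f"shading work saved: {saving:.0%}")
```

Even with these made-up bands, a large slice of the frame’s shading work disappears, which is roughly where the claimed 15–25% framerate boost on low-end systems would come from.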

Games using PS VR and the LSSDK engine could potentially save even more performance with tricks like pre-baked lighting, colour compression, FXAA instead of MSAA, and smaller textures. The demos purportedly showed the LSSDK engine powering games at 60fps, so there’s no need at the moment for a hardware upgrade to fill in the gap. If you can get an extra 25% of GPU horsepower with a tiny hit to image quality that’s more or less imperceptible in motion, for almost no expense beyond development costs, why would you spend money to achieve the same thing through a hardware upgrade?

The little black box increases performance already


This is the PlayStation VR Processing Unit (PS VR PU), which handles how games displayed on the PS VR can simultaneously be shown on a television. On desktop computers this is done on the fly by the GPU, because a typical setup including an NVIDIA GeForce GTX 970 has enough horsepower to spare. This black box takes the output from the PS4 – which would normally have the PS VR connected straight to its HDMI port – and duplicates the signal so other people can watch what you’re doing.

This is actually a somewhat intensive task, as it happens. Because the lenses in a VR headset aren’t flat, they distort the game’s output as you view it, creating what’s called pincushion distortion. To counter this, the game engine applies the opposite barrel distortion when rendering the frame, before it’s submitted to the VR headset. The two opposite distortions cancel out, resulting in a normal-looking image where objects don’t get stretched out of proportion. As a side-note, this barrel distortion is why gradient masking works so well: the outer edges of the frame are distorted the most, so they can be rendered at a lower resolution without a noticeable loss in quality.
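A minimal sketch of that counter-distortion, using a simple polynomial radial model; the coefficients are illustrative, since real headsets calibrate them per lens:

```python
# Barrel pre-distortion: pull each pixel toward the lens centre by an
# amount that grows with its radial distance, so the lens's pincushion
# distortion stretches it back to where it belongs.
def barrel_distort(x: float, y: float, k1: float = 0.22, k2: float = 0.24):
    """Map a normalised image point (origin at the lens centre) to its
    pre-distorted position. k1/k2 are illustrative lens coefficients."""
    r2 = x * x + y * y
    scale = 1.0 / (1.0 + k1 * r2 + k2 * r2 * r2)
    return x * scale, y * scale

print(barrel_distort(0.0, 0.0))  # the centre doesn't move
print(barrel_distort(1.0, 0.0))  # an edge point is pulled well inward
```

Note how the scale factor shrinks fastest at the edges – exactly the region the gradient mask renders at reduced resolution.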

Not doing this on the PS4 saves some GPU power. The PS VR PU also handles other things like spatial audio processing, freeing up even more power that the PS4 could do with.


In the end, does it benefit anyone to make a PS4K that has double the GPU horsepower? No, not really. Sony’s vision for VR on the console is more in line with a curated store of polished titles exclusive to the headset, and it’s far more likely that the PS5, along with generation two of the PS VR headset, will open the floodgates to developers to go wild. In the same way that I don’t expect HoloLens to ever come to Xbox One, I don’t expect that Sony would throw away money just to make a more powerful PS4 that really benefits no-one at this stage.

Whatever it ends up being, it’s certainly not going to be a console capable of AAA-levels of gaming at 4K. Even 4K 30Hz is a difficult goal for the hardware to meet, and it’s far more likely that the new console will be a cooler, quieter, more capable media powerhouse.
