AMD’s Financial Analyst Day 2017 conference took place this week, and it was a little difficult not to feel some euphoria from the video previews, the demos, and the music selection. While it was somewhat cringey and cheesy in parts, the overall feel was a bit like that of past engagements from AMD/ATi, which was a different company altogether – one that was a lot more upbeat and energetic. AMD can dress it up as formally as it wants, but they trotted out nerds to talk about nerdy stuff, and the blunt honesty that came out of it was both funny and rather on-the-nose. Let’s take a look at what they have planned for consumers in the near future.

Updated roadmaps show AMD’s cohesive approach coming into play

AMD’s newly updated roadmaps are a bit different from their previous designs, because these ones don’t commit to specific time frames. Rather, AMD has a general plan to do [X] things in a given year, but they won’t commit to a launch window unless they’re absolutely certain of their targets and production capability. The first example of this is the Zen family. Zen as we know it is currently built on the 14-nanometer process created by Samsung and licensed to GlobalFoundries, but the 14nm+ update, which we saw launch with the new Polaris GPUs in the RX 500 series, will eventually also come to Zen. This is more than likely going to be a refresh launching sometime in Q4 2017/Q1 2018, allowing AMD to fix a few bugs and bump up clock speeds.

“Zen 2” follows after, launching with a revamped architecture on a brand-new 7-nanometer process that GlobalFoundries is developing from the technology it acquired from IBM. This gives AMD a leg up on Intel, which is only set to begin 10nm production this year, and it puts the time frame for Zen 2 somewhere in the second half of 2018. That’s a long way away. “Zen 3” is even further out, possibly seeing a mid-2019 launch on the 7nm+ process. Zen 2’s architecture has already finished the design and testing stage, and AMD will be moving on to tape-outs sometime this year, so at least there won’t be any delays this time.

GPU-wise, we see the same thing. AMD’s strategy is to place all of their products on new process nodes together, which means that from now on, whenever there’s a new GPU launch coming up, we can expect a CPU launch not long after. With Navi and their next-gen GPU design, they’re going to be staggering releases to match up, saving money by not needing to use different foundries or process nodes.

I guess that helps enthusiasts somewhat, doesn’t it? Instead of trying to figure out when and where you should be upgrading with Intel or NVIDIA, AMD’s setup means you can plan a new build around the next launch window, so you’ll always be on the bleeding edge of their GPU and CPU products.

In the datacentre market, AMD is finally re-entering the server space with EPYC (more on this later), followed by “Rome” and then “Milan” sometime in 2019. As I’ve said above, staggering product launches to match process nodes means that AMD can plan ahead for their future by working with their foundry partners, instead of relying on an internal goal set by someone with ideas of a grand design.

It looks like the end goal here was consistency for the company and for its investors, who have benefited handsomely from AMD’s goodwill earned in the past year.

In terms of product launches for now, Ryzen 3 will be popping up in Q3 2017, which means it will only launch after the Computex 2017 show. That’s not too bad considering that their partners only just finished fixing motherboard and RAM compatibility issues, but it does mean that a lot of people will be disappointed they can’t pick one up in June 2017. We’ll have to wait until AMD’s E3 conference to know more.

Just after E3, in the July-August 2017 time frame, AMD will be launching Ryzen Pro for the commercial desktop market (aimed more at OEMs and big-name brands like Dell and HP). This means we’ll see more OEM options with regular Ryzen processors as well, which is nice. Alongside that is Ryzen Mobile, which has already lined up several design wins with the big laptop brands. During the conference AMD made a small mention of Apple when talking about past successes, so perhaps there’s something in the works here.

Ryzen Mobile is set to blow you off your chair though – it will pair the CPU with a Vega GPU for the first time, either as an MCM design or possibly as a fully integrated die like previous APUs. That does raise the likelihood that Ryzen 3 includes APUs with Vega graphics as well. Ryzen Pro mobile also launches very late this year, and will probably slip to a Q1 2018 launch if there’s any delay in achieving certification for the GPU drivers.

Infinity Fabric extends its reach

A few years back, AMD acquired SeaMicro with some change it had in the bank, and it did so for the single purpose of buying the technology SeaMicro was using to compete against Intel: Freedom Fabric. SeaMicro’s fabric had some caveats that worked against its adoption, though. Systems integrators using it needed external processors to handle all the load, and there wasn’t any existing infrastructure to support it. The original idea was that motherboard vendors would design XL-ATX motherboards that just had slots on them, populated by several daughterboards carrying a GPU, CPU, RAM, and networking capabilities.

The fabric was an agnostic solution because it would work with any chipset, any CPU, and any GPU out there, but the proprietary interface and cost of adoption saw very few partners jumping in. AMD saw the potential to integrate this into their Heterogeneous Systems Architecture, and the result almost five years later is Infinity Fabric.

Now located on-die within the Zen and Bristol Ridge cores (and, I presume, in Vega as well), it delivers full-speed bi-directional communication between any two or more processing nodes, which means it also handles communication between AMD’s core complexes (CCX) inside Ryzen. The fabric is also what’s going to enable the Vega and Navi GPU cores to talk to the high-bandwidth cache controller and the video outputs, as well as allowing a direct channel to the GPU over PCI Express. Two versions of this exist in Ryzen – a data fabric that controls the data streams between the CPU and GPU cores and the local memory pool, and a control fabric that manages every other part of the system.

In AMD’s multi-socket server designs coming this year, Infinity Fabric is extended beyond the CPU socket and allows communication between processors, or from one processor to a GPU linked to the same fabric. Shown above is an example of a triple-socket system that can hold up to three Zen-based processors. You may be confused by AMD’s use of the term “socket” when referring to other linked nodes in the fabric, but the fabric doesn’t differentiate between a GPU that is on-die and one attached via PCIe, so for all intents and purposes a connected GPU is just treated as another processor in a different socket.

It’s a little clunky, but this is the same basic design that Freedom Fabric had, only it now has the full backing of a company that can sell an integrated solution.

During an AMD AMA on Reddit last year, I asked the company whether their use of interposer technology meant that their official strategy was to use multi-chip designs to work around the slowing of Moore’s law. Before that AMA, Radeon Technologies Group boss Raja Koduri had already suggested that a possible future for their graphics cards would be two dies in transparent CrossFire, where the chip presents itself as a single unit rather than two separate ones. In the future, a high-end GPU from AMD will be a multi-chip design rather than a monolithic one.

AMD calls this their “Moore’s Law Plus” strategy. They will be using a mixture of interposer technology, Infinity Fabric, and multi-chip designs to work around production issues or the lack of a new process node. It’s worth noting that both NVIDIA and Intel will have to walk down this road sooner rather than later, but AMD is in the lead now. NVIDIA’s NVLink is similar to Infinity Fabric, but it is mostly limited to GPU-to-GPU communication (plus GPU-to-CPU links on IBM’s POWER platforms), replacing the hardware and protocols NVIDIA previously used for SLI.

Premium products generate the most revenue

One of the myths perpetuated in the past decade is that volume shipments of hardware are where manufacturers reap the most reward, but this is at odds with the success that companies like Apple and NVIDIA have seen from their premium products. Prior to Ryzen’s launch, AMD played in the mainstream segment with Bulldozer-based processors and APUs, but it was a losing battle. For a business opportunity that took up almost half of AMD’s production capacity, it played a very small part in their revenue stream.

With Ryzen, and soon the high-end options in the desktop and server markets, AMD’s hope is that the competition they’re injecting into the premium markets will contribute a larger portion of the revenue. Ryzen isn’t cheap, and this is why – targeting the enthusiast premium market allows them to charge more per chip. NVIDIA’s practice of asking for $10,000 for their top-end designs in the Tesla and Quadro families is finally rubbing off on them.

With the cruft out of the way, let’s take a look at the new products!

AMD Radeon Vega Frontier Edition

AMD’s first Vega-based GPU was the Radeon Instinct MI25 accelerator, intended for professional applications that require lots of compute, or for virtualised desktops with fully accelerated 3D capabilities. Their next Vega product is the Radeon Vega Frontier Edition, designed for graphics-heavy professional workloads. This is a fully enabled Vega part with 16GB of HBM2 memory and the new high-bandwidth cache controller that allows remote storage to be used as GPU RAM.

The Frontier Edition is a premium design, with a blue brushed-aluminium shroud that covers a blower-style air cooler and a funky cube at the end lit up by yellow LEDs. With two 8-pin auxiliary power connectors, you can expect power draws in the 250W range for this monster, and there’ll be an equally stylish gold-and-blue version that uses an all-in-one water cooler. While there are DisplayPort and HDMI outputs, this card is not geared towards gamers, and it will not be cheap either. AMD stressed this fact in their presentation, because while they know that NVIDIA is eating up the high-end market, Vega is a clean-sheet design that they believe will be as disruptive to the market as Ryzen. Biding their time until HBM2 production eases up before the consumer launch is probably their only option now.

The Vega Frontier Edition is another of AMD’s premium products, marketed at people who buy crazy-expensive cards like NVIDIA’s Titan Xp for work purposes. Volumes might be low compared to what AMD ships in the Polaris GPU family, but starting with the premium markets and then moving to mainstream is a better idea than their past launches. While Polaris addresses the market that buys GPUs at the $200 price point, Vega will sit in the $300-$500 market, as well as the ultra-high-end $1,000 market at launch. AMD implied that Polaris would be sticking around for a while, which makes sense given their roadmap: Vega is the high-end GPU for this year, while Navi is the one for mid-to-late 2018. Perhaps after that they’ll merge the markets addressed by a refreshed Polaris and Navi into a single GPU family.

The Frontier Edition is set to replace the Radeon R9 Fury X in AMD’s portfolio for creative professionals, and it will be interesting to see where they take this brand. Vega is a great option for machine learning projects and general compute use, and I anticipate that it will bring the heat to NVIDIA’s Pascal and Volta families.

Also, on a related note, AMD’s open-source compute stack, the Radeon Open Compute platform (ROCm), seems to include TensorFlow support. Tensor math is a crucial component of how machine learning is done by companies like Facebook and Google these days, and while AMD does not have dedicated tensor units to run this math, with Vega they feel confident enough to support it in software as part of the ROCm stack instead. This makes Vega quite an attractive option, because a lot of machine learning initiatives aren’t CUDA-based, which weakens NVIDIA’s offerings in this space despite their hardware strengths.
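If that TensorFlow support pans out, the appeal is that existing code shouldn’t need to change much, because frameworks like TensorFlow hide the accelerator behind their backend. Here’s a minimal sketch of the kind of tensor math involved – this is standard TensorFlow 1.x code and assumes nothing ROCm-specific, so treat it as an illustration rather than anything AMD has shown:

    # Standard TensorFlow 1.x example: a single matrix multiply, which the
    # framework dispatches to whatever accelerator backend is installed
    # (CUDA on NVIDIA hardware, or ROCm on AMD if that support materialises).
    import numpy as np
    import tensorflow as tf

    a = tf.constant(np.random.rand(1024, 1024), dtype=tf.float32)
    b = tf.constant(np.random.rand(1024, 1024), dtype=tf.float32)
    c = tf.matmul(a, b)  # the tensor math a Vega card would be asked to accelerate

    # log_device_placement prints which device each op actually lands on
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c).shape)  # (1024, 1024)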

ThreadRipper is a hilarious product name

For the first time, AMD briefly confirmed the existence of “ThreadRipper”, the product family previously known only from leaks on the internet. This is their answer to Intel’s High-End Desktop (HEDT) lineup, which uses the LGA 2011-v3 socket and quad-channel memory. ThreadRipper will have two Ryzen 7 dies integrated into a single package, and it will use a new socket as well. It is possible that the same socket is used by AMD’s new Zen-based server parts, which means that a lot of people are going to have to buy new CPU coolers.

But regardless, this is AMD’s most aggressive move on Intel to date. AMD doesn’t currently have a HEDT-style product, even though Ryzen 7 continues to show up Intel’s Broadwell-E family in benchmarks at a third of the price. Offering a 16-core, 32-thread processor in that same market might be something that Intel is ill-prepared for, because their 10-core Core i7-6950X is priced at $1,750. AMD will spend more time on ThreadRipper closer to E3, and will have something to demo for the public at Computex Taipei.

AMD’s server line is now… EPYC

AMD’s product naming teams are pun masters, for sure. The company’s new server line gets a new logo and a name – EPYC is now AMD’s server offering, intended to replace the old Opteron family branding. It is both catchy and cringey, because AMD worked “epic” puns into their product reveal as much as they could (although CEO Lisa Su managed to say it with a straight face, without slipping in a giggle). With a new socket that uses over 4,000 pins in an LGA design, the EPYC processor is enormous. It can hold up to 32 cores in a single package and will address up to 4TB of system memory in a dual-socket system. Each EPYC processor boasts 128 lanes of PCIe 3.0 connectivity, which makes it ideal for multi-GPU setups for machine learning.

There’s also a dedicated security engine embedded, which AMD didn’t go into detail on. It is likely the same ARM-based security co-processor that AMD uses in their console designs for the PS4 and Xbox One, building on AMD’s existing PSP (Platform Security Processor). Infinity Fabric also has encryption built in to keep everything secure, so that is probably part of the package as well. It is unknown whether AMD will allow retailers to sell EPYC systems to regular consumers, but it would be a mistake not to.

AMD will be positioning EPYC as a disruptive product in the server market, offering customers more benefit than regular dual-socket systems through space-saving and cost-cutting measures that will attract a lot of attention from network admins. In the same space as a 2U dual-socket Intel Xeon server motherboard, AMD can offer an XL-ATX design that is 50% smaller, with more connectivity and features, not to mention greater core density. Every EPYC server CPU will have its full platform capabilities unlocked, with up to eight memory channels (even on the chips that have disabled cores and/or cache) as well as integrated networking. Intel’s designs in this space aren’t capable of offering the same features yet, but this may change with the upcoming Skylake-SP Xeon family.

The aggressive packaging and positioning isn’t just about cost, though. AMD’s vision is that customers will be able to spec and buy a system that is both 50% smaller and less complex to set up, which can be rolled out to join clusters of other similarly configured servers using Radeon Instinct accelerators for machine learning. The vertical integration is quite daunting to think about, actually. Not only is the entire system coherent thanks to Infinity Fabric, but the GPUs themselves can also share a giant address space hosted on the network – along with all the other systems packing an Instinct GPU – thanks to the high-bandwidth cache controller in Vega.

And then there’s the software offering through ROCm. AMD’s plan is to let customers spec a system with a 32-core EPYC processor, up to 512GB of RAM, up to sixteen storage drives, and eight Radeon Instinct MI25 accelerators, then drop it into their network to serve any workflow that uses the frameworks or middleware ROCm supports. Setting it all up should be relatively simple on a Linux operating system, and the various frameworks that AMD supports can be packaged as Flatpak applications, which simplifies deployment.
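To give a rough idea of how a workload would actually use a box like that, here’s a sketch of sharding a compute job across several accelerators using standard TensorFlow 1.x device placement – the device names, GPU count, and matrix sizes are assumptions for illustration, not anything AMD has published:

    # Hypothetical sketch: split a batched matrix multiply across multiple GPUs.
    # The '/gpu:N' strings are standard TensorFlow 1.x placement syntax; the
    # framework's backend handles the actual dispatch to each accelerator.
    import numpy as np
    import tensorflow as tf

    NUM_GPUS = 8  # assumed: one shard per Instinct MI25 in the example system
    batch = np.random.rand(NUM_GPUS, 2048, 2048).astype(np.float32)

    partial_results = []
    for i in range(NUM_GPUS):
        with tf.device('/gpu:%d' % i):  # pin this shard's work to GPU i
            shard = tf.constant(batch[i])
            partial_results.append(tf.matmul(shard, shard, transpose_b=True))

    total = tf.add_n(partial_results)  # reduce the shards back together

    # allow_soft_placement lets the graph still run if fewer GPUs are present
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        print(sess.run(total).shape)  # (2048, 2048)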

Overall, AMD’s plans for the rest of 2017 are quite packed. There’s still Ryzen 3 coming for consumers, which will hopefully launch early in Q3. Then there’s the Radeon Vega Frontier Edition, as well as the first few EPYC processors and motherboards coming online in June 2017, possibly in the same week as Computex. If we’re lucky, we’ll learn more about Vega for consumers at the same time, and we can hope that it’ll launch sometime between July and August.

Then, finally, the third strike in AMD’s blitzkrieg is Ryzen Mobile, launching in notebooks starting in Q3 2017. The inclusion of integrated Vega graphics is a big challenge to Intel’s product lineup, and AMD’s drivers will help make gaming notebooks based on APUs a workable possibility, rather than the compromise-riddled systems we have today. The future is bright for AMD, and I look forward to seeing how Intel and NVIDIA respond as time passes.
