In the last few days, a set of slides from an Intel presentation at the IEEE International Solid-State Circuits Conference, currently being held in San Francisco, has had the rest of the PC industry guessing what Intel is up to with their latest prototype: a graphics card that fits into a PCIe slot. It seems that Intel has aspirations to use their graphics hardware to power machine learning algorithms and offer a competing product in the enterprise market. It’s only a prototype for now, but its existence could signal a new direction for Intel as they struggle to find new growth markets.

The design is pretty straightforward. Intel took the 9th-generation Intel HD Graphics processor found in their Coffee Lake processors and turned it into its own die, paired with a field-programmable gate array (FPGA) that acts as the interface between the GPU and the PCIe slot, handling the standards needed for the GPU to function. FPGAs are expensive to manufacture and aren’t particularly power-efficient compared to an ASIC, which is purpose-designed to run a particular piece of software or process.

That points to the one-off nature of this project inside Intel. Making GPUs this way would be expensive, and it’s not the normal way of doing things in the GPU market. You’ll also notice that there isn’t a memory controller listed on the block diagram. The GPU likely makes use of system memory over PCI Express. That’s the slow way of doing things, but it cuts down on manufacturing costs if this ever becomes a real product.

Intel’s motivation here is to experiment with how much more power-efficient their GPU could be if it had its own die and power budget. They worked out a way of managing and gating power to every part of the GPU depending on the workload. Areas of the chip under high demand, like the execution units rendering a 3D object, get a larger share of the power budget, while parts that are barely doing anything at that moment, like the fixed-function logic, receive less. When blocks of the GPU don’t need to be active at all, they are put into a sleep state.
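To make the idea concrete, here’s a rough sketch of a workload-proportional power budget with gating of idle blocks. This is purely illustrative, not Intel’s actual power-management logic; the block names, threshold, and wattage are made up:

```python
# Illustrative sketch only: hypothetical power-budget allocation for GPU blocks.
# Block names and numbers are invented; this is not Intel's actual algorithm.

SLEEP_THRESHOLD = 0.05  # utilization below this gates the block off entirely

def allocate_power(total_budget_w, utilization):
    """Split a fixed power budget across GPU blocks in proportion to demand.

    utilization: dict mapping block name -> fraction of time the block is busy.
    Returns a dict mapping block name -> watts granted (0.0 means power-gated).
    """
    # Power-gate blocks that are essentially idle.
    active = {name: u for name, u in utilization.items() if u >= SLEEP_THRESHOLD}
    gated = {name: 0.0 for name in utilization if name not in active}

    total_demand = sum(active.values())
    if total_demand == 0:
        return gated

    # Give each active block a share of the budget proportional to its demand.
    grants = {name: total_budget_w * (u / total_demand) for name, u in active.items()}
    grants.update(gated)
    return grants

# Example: execution units busy rendering, fixed-function logic nearly idle.
print(allocate_power(15.0, {
    "execution_units": 0.90,
    "fixed_function": 0.02,   # below threshold -> power-gated
    "media_engine": 0.10,
}))
```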

As for why this slide is labelled “Motivation”, Intel is likely looking for solutions internally that they can offer to companies who want a general-purpose GPU for accelerating things like machine learning and AI training. Intel hasn’t had a foot in the door to this market yet, and they’re very far behind AMD and NVIDIA in this regard. Offering a capable low-power solution that is available on-die or as an add-in card as a starting point for mass adoption could be their end game here.

Finally, although other slides were shown in the presentation, this one is pretty interesting. Intel implemented a new turbo boost algorithm that increases the clock speed of the execution units/shader units in their design. According to the graph, the baseline turbo method does improve performance as the clock speed increases, but it requires every other part of the GPU to be clocked at the same level to maintain a 1:1 clock ratio. Intel designs their GPUs for efficiency, not performance.

With EU Turbo mode, the clock speed for the shaders can be set to 2x the base clock frequency, while the other modules run at half the current EU clock speed. Compared to baseline frequency scaling, performance improves by an average of 37%. Intel didn’t specify power consumption in this test, but if this turns out to be a higher clock at the same power usage as the baseline, it would be a hell of an improvement for their graphics team to pull off.
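As a rough illustration of the difference between the two schemes (the frequencies below are hypothetical, since Intel didn’t publish actual numbers), the clocking could be modelled like this:

```python
# Illustrative sketch: the two clocking schemes as described on the slide.
# The base frequency is a placeholder; real values weren't disclosed.

BASE_CLOCK_MHZ = 600  # hypothetical base frequency

def baseline_turbo(multiplier):
    """Baseline scheme: every domain scales together at a 1:1 ratio."""
    clock = BASE_CLOCK_MHZ * multiplier
    return {"execution_units": clock, "rest_of_gpu": clock}

def eu_turbo():
    """EU Turbo: shaders run at 2x base, other modules at half the EU clock."""
    eu_clock = 2 * BASE_CLOCK_MHZ
    return {"execution_units": eu_clock, "rest_of_gpu": eu_clock // 2}

print(baseline_turbo(1.5))  # {'execution_units': 900.0, 'rest_of_gpu': 900.0}
print(eu_turbo())           # {'execution_units': 1200, 'rest_of_gpu': 600}
```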

Don’t count on Intel making discrete GPUs just yet, though. This is just a testing and validation vehicle that lets them demonstrate the kind of power savings they’re achieving in the GPU space. It wouldn’t surprise me if these improvements make their way into the generation of GPUs after Coffee Lake, and being able to improve performance by nearly 40% would put Intel back in the running in the integrated graphics race against AMD’s Ryzen 3 2200G and Ryzen 5 2400G.

Source: PC Watch via Hexus.
