Intel’s Haswell-E platform is arguably their best effort in recent times and it improves on nearly every aspect of their offering to the desktop market – you get more cores for less money, you get new features and hardware, you get DDR4 compatibility and you get the new X99 chipset, chock-full of features that many buyers will pay for but never use. But there are a few issues with X99 that might be reason for Intel to tread carefully with how they manage the Skylake launch next year, and here’s why.

At the Haswell-E launch, Intel didn’t dedicate a whole lot of time to the X99 chipset itself. They had a single slide for it in their marketing materials and that was it. Most people glanced over this, but I think it’s relevant for a few reasons that may be of interest not only to enthusiast owners, but to the general market as well.

Intel’s lone X99 slide from the Haswell-E launch materials.

Let’s start off with the stuff that won’t change for a while. There are four memory channels on the Haswell-E chips and these officially top out at 2133MHz. The DDR4 standard actually starts off at 1866MHz, but the lower speed will mainly be found on modules sold to enterprise customers that don’t need the extra clocks but do want the power savings DDR4 brings. Alongside that on the slide are the 40 dedicated PCI-Express lanes from the CPU for graphics cards, SSDs or whatever else you want to hang off them.
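For context on how much headroom the memory side has, here’s a quick back-of-the-envelope sketch of quad-channel DDR4-2133’s theoretical peak, using the standard transfers-times-bus-width formula and the usual 64-bit channel width:

```python
# Theoretical peak bandwidth for quad-channel DDR4-2133.
# Assumes the standard formula: channels x transfers/s x bytes per transfer.
CHANNELS = 4
TRANSFERS_PER_SEC = 2133e6   # DDR4-2133: 2133 million transfers per second
BYTES_PER_TRANSFER = 8       # each channel is 64 bits (8 bytes) wide

peak_gb_s = CHANNELS * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
print(f"Quad-channel DDR4-2133 peak: {peak_gb_s:.1f} GB/s")  # roughly 68.3 GB/s
```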

I’d like to remind readers at this point that Intel artificially gimps this in the Core i7-5820K, only opening up 28 of those 40 lanes for use. That somewhat sucks, but at least you can still do three-way SLI/Crossfire with four lanes open for something else like a PCI-E SSD.

Below that is the interesting part. Intel still limits the X99 chipset, as it has done for recent Core-branded desktop products, to DMI 2.0 x4. DMI is based on PCI-E 2.0 signalling and hasn’t changed much since Sandy Bridge. There’s about 20Gb/s of raw bandwidth between the chipset and the CPU, split across four lanes – roughly 2GB/s of usable throughput once encoding overhead is accounted for. Everything else in the system feeds off of that – USB, networking, Thunderbolt, SATA, even HDMI audio.
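If you want to see how that 20Gb/s shakes out into real numbers, here’s a rough sketch that treats DMI 2.0 as four PCI-E 2.0 lanes carrying 8b/10b-encoded traffic:

```python
# DMI 2.0 x4 modelled as four PCI-E 2.0 lanes with 8b/10b encoding
# (8 data bits for every 10 bits on the wire).
LANES = 4
SIGNALLING_GT_S = 5.0    # PCI-E 2.0 per-lane signalling rate
ENCODING_EFFICIENCY = 0.8

raw_gbit_s = LANES * SIGNALLING_GT_S                    # 20 Gb/s on the wire
usable_gbyte_s = raw_gbit_s * ENCODING_EFFICIENCY / 8   # ~2.0 GB/s of payload
print(f"Raw: {raw_gbit_s:.0f} Gb/s, usable: {usable_gbyte_s:.1f} GB/s")
```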

Now the repercussions of this aren’t going to be felt by most people because not everyone is going to make the leap to LGA-2011-3 and Haswell-E. Most mainstream LGA1150 systems top out with two GPUs, two SSDs and up to three or four hard drives, which most of the time aren’t in a RAID array.

No, if you’re buying into the Haswell-E platform you’re probably not the type who just wants it for Battlefield 4 or Farmville – you want to be doing a number of things in the background while you play a game, edit photos or video, or work on some other activity that demands a lot of processing and threading power. But if you’re buying it for that high-end use, you need to be aware that the chipset limitations come into play in exactly the same way as they do on the Z97 platform.

ASRock’s Z97 Extreme 6 has two M.2 slots, but one eats from PCI-E 3.0 while the bottom slot feeds off the Z97 chipset.

One example of this that is fast becoming a headache is RAID with SATA SSDs. Tom’s Hardware recently put this to the test with the ASRock Z97 Extreme 6, and their results showed that even 2.0GB/s is an overly optimistic number – 1.6GB/s was more realistic, and that’s a ceiling you can saturate with just three SATA SSDs or a single Samsung XP941. There’s just not enough headroom left for anything else to really kick up in speed. Anyone looking to improve the storage situation would have to splurge much more money on something like Intel’s SSD DC P3500, because it sits in a PCI-E slot rather than behind the chipset and boasts read speeds of 2.5GB/s and writes of 1.7GB/s.
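The arithmetic behind that saturation claim is straightforward. The per-drive read figure below is an assumption (roughly what a fast SATA 6Gb/s SSD manages), while the 1.6GB/s ceiling is the real-world number from the testing above:

```python
# Rough saturation check against the ~1.6 GB/s real-world DMI ceiling.
DMI_REAL_WORLD_MB_S = 1600
SATA_SSD_MB_S = 530       # assumed sequential read for a fast SATA SSD
XP941_READ_MB_S = 1400    # Samsung XP941 rated sequential read

print(f"Three SATA SSDs: ~{3 * SATA_SSD_MB_S} MB/s of demand vs a ~{DMI_REAL_WORLD_MB_S} MB/s ceiling")
print(f"One XP941: ~{XP941_READ_MB_S} MB/s, most of the ceiling gone on its own")
```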

If you chose to put, say, more than four 512GB Crucial M550 drives in there, you’d be wasting money on the extra drives, because the read and write speeds of the array would be limited to what you can push through DMI even with every other component connected to the chipset sitting idle. All you’d be getting out of a five- or six-way SSD array is redundancy, nothing more.
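To illustrate the plateau, here’s a hypothetical scaling run assuming around 500MB/s of sequential reads per M550 and that same ~1.6GB/s real-world ceiling:

```python
# Hypothetical scaling of a striped M550 array sitting behind the chipset.
DMI_CEILING_MB_S = 1600   # real-world DMI ceiling, everything else idle
M550_READ_MB_S = 500      # assumed sequential read per drive

for drives in range(1, 7):
    demand = drives * M550_READ_MB_S
    delivered = min(demand, DMI_CEILING_MB_S)
    print(f"{drives} drive(s): array capable of {demand} MB/s, DMI delivers ~{delivered} MB/s")
# From the fourth drive onwards the delivered figure never moves.
```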

Another example is the stupid-fast Samsung XP941, which is a PCI-Express SSD based on the AHCI protocol. Because it runs PCI-E 2.0 over an M.2 connector, the XP941 takes four lanes away from the X99 chipset. It boasts read speeds of up to 1.4GB/s and write speeds of around 1.0GB/s. If you were to put two of those drives in RAID 0 on a board with two four-lane M.2 connectors, speeds would be capped at around 2.0GB/s for both reads and writes. That’s not a limitation of the drives; it’s because Intel cannot feed those M.2 PCI-E 2.0 slots any faster without consuming all the bandwidth DMI has to offer.
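Sketching that scenario out with the drive’s rated figures and an assumed ~2.0GB/s usable DMI limit makes the cap obvious:

```python
# Two XP941s in RAID 0 behind the chipset, both funnelling through DMI.
DMI_LIMIT_GB_S = 2.0      # assumed usable DMI 2.0 throughput
XP941_READ_GB_S = 1.4
XP941_WRITE_GB_S = 1.0

array_reads = min(2 * XP941_READ_GB_S, DMI_LIMIT_GB_S)    # 2.8 wanted, 2.0 delivered
array_writes = min(2 * XP941_WRITE_GB_S, DMI_LIMIT_GB_S)  # 2.0 wanted, right at the cap
print(f"RAID 0 reads: ~{array_reads} GB/s, writes: ~{array_writes} GB/s")
```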

It’s not much of a headache to drop the built-in storage options and move to something that better suits your needs, but it may not be an easy pill to swallow when it comes to actually buying that better hardware. Then you need to work out if your system configuration is compatible, especially if you’re already running multiple GPUs and want to avoid overheating your expensive new toy.

DMI is flexible enough up to a certain point, and that point, as it happens, isn’t far out of most people’s reach – three 128GB Crucial M550 drives in RAID 0 can pretty much top it out. Intel no longer has all the time in the world to improve things at a slow, measured pace. Storage needs and speeds are fast outstripping the chipset’s capabilities and that train isn’t stopping for anyone. Thunderbolt-connected storage is going to become very important in the next few years with 4K and 21:9 monitors.

Hell, even AMD’s chipset for socket FM2+ is already catching up in flexibility, even if the platform as a whole isn’t anywhere near what Haswell can offer consumers.

The time for sleeping on the job is up and Intel is now at the limit of what its chipset can reasonably be expected to do. All this means is that consumers need to pay more attention to what they’re buying and possibly overextend their budget to buy hardware that properly suits their needs. No-one can afford to wait until 2H 2015 for Skylake and its move to a PCI-E 3.0-based DMI to fix this up.