The SLI question: Is multi-GPU worth it?

SLI or CrossFire means doubling your spend on PC graphics. Here we’re going to discuss whether it pays off in game performance, and whether a dual-GPU rig is required for 4K gaming.

So, you’ve got a shiny new 4K TV or gaming display, and you want to know what it takes to run it? Let’s get this out of the way – you’re going to need some extremely powerful hardware. There’s no single card on the planet that can run every game at 4K resolution with all settings maxed and a minimum frame rate of 60fps. Sure, if you’d like to compromise on quality settings, go right ahead, but that’s not the true Ultra experience PC gaming purists crave. Nope, we want everything cranked to the max, running at a silky smooth minimum of 60fps… but the bad news is that even with two of the fastest GPUs on the planet, this still seems an unobtainable dream. We put three of the fastest GPUs on the market into dual-card setups to see just how well they scaled. The results might surprise you – in some games the 4K dream is attainable, but in many newer titles it’s not quite there yet.

Nvidia’s SLI

There are several major differences between Nvidia’s SLI (Scalable Link Interface) technology, a name that dates back to the 3dfx days, and AMD’s CrossFire. The main one is that SLI relies upon a hardware bridge to connect the two cards, and Nvidia has upgraded it for the new GeForce GTX 10XX series. The old flexible SLI connectors are gone, replaced by a new super duper LED-equipped rigid bridge. The speed of the bridge has gone up too – from 400MHz on the old flexi-models to 650MHz on the new model (though the old models will still work, for a slight performance decrease). Nvidia has also decided to focus on dual-card SLI this time around; those who want to run triple and quad SLI systems will need to get a special key from Nvidia to enable it. There’s also the fact that SLI requires more bandwidth per PCIe slot, with a minimum of x8 speed.

SLI also has a couple of nifty features absent from AMD’s CrossFire. The first is SLI Antialiasing, which allows double the usual level of antialiasing – with two cards, for example, it’s possible to run SLI16x. There’s also PhysX, Nvidia’s proprietary physics engine. If a game only needs the power of one card, the other can be dedicated to handling the PhysX load. Not many games use PhysX, but in those that do, like Batman: Arkham Knight, you’ll notice a huge leap in graphical fidelity.


AMD’s CrossFire

Compared to SLI, CrossFire has far fewer requirements. From the R9 series upwards there’s no bridge required to join the cards; they communicate over the PCIe bus via XDMA instead. It also doesn’t need x8-speed PCIe lanes, as it’s happy to run on an x4 lane. This means many motherboards that can only handle two Nvidia cards are able to run three or four AMD cards with ease.

It’s also possible to CrossFire some cards that share the same GPU but have different memory configurations. Finally, you can CrossFire a discrete GPU with the integrated GPU found on AMD’s APUs.

How does it work?

There are several ways in which these technologies work, depending on the game. The first is called Split Frame Rendering. Each frame is split in two, with each card rendering half. It’s a little smarter than a fixed 50/50 split down the middle, though: if the driver detects that, say, the top 20% of the frame needs more processing power, it will move the dividing line so that the workload – not the screen area – is shared evenly.

The other method, and the one most commonly used, is Alternate Frame Rendering. Each card is tasked with rendering entire frames – the first card might do frames 1, 3 and 5, while the second card handles 2, 4 and 6. Once the second card finishes rendering its frame, it sends it over the SLI bridge.
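To make the two scheduling schemes concrete, here’s a toy sketch in Python. Nothing here touches a real GPU – the function names and the per-row cost numbers are our own illustration, not anything from Nvidia’s or AMD’s drivers:

```python
# Toy model of the two multi-GPU rendering modes described above.

def alternate_frame_rendering(frames, num_gpus=2):
    """AFR: whole frames are dealt out round-robin, one per GPU."""
    return {gpu: frames[gpu::num_gpus] for gpu in range(num_gpus)}

def split_frame_rendering(row_costs):
    """SFR: one frame is split top/bottom so each GPU gets roughly half
    of the estimated workload, not simply half the rows."""
    total = sum(row_costs)
    running = 0
    for split, cost in enumerate(row_costs):
        running += cost
        if running >= total / 2:
            return split + 1  # GPU 0 renders rows [0, split], GPU 1 the rest
    return len(row_costs)

frames = list(range(1, 7))
print(alternate_frame_rendering(frames))      # GPU 0 gets 1, 3, 5; GPU 1 gets 2, 4, 6
# A frame whose top rows are far more expensive pulls the split line upward:
print(split_frame_rendering([9, 9, 1, 1, 1, 1]))  # split after just 2 of 6 rows
```

The second function shows why the driver’s dynamic split matters: with an even 50/50 screen split, one card would finish early and sit idle while the other struggled with the expensive region.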


While having twin cards might lead you to assume you’re going to get double the performance, this is not the case. The performance leap can be as low as 20%, or as high as 90%. It all depends on how suited a game engine is to SLI or CrossFire. There’s also the need for more power – you’re doubling the energy required by your GPUs, so will need a more powerful PSU, though today’s top-end GPUs only use around 180W of power, so you can get away with a 600W PSU if it’s a high-quality model.
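As a back-of-the-envelope check on those power numbers, here’s a quick sketch. The 150W figure for the rest of the system (CPU, drives, fans) is our assumption for illustration, not a measured value:

```python
# Rough PSU sanity check for the dual-GPU scenario described above.

def psu_load(psu_watts, gpu_watts, num_gpus, rest_watts=150):
    """Fraction of the PSU's rating drawn at estimated peak.
    rest_watts (CPU, drives, fans) is an assumed figure."""
    return (gpu_watts * num_gpus + rest_watts) / psu_watts

# Two 180W cards on a 600W unit:
print(f"{psu_load(600, 180, 2):.0%}")  # 85% - tight, but workable on a quality PSU
```

At around 85% of its rating, a cheap unit would be living dangerously, which is why the quality caveat matters.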

If you’re running twin 4GB cards, you’d expect that the system now has 8GB of video memory, but that’s not the case. As multi-GPU usually uses AFR, it needs to store all the texture and geometry data in each card’s memory, so your system will operate as if it has 4GB of memory, but that’s set to change with DX12.

One of the worst problems, though, is heat dissipation and fan noise. GPUs run hot at the best of times – place two of them together and you’re going to see a huge leap in GPU temperatures. A card that is silent in single mode will probably be a howler when run in multi-GPU mode, which is why many multi-GPU users choose water cooling instead.

There’s also the fact that new games usually require a special driver profile to run in multi-GPU mode. This is one reason we’re now faced with the dreaded Day 1 patch every time a game comes out, and even then there are often bugs that can take weeks to fix – bugs single-GPU systems don’t face. Some games don’t support multi-GPU mode at all, and never will, though this is a rarity; in the rarest cases, a game may simply refuse to run.

One weird issue we noticed is that 4K is so sharp it makes LOD changes much easier to spot. During The Division benchmark you can see assets pop from low-res to high-res, something you’d never notice at 1080p. If 4K is going to become the new standard, developers will have to deliver higher-resolution textures for their LOD swaps.

DX12 – the game changer

There’s a variety of reasons for getting excited about DirectX 12, but one of them is the fact it will allow multi-GPU use between both Nvidia and AMD cards. Microsoft calls it Explicit Asynchronous Multi-GPU capabilities, and it’s meant to allow users to mix and match the brands of their cards. This is because DX12 treats all of the GPUs inside a system as one large GPU.

In the past, Alternate Frame Rendering has been the preferred method for multi-GPU rendering, but DX12 will instead move to Split Frame Rendering. This will allow devs to divide the texture and geometry data between the GPUs, combining the total amount of video RAM available. So those with twin 4GB cards will finally have access to all 8GB.

There is one slight problem, though – the coding behind splitting the load is left to the developer, and they’re already busy enough. Today’s multi-GPU systems generally don’t require much work from the dev, with AMD and Nvidia making it work at the driver level, so handing that job to developers is a concern. Microsoft assures us that “implementing the SFR should be a relatively simple and painless process for most developers”, according to an interview on Tom’s Hardware. We’ll believe it when we see it. At best, we think it’s going to take two to three years before this feature becomes commonplace, if it ever does.

There is one huge benefit though – finally that integrated GPU inside your CPU is going to be usable with your discrete video card. It may not add a lot to performance, but at least it’s not just sitting there twiddling its thumbs.


Building a multi-GPU rig

There’s one main consideration when setting up a multi-GPU system – spacing. Most cards these days come with dual-slot coolers, and some motherboards only have a single-slot gap between PCIe slots, making it impossible to place the cards next to each other. With Nvidia’s new SLI bridge being made of rigid metal rather than a flexible ribbon, placement matters even more. We’d suggest checking that your motherboard can handle dual-slot cards side by side before purchase.

If you’d like to go for a three or four way multi-GPU board, you’re probably going to have to buy an E-ATX form factor board to squeeze them all on. If you’re taking the Nvidia route, you’re also going to have to ensure that each PCIe lane has enough bandwidth to handle Nvidia’s requirements. This is why many boards only support dual Nvidia, yet can handle triple or quad AMD.

You’re also going to need enough power supply plugs. If each of your cards requires twin 8-pin plugs and you’re going for four cards, that’s a whopping eight 8-pin plugs – and between the connectors and the extra wattage those GPUs draw, it’s likely you’ll need to upgrade your power supply.

Fitting Nvidia’s cards isn’t such a hassle, as their cooling is entirely self-contained. AMD’s R9 Fury X cards, however, each come with their own 120mm radiator, so you’re going to need to find room for those. Once the hardware is in place, it’s simply a matter of downloading and installing the latest drivers.

Performance Reality

So, just how well do multi-GPU setups actually work? We spent several days testing three of the top-end cards on the market – the AMD Radeon R9 Fury X, GeForce GTX 1080 and GeForce GTX 1070 – to find out. The results were a mixed bag to say the least.

Our first benchmark off the list is Metro: Last Light. Despite its age, it’s still one of the most graphically intensive games on the market. Here the GeForce GTX 1080 saw a decent leap of 56% from its second card, while the GeForce GTX 1070 did even better, with a 63% increase. Sadly AMD didn’t fare so well, managing only a minor 34% improvement.

Next up was Shadow of Mordor, and it was here that we really saw the difference in driver quality between AMD and Nvidia. The twin GeForce GTX 1080s saw an absolutely huge 85% increase – a relief, considering it took Nvidia many months to release an SLI profile for this game. The GeForce GTX 1070s fared just as well, with a whopping 84% leap. And then there was AMD. It’s obvious a properly working CrossFire profile hasn’t been released for this game, as the single card actually proved to be one frame faster than the pair.

It was time to test the cards’ DX12 prowess with Time Spy, and it was here we expected AMD to fare better. The GeForce GTX 1080 saw a healthy 53% boost in performance – not the doubling many would expect, but enough to push many games past the 60fps barrier. The GeForce GTX 1070 put in an even better effort, managing a speedy 63% boost. Finally there were the AMD cards, which are renowned for their DX12 advantages. After having the benchmark crash once, we finally obtained a score for dual cards – and once again it appears AMD hasn’t optimised the CrossFire profile, as it delivered a mere 2.3% boost.

Our final benchmark is one of the most demanding: Ubisoft’s The Division. If you’ve seen it in action, you’ll know why it needs such horsepower – it’s utterly stunning. Once again it highlighted the work AMD needs to do on its CrossFire support, with a 0% increase in performance. That’s right, 0%. Worse, when we disabled CrossFire in the control panel, performance jumped to 65fps – a 150% increase over running with CrossFire enabled. When the theoretical maximum gain from adding a whole second card is 100%, a 150% jump from merely toggling a setting tells you something is seriously wrong with AMD’s drivers. In contrast, the 1080 saw a huge 87% performance increase, while the 1070 came in at 63%.
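For the curious, this is how the scaling percentages quoted throughout these benchmarks are calculated. Note that the 26fps figure below is back-derived from the 65fps and 150% quoted above, not a number we measured directly:

```python
# How the benchmark scaling percentages are computed.

def scaling_pct(base_fps, new_fps):
    """Percentage gain going from base_fps to new_fps.
    100 represents a perfect doubling of performance."""
    return (new_fps / base_fps - 1) * 100

# 26fps is derived from the quoted 65fps / 150% figures, not measured:
print(round(scaling_pct(26, 65)))  # 150 - CrossFire on was far slower than off
print(round(scaling_pct(40, 80)))  # 100 - the theoretical best from a second card
```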

With numbers like these we simply can’t recommend AMD’s CrossFire implementation when compared to Nvidia’s. The results seemed to change nearly every time we ran a benchmark, making it impossible to get consistent numbers. AMD has a lot of work to do to make CrossFire work consistently across a wide range of games.


So there you have it – in many instances, multi-GPU setups can provide incredible performance gains, provided you go with Nvidia; AMD still has a lot of work to do. Yet it’s worth remembering the cost involved. Not only do you have to buy an identical graphics card, you’ll probably also need to upgrade your PSU. Then there’s the noise factor, which some gamers hate. Thankfully the dreaded driver problems of the past seem to have been mostly cleared up, and multi-GPU setups are far more reliable than they ever were. However, there are still a handful of games that do not scale well with multiple GPUs; it seems we got lucky with our picks.

Still, for the ultimate gaming machine, serious gamers will probably choose multi-GPU. It allows for silky smooth framerates at high resolutions, and enables features like DSR without a crippling performance hit.

Why not SLI your system today? Call 0115 9279064 for more details or why not visit us at

