Now that DirectX 11 has been around for more than a year, it’s worth assessing how well it’s doing in terms of performance and cost. Before we do that, let’s briefly recap the major features DirectX 11 has brought to the table.
Hardware tessellation allows games to make something from very little. When a 3D object, like a creature or person, is built, it consists of many triangles. However, any given scene has practical limits on how many triangles it can contain.
Hardware tessellation creates additional triangles by interpolating them from existing geometry. Artists can put cues into a triangle mesh which tell the hardware just how much additional geometry to add, and how to add it. Judicious use of hardware tessellation is great: heads are round, hilltops are round, curved objects look more realistic. If the artists or 3D programmers are careless, though, a DX11 graphics card can end up trying to heavily tessellate everything. As was seen with Crysis 2’s addition of DX11 tessellation, this can result in stupidly excessive tessellation that hammers your graphics card without any significant benefit, like adding zillions of triangles to a flat surface.
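To see why careless tessellation factors get expensive so fast, here’s a toy sketch in plain Python (not actual GPU code, and deliberately simplified: real hardware tessellators work on patches, not midpoint subdivision). Each subdivision level splits every triangle into four via its edge midpoints, so triangle count grows as 4 to the power of the level. Subdividing a flat triangle illustrates the Crysis 2 problem: lots of new geometry, zero visual benefit.

```python
def midpoint(a, b):
    """Midpoint of two 3D points."""
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangles, levels):
    """Split each triangle into 4 smaller ones, `levels` times over."""
    for _ in range(levels):
        out = []
        for a, b, c in triangles:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            # One triangle becomes four: three corners plus the center.
            out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        triangles = out
    return triangles

# One flat triangle, tessellated four levels deep:
flat = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
print(len(subdivide(flat, 4)))  # 256 triangles, still perfectly flat
```

Four levels turns one triangle into 256; a few more levels and you’re into the tens of thousands, which is exactly the kind of work a GPU can burn on a surface that never needed it.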
Multithreaded rendering is something of a misnomer, because the CPU doesn’t actually handle graphics rendering. However, the CPU does figure out what the GPU needs, batches up a bunch of GPU commands and ships them to the GPU. In previous versions, this task of batching up and sending GPU commands to the graphics card was mostly handled by one core – so the graphics card was often left drumming its fingers impatiently while waiting for the CPU to send it something to do.
Multithreaded rendering allows all the cores to handle this batching of GPU commands and shipping them off. The result is better efficiency all around and more effective use of the graphics hardware. The neat thing about this feature is that if you’re running a DirectX 11 game on an older DX10 or DX9 card, you’ll still get multithreaded rendering, even though other DX11 features won’t work.
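The pattern is easy to sketch. Here’s a toy analogy in standard Python (not Direct3D – in DX11 terms, the workers would be recording into deferred contexts and the final step would be the immediate context executing the command lists). Each worker records its own command list independently; only the final, ordered submission is serialized. The `record_commands` and `submit` names are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(object_id):
    """Stand-in for per-object draw setup done on one CPU core."""
    return [f"set_state({object_id})", f"draw({object_id})"]

def submit(command_lists):
    """Single, ordered submission to the 'GPU' -- the only serial step."""
    gpu_queue = []
    for cmds in command_lists:
        gpu_queue.extend(cmds)
    return gpu_queue

# Four worker threads record command lists for eight objects in parallel,
# then everything is handed to the GPU in a well-defined order.
with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_commands, range(8)))
print(len(submit(lists)))  # 16 commands reach the GPU
```

The key point is that the expensive part – figuring out state changes and draw calls per object – no longer bottlenecks on a single core; only the cheap final hand-off does.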
The GPU consists of many small, very fast floating point-capable cores running in parallel. That makes it ideally suited for 3D graphics rendering. However, a number of other compute tasks can also take advantage of the hundreds of small floating point cores on a GPU, resulting in better performance than if those same tasks were running on the CPU’s x86 cores.
Certain kinds of computation in games are examples of this, like physics calculations. The open source Bullet Physics library, for example, can take advantage of DirectCompute to handle some physics chores. Specialized blur effects, like a background bokeh filter or depth-of-field effect, run more efficiently on the GPU.
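Why do blurs map so well to a GPU? Because they’re embarrassingly parallel: every output pixel depends only on its neighbors, so each one can be computed by a separate GPU thread. Here’s a toy one-dimensional box blur in plain Python (not DirectCompute – a real compute shader would run the body of `blur_pixel` once per thread, with no loop).

```python
def blur_pixel(pixels, i, radius=1):
    """Average of a pixel and its neighbors, clamped at the edges."""
    lo, hi = max(0, i - radius), min(len(pixels), i + radius + 1)
    return sum(pixels[lo:hi]) / (hi - lo)

def box_blur(pixels, radius=1):
    # On a GPU this loop disappears: one thread per output element,
    # all running at once. No output depends on any other output.
    return [blur_pixel(pixels, i, radius) for i in range(len(pixels))]

print(box_blur([0, 0, 9, 0, 0]))  # [0.0, 3.0, 3.0, 3.0, 0.0]
```

A CPU has to walk that loop a handful of elements at a time; a GPU with hundreds of cores can attack the whole image at once, which is why these effects are cheap under DirectCompute.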
There are a lot of smaller enhancements, but these three are the biggies.
From the number of PC game titles supporting the API, DX11 seems to be doing pretty well. You see quite a few new games ship with full support for the latest DirectX API. There are notable exceptions: Skyrim, which did very well on the PC, shipped without DirectX 11 support. There are several reasons a game developer might choose not to use DX11 in a new title:
- They’re using an older game engine
- The game is a direct port from a console title, with no added graphics features
- The development schedule doesn’t leave time for implementing a new API, and the game itself isn’t that graphics intensive.
Nevertheless, the number of DirectX 11 capable titles is growing. Perhaps more interesting is the number of developers who avoided DX10, but are adopting DirectX 11. That could be because of DX10’s association with the sluggish Windows Vista, or it could be that DX10 really didn’t offer a lot beyond DX9. DirectX 11 is another animal entirely.
Now that we have some base understanding of DX11, let’s talk about the cost from a user perspective. I’m not talking about performance comparisons between DX11 and DX10 – those have been done to death. I want to home in on what it costs you, the user, to make the shift.
First, what do I mean by cost?
The first is money. This is the obvious one. If you’re not already the owner of DX11 hardware, then you’ll have to upgrade if you want to take advantage of the new API features in games.
The second one is what I’ll call the expectation cost. This is kind of like opportunity cost, but in addition to factoring in issues like time spent during the upgrade and foregoing buying something you might have wanted instead of a new graphics card, there’s the issue of your own expectations. Here are a few examples of what I mean.
- Early DX11 hardware. The first DirectX 11 graphics cards to ship were the Radeon HD 5870 and 5850s back in 2009. At the time, they seemed blazingly fast. They offered a ton of shaders inside the GPU, and performed impressively with older games. But they suffered a bit with some early DirectX 11 titles. Turning up all the features on STALKER: Call of Pripyat, for example, proved to be something of a slideshow.
- The Nvidia Effect. Eventually, Nvidia shipped its Fermi generation GPU in the form of the GTX 480. Yes, the GTX 480 was a power hog, and ran hot, but it was also a tessellation monster. So several games shipped (HAWX 2, later the Crysis 2 update) that just threw triangles at the tessellation engine with casual disregard. AMD cried foul, and so did owners of AMD GPUs.
- The ‘dial it up to 11’ effect. I’m referring to the old joke from Spinal Tap, not the 3D API here. You get a new graphics card, and a new DirectX 11 game. Look how pretty it looks! Urp, it’s also kind of unplayable. Let’s dial it back down.
- DirectX 11 for $100! Both AMD and Nvidia shipped entry level cards that boasted of running DirectX 11 for just a C-note. Except that they really didn’t, because… yes, another slideshow.
- Buy more cards! After you’ve determined that you really can’t run The Witcher 2 or Total War: Shogun 2 with all the eye candy cranked up, the solution: buy another card and run them in dual GPU mode! The graphics card companies will love you!
- More new hardware! The Radeon 6000 series, and now the 7000 series shipped. Nvidia updated Fermi to the GTX 5xx cards, which ran cooler and were pretty damned fast. The HD 7000 is the first single card I’ve seen that can run high end DX11 games with all the eye candy cranked up on 1080p monitors. Heck, even The Witcher 2 runs with decent frame rates. We only had to wait for three generations of hardware.
If all of this sounds like I’m disillusioned, I’m not. But looking at all of this from the perspective of the average user, and looking back over the past couple of years, I return to the basics of what I’ve always suggested, which goes something like this:
Once a new API comes out, so will new hardware. There’s a symbiotic relationship between Microsoft and the GPU makers, after all. But don’t buy a GPU that’s the first generation to support a new API and expect great performance in that API. Instead, buy it because you need solid performance in current games. New GPUs will typically run existing games very, very well. Once games running the new API come out, keep your expectations in check, and only use a few features. Typically, if the game supported tessellation, I’d enable that – but not advanced lighting or compute shaders.
If the new API really has you turned on, expect decent performance only from the second generation of GPUs supporting said API.
Follow those rules of thumb, and you may not save money. But your blood pressure and heartburn will be kept in check.