Experienced Points: What Does the End of Moore's Law Mean for Gaming?
No discussion of computer technology is complete without a terrible car analogy, so here's mine: Imagine you have two errands to run: "get groceries" and "pick up the kids from school". If you had two cars (with drivers, obviously) you could do both tasks at the same time. One car gets the food, the other gets the kids. But if your errands were instead "pick up the kids from school" and "take the kids to soccer practice", then they can't be done simultaneously, and the extra car would be useless. The second errand can't start until the first one finishes. In computing, tasks that must be done in order like this are referred to as "serial".
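If you want to see the car analogy in actual code, here's a toy sketch in Python. The function names are mine, invented purely for illustration; the point is just that independent tasks can go on separate threads, while a dependent task has to wait its turn.

```python
import threading

results = []

def get_groceries():
    results.append("groceries")

def pick_up_kids():
    results.append("kids")

def take_kids_to_soccer():
    # Serial dependency: this step is only meaningful after
    # pick_up_kids() has finished. No second "car" can help.
    assert "kids" in results
    results.append("soccer")

# Independent errands: two "cars" (threads) work at the same time.
t1 = threading.Thread(target=get_groceries)
t2 = threading.Thread(target=pick_up_kids)
t1.start()
t2.start()
t1.join()
t2.join()

# Dependent errand: must run after the school pickup completes.
take_kids_to_soccer()
```

The first two tasks may finish in either order, which is fine because neither depends on the other. The third one is the serial part: throwing more threads at it buys you nothing.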
The point is that we've pretty much pushed conventional multi-threading about as far as it can go. We've offloaded most heavy-duty non-serial tasks to background threads. This includes stuff like animation, sound, polygon pushing, and maybe some AI pathfinding. Every thread adds to the overall complexity that the programmer has to manage, and all of the big jobs are already done. All that's left are difficult tasks for very small gains. If someone made a thirty-two core processor tomorrow, it wouldn't do anything to speed up your games because no normal game could keep that many processors busy.
The one area that can always use more cores is the graphics processor. Graphics processing - taking millions of triangles and turning them into a single frame of gameplay for you to look at - can use as many processors as we throw at it. In some extreme theoretical case, you could have one processor for each pixel on the screen. Yes, that graphics card would be about thirty square meters in size and have the power draw of a small neighborhood. (Based on the current core density of 2048 CUDA cores in 114 square cm on the NVIDIA GTX 980.) But the point is that you can keep adding graphics cores and get more pixels and higher framerates.
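You can sanity-check that back-of-envelope figure yourself. This snippet uses the column's own numbers (2048 cores in 114 square cm); the screen resolutions are my assumption, since the column doesn't name one.

```python
# Back-of-envelope: how much silicon would "one core per pixel" take,
# at the core density cited above (2048 CUDA cores in 114 cm^2)?
cores = 2048
card_area_cm2 = 114.0
cores_per_cm2 = cores / card_area_cm2

for name, width, height in [("1080p", 1920, 1080), ("4K", 3840, 2160)]:
    pixels = width * height
    area_m2 = pixels / cores_per_cm2 / 10_000  # convert cm^2 to m^2
    print(f"{name}: one core per pixel needs ~{area_m2:.0f} m^2")
```

Depending on the resolution you pick, this lands somewhere between roughly ten and fifty square meters, which puts the column's "about thirty square meters" squarely in the right ballpark.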
That's assuming the general processor - the one running the game - can keep up, which it can't. I don't have any proof, but I strongly suspect that the current-gen console games that can't nail 60fps are limited by their CPU, not their graphics throughput.
Which means that we're running into a soft ceiling on general computing power (which games are hungry for) while we have plenty of graphics power, which is the one kind of power we don't need. Graphics already look amazing, and the really taxing high-end graphics are so expensive to produce that only the top AAA developers can afford to use them. And having the power to draw tons of frames isn't really useful if the CPU can't run the game fast enough to make those extra frames happen. On top of all that, it's now really hard to notice improvements to graphics technology, because games already look so good. So even if you quadrupled the graphics power, it wouldn't mean anything to the consumer. It wouldn't make the game smoother, and it would only look a tiny bit better even if developers could afford to put the power to use. The kind of power we can have is the power we really don't need.
So the end - or "slowing down", if you prefer - of Moore's Law isn't going to mean anything right away. But it does mean console generations should last longer. (Sony thought the PS3 was going to be a ten-year console. They miscalculated, but it's more likely to happen for the PS4.)
The two wildcards here are VR and 60fps gaming. If VR takes off, it might give us a push into another console generation. Likewise, if Sony and Microsoft decide that the general public, and not just the hardcore, wants 60fps games, that might push us into another generation.
For me, this new status quo is pretty great. I'm not going to miss the '90s, when I needed to buy a new PC every two years just to keep up. And I'm also not going to miss the aughts, when I needed a new graphics card every two years. We're finally entering a time where we can worry less about hardware and more about the games.
(Have a question for the column? Ask me!)