Experienced Points
What Does the End of Moore's Law Mean for Gaming?

Shamus Young | 31 Aug 2015 16:00

This is a very strange time for computer technology. I mean, it's always a strange time for one reason or another, because the evolution of the computer has been so unlike other technologies that nobody really has any frame of reference for how things ought to work or what will happen next. The only sure thing in computers is that they will keep getting exponentially faster.

Until now-ish.

And that's why this time is strange. The only thing we were ever sure of is no longer a sure thing.

I'm sure most of you have heard of Moore's Law, but just for context: In 1965, Intel co-founder Gordon Moore observed that roughly every two years we'd be able to get twice as many circuits onto the same size computer chip. Over the decades the rule gradually morphed into the more informal idea that "computers get twice as fast" every two years.

But this sort of undersells the extreme gains in performance we've experienced. Yes, we could fit twice as many circuits onto the same size chip, but at the same time we were ramping up clock speeds. So not only did you have twice as many circuits, capable of doing roughly double the work per clock tick, but they were also doing that work twice as fast. On top of this, the newer devices would have roughly twice the memory and twice the storage. (Magnetic drives didn't follow quite the same growth curve, but it's close enough for our discussion here.)

It's like having a car with twice the horsepower, twice the fuel efficiency, twice the braking power, twice the tire grip, half the weight, and twice the aerodynamic efficiency. The resulting car is a lot more than just "double" the speed of the previous one. While there are several different systems for trying to measure the overall "usefulness" of a computer, there's no good way to get an apples-to-apples comparison across computer generations. As the machines have gotten faster, we've asked them to do more things, and the software we run has gotten less efficient.

The point is, computers have done a lot more than just double in "speed" or "power" every two years.

But like all good things, this trend couldn't last. About a decade ago, clock speeds stopped climbing exponentially and pretty much leveled off. (If they had continued, our computers would be cranking at something like 64GHz, instead of being stuck somewhere around 4GHz. You'd also need a liquid nitrogen cooling system to keep them from melting your computer.) Making that many circuits go that fast generates too much heat, and we don't have a good way to get rid of it. This is even more true now that so many chips are aimed at mobile devices, where heat and power consumption are far greater concerns than raw processing power. At the same time, circuit density is climbing much more slowly. Things are still getting faster (by way of getting smaller), but progress is more incremental and less exponential.

So what does this mean for games? I'm getting there.

When we couldn't ramp up the clock speeds any more, we started packing in more cores. If we can't make the new CPU twice as fast, we'll make it the same speed but give you two of them. The problem with more cores is that they sit idle unless the developer can put them to use by breaking their game into multiple threads that run in parallel.
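As a rough illustration, here's a minimal C++ sketch of that idea. The job names and their trivial bodies are placeholders of my own, not taken from any real engine: three per-frame jobs that don't depend on each other get handed off so they can run on separate cores at the same time.

    #include <future>
    #include <iostream>

    // Placeholder per-frame jobs that don't depend on each other,
    // so each one can run on its own core.
    void UpdateParticles()   { std::cout << "particles updated\n"; }
    void MixAudio()          { std::cout << "audio mixed\n"; }
    void UpdatePathfinding() { std::cout << "paths recomputed\n"; }

    int main() {
        // Launch each independent job; with spare cores they run in parallel.
        auto particles = std::async(std::launch::async, UpdateParticles);
        auto audio     = std::async(std::launch::async, MixAudio);
        auto paths     = std::async(std::launch::async, UpdatePathfinding);

        // Wait for all three to finish before starting the next stage of the frame.
        particles.wait();
        audio.wait();
        paths.wait();
    }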

But it's not quite that easy. You can't just keep chopping videogame logic into threads, because some tasks have to be done before or after other tasks. The game needs to process user input, see that the player fired their weapon, calculate the trajectory of the bullet, and apply damage to the victim. You can't really spread that across multiple cores, because the steps must be done in order. You can't calculate damage until you know who the bullet hit. You can't plot the trajectory of the bullet until you know the player fired, and you can't know they fired until you process their input.
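Here's the same kind of sketch for that bullet example, again with made-up types and functions. Each step consumes the previous step's result, so there's nothing here to hand off to a second core.

    #include <iostream>

    // Made-up types and functions, just to show the dependency chain.
    struct Input  { bool fired; };
    struct Bullet { int targetId; };   // -1 means the shot hit nothing

    Input  ReadPlayerInput()            { return {true}; }
    Bullet TraceBullet(const Input& in) { return {in.fired ? 42 : -1}; }
    void   ApplyDamage(const Bullet& hit) {
        if (hit.targetId >= 0) std::cout << "damage applied to entity " << hit.targetId << "\n";
    }

    int main() {
        // Each call needs the previous call's result, so these steps can't be
        // split across cores; they have to run one after another.
        Input  input  = ReadPlayerInput();
        Bullet bullet = TraceBullet(input);
        ApplyDamage(bullet);
    }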
