Next-gen AI and Next-gen BS


Back in the programming period of my career, my most interesting days were when my boss would come to me and ask, “Shamus, how hard would it be to add an option to do X?” In this case, X represents something impractical, unknown, or difficult. For example, “Make the program run twice as fast.” (He never actually asked me to do that, but it makes this example easier to understand. Also I should add that he was an awesome guy, even when asking for implausible or challenging things.)

I couldn’t honestly tell him it was impossible, because it’s perfectly reasonable to assume that with the right tricks and optimizations you could get that kind of performance increase. You might have to resort to John Carmack-level ingenuity to make it happen, but the opportunities are there. And I also couldn’t tell him how long it would take, since I didn’t know how to do it yet. The truthful, honest answer to the question was usually me brainstorming various ways I’d attack the problem, which wasn’t the kind of simple “It would take X days to complete” estimate he was looking for. I’m sure people in other careers run into the same thing.

The thing is, I loved answering these questions and considering the problems, and I was always disappointed if he didn’t tell me to dive into it for a couple of weeks and see what I could do.

These questions about AI are a lot like the questions from my boss; the real answers are probably long, technical, and riddled with exceptions and footnotes. It might involve reading or authoring a white paper. But I can take a stab at clearing up some of the confusion, and hopefully help you to spot instances where marketing wizards are trying to dazzle you with bullshit. I’m not an AI programmer, but I’ve dabbled a bit.

We use the term AI for a lot of different things, to the point where we could spend the rest of this column debating different definitions and still not have something that everyone would accept. Just to avoid that, I’m going to divide video game AI into two broad categories: Strategy AI and Combat AI. Those are not even remotely the only two types and that’s not what their designers call them, but I’m writing this for you, not other coders. These two systems are the big ones that come to mind when thinking about games, so let’s stick to them.

These two AI problems are fundamentally different. Strategy AI is the kind of AI that plays StarCraft, Civilization, or Master of Orion. This kind of AI is hard. Hire a pro. Maybe hire a couple. A single-player strategy game will live or die based on its AI. I’ve never written something like this, but like my boss asking me to double performance, I know this is a serious challenge without needing to consult Google first. Getting the computer to use all in-game resources efficiently in wildly divergent circumstances against unpredictable humans in a way that’s interesting and varied is an incredibly difficult problem to solve. It’s such a hard problem that in the early days programmers usually cheated and gave the AI extra resources to make up for the massive cognitive handicap.
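That old-school resource cheat is simple enough to sketch. This is a made-up illustration, not code from any real strategy game; the bonus table and function name are invented.

```python
# The classic strategy-AI "cheat": rather than making the AI smarter,
# quietly give it extra resources per turn. Multipliers are invented
# for illustration.
AI_RESOURCE_BONUS = {"easy": 1.0, "normal": 1.25, "hard": 1.75}

def collect_income(base_income, is_ai_player, difficulty="normal"):
    # Human players earn the honest amount; the AI earns more to
    # compensate for its cognitive handicap.
    if is_ai_player:
        return base_income * AI_RESOURCE_BONUS[difficulty]
    return base_income
```

The player never sees the multiplier, which is exactly why it felt like cheating once people figured it out.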

Strategy games are valued for their depth. People love playing games where it’s hard or even impossible to discover and perfect a single overarching strategy. They want layers of gameplay systems to consider and lots of interplay between them. “Producing more food in the short term will let me expand faster, but I’ll need to put enough into my military or I’ll get crushed, and I need to invest in science for the long term or all those bases and units will be worthless.”

This depth comes back to bite the eager game designer in the ass, because the deeper the gameplay, the harder it is to make the AI. Every juicy system that the player can explore adds yet another dimension to the AI puzzle.

But while this kind of AI is hard to design, hard to code, and slow to test, the actual CPU cost usually isn’t a big concern. We’ve had some really good AI in the past, on machines a lot slower than what we have today. We’ve got plenty of cycles to spend on AI if we want to. Frankly, I seriously doubt AI programmers bumped into the processing limits of the previous generation, so I don’t know that this new one is going to open any new doors for us.

At the other extreme, we have Combat AI. This is the AI that drives all those mooks you mow down in a shooter: Helghast, Combine, Covenant, NCR, Terrorists, and so on.


While strategy AI starts out stupid and requires a massive effort to make it do anything remotely stimulating, a combat-type AI has the opposite problem. It starts out godlike. The systems of a combat AI are incredibly simple and their goal is straightforward: “Shoot player with gun.” The game already knows where you are, and doing the math to plot a line from the gun to the player’s noggin is trivial. The fastest, easiest AI to write is one that headshots the player with pinpoint accuracy the instant they come into view, and keeps doing so until the player is dead. That also makes for an awful game, which is why designers generally don’t stop there.
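To show just how trivial that "godlike" baseline is, here's a sketch of perfect aim. The function name is mine; the math is the whole trick.

```python
import math

def perfect_aim(gun_pos, head_pos):
    # The "flawless killbot" baseline: the game already knows both
    # positions, so perfect aim is just a normalized direction vector
    # from the gun to the player's noggin.
    dx = head_pos[0] - gun_pos[0]
    dy = head_pos[1] - gun_pos[1]
    dz = head_pos[2] - gun_pos[2]
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)
```

That's the entire "AI". Everything designers add after this point is there to make the bot worse at its job in interesting ways.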

Unlike with a strategy-oriented AI, making the AI “good” in this case involves making it less efficient. We need to give this flawless killbot some human fallibility. We want it to miss sometimes, based on speed and distance. But we want it to miss the way a human would miss. We don’t want it to just spray bullets randomly like its hands are vibrating. It should miss a lot at first, but eventually settle down and hit the target reliably as long as the target holds still. Because that’s basically how humans shoot.
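One common way to model that "settle down over time" behavior is an error radius that decays the longer the AI tracks a stationary target. This is a minimal sketch of the idea; all the names and tuning constants here are assumptions, not any shipped game's values.

```python
import math
import random

def aim_error_radius(track_time, distance,
                     base_error=2.0, settle_rate=0.5, min_error=0.05):
    # Error starts large and decays exponentially the longer the AI
    # has been tracking; farther targets stay sloppier. All constants
    # are invented tuning knobs.
    settled = base_error * math.exp(-settle_rate * track_time)
    return max(min_error, settled) * (distance / 10.0)

def human_style_shot(target_pos, track_time, distance, rng=random):
    # Offset the "perfect" shot by a random amount inside the current
    # error radius, so misses cluster near the target like a human's.
    r = aim_error_radius(track_time, distance)
    return (target_pos[0] + rng.uniform(-r, r),
            target_pos[1] + rng.uniform(-r, r))
```

The exponential decay is what gives you "misses a lot at first, then reliably connects if you stand still", and the floor on the error keeps the bot from ever becoming a literal aimbot again.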

You have to apply this same sort of “simulated fallibility” to all the other parts of the AI: detecting the player, tracking the player, choosing cover, and working with teammates. The question isn’t “What is the optimal action?” but “What is the most human-seeming action?”

So getting back to this “new generation” business: I have never seen a combat-type game where the AI required anything remotely special in the way of processing power. A lot of people (myself included) would probably say the original F.E.A.R. was the high-water mark (or very close to it) for squad-based AI combat. That ran fine on 2005 hardware, and even then I’m sure those wizard AI routines were barely a blip of the overall CPU usage. The AI required to decide whether to go up a ladder is peanuts compared to the power needed to animate the 3D model of the soldier doing the climbing.

Last year the pre-release hype for the new Thief promised that modern CPU power would make more advanced AI possible. This claim would be outrageous if it weren’t so hilarious. It’s like claiming that going to the gym will give you the muscles to carry around MP3 players loaded with more music than ever before. It’s such a strange claim that you don’t even know which part you’re supposed to be arguing with. For the record, the “super AI” in the new Thief keeps track of how many times it’s been spooked by noises or partial glimpses of the player, and if it gets spooked three times it will go aggressive and begin actively searching. It’s an interesting gameplay addition and I approve of it, but that was completely possible using 1998 hardware. We’re talking about a couple of bytes of memory and a few lines of code, here. Heck, I’m sure 1988 hardware would have been able to handle it.
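In fact, the whole mechanic fits in a sketch like this. This is my guess at the structure, not Thief's actual code, but it makes the point about how little memory and logic is involved:

```python
class GuardAlertness:
    # A guard tallies "spooks" (odd noises, half-glimpses of the
    # player) and switches to actively searching on the third one.
    # The threshold and names are illustrative.
    SPOOK_LIMIT = 3

    def __init__(self):
        self.spook_count = 0
        self.aggressive = False

    def spooked(self):
        # Called whenever the guard hears a noise or briefly
        # glimpses the player.
        self.spook_count += 1
        if self.spook_count >= self.SPOOK_LIMIT:
            self.aggressive = True
```

One counter, one flag, one comparison. Nothing in there would have strained a machine from the NES era, never mind a modern CPU.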

Note that I’m not saying that good AI is easy to write. It’s just that no matter how difficult it might be to develop, once the game is running the AI is not eating CPU cycles the way sound, animation, particles, or physics simulations are.

Whenever marketing tells you that we’re going to need some brand-new technology to make the next generation of AI, they are probably suffering from some sort of truth deficiency. (They might not be lying. They might just be baffled by the technology and doing their best with what they know.) If we’re talking about combat AI, then this is pretty much a guarantee. The one exception where they might be right is when we’re playing a game with lots and lots of actors – perhaps battles with hundreds or thousands of participants. If the AI has to pilot a lot of ships or make large groups of people fight? Then yeah, that’s going to cost you some processor cycles. Although it’s also going to be animating all that stuff, so again it’s likely the AI will still be a small portion of the overall cost.

You should probably just default to being extremely skeptical anytime someone says anything about horsepower near the start of a console generation. (Or if you’re more of a PC player, then always.) Manufacturers need to sell these machines as fast as possible. Publishers want to sell you the game by telling you how awesome and advanced it is. Developers don’t want to have to straddle multiple generations if they can help it. (Like developing for PS3 and PS4 at the same time.) Everyone has a reason to tell you that $500 purchase is justified and necessary. And if they can do so by making difficult-to-disprove technical claims? Then so much the better.

Shamus Young is the guy behind Twenty Sided, Spoiler Warning, and The Witch Watch.

