Human Ingenuity Beats AI’s Machine Precision

Last week I talked about the StarCraft II match between professional players and the AlphaStar AI, where the AI beat the humans 10-0. About a month after those initial 10 games, there was a rematch where pro player MaNa beat AlphaStar. This match was streamed live, and you can watch it on YouTube. This newer version of AlphaStar had been modified so that it no longer had the full map vision I complained about in my previous column, although I don’t think that’s why it lost. But before I talk about the loss, let’s talk about Unreal Tournament.

Back in 1999, I was really into Unreal Tournament. I was still cursed with a dial-up internet connection, so I couldn’t play online. I had to settle for playing against the bots. The opponent AI in Unreal Tournament isn’t anything special by today’s standards, but at the time it was pretty cutting edge. Once I got bored mowing down waves of low-difficulty AI bots, I decided to face off against a single bot with its difficulty set to “Godlike.”

It was a strange match. What I quickly discovered was that the bot was basically unbeatable in the open. With machine-perfect aim and reflexes that enabled it to dodge around rockets, it was often able to kill me before I managed to inflict any damage. By the time I respawned and armed myself, it had gathered up a bunch of armor and power-ups, ensuring our next battle would be even shorter. I died again and again, and I never came close to killing it.

After lots of failure and frustration, I found a tight bottleneck on the level. If the bot wanted to reach me, the most direct route would take it through this hallway. I stood just off to one side so the bot wouldn’t have line-of-sight to me as it passed through the bottleneck. As soon as I heard the door open on the far side, I knew the bot was coming. Without waiting for it to come into view, I started launching rockets into the passage. The bot charged into the salvo and died instantly.

Once I’d found this spot, I was almost unbeatable. The bot never changed its tactics. It never took the longer route that would allow it to get behind me. It never stopped charging face-first into my missile salvo. It made the same idiotic mistake again and again. In the end, I was able to overcome its supreme reflexes by exploiting its inability to learn from its mistakes. Most hand-coded AI today, the kind you find in commercial video games, has the same problem. The AI can’t learn from its mistakes because that would require it to somehow rewrite itself.

Why Playing StarCraft is Hard for AI

https://www.youtube.com/watch?v=dF7bMsc2Li8

Among people who don’t play the game, there seems to be this impression that StarCraft is a really complicated game of Rock, Paper, Scissors. Supposedly, you win by memorizing the complex network of which units counter which other units. I can understand why people think this. That’s what the story missions in the single-player campaign teach you. But at the professional level, the real heart of the game involves balancing your economy.

The game has worker units that can harvest resources. Don’t worry about what the resources are called. For the purposes of this discussion, just think of them as “money.” The more workers you have, the faster you can bring in money. You can then use that money to build military units, or to build more workers so you can make money even faster. At first, the choice seems pretty straightforward. You can spend 50 money to make either a soldier or a worker, but there’s a hidden cost to that soldier. If you build the worker, it will then bring in 40 money every minute. That soldier doesn’t just cost you 50 money; it costs you 50 money up front and 40 money of lost potential income per minute. Since the additional money could be used to build more workers and allow you to bring in money faster still, the gains can add up quickly.
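If you want to see how quickly that compounds, here’s a minimal Python sketch of the trade-off using the article’s simplified numbers (units cost 50 money, each worker brings in 40 money per minute). The starting worker count, the one-minute ticks, and the absence of build times and supply limits are all simplifying assumptions of mine, not real StarCraft values.

```python
# Toy model of the worker-vs-soldier trade-off described above.
# Assumed numbers: units cost 50, each worker earns 40 per minute.

def simulate(minutes, soldiers_wanted):
    """Build soldiers first (up to soldiers_wanted), then spend every
    remaining 50 money on another worker. Return the final counts."""
    workers, soldiers, money = 6, 0, 0   # assumed starting economy
    for _ in range(minutes):
        money += workers * 40            # income for this minute
        while soldiers < soldiers_wanted and money >= 50:
            soldiers, money = soldiers + 1, money - 50
        while money >= 50:               # leftover money becomes workers
            workers, money = workers + 1, money - 50
    return workers, soldiers, money

# Ten minutes of pure economy vs. ten minutes after an early 10-soldier army:
print(simulate(10, 0))    # all workers: income snowballs
print(simulate(10, 10))   # early army: fewer workers, slower compounding
```

Even in this toy model, every 50 money diverted into a soldier early on is one fewer worker compounding income for the rest of the game, which is exactly the hidden cost described above.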

The longer you can hold off building an army, the stronger your economy will be. You can build systems that give you access to advanced units and expand to new areas of the map for even more income. The only downside to playing this way is that it leaves you vulnerable in the short term. If all you build is infrastructure, then your opponent will be able to march across the map and crush you with a small force. On the other hand, if you build an army too soon while your opponent continues to expand, then you’ll find yourself falling behind because of the way gains compound.

This means that your goal is to either build an army large enough to crush your foe in a single assault, or to build the minimum army required to prevent them from doing the same to you. To make that decision, you need to keep tabs on what your adversary is up to. If they start building an army, then you need to do the same.

Players will hide buildings from their opponents in an attempt to mask their true intentions, and they’ll work hard to sneak units into the enemy base so they can see what they’re up against. This is why you see commentators obsessing over the count, movement and activity of worker units in a professional match. These small actions seem inconsequential, but they hint at what the player might be planning.

This makes things tricky for an AI because it means a lot of your decisions need to be based on extrapolation. You need to look around the enemy base and have a good understanding of what you’re not seeing. If you don’t see any production buildings, then you need to realize they exist somewhere else on the map. Furthermore, the fact that your opponent is deliberately hiding them from you should tell you something. You need to be able to look at the things you do see in their base and guess what they’re trying to build and what sort of attack they have planned.

Neural networks have gotten pretty good at pattern-matching problems, but extrapolation is another challenge entirely. To extrapolate, you need to understand what’s possible within the rule space of the game in question, and you also need to be able to make informed guesses based on what you know about your opponent. You probably need to have a working theory of mind to be good at this sort of thing. Neural networks are doing amazing things, but they’re nowhere near that level yet.

Why AlphaStar Lost

As in its previous matches, AlphaStar quickly got the upper hand by using its superior speed and accuracy to manage its units. It also continued to favor a brute-force army of a single unit type and tended to move its units around as a whole rather than splitting them up. MaNa noticed these weaknesses, and was able to exploit them.

AlphaStar had a massive army and was moving towards MaNa’s base to crush him yet again. MaNa flew a couple of units directly into the heart of AlphaStar’s base. These units were not a serious threat to AlphaStar. They certainly would have done some damage, but AlphaStar would have been fine if it just ignored them and proceeded with its plan to crush MaNa’s base. Instead, AlphaStar turned the entire army around and marched it all the way back home to deal with these two attackers. As soon as the ground army arrived, MaNa flew his units away. With nothing left to do at home, the AlphaStar army once again began the march across the map.

Once the AlphaStar army had left home, MaNa returned with his two units and again began doing bits of annoying damage. In a game between humans, this type of attack is called harassment. It’s not designed to kill the opponent, but instead is intended to distract them and force them to build units to deal with it. AlphaStar was unable to cope with this type of harassment and fell for it again and again.

If AlphaStar were human, it would have been able to spot the pattern and adapt. It could have left just a handful of units at home to ward off this harassment. It could have built static defenses to keep the harassment at bay. It could have built one or two flying units to kill MaNa’s guerrilla strike force. It could have ignored the attacks and proceeded with its original plan to attack MaNa’s base. But instead, MaNa was able to force AlphaStar to bring its units home. AlphaStar kept falling for this trick until MaNa had built a large enough army to counter the AI.

In the end, this matchup reminds me a lot of that match I had with a Godlike bot 20 years ago. The machine was able to win several times due to superior precision and reaction times, but was defeated once I saw a weakness and devised a strategy to exploit it. Without the ability to learn on the fly, the AI was helpless against the new strategy.

It’s sort of sad that after 20 years, AI is still stuck in the same rut. AlphaStar is monumentally more sophisticated than the hand-coded AI I played against in 1999, but the game still came down to machine precision vs. human ingenuity. This isn’t because the AlphaStar team failed at their job; it’s because making artificial intelligence is hard.

Mother Nature worked on this problem for billions of years, and as far as we can tell, humans are the best she could come up with. Neural networks are only a few decades old at this point, and they’ve made some really impressive progress so far.

Developing AI is less about inventing more fun opponents for us to play against and more about producing systems robust enough to solve complex medical and automation problems that will make humans safer and healthier. It just happens that playing StarCraft is a really good way to test the efficacy of these new systems. And if that gives us some fun matches to watch in the meantime, so much the better.

