
Why StarCraft is the Perfect Battle Ground for Testing Artificial Intelligence

The real world is full of complex problems. Fantasy realms are training computers to solve them.

By Jennifer Walter
Dec 2, 2019 10:30 PM (updated Dec 3, 2019 5:03 PM)
A Protoss-versus-Zerg showdown in StarCraft II. Credit: DeepMind

DeepMind, the artificial intelligence lab owned by Google's parent company, Alphabet, debuted a computer program in January capable of beating professional players at one of the world's toughest video games. StarCraft is a military science fiction franchise set in a universe rife with conflict, where armies of opponents face off to become the most powerful. And DeepMind's program, called AlphaStar, reached StarCraft II's highest rank — Grandmaster. It ranks above 99.8 percent of officially ranked human players, according to a study published in the journal Nature in October.

StarCraft is one of the most popular and difficult esports in the world. And that status has spurred a smattering of programmers to use it as a training ground for artificial intelligence. It's not just corporate research groups like DeepMind putting StarCraft to the test, either. Amateur gamers and academics have also taken on the challenge of beating human StarCraft players with autonomous bots.

But why StarCraft? On its face, the video game has the standard hallmarks of its fantasy counterparts: strife in a post-apocalyptic world, a race to amass power and a battle to defeat your enemies. But instead of controlling a single agent, as in first-person shooters like Halo or Overwatch, players manage a whole economy of builders, fighters and defense systems that work symbiotically to keep them from losing.

Although fantastical in nature, StarCraft's multi-faceted world creates complexities that mirror our own. And using the game as an incubator to train computers could help researchers build better bots with real-world effects.

Watch Your Back

Training AI algorithms to win games against humans has a long, storied history. Even before computers existed, people created illusory “robots” that could beat players at games like chess. In 1997, IBM’s Deep Blue defeated the world chess champion, and other powerful computer algorithms, like DeepMind’s AlphaZero and AlphaGo, followed suit in defeating human board game masters at their craft.

But video games bring complexity to the next level. In StarCraft, players compete as one of three races — Terran, Protoss or Zerg — each with certain strengths and weaknesses. For example, Protoss are powerful fighters but don't spawn quickly. Zerg, on the other hand, spawn the quickest but aren't strong fighters, so their power comes in numbers.

And besides choosing a race with its particular strengths and weaknesses, you also control multiple facets: workers gathering resources, builders creating defense systems and fighters attacking enemies. You have to keep an eye on your own units while making sure other players don't take advantage of your weak spots.

From those facets, researchers study how certain techniques lead to the most effective gameplay. In 2011, Memorial University of Newfoundland computer scientist David Churchill co-authored a paper on build-order optimization in StarCraft, studying how the order in which a player queues up buildings and units affects success in the game.
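
To get a feel for what build-order research boils down to, here is a toy Python sketch. The unit names, costs, build times and gather rate are all invented for illustration (not real StarCraft values), and the simulator is far cruder than the one in Churchill's paper, which used heuristic depth-first search rather than brute force.

```python
from itertools import permutations

# (mineral cost, build time in seconds) for a few hypothetical units.
# All numbers here are made up for illustration, not real game values.
COSTS = {"worker": (50, 12), "barracks": (150, 45), "marine": (50, 18)}
GATHER_RATE = 0.7  # minerals per worker per second (assumed)

def makespan(order, workers=6, minerals=50.0):
    """Simulate a build order serially and return total elapsed time."""
    t = 0.0
    for item in order:
        cost, build_time = COSTS[item]
        if cost > minerals:  # idle until enough minerals accumulate
            wait = (cost - minerals) / (workers * GATHER_RATE)
            minerals += wait * workers * GATHER_RATE
            t += wait
        minerals -= cost
        t += build_time
        minerals += build_time * workers * GATHER_RATE  # gather while building
        if item == "worker":
            workers += 1  # a new worker speeds up all future gathering
    return t

# Brute force over orderings of a small goal set; real planners prune
# this exponential space with depth-first branch-and-bound instead.
goal = ["worker", "worker", "barracks", "marine"]
best = min(set(permutations(goal)), key=makespan)
print(best, "->", round(makespan(best), 1), "seconds")
```

Even this tiny example exposes the trade-off at the heart of build orders: an early worker delays the army now but speeds up everything built after it.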

The research, Churchill says, gives us a clearer understanding of how algorithms work to solve problems in a simulated environment.

“There’s a certain sexiness to game AI that allows it to be digested by the general public,” Churchill says. And games also provide a way to test the “intelligence” of an algorithm — how well it learns, computes and carries out commands autonomously.

Beyond the Board

Before StarCraft, Churchill started tinkering with algorithms designed to defeat board games. The program he built for his doctoral thesis was designed to win a game called Ataxx, a 1990s-era arcade-style strategy game played on a virtual board. It was the first time he created a program that could play a game better than he could.

Since then, Churchill's research has focused on video game AI, with StarCraft being a favorite. One element that separates a board game AI from video game AI is deceptively simple: the player’s ability to see the entire landscape at once.

Unlike Ataxx, you can’t see the whole map in StarCraft without scrolling, which makes it harder to keep an eye on all your resources. It also makes it more difficult to see what your enemy is plotting — or, as Churchill says, engulfs you in the “fog of war.”

“You don’t know what your enemy is doing until you’re standing right next to them,” he says. It’s a closer representation of real life; in most scenarios, your knowledge of a problem won’t be omniscient.
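
In AI terms, the fog of war makes StarCraft a partially observable problem: the agent must act on a masked view of the map. Here is a minimal sketch of that masking, assuming a made-up grid world and sight radius.

```python
# Toy "fog of war": the agent only observes map cells within sight
# range of its own units. The grid and sight radius are invented.
SIGHT = 2  # sight radius in cells (assumed)

def observe(full_map, my_units):
    """Mask everything outside the union of each unit's sight range."""
    h, w = len(full_map), len(full_map[0])
    visible = [["?"] * w for _ in range(h)]
    for (row, col) in my_units:
        for x in range(max(0, row - SIGHT), min(h, row + SIGHT + 1)):
            for y in range(max(0, col - SIGHT), min(w, col + SIGHT + 1)):
                visible[x][y] = full_map[x][y]
    return visible

world = [list(r) for r in [
    "........",
    "..E.....",   # E = enemy base, hidden unless scouted
    "........",
    ".....M..",   # M = my unit
]]
for row in observe(world, my_units=[(3, 5)]):
    print("".join(row))   # the enemy base stays a "?" until scouted
```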

And games like checkers or chess don’t happen in real time — the board waits while a player ponders their move. But, as Churchill says, “in StarCraft, if I’m not shooting you, you’re shooting me.”

He even compares it to something as seemingly simple as soccer. If you’re standing around on the field, players will continue to kick the ball, goals will continue to be scored and the game will continue to progress. The same goes for StarCraft — regardless of whether you closely maintain your forces or actively fight your enemies, the game will continue with or without your intervention.

Taking on complex games like StarCraft can help scientists train algorithms to learn new skills in an environment with lots of variables. Churchill says video games can be a gateway to building better systems for image recognition, search suggestions or any task where an algorithm has to assist humans in making decisions.

“That level of complexity (in games) starts to approach what we see in the real world,” he says.

Bot Battleground

Since 2011, Churchill has organized a yearly international event called the AIIDE StarCraft AI Competition, where game enthusiasts and professionals alike come together to build and test StarCraft-playing bots. The competition uses StarCraft: Brood War, rather than StarCraft II, as its testing ground.

But the bots that teams build for AIIDE are different from projects like AlphaStar, Churchill says. Some are “true AI,” or bots that use neural networks to learn patterns and build on past knowledge to win a game. Others take a simpler approach, with hard-coded rules that instruct a unit to move a certain way if something specific happens during gameplay, as in the sketch below.
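
A rule-based bot of the simpler kind can fit in a few lines. This sketch is hypothetical, with an invented game state rather than a real StarCraft API, but it shows the flavor: every situation is matched by a hand-written condition.

```python
from dataclasses import dataclass

@dataclass
class GameState:
    # A made-up, simplified snapshot of the game -- not a real API.
    minerals: int
    workers: int
    soldiers: int
    enemy_visible: bool

def decide(state: GameState) -> str:
    """Hard-coded rules: each condition maps directly to one action."""
    if state.enemy_visible and state.soldiers >= 5:
        return "attack"
    if state.enemy_visible:
        return "retreat"          # too few soldiers to engage
    if state.workers < 12 and state.minerals >= 50:
        return "train_worker"     # grow the economy first
    if state.minerals >= 100:
        return "train_soldier"
    return "gather"               # default: keep collecting resources

print(decide(GameState(minerals=120, workers=12, soldiers=2,
                       enemy_visible=False)))  # -> "train_soldier"
```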

And every year, the organizers enforce a rule that teams must open-source their code after the competition. That way, competitors can build on past algorithms to make their bots stronger, smarter and more powerful.

Even with AlphaStar in the headlines, Churchill says the competition isn’t going anywhere. While the DeepMind team touts the algorithm’s high success rate, the resources poured into the project go well beyond what the average coder can muster.

“It’s an unfathomable undertaking,” Churchill says. And the challenges that remain show that bigger isn't always better.

Too Many TPUs?

When AlphaStar first debuted, the algorithm performed with super-human capabilities. It had certain advantages over humans; for example, the computer could see all its visible units without having to pan around the map to execute commands, and completed actions more precisely than a pro player clicking a mouse.

So, for the Nature paper, DeepMind put limitations on the computer's ability to simultaneously control its units. Other limits on the speed and abilities of the program were in place from the beginning to make it compete on a level closer to a human player. But even with those boundaries, AlphaStar was still capable of defeating professionals.
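
One way to enforce such a limit, sketched here with invented numbers rather than AlphaStar's actual settings, is to cap how many actions the agent may issue within a sliding time window.

```python
from collections import deque

class ActionLimiter:
    # Hypothetical sliding-window cap on agent actions; the window
    # length and action budget below are assumptions for illustration.
    def __init__(self, max_actions=22, window=5.0):
        self.max_actions, self.window = max_actions, window
        self.stamps = deque()  # timestamps of recent allowed actions

    def allow(self, now):
        """Permit an action only if the window's budget isn't spent."""
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()          # forget expired actions
        if len(self.stamps) < self.max_actions:
            self.stamps.append(now)
            return True
        return False                       # agent must wait, like a human

limiter = ActionLimiter()
# Try one action every 0.1 seconds for 10 seconds; count how many pass.
print(sum(limiter.allow(t * 0.1) for t in range(100)))
```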

And the power behind the machine was stronger than any StarCraft bot created before it. DeepMind created multiple automated players, each specializing in one of the races, and trained them for 44 days, starting from human game replays. The processing power came from Google’s third-generation Tensor Processing Unit (TPU), a massively powerful chip used in its machine learning programs for apps like Google Translate.

AlphaGo, the algorithm DeepMind designed to master the board game Go, used 4 TPUs. AlphaStar, on the other hand, uses a whopping 32.

“What they’ve created is an algorithm that only DeepMind can use,” Churchill says. For him, the next challenge is for researchers to scale down game algorithms so that they guzzle a little less energy and work a little more elegantly.

Team games, as opposed to one-on-one battles, could also pose a new challenge for the bots. And as algorithms continue to mesh with human players, there might come a time when humans and AI play on the same team.

But for now, games like StarCraft will continue to usher in research into how well machine learning can take on complex problems. For Churchill, the worlds of fantasy and reality are nearly indistinguishable.

“It’s all these pros and cons and pluses and minuses," Churchill says. "Every person has something they’re trying to maximize … you’re playing the game of maximizing the numbers. That’s what we’re doing in games."

Editor's Note: This story has been updated from an earlier version to correct the name of the AIIDE StarCraft AI Competition and to clarify the capabilities that DeepMind programmers gave AlphaStar.
