MIT Teaches Computer to Read, Conquer the Planet

| 13 Jul 2011 21:20

Researchers have taken humanity one giant leap closer to robotic Armageddon by teaching a computer how to read, understand and very effectively apply the manual for the strategy classic Civilization.

Sure, we all like to joke about the looming machine apocalypse, but when I found out that researchers at MIT had taught a computer to read - and worse, to apply the knowledge it gained from said reading in a simulation about conquering and quite possibly blowing up the entire world - well, let's just say I started to think that maybe it's not all that funny after all.

Regina Barzilay, associate professor of computer science and electrical engineering at MIT, along with her graduate student S.R.K. Branavan and David Silver of University College London, presented a report at this year's meeting of the Association for Computational Linguistics about teaching a computer to "read" through a program in which it learned how to play the PC strategy game Civilization. To play it alarmingly well, in fact.

"Games are used as a test bed for artificial-intelligence techniques simply because of their complexity," said Branavan, who was first author on this paper as well as one from 2009 based on the simpler task of PC software installation. "Every action that you take in the game doesn't have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways."

Game manuals, Barzilay added, are ideal for such experiments because they explain how to play but not how to win. "They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own," she said.

But the truly amazing-slash-frightening part of the whole thing is the fact that the computer began with a very limited amount of information - the actions it could take, like right- or left-clicking, the information displayed on the screen and a measure of success or failure - and no prior knowledge of what it was supposed to do, or even what language the manual was written in. Because of that blank-slate beginning, its initial play style was nearly random, but it gained knowledge as it progressed by comparing words on the screen with words in the manual and searching the surrounding text for associated words, slowly figuring out what they meant and which actions led to positive results.
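To get a feel for that learning loop, here's a deliberately tiny toy sketch - not MIT's actual system, and every name and number in it is made up for illustration. An agent keeps a weight for each (manual word, action) pair, acts mostly greedily with a bit of random exploration, and nudges a pairing's weight up or down depending on whether the action succeeded:

```python
import random

# Hidden "ground truth" standing in for what the manual's words actually mean.
# The agent never sees this mapping directly - only success/failure feedback.
MANUAL = {"build": "found_city", "attack": "move_army"}
ACTIONS = ["found_city", "move_army"]

weights = {}  # (word, action) -> learned association strength

def choose_action(word, explore=0.2):
    """Pick the highest-weighted action for a word, with some exploration."""
    if random.random() < explore:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: weights.get((word, a), 0.0))

def play_round(word):
    """Try an action for a manual word and reinforce it based on the outcome."""
    action = choose_action(word)
    reward = 1.0 if MANUAL[word] == action else -1.0  # success-or-failure signal
    key = (word, action)
    weights[key] = weights.get(key, 0.0) + 0.1 * reward
    return reward

random.seed(0)
for _ in range(500):
    play_round(random.choice(list(MANUAL)))

# After training, the word "build" should have become associated with founding
# a city, even though the agent started out acting essentially at random.
print(max(ACTIONS, key=lambda a: weights.get(("build", a), 0.0)))  # found_city
```

Correct pairings only ever gain weight and wrong ones only ever lose it, so the ordering locks in after a handful of rounds - a crude stand-in for how the real system's play drifted from random toward manual-guided.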

The augmented Civ-machine ended up winning 79 percent of the games it played, compared to a winning rate of only 46 percent for a computer that didn't have access to the written instructions. Some members of the ACL audience apparently criticized the report, saying the system performed so well because it was put up against relatively weak computer opponents, but according to Brown University Professor of Computer Science Eugene Charniak, that argument misses the point. "Who cares?" he said. "The important point is that this was able to extract useful information from the manual, and that's what we care about."

It's pretty heady stuff, with a more down-to-earth benefit for gamers being the promise of far more sophisticated computerized opponents in videogames. Instead of the exploitable preset routines we have today, we could in the not-too-distant future find ourselves squaring off against computerized opponents with the ability to actually learn, adapt and come at us with ever-evolving tactics and strategies. But the long-term prospects may not be so sunny. If that thing ever figures out how to play Alpha Centauri, we are screwed.

Source: MIT News, via Edge