In celebration of the release of Terminator Genisys, I have come to tell you that, yes, robots are going to take over humanity one day. In fact, that’s the entire story of humanity’s relationship with technology: human labor is slowly phased out and replaced with cheaper, easier-to-maintain machine labor. So Skynet doesn’t have to nuke us or build a gigantic army of kill robots. It just has to be better at what we do than we are. Robots and computers have already replaced us in factory work, complex calculations and data management, and even stock trading. And while we fragile humans believe we are safe in creative endeavors, because only humanity can harness the spark of inspiration, we are so totally wrong! Computers have already been programmed to produce visual art, music, and poetry, and now they can even design our games for us. The machines are coming for Magic: The Gathering!
By that I mean, a number of Magic: The Gathering fans (led by “Talcos” on MTG Salvation) have programmed deep recurrent neural networks to create new and interesting Magic cards. No humanity required. Mark Rosewater, your job is in jeopardy. Come with me if you want to live!
So how do you teach a machine to be creative? A good way to start is to mimic the human brain, which is basically what recurrent neural networks, or RNNs, do.
Bear in mind, I’m not a computer scientist, but I’ll do my best to describe, in simple terms, what an RNN does, to the best of my understanding. Essentially, any neural network will take a set of information and start looking for patterns. The patterns that it finds are then stored, much as your brain would store, say, visual information when you look around a room. That information then starts to decay over time, eventually being deleted if nothing reinforces it. If we were to use the human brain as an analogy, this is why you don’t have a photographic memory of everything you see all the time. Your brain quickly forgets things that aren’t reinforced. However, for things you see over and over again, details start to stick in your memory, and a computerized neural network operates very similarly. As patterns repeat, that information doesn’t decay, and you eventually end up with a set of information that means something and that is very similar, if not analogous, to actual learning. (For more information on how RNNs work, check out this informative talk by Alec Radford.)
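To make that concrete, here is a minimal sketch of one step of a plain (vanilla) recurrent cell. The card generators discussed here actually used LSTM networks, which add gating so stored patterns decay more slowly, but the core idea is the same: the new hidden state (the “memory”) mixes the previous state with the current input, and the output is a score for each possible next character. All the sizes and weights below are toy values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 16, 8  # toy sizes for illustration

Wxh = rng.standard_normal((hidden_size, vocab_size)) * 0.01   # input -> hidden
Whh = rng.standard_normal((hidden_size, hidden_size)) * 0.01  # hidden -> hidden (the "memory")
Why = rng.standard_normal((vocab_size, hidden_size)) * 0.01   # hidden -> output
bh = np.zeros(hidden_size)
by = np.zeros(vocab_size)

def step(x_onehot, h_prev):
    """One timestep: read a character, update the memory, score the next one."""
    h = np.tanh(Wxh @ x_onehot + Whh @ h_prev + bh)
    logits = Why @ h + by
    return logits, h

h = np.zeros(hidden_size)
x = np.zeros(vocab_size)
x[3] = 1.0                 # one-hot encoding of some character
logits, h = step(x, h)
print(logits.shape)        # (16,) -- one score per possible next character
```

Feeding a card’s text through this loop one character at a time is the whole “reading” process; training just nudges the weight matrices so the scores get better.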
Note, however, this is not exactly the same as human learning. RNNs like the one we will be talking about do not understand the concepts expressed by the data they are given, concepts that we as humans would find second nature. For example, even though this RNN can create Magic cards, it has no idea what the rules of Magic: The Gathering are. For that matter, it has no idea what the English language is, and can apply no intrinsic meaning to words, or even recognize that a string of characters is a word. It doesn’t even really have a concept of math. Numbers are just symbols to an RNN.
What it does know, however, are patterns. It can find patterns, and recreate those patterns in novel ways. That’s essentially the same thing we are doing when we create Magic cards. We know that two mana gets us a creature of about 2/2 power and toughness. Why? Because we have seen other creatures with that same mana cost and power and toughness in the past. We just have these extra layers of understanding, which describe what power, toughness, and mana actually mean.
So Talcos took every card in Magic history, compiled them into one JSON file, and fed it into an RNN to see what would come out. Before it can generate anything, though, the RNN needs to train by examining the data you give it. The longer you let it learn, the better your output data becomes.
For this, you need to specify a number of parameters, including how much of the data it will look at at once. Too little, and your RNN won’t detect meaningful patterns; too much, and your RNN will take years to run. In fact, RNNs are already so heavy on processing that GPUs are used to run them. Not only that, but you may run into a problem with the RNN becoming overfit and unable to innovate. Speaking of innovation, you also have to set your “temperature,” or, in other words, how risky the RNN will be when creating new cards. Too risky, and the RNN will completely ignore formatting and the laws of language. Not risky enough, and your RNN will simply print cards that already exist. It’s kind of a Goldilocks problem when you think about it: the parameters need to be set just right.
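Temperature is easy to see in code. This is the standard char-RNN sampling trick (not code from Talcos’ actual scripts): the network’s raw per-character scores get divided by the temperature before being turned into probabilities, so a low temperature sharpens the distribution toward the safest character and a high one flattens it toward a coin flip.

```python
import numpy as np

def sample_char(logits, temperature, rng):
    """Pick the next character index, with temperature controlling risk."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                          # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]                 # made-up scores for three characters
safe = sample_char(logits, 0.01, rng)    # near-argmax: almost always index 0
wild = sample_char(logits, 100.0, rng)   # near-uniform: anything goes
print(safe)  # 0
```

At temperature near zero you get reprints of the most likely text; crank it up and you get Mointainspalk.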
Very early on in the learning process, the RNN manages to figure out the format of a Magic card. Casting cost, name, type, rules text, and power and toughness are always in the same place relative to each other, and so that pattern becomes apparent very quickly. It also knows some basic English, as most rules text on Magic cards tends to follow the same format. Unfortunately, that’s about all the RNN knows at this point. It hasn’t made greater connections between casting costs, effects, or even card types.
This causes its output to be useless at best and garbled at worst. Talcos’ original run created such wonderful spells as Grirerer Knishing, a 4G Instant – Arcane whose effect is “Exile target creature you control.” I’m sure someone will make it good in some crazy combo deck.
The RNN also knows that reminder text comes after keywords, but can’t match the right reminder text to the right keyword. In fact, sometimes it even creates its own reminder text, and this can be just as kooky as the rest of the card. It interprets morph to mean “You may cast this card from your graveyard to the battlefield, you may pay 1. If an enchantment card, then put the rest of target creature card from your graveyard for its flashback cost. If exile is you sacrifice it unless you pay 1G. If you do, put a 3/1 green Soldier creature token onto the battlefield. Put it into your graveyard.” Could you follow that?
But perhaps most hilariously, the RNN has learned that the guys over at Wizards of the Coast love their new keywords. They love keywords so much that the RNN has started to create its own! One of Talcos’ first generated cards was a blue creature with Mointainspalk and Tromple, whatever that does.
Letting the RNN continue training overnight allowed it to start noticing larger patterns. For example, it started to learn “flavor.” It recognized that weenies and life gain were often found on white cards, and graveyard recursion was on black cards. Note, it has no idea what this actually means, just that the sequence of characters that refers to gaining life or putting 1/1 tokens into play is something often found on white cards, and the sequence of characters that refers to returning things from your graveyard tends to be found on black cards. This allowed it to print cards like Light of the Bild, a 2/2 Spirit with Flying for 2WW that has the ability “Whenever Light of the Bild blocks, you may put a 1/1 green Angel creature token with flying onto the battlefield.”
Light of the Bild is actually an interesting study in how the RNN interprets cards. For example, it understood a pattern of 1/1 creatures being generated by white effects, of blocking being something white does, of angels having flying and being something made by white, and of spirits having flying and being white. However, it didn’t particularly understand that the angel token itself had to be white, and since there are many instances of green 1/1 tokens in the game, it made the angel token green. Still, it’s a totally workable card that hasn’t ever been printed before and is pretty playable.
Even at this level of training, however, we end up with some interesting errors. Reminder text and new keywords continue to be a problem. Power levels on cards that don’t have easily recognizable textual patterns, such as lands, can get pretty crazy (it printed a land with no drawback that adds WUGG to your mana pool). There are also a lot of vanilla cards and color-shifted cards that operate a lot like other cards in Magic do, which I guess isn’t a problem considering that kind of mirrors Magic: The Gathering common design, but it doesn’t do much to show us that computers can be creative.
Since Talcos started his project, a number of other fans have come forth with suggestions to help his RNN generate more interesting and more creative cards. A lot of these suggestions involve refining the data that is fed into the RNN in order to make patterns more apparent. One suggestion involved removing all reminder text, which, of course, eliminates the problem of incorrect or misplaced reminder text. Another suggestion was to insert more reminder text, so that every instance of every keyword would always have reminder text after it. This would make the pattern of mechanics being linked with reminder text easier to learn. It also means that the RNN could generate new reminder text for new keywords, which it did! Turns out that Tromple means “Whenever this deals damage to a player, put a +1/+1 counter on it.” We now have a keyword for the Slith ability!
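The “strip all reminder text” pass is about one regex worth of work, since reminder text on real cards always sits inside parentheses. A sketch (the forum scripts may have done this differently):

```python
import re

def strip_reminders(rules_text):
    """Remove parenthesized reminder text, plus any space before it."""
    return re.sub(r"\s*\([^)]*\)", "", rules_text)

text = "Flying (This creature can't be blocked except by creatures with flying.)"
print(strip_reminders(text))  # Flying
```

Run that over every card in the corpus and the network never sees a parenthesis it could mismatch.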
Most methods of data refining involve making patterns much more obvious in the data set. For example, if you move a creature’s power and toughness closer to its type line, then the RNN will more easily recognize that creatures need power and toughness. It doesn’t have to scan through other rules text to notice the pattern. Similarly, since the RNN has no concept of math, changing mana costs to a unary form (i.e., 111UG instead of 3UG) will allow it to see the pattern that longer mana costs tend to be linked with more powerful or crazier abilities. Another suggested method was to use a cipher to turn every word in Magic into an alphanumeric code. This reduces longer words like “battlefield” into strings of just two or three characters, meaning the RNN has to search through less data in order to notice patterns. Of course, its output would then be coded, but you can just run the cipher in reverse to get back plain English. Speaking of plain English, you could also train the RNN on simple English first in order to make its sentences read more like the actual English language, but doing so might actually screw up the MTG-style formatting of rules text, which doesn’t necessarily hold to the rules of English.
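Here are sketches of two of those passes: unary mana costs and a word cipher. The exact encodings the forum scripts used may differ, and the helper names are my own.

```python
import re

def unary_cost(cost):
    """Rewrite the generic portion of a mana cost in unary: '3UG' -> '111UG'."""
    return re.sub(r"\d+", lambda m: "1" * int(m.group()), cost)

def build_cipher(words):
    """Assign each distinct word a fixed two-character hex code."""
    return {w: format(i, "02x") for i, w in enumerate(sorted(set(words)))}

print(unary_cost("3UG"))  # 111UG

vocab = "return target creature card from your graveyard to the battlefield".split()
cipher = build_cipher(vocab)
# "battlefield" now occupies 2 characters instead of 11, so related patterns
# sit closer together in the training data; invert the dict to decode output.
```

The unary trick works because the network only sees symbols: “1111” is literally a longer pattern than “111,” whereas “4” and “3” look equally sized.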
By far the best method of data refining was to simply cut down the data. If your data set is too small, your RNN can’t notice patterns, but there are tens of thousands of Magic cards out there, and that’s more than enough data. So if you reduce your data set to a certain type of card you are looking for, the RNN both learns better and innovates better. For example, if you want the RNN to generate an instant or sorcery spell, you can feed it a data set of only instants and sorceries. It may not understand how instants and sorceries work, but it has no conflicting examples of cards that work in other ways, so you are guaranteed to get an example that fits the mold.
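Trimming the corpus that way is a one-liner over the card dump. The “types” field below assumes an AllCards-style JSON shape (card name mapped to card fields); treat that shape as an assumption about whatever data file you actually feed in.

```python
def only_types(cards, wanted=frozenset({"Instant", "Sorcery"})):
    """Keep only cards whose type line includes one of the wanted types."""
    return {name: card for name, card in cards.items()
            if wanted & set(card.get("types", []))}

cards = {
    "Counterspell":  {"types": ["Instant"],  "text": "Counter target spell."},
    "Grizzly Bears": {"types": ["Creature"], "text": ""},
}
print(list(only_types(cards)))  # ['Counterspell']
```

Retrain on the filtered dump and every pattern the network learns is, by construction, an instant-or-sorcery pattern.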
But even after refining the data, certain problems still persist. At times, the RNN still simply reprints cards that already exist. It’s not the greatest at coming up with names, because it can’t find any meaningful patterns in them. Links between abilities and later mentions of those abilities (like a card with kicker actually having a kicker effect) tend to evade the RNN, because it doesn’t have a long enough “attention span” to recognize that the ability has to show up again. Finally, the RNN still struggles with card power level at times. It created a 1UU counterspell that makes you put a green Elephant creature into play, and that’s pretty innovative. Unfortunately, it made this green Elephant a 5/5, because the pattern that taught it counterspells can sometimes have cool bonus effects, like putting creature tokens into play, was not the same pattern that taught it green creatures tend to be big and beefy.
Even with all these errors, the applications of this technology are numerous. For example, you can ask the RNN to generate a card with a certain line of text in it. Not only does this make the RNN respond more accurately, because part of the card is already human input, it also allows you to essentially ask for brand new cards. Need a new 1BB creature to fill out your set? The RNN can do that. Need a new rare? It can do that too. Want a creature with a specific ability? It can do that! Heck, you can even come up with new mechanics and it will do its best to put those mechanics into use.
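Mechanically, asking for a card with a certain line of text is just “priming”: you push the human-written text through the network first, then let it continue from there. Here is a toy sketch of that loop. The `ToyModel` is a hypothetical stand-in (it just walks the alphabet), not a real trained network; a real model would return scores to sample from rather than a single character.

```python
class ToyModel:
    """Hypothetical stand-in for a trained network: deterministically
    'predicts' the next lowercase letter of the alphabet."""
    def initial_state(self):
        return None

    def step(self, ch, state):
        nxt = chr((ord(ch) - ord("a") + 1) % 26 + ord("a"))
        return nxt, state

def generate(model, prime_text, length):
    """Feed the human-written prime through the model, then let it continue."""
    state = model.initial_state()
    out = list(prime_text)
    pred = ""
    for ch in prime_text:        # the "ask for a card containing X" part
        pred, state = model.step(ch, state)
    for _ in range(length):      # the network's own continuation
        out.append(pred)
        pred, state = model.step(pred, state)
    return "".join(out)

print(generate(ToyModel(), "ab", 3))  # abcde
```

Swap the prime for a mana cost, a type line, or half a rules sentence, and the network fills in the rest of the card around it.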
For example, Talcos ran the RNN with instants and sorceries to attempt to create “Legendary Instants and Sorceries,” something the game hasn’t seen before. How these spells work isn’t particularly clear, although consensus seems to be that legendary spells like this can only be played once per game (further uses are countered automatically). The RNN knows that legendary things tend to have more powerful and complicated effects, and thus it created Receleral Touch, a 1U Legendary Instant counterspell that also causes the player whose spell is being countered to sacrifice an artifact, creature, or land.
Maybe our technology isn’t quite advanced enough for RNNs to do all of our Magic design, but it’s certainly powerful enough to act as an incredible tool. The ability for designers to feed information into a neural network and get actual readable, workable output that follows the rules of Magic is incredible. With just a few keystrokes, an RNN can create thousands of cards of whatever casting cost, type, or effect you want. It can effectively fill in the gaps between our human creative blocks. MTG players on the original thread are already compiling full spoilers of cards generated by RNNs for their own fantasy robo expansions. I’ve even compiled some of my favorites in a list that you can read here. And all of this is being done on desktop computers that use the same GPUs that you would use to play Watch Dogs or Far Cry 4.
So maybe Mark Rosewater’s job isn’t quite at risk of being taken over by the MTG-1000 yet. But I think this is evidence enough that it’s not only humans that can be creative. Sometime in the near future, as our technology gets more and more powerful, we may end up using computers to generate new and novel ideas for Magic expansions that, frankly, our fleshy minds would have never even thought about. And so when humanity is finally drowned in the darkness of the final night that is the robot uprising, we can all rest easy knowing that two T-800s somewhere will be complaining because the VirtuaRosewater program recently generated a brand new Magic expansion focused on banding.
Nobody likes banding.
If you want to try running your own Magic: The Gathering card generation RNN, check out Wildfire393’s guide on how to get one up and running on a basic desktop computer, and let us know what hilarious cards your network comes up with.