Geek, Dork, Nerd

Philosophy of Game Design - Part Two

Robert Yang | 5 Oct 2010 12:36

Charts, graphs, heat maps, death maps, kill maps, eye tracking, heart rate monitors, player analytics - an empirical approach to game design argues that collecting player data and interpreting it properly is what makes good games.


(Taking that idea a hundred steps further, logical positivism argues that anything unscientific is unverifiable and thus meaningless - a claim that is itself unscientific, which is partly why logical positivism died as quickly as it did.)

But this kind of data-driven design is plagued by problems similar to those posed in the philosophy of science:

When is data accurate or pertinent, and how do you go about collecting it?

If you collect data from highly competitive clan servers, or perhaps from someone who has never played a videogame in their life, are those sets of data valid for balancing the game for everyone else? (It depends.) Should we instead test on some sort of "average player," and if so, who is that player? (It depends.) Is that really the best way to achieve accessibility, or do we end up pleasing no one by trying to please everyone? (It depends.)

And then how do you go about interpreting that data you've just collected?

Imagine in Team Fortress 2 that data indicates fewer players are playing as Spies - does that mean Pyros are overpowered or that Engineers are too difficult to kill or something else entirely? (We would need more data.) And if Engineers are too difficult to kill as a Spy, is it actually a level design problem with specific overpowered build sites on popular maps, or is it a sound-related bug where the Spy's cloak sound is too loud, or is it a balancing issue with how the Spy's cloak doesn't last long enough to get past the front line? (We would need more data.) Or is this a good thing, to have so few players playing as Spies? (It depends.)

But now let's say you want to know why a player keeps falling off a cliff.

Do you track the player's camera position and vector to produce a heat map of what they look at, to determine whether they notice the "Danger! Don't Fall Off!" sign, and increase the contrast on the sign texture to compensate?

Do you map their movement vectors against the level's collision model - maybe their movement speed doesn't decelerate fast enough - and increase the friction parameter on the dirt materials to compensate?

Do you just ask them, "Why do you keep falling off the cliff?"
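The heat-map approach above can be sketched in a few lines: log where the player's camera ray lands, bin those points into a grid, and see how often gaze ever reaches the sign. A minimal sketch - the logged hit points, the cell size, and the sign's location are all hypothetical, not taken from any actual engine:

```python
# Minimal sketch: bin logged camera look-at hit points into a 2D heat map,
# then check how often gaze landed in the cell holding a warning sign.
from collections import Counter

CELL = 5.0  # world units per heat-map cell (assumed)

def heat_map(hit_points, cell=CELL):
    """Count camera-ray hit points per grid cell on the (x, y) plane."""
    counts = Counter()
    for x, y in hit_points:
        counts[(int(x // cell), int(y // cell))] += 1
    return counts

# Hypothetical telemetry: (x, y) positions the player's camera ray hit.
hits = [(12.0, 3.0), (13.5, 4.2), (40.0, 40.0), (12.9, 3.3)]
hm = heat_map(hits)

sign_cell = (2, 0)  # grid cell containing the "Danger!" sign (assumed)
looks_at_sign = hm[sign_cell]  # low counts suggest the sign goes unseen
```

Even a toy like this shows the interpretive gap: a low count at the sign's cell is consistent with a contrast problem, a placement problem, or a player who saw the sign and jumped anyway.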

Maybe this behaviorist notion, that we can deduce a player's intention from observing their actions, is just side-stepping the issue. Why not solicit player feedback directly and have them verbalize their intentionality? Social liberalism holds that all members of society should have (at least some) input with regard to the process of running their government.

While the empirical school of game design collects quantitative player data, this social liberal approach collects a form of qualitative player data through focus groups, surveys, and analyzing player feedback from emails, forums and blogs.

The social liberal account holds that good games come from listening to as many individual players as possible and interpreting that feedback properly. Here, "accessibility" means decentralizing power and sharing the reins of design.

(As a sort of pseudo-variant, perhaps a neoliberal approach would argue for feedback from clans and guilds, or maybe third-party vendors and game publishers, and value that over individual players' opinions. The resulting design changes might trickle down and indirectly help individual players.)

In Left 4 Dead 2, players vote for which game modes to keep; in Halo: Reach, Bungie uses voting results to balance multiplayer playlists. Increasingly, players are making game design decisions through direct democracy.
