Homeopathy, for those who don’t know, is a form of alternative “medicine”. It involves diluting an active medicinal ingredient in a solution so many times that there is a mathematically near-zero chance of the final solution containing even a single molecule of the active ingredient. Yet Americans spent an estimated $3.1 billion on homeopathic products in 2007, which can seem pretty strange to anyone who understands what homeopathy really is.
Now, clearly homeopathy doesn’t “work” in the sense of having actual medicinal effects. Homeopaths have tried to claim that it does by conjuring all sorts of bizarre theories, one of the most common being that “water has a memory” – it can “remember” the molecules that used to be in it, and somehow that memory has an effect. Whatever – pretty obviously nonsense but that’s not what concerns me here.
Delicious. It’s impossible for anyone to tell the difference between pure sugar pills and homeopathic drugs. That’s because they are the exact same thing.
What does interest me is the fact that homeopathy does work in the sense that people think it works. The placebo effect is very powerful (for some kinds of ailments, anyway), and having a “school of medicine” with practitioners all telling you that this sugar pill will stop your headache may alleviate the pain, since pain is understood to be a highly subjective sense that can be affected by the state of mind of a person. The simple concept that “you’re being taken care of now, everything will be OK” may bring comfort to that person. Here’s a really great Derren Brown video showing the great power of placebo.
More common, though, is probably the following scenario: a person gets a headache, takes a homeopathic remedy, and then the headache goes away. This is erroneously marked as a “hit” in that person’s mind – the remedy worked! Never mind that the headache very likely would have gone away on its own.
We tend to remember the hits and forget the misses in this way. Psychics and astrologers have been using this against us for ages, and techniques such as cold reading are very real methods that can be used to make (help?) people believe things that are almost certainly fake.
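To see how far “remember the hits, forget the misses” can distort things, here’s a toy simulation (my own illustration, not from the original text). It assumes a remedy that does literally nothing, headaches that resolve on their own at some rate, and a memory that recalls every hit but only a fraction of the misses:

```python
import random

def perceived_hit_rate(resolve_prob, miss_recall, episodes=10000, seed=1):
    """Toy model of 'remember the hits, forget the misses'.

    Every headache resolves on its own with probability resolve_prob;
    the remedy does nothing. Hits are always remembered, but misses
    are only recalled with probability miss_recall.
    """
    rng = random.Random(seed)
    hits = misses = 0
    for _ in range(episodes):
        if rng.random() < resolve_prob:    # headache went away anyway
            hits += 1                      # remembered as "it worked!"
        elif rng.random() < miss_recall:   # remedy "failed" - maybe recalled
            misses += 1
    return hits / (hits + misses)

# With 70% spontaneous resolution and only 30% of misses recalled,
# an inert remedy feels roughly 89% effective.
print(round(perceived_hit_rate(0.7, 0.3), 2))
```

The numbers here (70% resolution, 30% recall) are arbitrary; the point is only that selective memory alone inflates an inert remedy’s apparent success rate well above the true base rate.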
The Desire To Believe
I should perhaps use the term “help” rather than “make”, because there’s another important aspect to this problem, which is that people generally want to believe the sorts of things that work because of placebo. The very idea of alternative medicine is a pleasing narrative that appeals to both our modern first-world guilt (“I’m hip to these ancient/third-world remedies, I’m not really one of the insensitive conquering Westerners!”) and our justifiable distaste for some of the actually evil or irrational policies taken up by mainstream medicine in the past or the present.
We also have a desire to simply “have been right”. After all, we went against the grain and chose an alternative. We invested in this alternative, and it would definitely cost us a little more to end up having to admit that it was a big waste of time and money.
Finally, we may be evolutionarily tuned to “see agency/causality where there is none”. With regard to survival, it was in our interest to imagine agency behind a rustling bush and run. Those individuals who did otherwise were removed from the gene pool over time, after enough rustling bushes turned out to be lions rather than raccoons.
The famous Jesus toast. We’re bread to see patterns.
So, before I move on, it’s good to review these two specific reasons that we sometimes believe things without reasonable evidence: the desire to believe, and our biology making the job easy on us.
So how does any of this relate to games? I’ve been struggling over some concepts with regards to randomness in games for a few years now, and I think I’ve come to understand some things that might be helpful to others.
Before I go on, a quick disclaimer: I’m talking about games in a specific context here. I’m talking specifically about interactive systems that involve winning and losing and decision making – systems wherein you express your level of understanding through those decisions and meet a win/loss outcome. There’s a bit more on my definition for games here.
Games with high levels of output randomness – i.e., card draws, dice rolls, or other random elements that get between a player and his agency – tend to have more random outcomes. This isn’t really contested. A professional Poker player will win a smaller percentage of games against a new player than a professional Chess player will against a new player. There’s more randomness in the system of Poker, so the win percentage naturally moves toward 50%.
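This “win percentage moves toward 50%” claim is easy to demonstrate with a quick sketch (again, my own toy model, not anything from the article): treat each game’s outcome as skill plus random noise, and watch the stronger player’s edge shrink as the noise grows.

```python
import random

def win_rate(skill_gap, noise, trials=20000, seed=0):
    """Estimate how often the stronger player wins when each game's
    result is a fixed skill difference plus Gaussian noise."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a = skill_gap + rng.gauss(0, noise)  # stronger player's performance
        b = rng.gauss(0, noise)              # weaker player's performance
        if a > b:
            wins += 1
    return wins / trials

# As noise (output randomness) grows, the skilled player's
# win rate falls from near-certain toward a coin flip.
for noise in (0.5, 1, 2, 5, 10):
    print(noise, round(win_rate(1.0, noise), 2))
```

The model is deliberately crude – real games aren’t “skill plus Gaussian noise” – but it captures the direction of the effect: more output randomness, less decisive outcomes.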
This guy is really bad at the game.
What’s much harder to pin down is an individual game’s outcome. If you won that game of Dominion or Settlers of Catan, how much of that was because of your actions, and how much was sheer chaos? It’s extremely hard – practically impossible – to pin this down, because of the layers upon layers of decisions that were made. Maybe one of your decisions was somewhat poor, but only because randomness had put you in a bad position. And maybe that’s not so bad, except that your opponent may not have been put in any kind of compromised situation at all – his randomness dealt him a nice steady flow of problems that he was used to dealing with.
I don’t care about this question for some alpha-male “who’s better” reason. I care because if we can’t determine the cause of the outcome, then feedback for your agency is diminished. And the agency-feedback loop is what games and all kinds of interactive systems are all about.
Of course, the defense is that over a large enough number of games, it’s not really a problem: the better player wins most of the time. However, that doesn’t make the phenomenon I want to explore go away – that of imagined agency.
I’ve talked about this a bit before back on the Dinofarm blog. The idea is that after and during a specific highly random game (i.e. almost any card game or game involving dice, virtual or otherwise), players tend to imagine that they have a different level of agency than they actually do.
For instance, if a player is doing well, he’s quite likely to attribute that success to himself. Of course, he won’t do this if the random swing is so strong that it’s patently clear. For example, if a player lucks out and finds an extremely powerful weapon early on in a roguelike, they’re likely to point that out whenever they explain how their game went. “I did really well – I mean, I got this crazy sword early on, so most of the game was a total pushover…” they might say. But, barring freakish outliers like that, they’re likely to take credit for a successful game.
It makes sense. If we watch children playing Candyland, Trouble, War, or any other 100% random children’s game, we can see that despite the fact that the game is completely random, the children are assigning agency to their actions. They take credit for the 6 they rolled. We can clearly see that this was not a product of their skill, yet they can’t see this. It’s similar to how Tic-Tac-Toe is interesting for children because they can’t see the solution, whereas to us adults it’s boring because we can.
So then doesn’t it also make sense that a game with a large degree of randomness – say Poker or Dominion – would have a similar effect? Only some players will see through the illusion and point out that there is no direct line between player input and game feedback. These players will see – as clearly as every adult sees for Candyland – that the outcome of the game is determined not only by the players’ decisions, but to a large degree by randomness.
In short, highly random games tend to be built on a similar illusion – a placebo effect, if you will – of agency.
The Game Placebo
What determines whether a person is able to see through a highly random game or not? It’s similar to the factors that determine whether a person sees through a fake drug’s illusion. Firstly, a base level of intelligence does factor in. If the person doesn’t even have the faculties to start asking the question “is this real?” – as most children do not – then the question won’t get asked.
But the second aspect, which interests me more, is the inclination to ask that question in the first place. Most gamers I know do not see the indirect relationship because they do not ask the question in the first place. They aren’t thinking along the lines of “tracing the steps between my actions and the outcome of the game”.
This is understandable, of course, as most people play games simply to have a good time, and sitting there analyzing exactly what’s going on probably isn’t among their reasons for playing. But as the film critic Plinkett often says, even if these players don’t consciously notice this disconnect, their brains do. What I mean is that even if they are consciously ignorant of the mechanistic processes that make up the play of the game, the effect of those processes is real and undeniable, and the brain feels it fully. If the brain detects that lack of agency-feedback, it will lose interest in the game. In short, your brain gets tired of the randomness even if you don’t, and many gamers will find themselves simply wanting to move on to other games without knowing exactly why.
To paraphrase something Richard Garfield said at PRACTICE this year: “you don’t have to be an engineer to feel the effects of a faulty bridge”.
So what’s the matter? Why can’t I just let people enjoy their games for a short time and then move on to something else when they get bored? There are two problems with this whole setup.
Firstly, game designers should basically never, ever be in this camp. Game designers should be fighting at all times to be fully aware of the actual stuff that’s really happening when people are playing games. Game designers not being able to discern between the illusion and the reality is similar to a doctor who really can’t tell the difference between medicine and sugar pills.
The second issue is that this creates a loop of consumers buying games, getting excited about them, throwing them out after a year or so, and moving on to the next one. This is great for companies that want to sell games, but it’s a treadmill that wears people out. After enough cycles like this, players will eventually feel as though it’s not even worth looking – that they already know how it’s going to go: they’ll get a game, get super excited about it, grok it, and then throw it away.
There is certainly value to this placebo effect. It can attract people in the short term, and act as a “shortcut” to interesting or otherwise “fun” game designs. However, it’s just worth being conscious of the downside as well. If we’re talking about how to achieve great game designs, we should probably rule out shortcuts.
I think that the more we rely on this illusion to get people interested in our games, the less healthy our gaming ecosystem becomes. We should be fostering games that can stand up to a lifetime of play, or more. We should be striving to make games whose depth can be felt. We should want players to know that if they invest time in learning this game, it will continually give back to them. We should make games that strive to become a facet of a person’s life. This is how we can establish trust between designer and player, and through that trusting relationship, everyone really wins.