Why All The Fuss About Equality?

In my last post, I discussed why the term “inequality aversion” is a rather inelegant way to express certain human motivations. A desire to not be personally disadvantaged is not the same thing as a desire for equity, more generally. Further, people readily tolerate and create inequality when it’s advantageous for them to do so, and such behavior is readily understandable from an evolutionary perspective. What isn’t as easily understandable – at least not as immediately – is why equality should matter at all. Most research on the subject of equality would appear to just take it for granted (more-or-less) that equality matters without making a concerted attempt to understand why that should be the case. This paper isn’t much different.

On the plus side, at least they’re consistent.

The paper by Raihani & McAuliffe (2012) sought to disentangle two possible competing motives when it comes to punishing behavior: inequality and reciprocity. The authors note that previous research examining punishment often confounds these two possible motives. For instance, let’s say you’re playing a standard prisoner’s dilemma game: you have a choice to either cooperate or defect, and let’s further say that you opt for cooperation. If your opponent defects, not only do you lose out on the money you would have made had he cooperated, but your opponent also ends up with more money than you do overall. If you decided to punish the defector, the question arises of whether that punishment is driven by the loss of money, the aversion to the disadvantageous inequality, or some combination of the two.

To separate the two motives out, Raihani & McAuliffe (2012) used a taking game. Subjects played the role of either player 1 or player 2 in one of three conditions. In all conditions, player 1 was given an initial endowment of seventy cents; player 2, on the other hand, started out with either ten cents, thirty cents, or seventy cents. In all conditions, player 2 was then given the option of taking twenty cents from player 1, and following that decision player 1 was given the option of paying ten cents to reduce player 2's payoff by thirty cents. The significance of these conditions is that, in the first two, if player 2 takes the twenty cents, no disadvantageous inequality is created for player 1. In the last condition, however, by taking the twenty cents, player 2 creates that inequality. While the overall loss of money across conditions is identical, the context of that loss in terms of equality is not. The question, then, is how often player 1 would punish player 2.
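The payoff structure across the three conditions can be sketched in a few lines; the code and condition labels below are my own framing, not the authors':

```python
# Sketch of the taking game's payoff structure (my own framing).
# All amounts are in cents.

def payoffs(p2_start, p2_takes, p1_punishes):
    """Return (player 1, player 2) payoffs for one round."""
    p1, p2 = 70, p2_start          # player 1 always starts with 70 cents
    if p2_takes:                   # player 2 may take 20 cents from player 1
        p1, p2 = p1 - 20, p2 + 20
    if p1_punishes:                # player 1 may pay 10 to deduct 30 from player 2
        p1, p2 = p1 - 10, p2 - 30
    return p1, p2

# Player 1 loses the same 20 cents in every condition, but only when
# player 2 also starts with 70 does taking leave player 1 behind:
for start in (10, 30, 70):
    p1, p2 = payoffs(start, p2_takes=True, p1_punishes=False)
    print(start, (p1, p2), p2 > p1)   # 10 → (50, 30) False; 30 → (50, 50) False; 70 → (50, 90) True
```

As the loop shows, taking produces disadvantageous inequality for player 1 only in the seventy-cent condition, which is the comparison the design turns on.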

In the first two conditions, where no disadvantageous inequality was created for player 1, punishment rates did not differ significantly whether or not player 2 had taken money (approximately 13%). In the third condition, where player 2's taking did create that kind of inequality, player 1 was far more likely to pay to punish (approximately 40%). This is a pretty neat result, and it mirrors past work that came at the question from the opposite angle (Xiao & Houser, 2010; see here). The real question, though, concerns how we are to interpret this finding. These results, in and of themselves, don't tell us a whole lot about why equality matters when it comes to punishment decisions.

They also don't tell me much about this terrible itch I've been experiencing lately, but that's a post for another day.

I think it's worth noting that the study still does, despite its best efforts, confound losing money and generating inequality; in no condition can player 2 create disadvantageous inequality for player 1 without also taking money away. Accordingly, I can't bring myself to agree with the authors, who conclude:

  Together, these results suggest that disadvantageous inequality is the driving force motivating punishment, implying that the proximate motives underpinning human punishment might therefore stem from inequality aversion rather than the desire to reciprocate losses.

It could still well be the case that player 1 would rather not have twenty cents taken from them, thank you very much, but doesn't reciprocate with punishment for other reasons. To use a more real-life context, let's say you have a guest come to your house. At some point after that guest has left, you discover that he apparently also left with some of the cash you had been saving to buy whatever expensive thing you had your eye on at the time. When it came to deciding whether or not you desired to see that person punished for what they did, precisely how well off they were relative to you might not be your first concern. The theft would not, I imagine, automatically become OK in the event that the guy only took your money because you were richer than he was. A psychology designed to function in such a manner would leave one wide open for exploitation by selfish others.

However, how well off you were, relative to how needy the person in question was, might have a larger effect in the minds of other third-party condemners. The sentiment behind the tale of Robin Hood serves as an example toward that end: stealing from the rich is less likely to be condemned by others than stealing from one of lower standing. If other third parties are less likely, for whatever reason, to support your decision to punish another individual in contexts where you're advantaged over the person being punished, punishment immediately risks becoming more costly. At that point, it might be less costly to tolerate the theft than to risk condemnation by others for taking action against it.

What might be better referred to as "The Metallica v. Napster Principle."

One final issue I have with the paper is a semantic one: the authors label the act of player 2 taking money as cheating, which doesn't fit my preferred definition (or, frankly, any definition of cheating I've ever seen). I favor the Tooby and Cosmides definition, under which a cheater is "…an individual who accepts a benefit without satisfying the requirements that provision of that benefit was made contingent upon." As player 2 was not required to satisfy any condition in order to take money from player 1, the taking could hardly be considered an act of cheating. This seemingly minor issue, however, might actually hold some real significance, in the Freudian sense of the word.

To me, that choice of phrasing implies that the authors realize that, as I previously suggested, player 1s would really prefer if player 2s didn’t take any money from them; after all, why would they? More money is better than less. This highlights, for me, the very real and very likely possibility that what player 1s were actually punishing was having money taken from them, rather than the inequality, but they were only willing to punish in force when that punishment could more successfully be justified to others.

References: Raihani, N.J., & McAuliffe, K. (2012). Human punishment is motivated by inequity aversion, not a desire for reciprocity. Biology Letters. PMID: 22809719

Xiao, E., & Houser, D. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470. DOI: 10.1016/j.joep.2010.02.001

Inequality Aversion Aversion

While I’ve touched on the issues surrounding the concept of “fairness” before, there’s one particular term that tends to follow the concept around like a bad case of fleas: inequality aversion. Following the proud tradition of most psychological research, the term manages to both describe certain states of affairs (kind of) without so much as an iota of explanatory power, while at the same time placing the emphasis on, conceptually, the wrong variable. In order to better understand why (some) people (some of the time) behave “fairly” towards others, we’re going to need to address both of the problems with the term. So, let’s tear the thing down to the foundation and see what we’re working with.

“Be careful; this whole thing could collapse for, like, no reason”

Let's start off with the former issue: when people talk about inequality aversion, what are they referring to? Unsurprisingly, the term would appear to refer to the fact that people tend to show some degree of concern for how resources are divided among multiple parties. We can use the classic dictator game as a good example: when given full power over the ability to divide some amount of money, dictator players often split the money equally (or near-equally) between themselves and another player. Further, the receivers in dictator games tend to respond to equal offers with favorable remarks and to unequal offers with negative remarks (Ellingsen & Johannesson, 2008). The remaining issue, then, concerns how we are to interpret findings like this, and why we should interpret them in such a fashion.

Simply stating that people are averse to inequality is, at best, a restatement of those findings. At worst, it's misleading, as people will readily tolerate inequality when it benefits them. Take the dictators in the example above: many of them (in fact, the majority of them) appear perfectly willing to make unequal offers so long as they're on the side that benefits from that inequality. This phenomenon is also illustrated by the fact that, when given access to asymmetrical knowledge, almost all people take advantage of that knowledge for their own benefit (Pillutla & Murnighan, 1995). As a final demonstration, take two groups of subjects, each subject given the task of assigning themselves and another subject to one of two tasks: the first task is described as allowing the subject a chance to win $30, while the other task has no reward and is described as being dull and boring.

In the first of these two groups, since subjects can assign themselves to whichever task they want, it's perhaps unsurprising that 90% of the subjects assigned themselves to the more attractive task; that's just simple, boring self-interest. Making money is certainly preferable to being bored out of your mind, but automatically assigning yourself to the positive task might not be considered the fairest option. The second group, however, first flipped a coin in private to determine how they would assign tasks, and following that flip made their assignment. In this group, since coins are impartial and all, it should not come as a surprise that…90% of the subjects again assigned themselves to the positive task when all was said and done (Batson, 2008). How very inequality averse and fair of them.

“Heads I win; Tails I also win.”

A recent (and brief) paper by Houser and Xiao (2010) examined the extent to which people are apparently just fine with inequality, but from the opposite direction: taking money away instead of offering it. In their experiment, subjects first played a standard dictator game. The dictator had $10 to divide however they chose. Following this division, both the dictator and the receiver were given an additional $2. Finally, the receiver was given the opportunity to pay a fixed cost of $1 for the ability to reduce the dictator's payoff by any amount. A second experimental group took part in the same task, except that the dictator was passive: the division of the $10 was made at random by a computer program, representing simple chance factors.
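The arithmetic of that design can be sketched briefly; the code below is my own illustration of the payoff structure, not anything from the paper:

```python
# Sketch of the Houser & Xiao (2010) payoff structure (my own framing).
# Amounts are in dollars.

def final_payoffs(receiver_share, deduction):
    """Dictator divides $10; both players then receive a $2 bonus;
    the receiver may pay a fixed $1 to deduct any amount from the dictator."""
    dictator = (10 - receiver_share) + 2
    receiver = receiver_share + 2
    if deduction > 0:
        receiver -= 1                 # fixed cost of punishing
        dictator -= deduction         # deduction amount is the receiver's choice
    return dictator, receiver

# A receiver given only $2 of the $10 can deduct enough to come out ahead:
print(final_payoffs(2, 0))   # (10, 4): no punishment, receiver behind
print(final_payoffs(2, 8))   # (2, 3): punishing receiver ends up with more
```

Because the deduction amount is unconstrained, a receiver who punishes can do more than equalize the payoffs, which is exactly the pattern the next paragraph describes.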

A general preference to avoid inequality would, one could predict, be relatively unconcerned with the nature of that inequality: whether it came about through chance factors or intentional behavior should be irrelevant. For instance, if I don’t like drinking coffee, I should be relatively averse to the idea whether I was randomly assigned to drink it or whether someone intentionally assigned me to drink it. However, when it came to the receivers deciding whether or not to “correct” the inequality, precisely how that inequality came about mattered: when the division was randomly determined, about 20% of subjects paid the $1 in order to reduce the other player’s payoff, as opposed to the 54% of subjects who paid the cost in the intentional condition (Note: both of these percentages refer to cases in which the receiver was given less than half of the dictator’s initial endowment). Further still, the subjects in the random treatment deducted less, on average, than the subjects in the intention treatment.

The other interesting part about this punishment, as it pertains to inequality aversion, is that most people who did punish did not just make the payoffs even; the receivers deducted money from the dictators to the point that the receivers ended up with more money overall in the end. Rather than seeking equality, the punishing receivers brought about inequality that favored themselves, to the tune of 73% of the punishers in the intentional treatment and 66% in the random treatment (which did not differ significantly). The authors conclude:

…[O]ur data suggest that people are more willing to tolerate inequality when it is caused by nature than when it is intentionally created by humans. Nevertheless, in both cases, a large majority of punishers attempt to achieve advantageous inequality. (p.22)

Now that the demolition is over, we can start rebuilding.

This punishment finding also sheds some conceptual light on why inequality aversion puts the emphasis on the wrong variable: people are not averse to inequality, per se, but rather seem to be averse to punishment and condemnation, and one way of avoiding punishment is to make equal offers (of the dictators that made an equal or better offer, only 4.5% were punished). This finding highlights the problem of assuming a preference from an outcome: just because some subjects make equal offers in a dictator game, it does not follow that they have a genuine preference for making equal offers. Similarly, just because men and women (by mathematical definition) are going to have the same average number of opposite-sex sexual partners, it does not follow that this outcome was obtained because they desired the same number.

That is all, of course, not to say that preferences for equality don’t exist at all, it’s just that while people may have some motivations that incline them towards equality in some cases, those motivations come with some rather extreme caveats. People do not appear averse to inequality generally, but rather appear strategically interested in (at least) appearing fair. Then again, fairness really is a fuzzy concept, isn’t it?

References: Batson, C.D. (2008). Moral masquerades: Experimental exploration of the nature of moral motivation. Phenomenology and the Cognitive Sciences, 7, 51-66

Ellingsen, T., & Johannesson, M. (2008). Anticipated verbal feedback induces altruistic behavior. Evolution and Human Behavior. DOI: 10.1016/j.evolhumbehav.2007.11.001

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.