Some Bathwater Without A Baby

When reading psychology papers, I am often left with the same dissatisfaction: the lack of any grounding theory and the resulting inability to deliver what I would consider a real explanation for the findings. While it’s something I have harped on for a few years now, this dissatisfaction is hardly confined to me; others have voiced similar concerns for at least the last two decades, and I suspect it has gone on quite a bit longer than that. A healthy amount of psychological research strikes me as empirical bathwater without a theoretical baby, in a manner of speaking; no matter how interesting that empirical bathwater might be – whether it’s ignored or the flavor of the week – almost all of it will eventually be thrown out and forgotten if there’s no baby there. Some new research that has crossed my path a few times lately follows that same trend: a paper examining how individuals who were feeling powerful reacted to inequality that disadvantaged them or others. I wanted to review that paper today and help fill in the missing sections where explanations should go.

Next step: add luxury items, like skin and organs

The paper, by Sawaoka, Hughes, & Ambady (2015), contained four or five experiments – depending on how one counts a pilot study – in which participants were primed to think of themselves as powerful or not. This was achieved, as it so often is, by having the participants in each experiment write about a time they had power over another person or about a time that other people had power over them, respectively. In the pilot study, about 20 participants were primed as powerful and another 20 as relatively powerless. Subsequently, they were told they would be playing a dictator game with another person, in which the other person (who was actually not a person) would serve as the dictator in charge of dividing up 10 experimental tokens between the two; tokens which, presumably, were supposed to be redeemed for some kind of material reward. Those participants who had been primed to feel more powerful expected to receive a higher average number of these tokens (M = 4.2) relative to those primed to feel less powerful (M = 2.2). Feeling powerful, it seemed, led participants to expect better treatment from others.

In the next experiment, participants (N = 227) were similarly primed before completing a fairness reaction task. Specifically, participants were presented with three pictures representing distributions of tokens: one represented the participant’s payment while the other two represented the payments to others. It was the participants’ job to indicate whether these tokens were distributed equally between the three people or whether the distribution was unequal. The distributions could be (a) equal, (b) unequal, favoring the participant, or (c) unequal, disfavoring the participant. The measure of interest here was how quickly the participants were able to identify equal and unequal distributions. As it turns out, participants primed to feel powerful were quicker – by about a tenth of a second – to identify unfair arrangements that disfavored them, relative to less powerful participants, but were not quicker when the unequal distributions favored them.

The next two studies followed pretty much the same format and echoed the same conclusion, so I don’t want to spend too much time on their details. The final experiment, however, examined not just reaction times to assessments of equality, but how quickly participants were willing to do something about inequality. In this case, participants were told they were being paid by an experimental employer. The employer to whom they were randomly assigned would be responsible for distributing a payment between them and two other participants over a number of rounds (just like the previous experiment). However, participants were also told there were other employers they could switch to after each round if they wanted. The question of interest, then, was how quickly participants would switch away from employers who disfavored them. Those participants who were primed to feel powerful didn’t wait around very long in the face of unfair treatment that disfavored them, leaving after the first round, on average; by contrast, those primed to feel less powerful waited about 3.5 rounds to switch if they were getting a bad relative deal. If the inequality favored them, however, the powerful participants were about as likely to stay over time as the less powerful ones. In short, those who felt powerful not only recognized poor treatment of themselves (but not of others) more quickly, they also did something about it sooner.

They really took Shia’s advice about doing things to heart

These experiments are quite neat but, as I mentioned before, they are missing a deeper explanation to anchor them anywhere. Sawaoka, Hughes, & Ambady (2015) attempt an explanation for their results, but I don’t think they get very far with it. Specifically, the authors suggest that power makes people feel entitled to better treatment, subsequently making them quicker to recognize worse treatment and do something about it. Further, the authors speculate about how unfair social orders are maintained: powerful people are motivated to do things that maintain their privileged status, while the disadvantaged sections of the population are sent messages about being powerless, leading them to expect unfair treatment and making them less likely to change their station in life. These speculations, however, naturally yield a few important questions, chief among them being, “if feeling entitled yields better treatment from others, then why would anyone ever not feel that way? Do, say, poor people really want to stay poor and not demand better treatment from others as well?” It seems that very real advantages are being forgone by people who don’t feel as entitled as powerful people do, and we would not expect a psychology that behaved that way – one that simply forwent available benefits – to have been selected for.

In order to craft something approaching a real explanation for these findings, then, one would need to begin with a discussion of the trade-offs that have to be made: if feeling entitled were always good for business, everyone would feel entitled all the time; since they don’t, there are likely costs associated with feeling entitled that, at least in certain contexts, prevent its occurrence. One of the most likely trade-offs involves the costs associated with conflict: if you feel you’re entitled to a certain kind of treatment you’re not receiving, you need to take steps to ensure that treatment is corrected, since other people aren’t just going to start giving you more benefits for no reason. To use a real-life example, if you feel your boss isn’t compensating you properly for your work, you need to demand a raise, threatening to inflict costs on him – such as your quitting – if your demands aren’t met.

The problems with such a course of action are two-fold: first, your boss might disagree with your assessment and let you quit, and losing that job could pose other, very real costs (like starvation and homelessness). Sometimes an unfair arrangement is better than no arrangement at all. Second, the person with whom you’re bargaining might attempt to inflict costs on you in turn. For instance, if you begin a dispute with law enforcement officers because you believe they have treated you unfairly and are seeking to rectify that situation, they might encourage your compliance with a well-placed fist to your nose. In other words, punishment is a two-way street, and trying to punish stronger individuals – whether physically or socially stronger – is often a poor course of action. While “punching up” might appeal to certain sensibilities in, say, comedy, it works less well when you’re facing down a bouncer with a few inches and a few dozen pounds of muscle on you.

I’m sure he’ll find your arguments about equality quite persuasive

Indeed, this is the same kind of evolutionary explanation offered by Sell, Tooby, & Cosmides (2009) for understanding the emotion of anger and its associated entitlement: one’s formidability – physical and/or social – should be a key factor in the emotional systems underlying how one resolves conflicts; conflicts which may well have to do with distributions of material resources. Those who are better suited to inflict costs on others (e.g., the powerful) are also likely to be treated better by others who wish to avoid the costs of the conflicts that accompany poor treatment. This suggests, however, that making people feel more powerful than they actually are would, in the long term, tend to produce quite a number of costs for the powerful-feeling, but actually-weak, individuals: making the 150-pound guy think he’s stronger than the 200-pound one might encourage the former to initiate a fight, but it won’t make him more likely to win it. Similarly, encouraging your friend who isn’t very good at their job to demand that raise could result in their being fired. In other words, it’s not that social power structures are maintained simply through inertia or through people being sent particular kinds of social messages, but rather that they reflect (albeit imperfectly) important realities in the actual value people are able to demand from others. While the idea that some of the power dynamics observed in the social world reflect non-arbitrary differences between people might not sit well with certain crowds, it is a baby capable of keeping this bathwater around.

References: Sawaoka, T., Hughes, B., & Ambady, N. (2015). Power heightens sensitivity to unfairness against the self. Personality and Social Psychology Bulletin, 41, 1023-1035.

Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-15078.

Evolutionary Marketing

There are many popular views about the human mind that, roughly, treat it as a rather general-purpose kind of tool: one that’s not particularly suited to this task or that, but more of a Jack of all trades and master of none. In fact, many such perspectives view the mind as (bafflingly) being wrong about the world almost all the time. If one views the mind this way, one can be led into making certain predictions about how it ought to behave. For instance, some people might predict that our minds will, essentially, mistake one kind of arousal for another. A common example of this thinking involves experiments in which people are placed in a fear-arousing condition in the hope that they will subsequently report more romantic or sexual attraction to certain partners they meet at that time. The explanation for this finding often hinges on some notion of people “misplacing” their arousal – since both kinds of arousal involve some degree of overlapping physiological response – or reinterpreting a negative arousal as a positive one (e.g., “I dislike being afraid, so I must actually be turned on instead”). I happen to think that such explanations can’t possibly be close to true, largely because the arousal generated by fear and by sexual interest should motivate categorically different kinds of behavior.

Here’s one instance where an arousal mistake like that can be costly

Bit by bit, this view of the human mind is being eroded (though progress can be slow), as it fits neither the empirical evidence nor any solid theoretical grounding. As a great example of this forward progress, consider the experiments demonstrating that learning mechanisms appear to be elegantly tailored to specific kinds of adaptive problems: learning to, say, avoid poisonous foods requires quite different cognitive rules, inputs, and outputs than learning to avoid predator attacks. Learning, in other words, represents a series of rather domain-specific tasks which a general-purpose mechanism could not navigate successfully. As psychological hypotheses become tailored more closely to considerations of recurrent adaptive problems, new, previously-unappreciated features of our minds come into stark relief.

So let’s return to the matter of arousal and think about how it might impact our day-to-day behavior, specifically with respect to persuasion; a matter of interest to anyone in the fields of marketing or advertising. If your goal is to sell something to someone else – to persuade them to buy what you’re offering – the message you use to try to sell it is going to be crucial. You might, for example, try to appeal to someone’s desire to stand out from the crowd in order to get them interested in your product (e.g., “Think different”); alternatively, you might try to appeal to the popularity of a product (e.g., “The world’s most popular computer”). Importantly, you can’t send both of these messages at once (“Be different by doing that thing everyone else is doing”), so which message should you use, and in what contexts should you use it?

A paper by Griskevicius et al (2009) sought to provide an answer to that very question by considering the adaptive functions of particular arousal states. Previous accounts of how arousal affects information processing were on the general side of things: general arousal-based accounts predict that arousal – irrespective of its source – should yield shallower processing of information, causing people to rely more on mental heuristics, like scarcity or popularity, when assessing a product; affect valence-based accounts take this idea one step further, suggesting that positive emotions, like happiness, should yield shallower processing, whereas negative emotions, like fear, should yield deeper processing. The authors, however, proposed a new way of thinking about arousal – based on evolutionary theory – which suggests those previous accounts are too vague to help us truly understand how arousal shapes behavior. Instead, one needs to consider what adaptive function a particular arousal state serves in order to understand which type of message will be persuasive in that context.

Don’t worry; if this gets too complicated, you can just fall back on using sex

To demonstrate this point, Griskevicius et al (2009) examined two arousal-inducing contexts: the aforementioned fear and romantic desire. If the general arousal-based accounts are correct, both the scarcity and popularity appeals should become more persuasive as people become aroused by romance or fear; by contrast, if the affect valence-based accounts are correct, the positively-valenced romantic feelings should make both sorts of heuristics more persuasive, whereas the negatively-valenced fear arousal should make both less persuasive. The evolutionary account instead focuses on the functional aspects of fear and romance: fear activates self-defense-relevant behavior, one form of which is to seek safety in numbers; a common animal defense tactic. If one were motivated to seek safety in numbers, a popularity appeal might be particularly persuasive (since that’s where a lot of other people are), whereas a scarcity appeal would not be; in fact, sending the message that a product would help one stand out from the crowd when one is afraid could actually be counterproductive. By contrast, if one is in a romantic state of mind, positively differentiating oneself from one’s competition can be useful for attracting and subsequently retaining attention. Accordingly, romance-based arousal might have the reverse effect, making popularity appeals less persuasive while making scarcity appeals more so.

To test these ideas, Griskevicius et al (2009) induced romantic desire or fear in about 300 participants by having them read stories or watch movie clips related to each domain. Following the arousal induction, participants were asked to briefly examine an advertisement for a museum or restaurant containing a message that appealed to popularity (e.g., “visited by over 1,000,000 people each year”), scarcity (“stand out from the crowd”), or neither, and then report on how appealing the location was and whether they would be likely to go there (on a 9-point scale across a few questions).

As predicted, fear made popularity messages more persuasive (M = 6.5) than the control advertisements (M = 5.9). However, fear had the opposite effect on the scarcity messages (M = 5.0), making them less appealing than the control ads. That pattern of results was flipped in the romantic desire condition: scarcity appeals (M = 6.5) were more persuasive than controls (M = 5.8), whereas popularity appeals were less persuasive than either (M = 5.0). Without getting too bogged down in the details of their second experiment, the authors also reported that these effects were even more specific than that: appeals to scarcity and popularity only had their effects when framed in terms of behavior (stand out from the crowd/everyone’s doing it); when framed in terms of attitudes (everyone’s talking about it) or opportunities (limited-time offer), popularity and scarcity did not differ in their effectiveness, regardless of the type of arousal being experienced.

One condition did pose interpretive problems, though…

Thinking about the adaptive problems and selection pressures that shaped our psychology is critical for constructing hypotheses and generating theoretically plausible explanations of its features. Expecting some kind of general arousal, emotional valence, or other such factor to explain much about the human (or nonhuman) mind is unlikely to pan out well; indeed, it hasn’t been working out for the field for many decades now. I don’t suspect such general explanations will disappear in the near future, despite their lack of explanatory power; they have saturated much of the field of psychology, and many psychologists lack the necessary theoretical background to fully appreciate why such explanations are implausible to begin with. Nevertheless, I remain hopeful that the future of psychology might someday not include reams of thinking about misplaced arousal and general information-processing mechanisms that are, apparently, quite bad at solving important adaptive problems.

References: Griskevicius, V., Goldstein, N., Mortensen, C., Sundie, J., Cialdini, R., & Kenrick, D. (2009). Fear and loving in Las Vegas: Evolution, emotion, and persuasion. Journal of Marketing Research, 46, 384-395.

Privilege And The Nature Of Inequality

Recently, a new comic has been floating around my social news feeds claiming that it will forever change the way I think about something. It’s not as if such articles are ever absent from my feeds, really, but I decided this one would provide me with the opportunity to examine some research I’ve wanted to write about for some time. In the case of this mind-blowing comic, the concept of privilege is explained through a short story. The concept itself is not a hard one to understand: privilege here refers to cases in which an individual goes through life with certain advantages they did not earn. The comic in question looks at an economic privilege: two children are born, but one has parents with lots of money and social connections. As expected, the one with the privilege ends up doing fairly well for himself, as many burdens of life have been removed, while the one without ends up working a series of low-paying jobs, eventually in service to the privileged one. The privileged individual declares that nothing has ever been handed to him in life as he is literally being handed some food on a silver platter by the underprivileged individual, apparently oblivious to what his parents’ wealth and connections have brought him.

Stupid, rich baby…

In the interests of laying my cards on the table at the outset, I would count myself among those born into privilege. While my family is not rich or well-connected the way people typically think about those things, there haven’t been any necessities of life I have wanted for; I have even had access to many additional luxuries that others have not. Having those burdens removed is something I am quite grateful for, and it has allowed me to invest my time in ways other people could not. I have the hard work and responsibility of my parents to thank for these advantages. These are not advantages I earned, but they are certainly not advantages which just fell from the sky; if my parents had made different choices, things likely would have worked out differently for me. I want to acknowledge my advantages without downplaying their efforts at all.

That last part raises a rather interesting question that pertains to the privilege debate, however. In the aforementioned comic, the implication seems to be – unless I’m misunderstanding it – that things likely would have turned out equally well for both children had they been given access to the same advantages in life. Some of the differences each child starts with seem to be the result of their parents’ work, while other parts of that difference are the result of happenstance. The comic appears to suggest the differences in this case were just due to chance: both sets of parents love their children, but one set seems to have better jobs. Luck of the draw, I suppose. However, is that the case for life more generally – you know, the thing about which the comic intends to make a point?

For instance, if one set of parents happen to be more short-term oriented – interested in taking rewards now rather than foregoing them for possibly larger rewards in the future, i.e., not really savers – we could expect that their children will, to some extent, inherit those short-term psychological tendencies; they will also inherit a more meager amount of cash. Similarly, the child of the parents who are more long-term focused should inherit their proclivities as well, in addition to the benefits those psychologies eventually accrued.

Provided that happened to be the case, what would become of these two children if they both started life in the same position? Should we expect that they both end up in similar places? Putting the question another way, let’s imagine that, all of a sudden, the wealth of this world was evenly distributed among the population; no one had more or less than anyone else. In this imaginary world, how long would that state of relative equality last? I can’t say for certain, but my expectation is that it wouldn’t last very long at all. While the money might be equally distributed in the population, the psychological predispositions for spending, saving, earning, investing, and so on are unlikely to be. Over time, inequalities will again begin to assert themselves as those psychological differences – be they slight or large – accumulate from decision after decision.
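To make that intuition concrete, here is a minimal toy simulation of my own (not drawn from the comic or from any research discussed below), assuming only that people differ in a hypothetical propensity to save; every other number is an arbitrary placeholder. Starting from perfectly equal wealth, the distribution fans out on its own:

```python
import random

def simulate_redistribution(people=1000, years=30, seed=1):
    """Toy model: everyone starts with identical wealth; the only
    difference between individuals is the fraction of a fixed income
    they save each year. All parameters are illustrative guesses."""
    random.seed(seed)
    # Hypothetical saving propensities, spread uniformly from 0% to 20%.
    save_rates = [random.uniform(0.0, 0.2) for _ in range(people)]
    wealth = [10_000.0] * people              # perfect initial equality
    for _ in range(years):
        for i in range(people):
            wealth[i] *= 1.05                 # 5% return on holdings
            wealth[i] += 50_000 * save_rates[i]  # savings from income
    wealth.sort()
    print(f"10th percentile: ${wealth[people // 10]:>12,.0f}")
    print(f"Median:          ${wealth[people // 2]:>12,.0f}")
    print(f"90th percentile: ${wealth[9 * people // 10]:>12,.0f}")

simulate_redistribution()
```

Even with nothing but that single, one-dimensional difference in temperament, the equal starting point dissolves within a generation; adding differences in earning, spending, and investing would only accelerate the process.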

Clearly, this isn’t an experiment that could be run in real life – people are quite attached to their money – but there are naturally occurring versions of it in everyday life. If you want a context in which people might randomly come into possession of a sum of money, look no further than the lottery. Winning the lottery – both whether one wins at all and how much money one gets – is as close to randomly determined as we’re going to get. If the differences between the families in the mind-blowing comic are due to chance factors, we would predict that people who win more money in the lottery should, subsequently, do better in life, relative to those who won smaller amounts. By contrast, if chance factors are relatively unimportant, then the amount won should matter less: whether they win large or small amounts, winners might spend it (or waste it) at similar rates.

Nothing quite like a dose of privilege to turn your life around

This was precisely what was examined by Hankins et al (2010): the authors sought to assess the relationship between the amount of money won in a lottery and the probability of the winner filing for bankruptcy within five years of their win. Rather than removing inequalities and seeing how things shake out, then, this research took the opposite approach: examining a process that generated inequalities and seeing how long it took for them to dissipate.

The primary sample for this research was the Fantasy 5 winners in Florida from April 1993 to November 2002 who had won $600 or more: approximately 35,000 of them after certain screening measures had been implemented. These lottery winners were grouped into those who won between $10,000 and $50,000 and those who won between $50,000 and $150,000 (subsequent analyses would also examine those who won $10,000 or less, yielding small, medium, and large winner groups).

Of those 35,000 winners, about 2,000 were linked to a bankruptcy filing within five years of their win, meaning that a little more than 1% of winners were filing each year on average; a rate comparable to that of the broader Florida population. The first step was to examine whether the large winners were filing for bankruptcy at rates comparable to the small winners prior to their win, which, thankfully, they were. In pretty much all respects, those who won a lot of money did not differ before their win from those who won less (including race, gender, marital status, educational attainment, and nine other demographic variables). That’s what one would expect from the lottery, after all.

Turning to what happened after the win: within the first two years, those who won larger sums of money were less likely to file for bankruptcy than smaller winners; however, in years three through five that pattern reversed itself, with larger winners becoming more likely to file. The end result of this shifting pattern was that, in five years’ time, large winners were equally likely to have filed for bankruptcy, relative to smaller winners. As Hankins et al (2010) put it, large cash payments did not prevent bankruptcy; they only postponed it. This result held up across a number of different analyses, suggesting that the finding is fairly robust. In fact, when the winners eventually did file for bankruptcy, the big winners didn’t have much more to show for it than the small winners: those who won between $25,000 and $150,000 had only about $8,000 more in assets than those who had won less than $1,500, and the two groups had comparable debts.

Not much of an ROI on making it rain these days, it seems

At least when it came to one of the most severe forms of financial distress, large sums of cash did not appear to stop people from falling back into poverty in the long term, suggesting that there’s more going on in the world than just poor luck and unearned privilege. Whatever this money was being spent on, it did not appear to be sound investments. Maybe people were making more of their luck than they realized.

It should be noted that this natural experiment does pose certain confounds, perhaps the most important of which is that not everyone plays the lottery. In fact, given that the lottery itself is quite a bad investment, we are likely looking at a non-random sample of people who choose to play it in the first place; people who already aren’t prone to making wise, long-term decisions. Perhaps these results would look different if everyone played the lottery but, as it stands, thinking about these results in the context of the initial comic about privilege, I would have to say that my mind remains un-blown. Unsurprisingly, deep truths about social life can be difficult to sum up in a short comic.

References: Hankins, S., Hoekstra, M., & Skiba, P. (2010). The ticket to easy street? The financial consequences of winning the lottery. Vanderbilt Law and Economics Research Paper, 10-12.

Are People Inequality Averse?

People are averse to a great many things: most of us are averse to the smell of feces or the taste of rotting food; a few people are averse to the idea of intercourse with opposite-sex individuals, while many people are averse to same-sex intercourse. As I have been learning lately, there are also many people in charge of managing academic journals who are averse to the idea of publishing research papers with only a single experiment in them. Related to that last point, there have been claims made that people are averse to inequality per se. I happen to have a new(ish; it’s been written up for over a year) experiment which I feel speaks to the matter, and which I can hopefully find a home for soon. In the meantime, since I will be talking about this paper at an upcoming conference (NEEPS), I have decided to share some of the results with all of you pre-publication. Anyone interested in reading the paper proper can feel free to contact me for a copy.

And anyone out there with an interest in publishing it…

To start off, consider the research my experiment was based on, which purports to demonstrate that human punishment is driven by inequality rather than losses; a rather shocking claim. Raihani & McAuliffe (2012) note that many experiments examining human punishment possess an interesting confound: they tend to generate both losses and inequality for participants. Here’s an example to make that more concrete: in what’s known as a public goods game, a group of four individuals are each given a sum of money. Each individual can decide how much of their money to contribute to a public pot. Every dollar put into the public pot gets multiplied by three, and then the pot is equally distributed among all players. From the perspective of maximizing the overall payment for the group, each member should contribute all their money, meaning everyone makes three times the amount they started out with. However, for any individual player to maximize their own payment, the best course of action is to contribute nothing, as every dollar contributed only returns 75 cents to their own payment. The best payoff for you, then, would be if everyone else contributed all of their money (giving you $0.75 for every dollar they have) and you kept all of yours. The public and private goods are at odds.
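For readers who like to see the incentives laid out, here is a quick sketch of that payoff arithmetic – a generic four-player public goods calculation of my own, not code from the paper; the function name and example numbers are purely illustrative:

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=3):
    """Each player's payoff: whatever they kept, plus an equal share
    of the tripled pot. With four players, every dollar contributed
    returns only $0.75 to the contributor."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation: everyone triples their starting money.
print(public_goods_payoffs([10, 10, 10, 10]))  # [30.0, 30.0, 30.0, 30.0]

# One free-rider among full contributors does best of all.
print(public_goods_payoffs([0, 10, 10, 10]))   # [32.5, 22.5, 22.5, 22.5]
```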

A large body of literature finds that those who contribute to the public good are more likely to desire that costs be inflicted on those who do not contribute as much. In fact, if they’re given the option, contributors will often pay some of their remaining money to inflict costs on those who did not contribute. The question of interest here is what precisely is being punished? On the one hand, those who contributed are, in some sense, having a cost inflicted on them by less cooperative individuals; on the other, they also find themselves at a payoff disadvantage, relative to those who did not contribute. So are these punitive sentiments being driven by losses, inequality, or both?

To help answer that question, Raihani & McAuliffe (2012) put together a taking game. Two players – X and Y – started the game with a sum of money. Player X could take some amount of money from Y and add it to his own payment; player Y could, in turn, pay some of their money to reduce player X’s payment following the decision to take or not. The twist on this experiment is that each player started out with a different amount of money. In cents, the starting payments were 10/70, 30/70, and 70/70, respectively. As player X could take 20 cents from Y, the resulting payments (if X opted to take the money) would be 30/50, 50/50, or 90/50. So, in all cases, X could take the same amount of money from Y; however, in only one case would this taking generate inequality favoring X. The question, then, is how Y would punish X for his behavior.
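A tiny sketch makes the design easy to see at a glance (my own illustration of the conditions as described above, not the authors’ materials):

```python
def taking_game(x_start, y_start=70, take=20):
    """Raihani & McAuliffe's taking game: X takes 20 cents from Y.
    Returns the final payments and who ends up ahead."""
    x, y = x_start + take, y_start - take
    standing = "X favored" if x > y else "equal" if x == y else "Y favored"
    return x, y, standing

# Y loses the same 20 cents in every condition; only the third
# condition leaves X with more than Y afterward.
for x_start in (10, 30, 70):
    print(taking_game(x_start))
# (30, 50, 'Y favored')  (50, 50, 'equal')  (90, 50, 'X favored')
```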

The experiment found that when X did not take any money from Y, Y did not spend much to punish (about 11% of subjects paid to punish the non-taker). As there’s no inequality favoring X and no losses incurred by Y, this lack of punishment isn’t terribly shocking. However, when X did take money from Y, Y did spend quite a bit on punishment, but only when the taking generated inequality favoring X. In the event that X ended up worse off than, or as well off as, Y after the taking, Y did not punish significantly more than if X had taken nothing in the first place (about 15% punished in the first two conditions and 42% in the third). This would seem to demonstrate that inequality – not losses – is what is being punished.

“Just let him take it; he’s probably worse off than you”

Unfortunately for this conclusion, the experiment by Raihani & McAuliffe (2012) contains a series of confounds as well. The most relevant of these is that there was no way for X to generate inequality favoring himself without taking from Y. This means that, despite the contention of the authors, it’s still impossible to tell whether the taking or the inequality is being punished. To get around this issue, I replicated their initial study (with a few changes to the details, keeping the method largely the same) but added two new conditions. In the first of these conditions, player X could only add to their own payment, leaving Y’s payment unmolested; in the second, player X could only deduct from player Y’s payment, leaving their own payment the same. What this means is that inequality could now be generated via three different methods: someone taking from the participant, someone adding to their own payment, and someone destroying some of the participant’s payment.

If people are punishing inequality per se and not losses, the means by which the inequality gets generated should not matter: taking should be just as deserving of punishment as destruction or augmentation. However, this was not the pattern of results I observed. I did replicate the original results of Raihani & McAuliffe (2012): taking resulted in more punishment when the taker ended up with more than their victim (75% of players punished), while the other two taking conditions did not show this pattern (punishment rates of 40% and 47%). When participants had their payment deducted by the other player without that other player benefiting, punishment was uniformly high, and inequality played no significant role in determining it (63%, 53%, and 51%, respectively). Similarly, when the other player just benefited himself without affecting the participant’s payment, participants were rather uninterested in punishment, regardless of whether that person ended up better off than them (18%, 19%, and 14%).

In summary, my results show that punishment tended to be driven primarily by losses. This makes a good deal of theoretical sense when considered from an evolutionary perspective: making a few reasonable assumptions, we can say that any adaptation which led its bearer to tolerate costs inflicted by others in order to allow those others to be better off would not have a bright reproductive future. By contrast, punishing individuals who inflict costs on you can readily be selected for to the extent that it stops them from doing so again in the future. The role of inequality only seemed to exist in the context of taking. Why might that be the case? While it’s only speculation on my part, I suspect the answer has quite a bit to do with how other, uninvolved parties might react to such punishment. If needier individuals make better social investments – all else being equal – third parties might be less willing to subsidize the costs of punishing them, deterring the person whose money was taken from punishing the taker in turn. The logic is a bit more involved than that, but the answer seems to involve wanting to provide benefits to those who would appreciate them most, for the best return on the investment.

“Won’t someone think about the feelings of the rich? Probably not”

The hypothesis that people are averse to inequality itself seems to rest on rather shaky theoretical foundations as well. An adaptation that exists to achieve equality with others sounds like a rather strange kind of mechanism. In no small part, it’s weird because equality is a constraint on behavior, and constraining behavior does not allow certain, more-useful outcomes to be reached. As an example, if I have a choice between $5 for both of us or $7 for you and $10 for me, the latter option is clearly better for both of us, but the constraint of equality would prevent me from taking it. Further, if you’re inflicting costs on me, it seems I would be better off if I could prevent you from inflicting them. A poorer person mugging me doesn’t suddenly mean that being mugged would not be something I want to avoid. Perhaps there are good, adaptive reasons that equality-seeking mechanisms could exist despite the costs they seem liable to reliably inflict on their bearers. Perhaps there are also good reasons for many journals only accepting papers with multiple experiments in them. I’m open to hearing arguments for both.

References: Marczyk, J. (Written over a year ago). Human punishment is not primarily motivated by inequality aversion. Journal of Orphan Papers Seeking a Home. 

Raihani, N. & McAuliffe, K. (2012). Human punishment is motivated by inequality aversion, not a desire for reciprocity. Biology Letters, 8, 802-804.

Perverse Punishment

There have been a variety of studies in psychology examining what punishment is capable of doing; mathematical models have been constructed too. As it turns out, when you give people the option to inflict costs on others, they are pretty good at using it to manipulate others’ behavior. The basic principle is, well, pretty basic: there are costs and benefits to acting in various fashions and, if you punish certain behaviors, you shift the plausible range of self-interested behaviors. Stealing might be profitable in some cases, unless I know that it will, say, land me in jail for 5 years. Since five years in jail is a larger cost than the benefit I might reap from stealing (provided I am detected, of course), the incentive is not to steal, and people don’t take things that aren’t theirs. The power of punishment is such that, in theory, it is capable of making people behave in pretty much any conceivable fashion, so long as they are making decisions on the basis of some kind of cost/benefit calculation. All you have to do is make the alternative courses of action costlier, and you can push people toward any particular path (though if people behave irrespective of the costs and benefits, punishment is no longer effective).
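That cost/benefit logic can be written down in a single line; here is a bare-bones sketch with illustrative numbers of my own, not figures from any of the studies discussed:

```python
def stealing_pays(benefit, fine, detection_prob):
    """Deterrence in one line: the act is worthwhile only if its
    benefit exceeds the expected cost of being punished."""
    return benefit > fine * detection_prob

# No punishment on the books: stealing $100 is 'rational'.
print(stealing_pays(benefit=100, fine=0, detection_prob=0.5))      # True
# A large enough expected fine flips the calculation.
print(stealing_pays(benefit=100, fine=1_000, detection_prob=0.5))  # False
```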

Now, in most cases, the main focus of this research on punishment has been on what one might dub “normal” punishment. A case of normal punishment would involve, say, Person A defecting on Person B, followed by person B then punishing person A. So, someone behaves in an anti-social fashion and gets punished for it. This kind of punishment is great for maintaining cooperation and pointing out how altruistic people are. However, a good deal of punishment in these experiments is what one might dub “perverse”.

“Yes; quite perverse indeed…”

By perverse punishment, I am referring to instances of punishment where people are punished for giving up their own resources and benefiting others. That people are getting punished for behaving altruistically is rather interesting, as the pro-social behavior being targeted for punishment is, at least in the typical experiments, benefiting the people enacting the punishment. As we tend to punish behavior we want to see less of, and self-benefiting behavior is generally something we want more of, the punishment of others for benefiting the punisher appears to be rather strange. Now I think this strangeness can be resolved, but, before doing that, it is worthwhile to consider an experiment examining whether or not punishment is also capable of reducing perverse punishment.

The experiment – by Cinyabuguma, Page, & Putterman (2006) – began with a voluntary contribution game. In games like these (also known as public goods games), a number of players start off with a certain pool of resources. In the first stage of the game, each player has the option to contribute any amount of their resources toward the public pool. The resources in this pool get multiplied by some amount and then distributed equally among all the players. The payouts of these games are such that everyone could do better if they all contributed, but at the individual level contributions make one worse off. So, in other words, you make the most money when everyone else contributes the most and you contribute nothing. In the second stage of the game, the amount that each player has donated to the public good becomes known to everyone else, and each person has the option to “punish” others, which involves giving up some of your own payment to reduce someone else’s payment by four times the amount you paid.

The twist in this experiment is the addition of another condition. In that condition, after the first two steps (first, subjects contribute; second, subjects learn of the contributions of others and can punish them), there was a round of second-order punishment. What this means is that, after people punished the first time, each participant got to see who punished whom, and could then punish each other again. Simply put: I could punish someone for punishing me or for punishing someone else. So the first condition allowed for the punishment of contributions alone, whereas the second allowed for both the punishment of contributions and the punishment of punishment. The question of interest is whether perverse punishment and/or cooperation differed between the two.
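To keep the bookkeeping straight, here is a minimal sketch of how a punishment stage changes payoffs under the 4:1 ratio described above; the matrix format and example numbers are my own illustration, not the authors’ implementation:

```python
def apply_punishment(payoffs, spending, ratio=4):
    """One punishment stage: spending[i][j] is what player i pays to
    punish player j; each unit spent removes `ratio` units from the
    target. The same function covers the second-order stage, since
    punishing a punisher works identically."""
    result = list(payoffs)
    for i, row in enumerate(spending):
        for j, spent in enumerate(row):
            result[i] -= spent           # punisher pays the cost
            result[j] -= spent * ratio   # target loses four times that
    return result

# A high contributor (player 0) spends 1 unit punishing a free-rider
# (player 2); in the second-order round, player 2 could retaliate.
print(apply_punishment([22.5, 22.5, 32.5],
                       [[0, 0, 1], [0, 0, 0], [0, 0, 0]]))
# [21.5, 22.5, 28.5]
```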

“It’s still looking pretty perverse to me”

The answer to that question is yes, but the differences are quite slight and often not significant. When people could only punish contributions, the average contribution was 7.09 experimental units (each person could contribute up to 10); when punishment of punishment was also permitted, the average contribution rose ever so slightly, to between 7.35 and 7.97 units. Similarly, earnings increased when people could punish punishment: when second-order punishment was an option, people earned more (about 13.35 units) relative to when it wasn’t (around 12.86 units). So, though these differences weren’t terribly large, allowing for the punishment of punishers tended to slightly increase the overall amount of money people made.

Also of interest, though, is the nature of the punishment itself. In particular, there are two findings I would like to draw attention to: the first of these is that if someone received punishment for punishing others, they tended to punish less during later periods. In other words, since punishing others was itself punished, less punishment took place (though this seemed to affect the perverse punishment more so than the normal type). This is a fairly expected result.

The second finding I would like to draw attention to concerns the matter of free-riders. Free-riders are individuals who benefit from the public good, but do not themselves contribute to it. Now, in the case of this economic game we’ve been discussing, there are two types of free-riders: the first are people who don’t contribute much to the public good and, accordingly, are targeted for “normal” punishment. However, there are also second-order free-riders; I know this must be getting awfully hard to keep track of, but these second-order free-riders are people who benefit from free-riders being punished, but do not themselves punish others. To put that in simple terms, I’m better off if anti-social people are punished and if I don’t have to be the one to punish them personally. What I find interesting in these results is that these second-order free-riders were not targeted for punishment; instead, those who punished – either normally or perversely – ended up getting punished more as revenge. Predictably, then, those who failed to punish ended up with an advantage over those who did punish. Not only did they not have to spend money on punishing others, but they also weren’t the target of revenge punishment.

So what does all this tell us when it comes to understanding perverse punishment, and punishment more generally? Well, part of that answer comes from considering the fact that it was predominantly people above or below the group’s average contribution level doing most of the punishing; relatedly, they were largely targeting each other. This suggests, to me anyway, that a good deal of “perverse” punishment is a kind of preemptive defense (or, as some might call it, an offense) against one’s probable rivals. Since low contributors likely have some inkling that those who contribute a lot will preferentially target them for punishment, this “perverse” punishment could simply reflect that knowledge. Such an explanation makes the “perverse” punishment seem a bit less perverse. Instead of reflecting people punishing against their interests, perverse punishment might work in their interests to some degree: they don’t want to be punished, and they are trying to inflict costs on those who would inflict costs on them.

Which at least makes more sense than the “He’s just an asshole” hypothesis…

I think it also helps to consider which patterns of punishment were not observed. As I mentioned initially, people’s payoffs in these games would be maximized if everyone else contributed the maximum and they personally contributed nothing. It follows, then, that one might be able to make oneself better off by punishing anyone else who contributes less than the maximal amount, irrespective of how much the punisher contributed. Yet this isn’t what we see. This raises the question of why average contributors don’t receive much punishment, despite their still contributing less than the highest donors. The answer no doubt lies, in part, in the fact that punishing others is costly, as previously mentioned. Thinking about when punishment becomes less costly should shed light on the matter but, since this has already gone a bit long, I’ll save that speculation for when my next paper gets published.

Reference: Cinyabuguma, M., Page, T., & Putterman, L. (2006). Can second-order punishment deter perverse punishment? Experimental Economics, 9, 265-279.

Is Morality All About Being Fair?

What makes humans moral beings? This is the question that leads off the abstract of a paper by Baumard et al (2013), and it is certainly one worth considering. However, before one can begin to answer that question, one should have a pretty good idea in mind as to what precisely one means by the term ‘moral’. On that front, there appears to be little in the way of consensus: some have equated morality with things like empathy, altruism, impartiality, condemnation, conscience, welfare gains, or fairness. While all of these can be features of moral judgments, none of these intuitions about what morality is tends to differentiate it from the non-moral domain. For instance, mammary glands are adaptations for altruism, but not necessarily adaptations for morality; people can empathize with the plight of sick individuals without feeling that the issue is a moral one. If one wishes to have a productive discussion of what makes humans moral beings, it would seem beneficial to begin from some solid conceptualization of what morality is and what it has evolved to do. If you don’t start from that point, there’s a good chance you’ll end up talking about a different topic than morality.

Thankfully, academia is no place for productivity.

The current paper up for examination, by Baumard et al (2013), is a bit of an offender in that regard: their account explicitly mentions that a definition for the term is hard to agree upon, and they use the word “moral” to mean “fair”. To understand this issue, first consider the model the authors put forth: their account attempts to explain moral sentiments by suggesting that selection pressures might have shaped people to seek out the best possible social deals they could get. In simple terms, the idea contains the following points: (1) people are generally better off cooperating than not, but (2) some individuals are better cooperative partners than others. Since (3) people only have a limited budget of time and energy to spend on cooperative interactions and can’t cooperate with everyone, we should expect that (4) so long as people have a choice as to whom they cooperate with, they will tend to spend their limited time with the most productive partners. The result is that overly-selfish or unfair individuals will not be selected as partners, generating selection pressures for cognitive mechanisms concerned with fairness or altruism. Their model, in other words, centers around managing the costs and benefits of cooperative interactions. People are moral (fair) because it leads to their being preferred as interaction partners.

Now that all sounds well and good – and I would agree with each of the points in that line of thought – but it doesn’t sound a whole lot like a discussion of what makes people moral. One way of conceptualizing the idea is to think about a simple context: shopping. If I’m in the market for, say, a new pair of shoes, I have a number of different stores I might buy my shoes from and a number of potential shoes in each store. Shopping around for the shoe I like most at a reasonable price fits all of the above criteria in some sense, but shoe-shopping is not itself often a moral task. That a shoe I like is priced higher than I am willing to pay does not mean I will say that such pricing is wrong the way I might say stealing is wrong. Baumard et al (2013) recognize this issue, noting that a challenge is explaining why people don’t just have selfish motives, but also moral motives that lead them to respect other people’s interests per se.

Now, again, this would be an excellent time to have some kind of working definition of what precisely morality is because, if one doesn’t, it might seem a bit peculiar to contrast moral and selfish motivations – which the authors do – as if the two were opposite ends of some spectrum. I say that because Baumard et al (2013) go on to discuss how people who have truly moral concerns for the welfare of others might be chosen as cooperative partners more often because they’re more altruistic, building up a reputation as good cooperators, and this is, I think, supposed to explain why we have said moral concerns. So the first problem here is that the authors are no longer explaining morality per se, but rather altruistic behaviors. As I mentioned in the first paragraph, mechanisms for altruism need not be moral mechanisms. The second problem I see is that, provided their reasoning about reputation is accurate (and I think it is), it seems perfectly plausible for non-moral mechanisms to do that work as well: I could simply be selfishly interested in being altruistic (that is to say, I would care about your interests out of my own interests, the same way people might refrain from murdering each other because they’re afraid of going to jail or of being killed in the process themselves). The authors never address that point, which bodes poorly for their preferred explanation.

“It’s a great fit if you can just look past all the holes…”

More troublingly for the partner-choice model of morality, it doesn’t seem to explain why people punish others for acts deemed immoral. The only type of punishment it seems to account for would be, essentially, revenge, where an individual punishes another to secure their own self-interest and defend against future aggression; it might also be able to explain why someone might not wish to continue working in an unfair relationship. This leaves the model unable to explain any kind of moral condemnation from third parties (those not initially involved in the dispute). It would seem to have little to say about why, for instance, an American might care about the woes suffered by North Korean citizens under the current dictatorship. As far as I can tell, this is because the partner-choice account of morality is a conscience-centric account, and conscience does not explain condemnation; that I might wish to cooperate with ‘fair’ people doesn’t explain why I think someone should be punished for behaving unfairly toward a stranger. The model at least posits that moral condemnation ought to be proportional to the offense (i.e., an eye for an eye), seeking to restore fairness, but not only is this insight not a unique prediction, it’s also contradicted by some data on drunk driving I covered before (that is, unless a man hitting a woman while driving his car is more “unfair” than a drunk woman hitting a man).

Though I don’t have time to cover every issue I see with the paper in depth (in large part owing to its length), the main issue is that Baumard et al (2013) never really define what they mean by morality in the first place. As a result, the authors appear to just substitute “altruism” or “fairness” for morality instead. Now if they want to explain either of those topics, they’re more than welcome to; it’s just that calling them morality instead of what they actually mean (fairness) tends to generate quite a bit of well-deserved confusion. In the interests of progress, then, let’s return to the concern I raised about the opening question. When we ask what makes people moral, we need to start by considering what morality is. The short answer is that morality is, roughly, a perception: at a basic level, it’s the ability to perceive acts in, or states of, the world along a dimension of “right” or “wrong” in much the same way we might perceive sensations as painful or pleasurable. This spectrum seems to range from the morally praiseworthy at one end to the morally condemnable at the other, with a neutral point somewhere in the middle.

Framed in this light, we can see a few rather large problems with conflating morality with things like fairness. The first of these is that perceiving an outcome as immoral would require that one first perceive it as unfair and then as immoral, as neither the reverse ordering nor one in which both perceptions arise simultaneously makes much sense. If one can have a perception of fairness divorced from a moral perception, then it seems that perception could do the behavioral heavy lifting when it comes to partner choice. Again, people could be selfishly fair. The second problem that becomes apparent is that perceptions of immorality can be generated in response to acts that do not appear to deal with fairness or altruism at all. As sexual and solitary behaviors (like incest or drug use) are moralized with some frequency, the fairness account seems to be lacking. In fact, there are even cases where altruistic behavior has been morally condemned by others, which is precisely the opposite of what the Baumard et al (2013) model would seem to predict.

If we reconceptualize such behaviors properly, though…

Instead of titling their paper “A mutualistic approach to morality”, the authors might have been better served by the title “A mutualistic approach to fairness”. Then again, this would only go so far in remedying the issue, as Baumard et al (2013) never really define what they mean by “fair” either. Since people seem to disagree on that issue with some frequency, we’re still left with more than a bit of a puzzle. Is it fair that very few people in the world hold so much wealth? Would it be fair for that wealth to be taken from them and given to others? People likely have different answers to those questions.

Now the authors argue that this isn’t really that large of a problem for their account, as people might, for instance, disagree as to the truth of a matter while all holding the same concept of truth. Accordingly, Baumard et al (2013) posit that people can disagree about what is fair even if they hold the same concept of fairness. The problem with that analogy, as far as I can see, is that people don’t seem to have competing senses of the word “truth”, while they do have different senses of the word “fair”: fairness based on outcome (everyone gets the same amount), based on effort (everyone gets in proportion to what they put in), based on need (those who need the most get the most), and perhaps others still. Which of these concepts people favor is likely to be context-specific. However, I don’t know that the same can be said of different senses of the word “true”. Are there multiple senses in which something might or might not be true? Are those senses favored contextually? Perhaps there are different senses of the word, but none come to mind as readily.

Baumard et al (2013) might also suggest that by “fair” they actually mean “mutually beneficial” (writing, “Ultimately, the mutualistic approach considers that all moral decisions should be grounded in consideration of mutual advantage”), but we’d still be left with the same basic set of problems. Bouncing interchangeably between three different terms (moral, fair, and mutually-beneficial) is likely to generate more confusion than clarity. It is better to ensure one has a clear idea of what one is trying to explain before one sets out to explain it.

References: Baumard, N., Andre, J.B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral & Brain Sciences, 36, 59-122.


The Best Mate Money Can Buy

There’s a proud tradition in psychological research that involves asking people about how much they value this thing or that one, be it in a supermarket or, for our present purposes, in a sexual partner. Now there’s nothing intrinsically wrong with doing this kind of research, but while there are certain benefits to it, the method does have its shortcomings. One easy way to grasp a potential issue with this methodology is to consider the dating website Okcupid.com. When users create a profile on this site, they are given a standard list of questions to answer in order to tell other people about themselves. Some of these questions deal with matters like, “What are six things you couldn’t do without?” or “What are you looking for in a partner?”. The typical sorts of answers you might find to questions like these are highlighted in a video I really like called “The Truth About Being Single”:

“All these people keep interrupting my loneliness!”

The problem with questions like these is that – when they are posed in isolation – their interpretation can get a bit difficult; they often seem to blur the lines between what people require and what they merely want. More precisely, the ratings people give to various items or traits in terms of their importance might not accurately capture their degree of actual importance. A quick example concerns cell phones and oxygen. If you were to ask people on Okcupid about six things they couldn’t do without on a day-to-day basis, more people would probably list their phones than the air they breathe. They would also tell you that, in any given year, they spend much more money on cell phones than on air. Despite this, air is clearly the more important item, as cell phones stop being useful once their owner has asphyxiated (even if the phone would let you go out playing whatever bird-themed game is currently trending).

Perhaps that all seems very mundane, though: “Yes, of course,” you might say, “air is more important than iPhones, but putting ‘I need air’ on your dating profile or asking survey respondents how important the air they breathe is doesn’t tell you much about a person, whereas iPhone ownership makes you a more attractive, cool, and intelligent individual”. While it’s true that “people rate breathing as very important” will probably not land you any good publications or hot dates, when we start thinking about the relative importance of the various traits people look for in a partner, we can end up finding out some pretty interesting things. Specifically, we can begin to uncover what each sex views as necessities and what they view as luxuries in potential partners. The key to this method involves constraining the mate choices people can make: when people can’t have it all, what they opt to have first (i.e. people want air before iPhones if they don’t have either) tells us – to some extent – where their priorities lie.

Enter a paper by Li et al (2002). The authors note that previous studies on mating and partner selection have found sex differences in the importance placed on certain characteristics: men tend to value physical attractiveness in a partner more than women do, and women tend to value financial prospects more than men do. However, the ratings of these characteristics are not often found to be of paramount importance relative to ratings of other characteristics, like kindness, creativity, or a sense of humor (on which the sexes tend to agree). But perhaps the method used to derive those ratings is missing part of the larger picture, as it was in our air/iPhone example. Without asking people to make tradeoffs between these characteristics, researchers might be, as Li et al put it, “[putting the participants in] the position of someone answering a question about how to spend imaginary lottery winnings”. When people have the ability to buy anything, they will spend proportionately more money on luxuries relative to necessities. Similarly, when people are asked about what they want in a mate, they might play up the importance of luxuries rather than necessities, if they are just thinking about the traits in general terms.

“I’m spending it all on cans of beans!”

What Li et al (2002) did in the first experiment, then, was to provide 78 participants with a list of 10 characteristics that are often rated as important in a long-term partner. The subjects were told to, essentially, Frankenstein themselves a marriage partner from that list. Their potential partners would start out in the bottom percentile for each of those traits. What this means is that, if we consider the trait of kindness, their partner would begin less kind than everyone else in the population. However, people could raise the percentile score of their partner in any domain by 10 points by spending a point from their “mating budget” (so if one point was invested in kindness, their partner would now be less kind than 90% of people; if two points were spent, the partner is now less kind than 80% of people, and so on). The twist is that people were only given a limited budget. With 10 traits and 10 percentile steps per trait, people would need 100 points to max out a partner in everything. The first budget people started with was 20 points, which required some tough calls to be made.
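To make the budget mechanics concrete, here is a minimal sketch in Python (the trait names, the particular allocation, and the function names are my own illustration, not taken from the paper’s materials):

```python
# Sketch of the Li et al (2002) budget design. Partners start at the
# bottom percentile on every trait; each budget point spent on a trait
# raises that trait by 10 percentile points.

TRAITS = ["kindness", "intelligence", "physical attractiveness",
          "financial prospects", "sense of humor"]  # illustrative subset

def build_partner(allocation, budget=20):
    """Turn a {trait: points} allocation into percentile scores."""
    if sum(allocation.values()) > budget:
        raise ValueError("Allocation exceeds the mating budget")
    return {trait: 10 * allocation.get(trait, 0) for trait in TRAITS}

# A hypothetical 20-point budget forces tradeoffs: funding some traits
# heavily means leaving others at the 0th percentile.
partner = build_partner({"intelligence": 4, "physical attractiveness": 4,
                         "financial prospects": 3, "kindness": 2})
print(partner)
```

With only 20 points against a 100-point maximum, every point spent on looks is a point not spent on kindness or income, which is exactly the constraint that forces necessities to reveal themselves.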

So what do people look for in a partner first? That depends, in part, on whether you’re a man or a woman. Women tended to spend the most – about 20% of their initial budget (or 4 points) – on intelligence; men spent comparably in that domain as well, with about 16% of their budget going towards brains. The next thing women tended to buy was good financial prospects, spending another 17% beefing up their partner’s yearly income. Men, on the other hand, seemed relatively unconcerned with their partner’s salary, spending only 3% of their initial budget on a woman’s income. What men seemed much more interested in was getting physical attractiveness, spending about 21% of their initial budget there; about twice what the women spent. The most vital characteristics in a long-term partner, then, seemed to be intelligence and money for women, and attractiveness and intelligence for men, in that order.

However, as people’s mating budget was increased from 20 points to 60 points, these sex differences disappeared. Both men and women began to spend comparably as their budgets grew and tradeoffs became less pressing. In other words, once people had the necessities for a relationship, they bought the same kinds of luxuries. These results were replicated in a slightly-modified second study using 178 undergraduates and five traits instead of ten. In the final study, participants were given a number of potential dates to screen for acceptability. These mates were said to have been rated along the previous five characteristics in a high/medium/low fashion. Participants could reveal the hidden ratings of the potential dates for free, but were asked to reveal as few as possible in order to make a decision. As one might expect, men tended to reveal how physically attractive the potential mate was first more often than any other trait (43% of the time, relative to women’s 16%), whereas women tended to first reveal how much social status the men had (35% of the time, relative to men’s 16%). Men seem to value good looks and women tend to value access to resources. Stereotype accuracy confirmed.

And now onto the next research project…

This is the reason I liked the initial video so much. The first portion of the video reflects the typical sentiments that people often express when it comes to what they want in a partner (“I just want someone [who gets me/to spend time with/to sleep next to/etc.]”). These are, however, often expressions of luxuries rather than necessities. Much like the air we breathe, the presence of the necessities in a potential mate is, more or less, taken for granted – at least until they’re absent, that is. So while traits like “creativity” might make an already-attractive partner more attractive, being incredibly creative will likely count for quite a bit less if you’re poor and/or unattractive, depending on who you’re trying to impress. I’ll leave the final word on the matter to one of my favorite comedians, John Mulaney, as I think he expresses the point well: “Sometimes I’ll be talking to someone and I’ll be like, ‘yeah, I’ve been really lonely lately’, and they’ll be like, ‘well we should hang out!’ and I’m like, ‘no; that’s not what I meant’.”

References: Li, N., Bailey, J., Kenrick, D., & Linsenmeier, J. (2002). The necessities and luxuries of mate preferences: Testing the tradeoffs. Journal of Personality and Social Psychology, 82, 947-955.

Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often penetrates into lay discussions of psychology, and has been the subject of many popular books, such as Predictably Irrational. Part of the reason that people might give these conflicting views of human intelligence – either in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb that focus on only a limited set of information when making decisions. A simple, perhaps hypothetical, example might be a “beauty heuristic”: when deciding whom to get into a relationship with, pick the most physically attractive available option; other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one’s goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. Past a certain point, the benefits of collecting additional bits of information are outweighed by the costs of doing so, and there are many, many potential sources of information to choose from. However much additional information might help one make a better choice, then, making the best objective choice is often a practical impossibility. In this view, heuristics trade accuracy for effort, leading to ‘good-enough’ decisions. A related, but somewhat more nuanced, benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it’s drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you select 20 random men and women from your block to measure, your estimate will likely not be precisely 5 inches; it might be lower or higher, and the degree of that error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you’re only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) than when you’re sampling 20, or 50, or a million. Importantly, the issue of sampling error crops up for each source of information you’re using. So unless you’re sampling large enough quantities of information to balance that error out across all the information sources you’re using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by the heuristic might well be less predictively troublesome than the error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
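The sampling-error point is easy to check directly. Below is a quick simulation of my own (the 5-inch gap is taken from the example above; the 3-inch within-sex standard deviation is an assumed value for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_DIFF = 5.0  # true male-female height difference, in inches
SD = 3.0         # assumed within-sex standard deviation

def estimate_diff(n):
    """Estimate the sex difference from a sample of n men and n women."""
    men = rng.normal(70.0, SD, n)                # the means are arbitrary;
    women = rng.normal(70.0 - TRUE_DIFF, SD, n)  # only the gap matters
    return men.mean() - women.mean()

# How far estimates typically stray from the true 5.0 at each sample size:
for n in (2, 20, 50, 500):
    estimates = [estimate_diff(n) for _ in range(2000)]
    print(f"n = {n:>3}: typical error ≈ {np.std(estimates):.2f} inches")
```

Running this shows the typical error shrinking from around 3 inches at n = 2 to about a fifth of an inch at n = 500, which is precisely the pattern described above.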

While that seems to be all well and good, the astute reader will have noticed the boundary conditions required for heuristics to be of value: users need to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you play an optimization strategy and have sufficient amounts of information about each source, you’ll make the best possible prediction. In the face of limited information, a heuristic strategy can do better, provided you know you don’t have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically attend to at random, though, you’d end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn’t have sufficient amounts of information when you actually did, you’d also make a worse prediction than the optimizer 100% of the time.
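That tradeoff can be demonstrated with a toy simulation (entirely my own sketch, not an analysis from Gigerenzer, 2010): one strong cue and four weak ones predict an outcome, and we compare a full least-squares model using all five cues against a heuristic that attends only to the strong cue.

```python
import numpy as np

rng = np.random.default_rng(1)
WEIGHTS = np.array([0.9, 0.1, 0.1, 0.1, 0.1])  # one strong cue, four weak

def make_data(n):
    X = rng.normal(size=(n, 5))
    y = X @ WEIGHTS + rng.normal(size=n)  # outcome plus unit noise
    return X, y

def average_test_error(n_train, trials=500):
    full = heur = 0.0
    for _ in range(trials):
        Xtr, ytr = make_data(n_train)
        Xte, yte = make_data(1000)
        # 'Optimizer': least-squares fit over all five cues
        beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
        full += np.mean((Xte @ beta - yte) ** 2)
        # Heuristic: regress on the strong cue alone (we know which it is)
        b = (Xtr[:, 0] @ ytr) / (Xtr[:, 0] @ Xtr[:, 0])
        heur += np.mean((Xte[:, 0] * b - yte) ** 2)
    return full / trials, heur / trials

for n in (7, 20, 200):
    full, heur = average_test_error(n)
    print(f"n_train = {n:>3}: all-cues MSE = {full:.2f}, one-cue MSE = {heur:.2f}")
```

With very little training data (n = 7), the one-cue heuristic typically out-predicts the full model, which overfits; with ample data (n = 200), the ordering reverses. Note, though, that the heuristic only wins here because the code is told in advance which cue is the strong one – which is precisely the boundary condition at issue.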

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast-and-frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you’re trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of the heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of organ donorship between countries: while only 4.3% of Danish people are donors, nearly everyone in Sweden is (approximately 85%). Since explicit attitudes about the willingness to be a donor don’t seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an ‘opt in’ policy for becoming a donor, whereas the Swedes have an ‘opt out’ one. The default option appears to be responsible for driving most of the variance in rates of organ donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) than an explanation of it (why does it matter, and under what circumstances might we expect it not to?). Though I have no data on this, I imagine that if you brought subjects into the lab and presented them with an option to give the experimenter $5 or have the experimenter give them $5, but highlighted the first option as the default, you would find very few people who followed that default. Why, then, might the default heuristic be so persuasive at getting people to be or fail to be organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer’s hypothesized function for the default heuristic – group coordination – doesn’t help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven’t explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other “one-word explanations” (like ‘culture’, ‘norms’, ‘learning’, ‘the situation’, or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (from which Gigerenzer suggested the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do), without telling us anything more. Such descriptions, it seems, could even drop the word ‘heuristic’ altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it’s unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each dabbling in a different source of information that the others ignore, resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove a fast and frugal way of stating results, it ends up being a poor method of explaining them, yielding little in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2, 528-554. DOI: 10.1111/j.1756-8765.2010.01094.x

The Inferential Limits Of Economic Games

Having recently returned from the Human Behavior & Evolution Society’s (HBES) conference, I would like to take a moment to let everyone know what an excellent time I had there. Getting to meet some of my readers in person was a fantastic experience, as was the pleasure of being around the wider evolutionary research community and reconnecting with old friends. The only negative parts of the conference involved making my way through the flooded streets of Miami on the first two mornings (which very closely resembled this scene from the Simpsons) and the pool party at which I way over-indulged in drinking. Though there was a diverse array of research presented spanning many different areas, I ended up primarily in the seminars on cooperation, as the topic tends most towards my current research projects. I would like to present two of my favorite findings from those seminars, which serve as excellent cautionary tales concerning what conclusions one can draw from economic games. Despite the popular impression, there’s a lot more to evolutionary psychology than sex research.

Though the Sperm-Sun HBES logo failed to adequately showcase that diversity.

The first game to be discussed is the classic dictator game. In this game, two participants are brought into the lab and assigned the role of either ‘dictator’ or ‘recipient’. The dictator is given a sum of money (say, $10) and the option to divide it however they want between the pair. If dictators were maximally selfish – as standard economic rationality might suggest – they would consistently keep all the money and give none to the recipient. Yet this is not what we frequently see: dictators tend to give at least some of the money to the other person, and an even split is often made. While giving these participants anonymity from one another does tend to reduce offers, even ostensibly anonymous dictators continue to give. This result clashes somewhat with our everyday experiences: after all, provided we have money in our pocket, we’re faced with possible dictator-like experiences every time we pass someone on the street, whether they’re homeless and begging for money or apparently well-off. Despite the near-constant opportunities during which we could transfer money to others, we frequently do not. So how do we reconcile the experimental and everyday results?

One possibility is to suggest that the giving in dictator games is largely induced by experimental demand effects: subjects are being placed into a relatively odd situation and are behaving rather oddly because of it (more specifically, because they are inferring what the experimenter “wants” them to do). Of course, it’s not so easy to replicate the context of the dictator game (a sudden windfall of a divisible asset and a potential partner to share it with) without subjects knowing they’re taking part in an experiment. Winking & Mizer (2013) managed to find a way around these problems in Las Vegas. In this field experiment, a confederate would be waiting at a bus stop when the ignorant subject approached. Once the subject was waiting for the bus as well, the confederate would pretend to take a phone call and move slightly away from the area with their back turned to the subject. It was at this point that the experimenter approached, on his cell phone and ostensibly in a hurry. As the experimenter passed the subject, he gave them $20 in poker chips, saying that he was late for his ride to the airport and didn’t have time to cash them in. These casino chips are an excellent stimulus, as they provided a good cover story for why they were being handed over: they only have value when cashed in, and the experimenter didn’t have time to do so. Actual currency wouldn’t have worked as well, as it travels well from place to place, and just handing it over might have raised suspicions about the setup.

In the first condition, the experimenter left and the confederate returned without further instruction; in the second condition, the experimenter said, “I don’t know. You can split them with that guy however you want” while gesturing at the confederate before he ran off. A third condition involved an explicit version of the dictator game with poker chips, during which anonymity was granted. In the standard version of the experiment – when the subjects knew about the game explicitly – 83% of subjects offered at least some of the chips to the other person, with a median offer of around $5, resembling previous experimental results fairly well. How about the other two conditions? Well, of the 60 participants who were not told they were explicitly taking part in the game, all of them kept all the money. This suggests very strongly that all – or at least most – of the giving we observe in dictator games is grounded in the nature of the experiment itself. Indeed, many of the subjects in the first condition, where the instruction to split was not given, seemed rather perplexed during the debriefing, wondering precisely why in the world they would have split the money with the confederate in the first place. Like all of us walking down the street with money on our person, the idea that they would just give that money to other people seemed rather strange.

“I’m still not following: you want to do what with all this money, again?”

The second paper of interest looked at behavior in another popular game: the public goods game. In these games, subjects are typically placed together in groups of four and provided with a sum of money. During each round, players can invest any amount of their money in the public pot and keep the rest. All the money in the pot is then multiplied by some amount and divided equally amongst all the participants. In this game, the rational economic move is typically not to put any money in, as for each dollar you put in, you receive less than a dollar back (since the multiplier is below the number of subjects in the group); not a great investment. On the other hand, the group-maximizing outcome is for all the subjects to donate all their money, so everyone ends up richer than when they started. Again, we find that subjects in these games tend to donate some of their money to the public pot, and many researchers have inferred from this giving that people have prosocial preferences (i.e. making other people better off per se increases my subjective welfare). If such an inference were correct, then we ought to expect that subjects would give more money to the public good provided they know how much good they’re doing for others.
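The arithmetic here is worth making explicit. In a four-player game with a multiplier of, say, 1.6 (an assumed but typical value; the paper’s exact parameters aren’t reproduced here), each token you contribute returns only 1.6/4 = 0.4 tokens to you personally, while generating 1.6 tokens for the group:

```python
def payoff(my_contribution, others_contributions, endowment=40,
           multiplier=1.6, group_size=4):
    """One player's earnings for one round of a standard public goods game."""
    pot = my_contribution + sum(others_contributions)
    return endowment - my_contribution + pot * multiplier / group_size

others = [20, 20, 20]
print(payoff(0, others))         # 64.0: free-riding while others contribute
print(payoff(20, others))        # 52.0: contributing costs *you* money...
print(payoff(40, [40, 40, 40]))  # 64.0: ...but universal contribution beats
print(payoff(0, [0, 0, 0]))      # 40.0: universal free-riding
```

Whatever the others do, contributing less always earns the individual more, which is why the selfish strategy is to give nothing even though everyone giving everything leaves the whole group richer.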

To examine this inference, Burton-Chellew & West (2013) put subjects into a public goods game under three different conditions. First, there was the standard condition, described above. Second was a condition like the standard game, except subjects received an additional piece of information in the form of how much the other players in the game earned. Finally, there was a third condition in which subjects didn’t even know the game was being played with other people; subjects were merely told they could donate some fraction of their money (from 0 to 40 units) to a “black box” which would perform a transformation on the money received and give them a non-negative payoff (which happened to be the same average benefit they received when playing the game with other people, though they didn’t know that). In total, 236 subjects played in one of the first two conditions and also in the black box condition, counterbalancing the order of the games (they were informed the two were entirely different experiments).

How did contributions change between the standard condition and the black box condition over time? They didn’t. Subjects who knew they were playing a public goods game donated approximately as much during each round as subjects who were just putting payments into the black box and getting some payment out: donations started out relatively high and declined over time (presumably as subjects learned they tended to get less money by contributing). The one notable difference was in the additional-information condition: when subjects could see the earnings of others relative to their contributions, they started to contribute less money to the public good. As a control, all three of the above games were replicated with a multiplication rule under which the profit-maximizing strategy was to donate all of one’s available money, rather than none. In these conditions, the change in donations between the standard and black box conditions again failed to differ significantly, and contributions were still lower in the enhanced-information condition. Further, in all these games subjects tended to fail to make the profit-maximizing decision, irrespective of whether that decision was to donate all their money or none of it. Despite this strategy being deemed relatively “easy” to figure out by researchers, it apparently was not.

Other people not included, or required

Both of these experiments pose some rather stern warnings about the inferences we might draw from the behavior of people playing economic games. Some of our experiments might end up inducing certain behaviors and preferences, rather than revealing them. We’re putting people into evolutionarily-strange situations in these experiments, and so we might expect some evolutionarily-strange outcomes. It is also worth noting that just because you observe some prosocial outcome – like people giving money apparently altruistically or contributing to the good of others – it doesn’t follow that these outcomes are the direct result of cognitive modules designed to bring them about. Sure, my behavior in some of these games might end up reducing inequality, for instance, but it doesn’t follow that people’s psychology was selected to do such things. There are definite limits to how far these economic games can take us inferentially, and it’s important to be aware of them. Do these studies show that such games are worthless tools? I’d say certainly not, as behavior in them is far from random. We just need to be mindful of their limits when we try to draw conclusions from them.

References: Burton-Chellew, M.N., & West, S.A. (2013). Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences, 110, 216-221. PMID: 23248298

Winking, J., & Mizer, N. (2013). Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002

Equality-Seeking Can Lift (Or Sink) All Ships

There’s a saying in economics that goes, “A rising tide lifts all ships”. The basic idea behind the saying is that the marginal benefits that accrue from people exchanging goods and services are good for everyone involved – and even for some who are not directly involved – in much the same way that all the boats in a body of water rise or fall together as the overall water level does. While there is an element of truth to the saying (trade can be good for everyone, and the resources available to the poor today can, in some cases, be better than those available to even the wealthy in generations past), economies, of course, are not like bodies of water that rise and fall uniformly; some people can end up radically better- or worse-off than others as economic conditions shift, and inequality is a persistent feature of human affairs. Inequality – or, more aptly, the perception of it – is also commonly used as a justification for furthering certain social or moral goals. There appears to be something (or some things) about inequality that just doesn’t sit well with people.

And I would suggest that those people go and eat some cake.

People’s ostensible discomfort with inequality has not escaped the eyes of many psychological researchers. There are some who suggest that humans have a preference for avoiding inequality; an inequality aversion, if you will. Phrased slightly differently, there are some who suggest that humans have an egalitarian motive (Dawes et al, 2007) that is distinct from other motives, such as enforcing cooperation or gaining benefits. Provided I’m parsing the meaning of the phrase correctly, then, the suggestion being made by some is that people should be expected to dislike inequality per se, rather than dislike inequality for other, strategic reasons. Demonstrating evidence of such a distinct aversion, however, can be difficult. There are two reasons for this, I feel: the first is that inequality is often confounded with other factors (such as someone failing to cooperate or suffering losses). The second is that I think it’s the kind of preference we shouldn’t expect to exist in the first place.

Taking these two issues in order, let’s first consider the paper by Dawes et al (2007) that sought to disentangle some of these confounds. In their experiment, 120 subjects were brought into the lab in groups of 20. These groups were further divided into anonymous groups of 4, such that each participant played five rounds of the experiment, but never with the same people twice. The subjects also did not know about anyone’s past behavior in the experiment. At the beginning of each round, every subject in each group received a random number of payment units within a set range, and everyone was aware of the payments of everyone else in their group. Naturally, this tended to create some inequality in payments. Subjects were given means by which to reduce this inequality, however: they could spend some of their payment points to either add to or subtract from other people’s payments at a ratio of 3 to 1 (in other words, I could spend one unit of my payment to either reduce your payment by three points or add three points to it). These additions and deductions were all decided on in private and enacted simultaneously, so as to rule out retribution and cooperation. It wasn’t until the end of each round that subjects saw how many additions and reductions they had received. In total, each subject had 15 chances to either add to or deduct from someone else’s payment (3 people per round over 5 rounds).
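In code, the modification mechanics look something like this (a sketch of my own with hypothetical payment numbers; the paper’s actual payment range isn’t reproduced here):

```python
def apply_modifications(payoffs, spends):
    """Enact all additions/deductions simultaneously at a 3:1 ratio.

    payoffs: dict mapping player -> starting payment.
    spends: list of (spender, target, points, sign) tuples,
            where sign is +1 to add and -1 to deduct.
    """
    new = dict(payoffs)
    for spender, target, points, sign in spends:
        new[spender] -= points            # each point spent costs the spender 1
        new[target] += sign * 3 * points  # and moves the target's payment by 3
    return new

start = {"A": 30, "B": 18, "C": 12, "D": 24}  # hypothetical unequal payments
# B (a low earner) pays 2 points to deduct from A; A (the top earner)
# pays 1 point to add to C:
print(apply_modifications(start, [("B", "A", 2, -1), ("A", "C", 1, +1)]))
# -> {'A': 23, 'B': 16, 'C': 15, 'D': 24}
```

The 3-to-1 leverage means equality can be purchased relatively cheaply, whether by pulling high earners down or pushing low earners up.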

The results showed that most subjects paid to either add to or deduct from someone else’s payment at least once: 68% of people reduced the payment of someone else at least once, whereas 74% increased someone’s payment at least once. It wasn’t what one might consider a persistent habit, though: only 28% reduced people’s payments more than five times (while 33% added that often), and only 6% reduced more than 10 times (whereas 10% added). This, despite there being inequality to be reduced in all cases. Further, an appreciable number of the modifications didn’t go in the inequality-reducing direction: 29% of reductions went to below-average earners, and 38% of the additions went to above-average earners. Of particular interest, however, is the precise way in which subjects ended up reducing inequality: the people who earned the least in each round tended to spend 96% more on deductions than top earners, while top earners averaged spending 77% more on additions than the bottom earners. This point is of interest because positing a preference for avoiding inequality does not, by itself, predict the shape that equality-seeking will ultimately take.

You could also cut the legs off the taller boys in the left picture so no one gets to see.

The first thing worth pointing out here, then, is that about half of all the inequality-reducing behaviors people engaged in ended up destroying overall welfare. These are behaviors through which no one is made materially better off. I’m reminded of part of a standup routine by Louis CK concerning that idea, in which he recounts the following story (starting at about 1:40):

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.”

It’s important to note this so as to point out that achieving equality itself doesn’t necessarily do anything useful. It is not as if equality automatically makes everyone – or anyone – better off. So what kind of useful outcomes might such spiteful behavior result in? To answer that question, we need to examine the ways people reduced inequality. Any player in this game could reduce the overall amount of inequality by either deducting from high earners’ payments or adding to low earners’, and this holds for both the bottom and top earners, meaning there are several ways of reducing inequality available to all players. Low earners, for instance, could reduce inequality by engaging in spiteful reductions towards everyone above them until all are down at the same low level; alternatively, they could reduce the overall inequality by benefiting everyone above them until everyone (but them) sits at the same high level, or engage in some mixture of these strategies, benefiting some people and harming others. The same holds for high earners, just in the opposite directions. Which path people take depends on where their set point for ‘equal’ lies. Strictly speaking, then, a preference for equality doesn’t tell us which method people should opt for, nor does it tell us what levels of inequality will be tolerated before efforts to achieve equality cease.

There are, however, other possibilities for explaining these results beyond a preference regarding inequality per se. One particularly strong alternative is that people use perceptions of inequality as inputs for social bargaining. Consider the following scenario: two people are working together to earn a joint prize, like a $10 reward. If they work together, they get the $10 to split; if they do not, neither receives anything. Further, let’s assume one member of this pair is greedy and, in round one, after they cooperate, takes $9 of the pot for themselves. Now, strictly speaking, the person who received $1 is better off than if they had received nothing at all, but that doesn’t mean they ought to accept that distribution, and here’s why: if the person with $1 refuses to cooperate during the next round, they only lose that single dollar; the selfish player would lose out on nine times as much. This asymmetry in losses puts the poorer player in a stronger bargaining position, as they have far less to lose from not cooperating. It is likely from bargaining situations like this one that our sense of fairness emerged.
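A quick worked computation (my own illustration of the example above) makes the asymmetry plain:

```python
def breakdown_costs(greedy_share, total=10):
    """Per-round cost to each player if the poorer one stops cooperating."""
    poor_share = total - greedy_share
    return {"greedy player loses": greedy_share,
            "poor player loses": poor_share,
            "loss ratio": greedy_share / poor_share}

print(breakdown_costs(9))  # the $9/$1 split: a 9-to-1 loss ratio
print(breakdown_costs(6))  # a milder split shrinks the asymmetry
```

The closer the split gets to even, the weaker the poorer player’s threat becomes, which is one reason bargaining of this sort tends to pull distributions toward fairness rather than all the way to either extreme.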

So let’s apply this analysis back to the results of the experiment: people all start off with different amounts of money and are in positions to benefit or harm each other. Everyone wants to leave with as much benefit as possible, which means contributing nothing and getting additions from everyone else. However, since everyone is seeking this same outcome and they can’t all have it, certain compromises need to be reached. Those in high-earning positions face a different set of problems in that compromise than those in low-earning positions: while the high earners are doing something akin to trying to maintain cooperation by increasing the share of resources other people get (as in the previous example), low earners are faced with the problem of negotiating for a better payoff, threatening to cut off cooperation in the process. Both parties seem to anticipate this, with low earners disproportionately punishing high earners, and high earners disproportionately benefiting low earners. That there is no actual option for cooperation or bargaining present in this experiment is, I think, beside the point, as our minds were not designed to deal with the specific context presented in the experiment. Along those same lines, simply telling people they are now anonymous doesn’t mean their minds will automatically function as if it were certain no one could observe their actions, just as telling people their computer can’t understand their frustration won’t stop them from occasionally yelling at it.

“Listen only to my voice: you are now anonymous. You are now anonymous”

As a final note, one should be careful about inferring a motive or preference for equality just because inequality was sometimes reduced. A relatively simple example should demonstrate why: consider an armed robber who enters a store, points a gun at the owner, and demands all the money in the register. If the owner hands over the money, they have delivered a benefit to the robber at a cost to themselves, but most of us would not understand this as an act of altruism on the part of the owner; the owner’s main concern is not getting shot, and they are willing to pay a small cost (the loss of money) so as to avoid a larger one (possible death). Other research has found, for instance, that when given the option to pay a fixed cost (a dollar) to reduce another person’s payment by any amount (up to a total of $12), people who engage in reduction are highly likely to generate inequality that favors themselves (Houser & Xiao, 2010). It would be inappropriate to suggest from such an experiment that people are equality-averse, and, more to the point, doing so wouldn’t further our understanding of human behavior much, if at all. We want to understand why people do certain things; not simply that they do them.

References: Dawes, C.T., Fowler, J.H., Johnson, T., McElreath, R., & Smirnov, O. (2007). Egalitarian motives in humans. Nature, 446, 794-796. PMID: 17429399

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008