Making A Great Leader

Selfies used to be a bit more hardcore

If you were asked to think about what makes a great leader, there are a number of traits you might call to mind, though which traits those happen to be might depend on what leader you call to mind: Hitler, Gandhi, Bush, Martin Luther King Jr, Mao, Clinton, and Lincoln were all leaders, but seemingly very different people. What kind of thing could possibly tie all these different people and personalities together under the same conceptual umbrella? While their characters may have all differed, there is one thing all these people shared in common, and it’s what makes anyone anywhere a leader: they all had followers.

Humans are a social species and, as such, our social alliances have long been key to our ability to survive and reproduce over our evolutionary history (largely based around some variant of the point that two people are better at beating up one person than a single individual is; an idea that works for cooperation as well). While having people around who were willing to do what you wanted has clearly been important, this perspective on what makes a leader – possessing followers – turns the question of what makes a great leader on its head: rather than asking what characteristics make one a great leader, you might instead ask what characteristics make one an attractive social target for followers. After all, while it might be good to have social support, you need to understand why people are willing to support others in the first place to fully understand the matter. If it were all cost to being a follower (supporting a leader at your own expense), then no one would be a follower. There must be benefits that flow to followers to make following appealing. Nailing down what those benefits are and why they are appealing should help us better understand how to become a leader, or how to fall from a position of leadership.

With this perspective in mind, our colorful cast of historical leaders suddenly becomes more understandable: they vary in character, personality, intelligence, and political views, but they must have all offered their followers something valuable; it’s just that whatever that something(s) was, it need not be the same something. Defense from rivals, economic benefits, friendship, the withholding of punishment: all of these are valuable resources that followers might receive from an alliance with a leader, even from the position of a subordinate. That something may also vary from time to time: the leader who got his start offering economic benefits might later transition into one who also provides defense from rivals; the leader who is followed out of fear of the costs they can inflict on you may later become a leader who offers you economic benefits. And so on.

“Come for the violence; stay for the money”

The corollary point is that features which fail to make one appealing to followers are unlikely to be the ones that define great leaders. For example – and of relevance to the current research on offer – gender per se is unlikely to define great leaders because being a man or a woman does not necessarily offer much to many followers. Traits associated with gender might – like how those who are physically strong can help you fight against rivals better than those who are not, all else being equal – but not the gender itself. To the extent that one gender tends to end up in positions of leadership, it is likely because its members tend to possess higher levels of those desirable traits (or at least reside predominantly on the upper end of the population distribution of them). Possessing these favorable traits that allow leaders to do useful things is only one part of the equation, however: they must also appear willing to use those traits to provide benefits to their followers. If a leader possesses considerable social resources, they do you little good if said leader couldn’t be less interested in granting you access to them.

This analysis also provides another point for understanding the leader/follower dynamic: it ought to be context specific, at least to some extent. Followers who are looking for financial security might look for different leaders than those who are seeking protection from outside aggression; those facing personal social difficulties might defer to different leaders still. The match between the talents offered by a leader and the needs of the followers should help determine how appealing some leaders are. Even traits that might seem universally positive on their face – like a large social network – might not be positives to the extent they affect a potential follower’s perception of their likelihood of receiving benefits. For example, leaders with relatively full social rosters might appear less appealing to some followers if those followers are seeking a lot of a leader’s time; since too much of it is already spoken for, the follower might look elsewhere for a more personal leader. This can create ecological leadership niches that can be filled by different people at different times for different contexts.

With all that in mind, there are at least some generalizations we can make about what followers might find appealing in a leader in an “all else being equal…” sense: those with more social support will be selected as leaders more often, as such resources are more capable of resolving disputes in your favor; those with greater physical strength or intelligence might be better leaders for similar reasons. Conversely, one might follow such leaders because of the costs failing to follow would incur, but the logic holds all the same. As such, once these and other important factors are accounted for, you should expect irrelevant factors – like sex – to fall out of the equation. Even if many leaders tend to be men, it’s not their maleness per se that makes them appealing leaders, but rather these valued and useful traits.

Very male, but maybe not CEO material

This is a hypothesis effectively tested in a recent paper by von Rueden et al (in press). The authors examined the distribution of leadership in a small-scale foraging/farming society in the Amazon, the Tsimane. Within this culture – as in others – men tend to exercise a greater degree of political leadership, relative to women, as measured across domains including speaking more during social meetings, coordinating group efforts, and resolving disputes. The leadership status of members within this group was assessed by ratings from other group members. All adults within the community (male n = 80; female n = 72) were photographed, and these photos were then given to 6 of the men and women in sets of 19. The raters were asked to place the photos in order of whose voice tended to carry the most weight during debates, and then in terms of who managed the most community projects. These rankings were then summed (each photo scoring from 1 to 19, depending on its position in the rankings, with 19 being the highest in terms of leadership) to figure out who tended to hold the largest positions of leadership.
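To make that scoring procedure concrete, here is a minimal sketch of the summed-rank aggregation in Python; the rater and photo identifiers are hypothetical stand-ins of my own, not anything from the paper itself.

```python
from collections import defaultdict

def leadership_scores(rankings):
    """Sum rank positions across raters.

    `rankings` maps each rater to a list of photo IDs ordered from
    least (rank 1) to most (rank 19) influential in debates.
    """
    scores = defaultdict(int)
    for ordered_photos in rankings.values():
        for rank, photo in enumerate(ordered_photos, start=1):
            scores[photo] += rank  # higher summed rank = more leadership
    return dict(scores)

# Hypothetical example with two raters and three photos:
example = {
    "rater_1": ["photo_C", "photo_A", "photo_B"],
    "rater_2": ["photo_C", "photo_B", "photo_A"],
}
print(leadership_scores(example))  # {'photo_C': 2, 'photo_A': 5, 'photo_B': 5}
```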

As mentioned, men tended to reside in positions of greater leadership both in terms of debates and management (approximate mean male scores = 37; mean female scores = 22), and both men and women agreed on these ratings. A similar pattern was observed in terms of who tended to mediate conflicts within the community: 6 women were named as resolving such conflicts, compared with 17 men. Further, the men who were named as conflict mediators tended to be higher in leadership scores, relative to non-mediating men, while this pattern didn’t hold for the women.

So why were men in positions of leadership in greater proportions than women? A regression analysis was carried out using sex, height, weight, upper-body strength, education, and number of cooperative partners to predict leadership scores. In this equation, sex (and height) no longer predicted leadership score, while all the other factors were significant predictors. In other words, it wasn’t that men were preferred as leaders per se, but rather that people with more upper-body strength, education, and cooperative partners were favored, whether male or female. These traits were still favored in leaders despite leaders not being particularly likely to use force or violence in their position. Instead, it seems that traits like physical strength were favored because they could potentially be leveraged, if push came to shove.
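For anyone curious about the shape of that analysis, below is a minimal sketch of such a regression using statsmodels with synthetic stand-in data. This is not the authors’ code or dataset; it just illustrates how a sex effect can shrink toward zero once the correlated traits enter the model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: leadership tracks strength, education, and
# cooperative partners, with sex correlated with strength but carrying
# no independent effect (mirroring the paper's conclusion).
rng = np.random.default_rng(0)
n = 152  # 80 men + 72 women, as in the study
sex = np.repeat(["male", "female"], [80, 72])
strength = rng.normal(60, 10, n) + np.where(sex == "male", 15, 0)
education = rng.normal(4, 2, n)
coop_partners = rng.poisson(5, n)
height = rng.normal(160, 8, n)
weight = rng.normal(60, 9, n)
leadership = (0.5 * strength + 2 * education + 1.5 * coop_partners
              + rng.normal(0, 5, n))

df = pd.DataFrame(dict(sex=sex, strength=strength, education=education,
                       coop_partners=coop_partners, height=height,
                       weight=weight, leadership=leadership))

# With the traits in the model, the sex coefficient should be near zero.
fit = smf.ols("leadership ~ C(sex) + height + weight + strength"
              " + education + coop_partners", data=df).fit()
print(fit.params)
```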

“A vote for Jeff is a vote for building your community. Literally”

As one might expect, what made followers want to follow a leader wasn’t the leader’s sex, but rather what skills the leader could bring to bear in resolving issues and settling disputes. While the current research is far from a comprehensive examination of all the factors that might tap into leadership at different times and in different contexts, it represents a sound approach to understanding the problem of why followers select particular leaders. Thinking about what benefits followers tended to reap from leaders over evolutionary history can help inform our search for – and understanding of – the proximate mechanisms through which leaders end up attracting them.

References: von Rueden, C., Alami, S., Kaplan, H., & Gurven, M. (in press). Sex differences in political leadership in an egalitarian society. Evolution & Human Behavior. doi:10.1016/j.evolhumbehav.2018.03.005

Punishment Might Signal Trustworthiness, But Maybe…

As one well-known saying attributed to Maslow goes, “when all you have is a hammer, everything looks like a nail.” If you can only do one thing, you will often apply that thing as a solution to a problem it doesn’t fit particularly well. For example, while a hammer might make for a poor cooking utensil in many cases, if you are tasked with cooking a meal and given only a hammer, you might try to make the best of a bad situation, using the hammer as an inefficient, makeshift knife, spoon, and spatula. That you might meet with some degree of success in doing so does not tell you that hammers function as cooking implements. Relatedly, if I then gave you a hammer and a knife, and tasked you with the same cooking jobs, I would likely observe that hammer use drops precipitously while knife use increases quite a bit. It is also worth bearing in mind that if the only task you have to do is cooking, the only conclusion I’m realistically capable of drawing concerns whether a tool is designed for cooking. That is, if I give you a hammer and a knife and tell you to cook something, I won’t be able to draw the inference that hammers are designed for dealing with nails because nails just aren’t present in the task.

Unless one eats nails for breakfast, that is

While all that probably sounds pretty obvious in the cooking context, a very similar setup appears to have been used recently to study whether third-party punishment (the punishment of actors by people not directly affected by their behavior; hereafter TPP) functions to signal the trustworthiness of the punisher. In their study, Jordan et al (2016) had participants play a two-stage economic game. The first stage was a TPP game. In this game, there are three players: player A (the helper) is given 30 cents, player B (the recipient) is given nothing, and player C (the punisher) is given 20 cents. The helper can choose to either give the recipient 15 cents or nothing. If the helper decides to give nothing, the punisher then has the option to pay 5 cents to reduce the helper’s pay by 15 cents, or not do so. In this first stage, the first participant would either play one round as a helper or a punisher, or play two rounds: one in the role of the helper and another in the role of the punisher.
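For concreteness, here is a minimal sketch of the stage-one payoff logic under the amounts just described; the function and its structure are my own illustration, not the authors’ materials.

```python
def tpp_payoffs(helper_gives: bool, punisher_punishes: bool):
    """Payoffs (in cents) for one round of the third-party punishment game.

    The helper starts with 30, the recipient with 0, the punisher with 20.
    Punishment costs the punisher 5 and removes 15 from the helper, and
    is only an option when the helper gave nothing.
    """
    helper, recipient, punisher = 30, 0, 20
    if helper_gives:
        helper -= 15
        recipient += 15
    elif punisher_punishes:
        punisher -= 5
        helper -= 15
    return {"helper": helper, "recipient": recipient, "punisher": punisher}

print(tpp_payoffs(helper_gives=False, punisher_punishes=True))
# {'helper': 15, 'recipient': 0, 'punisher': 15}
```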

The second stage of this game involved a second participant. This participant observed the behavior of the people playing the first game, and then played a trust game with the first participant. In this trust game, the second participant is given 30 cents and decides how much, if any, to send to the first participant. Any amount sent is tripled, and then the first participant decides how much of that amount, if any, to send back. The working hypothesis of Jordan et al (2016) is that TPP will be used as a signal of trustworthiness, but only when it is the only possible signal; when participants have an option to send better signals of trustworthiness – such as when they are in the role of the helper, rather than the punisher – punishment will lose its value as a signal for trust. By contrast, helping should always serve as a good signal of trustworthiness, regardless of whether punishment is an option.
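And here is a matching sketch of the second-stage trust game payoffs, again as my own illustration of the rules described above:

```python
def trust_game(sent: int, returned: int):
    """Payoffs (in cents) for the second-stage trust game.

    The sender starts with 30; whatever they send is tripled before
    the first participant decides how much to return.
    """
    assert 0 <= sent <= 30
    tripled = 3 * sent
    assert 0 <= returned <= tripled
    return {"sender": 30 - sent + returned,
            "first_participant": tripled - returned}

# e.g., full trust met with an even split of the tripled amount:
print(trust_game(sent=30, returned=45))  # {'sender': 45, 'first_participant': 45}
```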

Indeed, this is precisely what they found. When the first participant was only able to punish, the second participant tended to trust punishers more, sending them 16% more in the trust game than non-punishers; in turn, the punishers also tended to be slightly more trustworthy, sending back 8% more than non-punishers. So, the punishers were slightly, though not substantially, more trustworthy than the non-punishers when punishing was all they could do. However, when participants were in the helper role (and not the punisher role), those who transferred money to the recipient were in turn trusted more – being sent an average of 39% more in the trust game than non-helpers – and were, in fact, more trustworthy – returning an average of 25% more than non-helpers. Finally, when the first participant was in the role of both the punisher and the helper, punishment was less common (30% of participants in both roles punished, whereas 41% of participants who were only punishers did) and, controlling for helping, punishers were only trusted with 4% more in the second stage and actually returned 0.3% less.

The final task was less about trust and more about upper-body strength

To sum up, then, when people only had the option to punish others, punishment behavior was used by observers as a cue to trustworthiness. However, when helping was possible as well, punishment ceased to predict trustworthiness. From this set of findings, the authors make the rather strange conclusion that “clear support” was found for their model of punishment as signaling trustworthiness. My enthusiasm for that interpretation is a bit more tepid. To understand why, we can return to my initial example: you have given people a tool (a hammer/punishment) and a task (cooking/a trust game). When they use this tool in the task, you see some results, but they aren’t terribly efficient (16% more trusted and 8% more returned). Then, you give them a second tool (a knife/helping) to solve the same task. Now the results are much better (39% more trusted, 25% more returned). In fact, when they have both tools, they don’t seem to use the first one to accomplish the task as much (punishment falls 11%) and, when they do, they don’t end up with better outcomes (4% more trusted, 0.3% less returned). From that data alone, I would say that the evidence does not support the inference that punishment is a mechanism for signaling trustworthiness. People might try using it in a pinch, but its value seems greatly diminished compared to other behaviors.  

Further, the only tasks people were doing involved playing a dictator and trust game. If punishment serves some other purpose beyond signaling trustworthiness, you wouldn’t be able to observe it there because people aren’t in the right contexts for it to be observed. To make that point clear, we could consider other examples. First, let’s consider murder. If I condemn murder morally and, as a third party, punish someone for engaging in murder, does this tell you that I am more trustworthy than someone else who doesn’t punish it themselves? Probably not; almost everyone condemns murder, at least in the abstract, but the costs of engaging in punishment aren’t the same for all people. Someone who is just as trustworthy might not be willing or able to suffer the associated costs. What about something a bit more controversial: let’s say that, as a third party, I punish people for obtaining or providing abortions. Does hearing about my punishment make me seem like a more trustworthy person? That probably depends on what side of the abortion issue you fall on.

To put this in more precise detail, here’s what I think is going on: the second participant – the one sending money in the trust game, so let’s call him the sender – primarily wants to get as much money back as possible in this context. Accordingly, they are looking for cues that the first participant – the one they’re trusting, or the recipient – is an altruist. One good cue for altruism is, well, altruism. If the sender sees that the recipient has behaved altruistically by giving someone else money, this is a pretty good cue for future altruism. Punishment, however, is not the same thing as altruism. From the point of view of the person benefiting from the punishment, TPP is indeed altruistic; from the point of view of the target of that TPP, the punishment is spiteful. While punishment can contain this altruistic component, it is more about trading off the welfare of others than providing benefits to people per se. While that altruistic component of punishment can be used as a cue for trustworthiness in a pinch when no other information is available, that does not suggest to me that sending such a signal is its only, or even its primary, function.

Sure, they can clean the floors, but that’s not really why I hired them

In the real world, people’s behaviors are never limited to just the punishment of perpetrators. If there are almost always better ways to signal one’s trustworthiness, then TPP’s role in that regard is likely quite small. For what it’s worth, I happen to think that the role of TPP has more to do with using transient states of need to manage associations (friendships) with others, as such an explanation works well outside the narrow boundaries of the present paper, when things other than unfairness are being punished and people are seeking to do more than make as much money as possible. Finding a good friend is not the same thing as finding a good altruist, and friendships do not usually resemble trust games. However, when all you are observing is unfairness and cooperation, TPP might end up looking a little bit like a mechanism for building trust. Sometimes. If you sort of squint a bit.

References: Jordan, J., Hoffman, M., Bloom, P., & Rand, D. (2016). Third-party punishment as a costly signal of trustworthiness. Nature, 530, 473-476.

Some Bathwater Without A Baby

When reading psychology papers, I am often left with the same dissatisfaction: the lack of any grounding theories in them and their inability to deliver what I would consider a real explanation for their findings. While it’s something I have harped on for a few years now, this dissatisfaction is hardly confined to me, as others have voiced similar concerns for at least the last two decades, and I suspect it’s gone on quite a bit longer than that. A healthy amount of psychological research strikes me as empirical bathwater without a theoretical baby, in a manner of speaking; no matter how interesting that empirical bathwater might be – whether it’s ignored or the flavor of the week – almost all of it will eventually be thrown out and forgotten if there’s no baby there. Some new research that has crossed my path a few times lately follows that same trend: a paper examining the reactions of individuals who were feeling powerful to inequality that disadvantaged them or others. I wanted to review that paper today and help fill in the missing sections where explanations should go.

Next step: add luxury items, like skin and organs

The paper, by Sawaoka, Hughes, & Ambady (2015), contained four or five experiments – depending on how one counts a pilot study – in which participants were primed to think of themselves as powerful or not. This was achieved, as it so often is, by having the participants in each experiment write about a time they had power over another person or about a time that other people had power over them, respectively. In the pilot study, about 20 participants were primed as powerful and another 20 as relatively powerless. Subsequently, they were told they would be playing a dictator game with another person, in which the other person (who was actually not a person) would serve as the dictator in charge of dividing up 10 experimental tokens between the two; tokens which, presumably, were supposed to be redeemed for some kind of material reward. Those participants who had been primed to feel more powerful expected to receive a higher average number of these tokens (M = 4.2) relative to those primed to feel less powerful (M = 2.2). Feeling powerful, it seemed, led participants to expect better treatment from others.

In the next experiment, participants (N = 227) were similarly primed before completing a fairness reaction task. Specifically, participants were presented with three pictures representing distributions of tokens: one of which represented the participant’s payment while the other two represented the payments to others. It was the job of participants to indicate whether these tokens were distributed equally between the three people or whether the distribution was unequal. The distributions could have been (a) equal, (b) unequal, favoring the participant, or (c) unequal, disfavoring the participant. The measure of interest here was how quickly the participants were able to identify equal and unequal distributions. As it turns out, participants primed to feel powerful were quicker to identify unfair arrangements that disfavored them, relative to less powerful participants by about a tenth of a second, but were not quicker to do so when the unequal distributions favored them.

The next two studies followed pretty much the same format and echoed the same conclusion, so I don’t want to spend too much time on their details. The final experiment, however, examined not just reaction times to assessments of equality, but rather how quickly participants were willing to do something about it. In this case, participants were told they were being paid by an experimental employer. The employer to whom they were randomly assigned would be responsible for distributing a payment amount between them and two other participants over a number of rounds (just like the experiment I just mentioned). However, participants were also told that there were other employers they could switch to if they wanted after each round. The question of interest, then, was how quickly participants would switch away from employers who disfavored them. Those participants that were primed to feel powerful didn’t wait around very long in the face of unfair treatment that disfavored them, leaving after the first round, on average; by contrast, those primed to feel less powerful waited about 3.5 rounds to switch if they were getting a bad relative deal. If the inequality favored them, however, the powerful participants were about as likely to stay over time as the less powerful ones. In short, those who felt powerful not only recognized poor treatment of themselves (but not others) quicker, they also did something about it sooner.

They really took Shia’s advice about doing things to heart

These experiments are quite neat but, as I mentioned before, they are missing a deeper explanation to anchor them anywhere. Sawaoka, Hughes, & Ambady (2015) attempt an explanation for their results, but I don’t think they get very far with it. Specifically, the authors suggest that power makes people feel entitled to better treatment, subsequently making them quicker to recognize worse treatment and do something about it. Further, the authors make some speculations about how unfair social orders are maintained by powerful people being motivated to do things that maintain their privileged status while the disadvantaged sections of the population are sent messages about being powerless, resulting in their coming to expect unfair treatment and being less likely to change their station in life. These speculations, however, naturally yield a few important questions, chief among them being, “if feeling entitled yields better treatment on the part of others, then why would anyone ever not feel that way? Do, say, poor people really want to stay poor and not demand better treatment from others as well?” It seems that there are very real advantages being forgone by people who don’t feel as entitled as powerful people do, and we would not expect a psychology that behaved that way – one that simply left benefits on the table – to have been selected for.

In order to craft something approaching a real explanation for these findings, then, one would need to begin with a discussion of some possible trade-offs that have to be made: if feeling entitled were always good for business, everyone would feel entitled all the time; since they don’t, there are likely some costs associated with feeling entitled that, at least in certain contexts, prevent its occurrence. One of the most likely trade-offs involves the costs associated with conflict: if you feel you’re entitled to a certain kind of treatment you feel you’re not receiving, you need to take steps to ensure the correction of that treatment, since other people aren’t just going to start giving you more benefits for no reason. To use a real-life example, if you feel your boss isn’t compensating you properly for your work, you need to demand a raise, threatening to inflict costs on him – such as your quitting – if your demands aren’t met.

The problems with such a course of action are two-fold: first, your boss might disagree with your assessment and let you quit, and losing that job could pose other, very real costs (like starvation and homelessness). Sometimes an unfair arrangement is better than no arrangement at all. Second, the person with whom you’re bargaining might attempt to inflict costs on you in turn. For instance, if you begin a dispute with law enforcement officers because you believe they have treated you unfairly and are seeking to rectify that situation, they might encourage your compliance with the arrangement with a well-placed fist to your nose. In other words, punishment is a two-way street, and trying to punish stronger individuals – whether physically or socially stronger – is often a poor course of action to take. While “punching up” might appeal to certain sensibilities in, say, comedy, it works less well when you’re facing down that bouncer with a few inches and a few dozen pounds of muscle on you.

I’m sure he’ll find your arguments about equality quite persuasive

Indeed, this is the same kind of evolutionary explanation offered by Sell, Tooby, & Cosmides (2009) for understanding the emotion of anger and its associated entitlement: one’s formidability – physically and/or socially – should be a key factor in understanding the emotional systems underlying how they resolve their conflicts; conflicts which may well have to do with distributions of material resources. Those who are better suited to inflict costs on others (e.g., the powerful) are also likely to be treated better by others who wish to avoid the costs of conflicts that accompany poor treatment. This could suggest, however, that making people feel more powerful than they actually are would, in the long-term, tend to produce quite a number of costs for the powerful-feeling, but actually-weak, individuals: making that 150-pound guy think he’s stronger than the 200-pound one might encourage the former to initiate a fight, but not make him more likely to win it. Similarly, encouraging your friend who isn’t that good at their job to demand that raise could result in their being fired. In other words, it’s not that social power structures in society are maintained simply on the basis of inertia or people getting sent particular kinds of social messages, but rather that they reflect (albeit imperfectly) important realities in the actual value people are able to demand from others. While the idea that some of the power dynamics observed in the social world reflect non-arbitrary differences between people might not sit well with certain crowds, it is a baby capable of keeping this bathwater around.

References: Sawaoka, T., Hughes, B., & Ambady, N. (2015). Power heightens sensitivity to unfairness against the self. Personality & Social Psychology Bulletin, 41, 1023-1035.

Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-15078.

Are People Inequality Averse?

People are averse to a great many things: most of us are averse to the smell of feces or the taste of rotting food; a few people are averse to the idea of intercourse with opposite-sex individuals, while many people are averse to same-sex intercourse. As I have been learning lately, there are also many people in charge of managing academic journals who are averse to the idea of publishing research papers with only a single experiment in them. Related to that last point, there have been claims made that people are averse to inequality per se. I happen to have a new (ish; it’s been written up for over a year) experiment which I feel speaks to the matter, and which I can hopefully find a home for soon. In the meantime, since I will be talking about this paper at an upcoming conference (NEEPS), I have decided to share some of the results with all of you pre-publication. Anyone interested in reading the paper proper can feel free to contact me for a copy.

And anyone out there with an interest in publishing it…

To start off, consider the research that my experiment was based on, which purports to demonstrate that human punishment is driven by inequality, rather than losses; a rather shocking claim. Raihani & McAuliffe (2012) note that many experiments examining human punishment possess an interesting confound: they tend to generate both losses and inequality for participants. Here’s an example to make that more concrete: in what’s known as a public goods game, a group of four individuals are each given a sum of money. Each individual can decide how much of their money to contribute to a public pot. Every dollar put into the public pot gets multiplied by three, and then the pot is equally distributed among all players. From the perspective of getting the maximum overall payment for the group, each member should contribute all their money, meaning everyone makes three times the amount they started out with. However, for any individual player to maximize their own payment, the best course of action is to contribute nothing, as every dollar contributed only returns 75 cents to their own payment. The best payoff for you, then, would be if everyone else contributed all of their money (giving you $0.75 for every dollar they have) and you kept all of yours. The public and private goods are at odds.
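A quick sketch of those payoffs makes the tension explicit. The function below is just an illustration of the standard public goods payoff arithmetic described above, not code from the paper:

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=3):
    """Payoffs for a one-shot public goods game.

    Each player keeps what they didn't contribute; the pooled
    contributions are multiplied and split evenly, so each dollar
    contributed returns multiplier/n (here 0.75) to the contributor.
    """
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([10, 10, 10, 10]))
# [30.0, 30.0, 30.0, 30.0]: full cooperation triples everyone's money
print(public_goods_payoffs([0, 10, 10, 10]))
# free-rider earns 32.5 while the contributors each earn 22.5
```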

A large body of literature finds that those who contribute to the public good are more likely to desire that costs be inflicted on those who do not contribute as much. In fact, if they’re given the option, contributors will often pay some of their remaining money to inflict costs on those who did not contribute. The question of interest here is what precisely is being punished? On the one hand, those who contributed are, in some sense, having a cost inflicted on them by less cooperative individuals; on the other, they also find themselves at a payoff disadvantage, relative to those who did not contribute. So are these punitive sentiments being driven by losses, inequality, or both?

To help answer that question, Raihani & McAuliffe (2012) put together a taking game. Two players – X and Y – started the game with a sum of money. Player X could take some amount of money from Y and add it to his own payment; player Y could, in turn, pay some of their money to reduce player X’s payment following the decision to take or not. The twist on this experiment is that each player started out with a different amount of money. In cents, the starting payments were: 10/70, 30/70, and 70/70, respectively. As player X could take 20 cents from Y, the resulting payments (if X opted to take the money) would be 30/50, 50/50, or 90/50. So, in all cases, X could take the same amount of money from Y; however, in only one case would this taking generate inequality favoring X. The question, then, is how Y would punish X for their behavior.
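Here is a minimal sketch of those stage-one payoffs across the three conditions, using the amounts given above; the function itself is my own illustration:

```python
def taking_game(x_start: int, x_takes: bool, y_start: int = 70, amount: int = 20):
    """Payoffs (in cents) after stage one of the taking game.

    X starts with 10, 30, or 70 against Y's 70; taking moves 20 cents
    from Y to X, so only the 70/70 condition yields inequality favoring X.
    """
    if x_takes:
        return x_start + amount, y_start - amount
    return x_start, y_start

for x_start in (10, 30, 70):
    print(x_start, taking_game(x_start, x_takes=True))
# 10 -> (30, 50); 30 -> (50, 50); 70 -> (90, 50)
```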

The experiment found that when X did not take any money from Y, Y did not spend much to punish (about 11% of subjects paid to punish the non-taker). As there’s no inequality favoring X and no losses incurred by Y, this lack of punishment isn’t terribly shocking. However, when X did take money from Y, Y did spend quite a bit on punishment, but only when the taking generated inequality favoring X. In the event that X ended up still worse off than, or as well off as, Y after the taking, Y did not punish significantly more than if X had taken nothing in the first place (about 15% in the first two conditions, compared with 42% in the third). This would seem to demonstrate that inequality – not losses – is what is being punished.

 ”Just let him take it; he’s probably worse off than you”

Unfortunately for this conclusion, the experiment by Raihani & McAuliffe (2012) contains a series of confounds as well. The most relevant of these is that there was no way for X to generate inequality that favored them without taking from Y. This means that, despite the contention of the authors, it’s still impossible to tell whether the taking or the inequality is being punished. To get around this issue, I replicated their initial study (with a few changes to the details, keeping the method largely the same), but added two new conditions. In the first of these conditions, player X could only add to their own payment, leaving Y’s payment unmolested; in the second, player X could only deduct from player Y’s payment, leaving their own payment the same. What this means is that inequality could now be generated via three different methods: someone taking from the participant, someone adding to their own payment, and someone destroying some of the other participant’s payment.

If people are punishing inequality per se and not losses, the means by which the inequality gets generated should not matter: taking should be just as deserving of punishment as destruction or augmentation. However, this was not the pattern of results I observed. I did replicate the original results of Raihani & McAuliffe (2012) – taking resulted in more punishment when the taker ended up with more than their victim (75% of players punished), while the other two conditions did not show this pattern (punishment rates of 40% and 47%). When participants had their payment deducted by the other player without that other player benefiting, punishment was universally high and inequality played no significant role in determining it (63%, 53%, and 51%, respectively). Similarly, when the other player just benefited himself without affecting the participant’s payment, participants were rather uninterested in punishment, regardless of whether that person ended up better off than them (18%, 19%, and 14%).

In summary, my results show that punishment tended to be driven primarily by losses. This makes a good deal of theoretical sense when considered from an evolutionary perspective: making a few reasonable assumptions, we can say that any adaptation which led its bearer to tolerate costs inflicted by others in order to allow those others to be better off would not have a bright reproductive future. By contrast, punishing individuals who inflict costs on you can readily be selected for to the extent that it stops them from doing so again in the future. The role of inequality only seemed to exist in the context of the taking. Why might that be the case? While it’s only speculation on my part, I feel the answer to that question has quite a bit to do with how other, uninvolved parties might react to such punishment. If needier individuals make better social investments – all else being equal – other third parties might be less willing to subsidize the costs of punishing them, deterring the victim of the taking from punishing the taker in turn. The logic is a bit more involved than that, but the answer seems to involve wanting to provide benefits to those who would appreciate them most for the best return on them.

“Won’t someone think about the feelings of the rich? Probably not”

The hypothesis that people are averse to inequality itself seems to rest on rather shaky theoretical foundations as well. An adaptation that exists to achieve equality with others sounds like a rather strange kind of mechanism. In no small part, it’s weird because equality is a constraint on behavior, and constraining behavior does not allow certain, more-useful outcomes to be reached. As an example, if I have a choice between $5 for both of us or $7 for you and $10 for me, the latter option is clearly better for both of us, but the constraint of equality would prevent me from taking it. Further, if you’re inflicting costs on me, it seems I would be better off if I could prevent you from inflicting them. A poorer person mugging me doesn’t suddenly mean that being mugged would not be something I want to avoid. Perhaps there are good, adaptive reasons that equality-seeking mechanisms could exist despite the costs they seem liable to reliably inflict on their bearers. Perhaps there are also good reasons for many journals only accepting papers with multiple experiments in them. I’m open to hearing arguments for both.

References: Marczyk, J. (Written over a year ago). Human punishment is not primarily motivated by inequality aversion. Journal of Orphan Papers Seeking a Home. 

Raihani, N. & McAuliffe, K. (2012). Human punishment is motivated by inequality aversion, not a desire for reciprocity. Biology Letters, 8, 802-804.

Quid Pro Quo

Managing relationships is a task that most people perform fairly adeptly. That’s not to say that we do so flawlessly – we certainly don’t – but we manage to avoid most major faux pas with regularity. Despite our ability to do so, many of us would not be able to provide compelling answers that help others understand why we do what we do. Here’s a frequently referenced example: if you invited your friend over for dinner, many of you would likely find it rather strange – perhaps even insulting – if after the meal your friend pulled out his wallet and asked how much he owed you for the food. Though we would find such behavior strange or rude, when asked to explain what is rude about it, most people would verbally stumble. It’s not that the exchange of money for food is strange; that part is really quite normal. We don’t expect to go into a restaurant, be served, eat, and then leave without paying. There are also other kinds of goods and services – such as sex and organs – that people often do see something wrong with exchanging resources for, at least so long as the exchange is explicit; despite that, we often have less of a problem with people giving such resources away.

Alright; not quite implicit enough, but good try

This raises all sorts of interesting questions, such as why it is acceptable for people to give things away but not accept money for them. Why would it be unacceptable for a host to expect his guests to pay, or for the guests to offer? The most straightforward answer is that the nature of these relationships is different: two friends have different expectations of each other than two strangers, for instance. While such an answer is true enough, it doesn’t really deepen our understanding of the matter; it just seems to note the difference. One might go a bit further and begin to document some of the ways in which these relationships differ, but without a guiding functional analysis of why they differ we would be stuck at the level of just noting differences. We could learn not only that business associates treat each other differently than friends (which we knew already), but also some of the ways they do. While documenting such things does have value, it would be nice to place such facts in a broader framework. On that note, I’d like to briefly consider one such descriptive answer to the matter of why these relationships differ before moving onto the latter point: the distinction between what have been labeled exchange relationships and communal relationships.

Exchange relationships are said to be those in which one party provides a good or service to the other in the hopes of receiving a comparable benefit in return; the giving thus creates the obligation for reciprocity. This is the typical consumer relationship that we have with businesses as customers: I give you money, you give me groceries. Communal relationships, by contrast, do not carry similar expectations; instead, these are relationships in which each party cares about the welfare of the other, for lack of a better word, intrinsically. This is more typical of, say, mother-daughter relationships, where the mother provisions her daughter not in the hopes of her daughter one day provisioning her, but rather because she earnestly wishes to deliver those benefits to her daughter. On the descriptive level, then, this difference in expectations of quid pro quo is supposed to differentiate the two types of relationships. Friends offering to pay for dinner are viewed as odd because they’re treating a communal relationship as an exchange one.

Many other social disasters might arise from treating one type of social relationship as if it were another. One of the most notable examples in this regard is the ongoing dispute over “nice guys”, nice guys, and the women they seek to become intimate with. To oversimplify the details substantially, many men will lament that women do not seem to be interested in guys who care about their well-being, but rather seek men who offer resources or treat them as less valuable. The men feel they are offering a communal relationship, but women opt for the exchange kind. Many women return the volley, suggesting instead that many of the “nice guys” are actually entitled creeps who think women are machines you put niceness coins into to get them to dispense sex. Now it’s the men seeking the exchange relationships (i.e., “I give you dinner dates and you give me affection”), whereas the women are looking for the communal ones. But are these two types of relationships – exchange and communal – really that different? Are communal relationships, especially those between friends and couples, free of the quid-pro-quo style of reciprocity? There are good reasons to think that they are not quite different in kind, but rather different with respect to the details of the quids and quos.

A subject our good friend Dr. Lecter is quite familiar with

To demonstrate this point, I would invite you to engage in a little thought experiment: imagine that your friend or your partner decided one day to behave as if you didn’t exist: they stopped returning your messages, they stopped caring about whether they saw you, they stopped coming to your aid when you needed them, and so on. Further, suppose this new-found cold and callous attitude wouldn’t change in the future. About how long would it take you to break off your relationship with them and move onto greener pastures? If your answer to that question was any amount of time whatsoever, then I think we have demonstrated that the quid-pro-quo style of exchange still holds in such relationships (and if you believe that no amount of that behavior on another’s part would ever change how much you care about that person, I congratulate you on the depths of your sunny optimism and view of yourself as an altruist; it would also be great if you could prove it by buying me things I want for as long as you live while I ignore you). The difference, then, is not so much whether there are expectations of exchanges in these relationships, but rather concerning the details of precisely what is being exchanged for what, the time frame in which those exchanges take place, and the explicitness of those exchanges.

(As an aside, kin relationships can be free of expectations of reciprocity. This is because, owing to the genetic relatedness between the parties, helping them can be viewed – in the ultimate, fitness sense of the word – as helping yourself to some degree. The question is whether this distinction also holds for non-relatives.)

Taking those matters in order, what gets exchanged in communal relationships is, I think, something that many people would explicitly deny is getting exchanged: altruism for friendship. That is to say that people are using behavior typical of communal relationships as an ingratiation device (Batson, 1993): if I am kind to you today, you will repay me with [friendship/altruism/sex/etc.] at some point in the future; not necessarily immediately or at some dedicated point. These types of exchange, as one can imagine, might get a little messy to the extent that the parties are interested in exchanging different resources. Returning to our initial dinner example, if your guest offers to compensate you for dinner explicitly, it could mean that he considers the debt between you paid in full and, accordingly, is not interested in exchanging the resource you would prefer to receive (perhaps gratitude, complete with the possibility that he will be inclined to benefit you later if need be). In terms of the men and women example from before, men often attempt to exchange kindness for sex, but instead receive non-sexual friendship, which was not the intended goal. Many women, by contrast, feel that men should value the friendship…unless of course it’s their partner building friendship with another woman, in which case it’s clearly not just about friendship between them.

But why aren’t these exchanges explicit? It seems that one could, at least in principle, tell other people that you will invite them over for dinner if they will be your friend, in much the same way that a bank might extend a loan to a person and ask that it be repaid over time. If the implicit nature of these exchanges were removed, it seems that lots of people could be saved a lot of headache. The reason such exchanges cannot be made explicit, I think, has to do with the signal value of the exchange. Consider two possible friends: one of those friends tells you they will be your friend and support you so long as you don’t need too much help; the other tells you they will support you no matter what. Assuming both are telling the truth, the latter individual would make the better friend for you because they have a greater vested interest in your well-being: they will be less likely to abandon you in times of need, less likely to take better social deals elsewhere, less likely to betray you, and the like. In turn, that fact should incline you to help the latter more than the former individual. After all, it’s better for you to have your very valuable allies alive and well-provisioned if you want them to be able to continue helping you to their fullest when you need it. The mere fact that you are valuable to them makes them valuable to you.

“Also, your leaving would literally kill me, so…motivation?”

This leaves people trying to walk a fine line between making friendships valuable in the exchange sense of the word (friendships need to return more than they cost, else they could not have been selected for), while publicly maintaining the representation that they are not grounded in explicit exchanges, so as to make themselves appear to be better partners. In turn, this would create the need for people to distinguish between what we might call “true friends” – those who have your interests in mind – and “fair-weather friends” – those who will only behave as your friend so long as it’s convenient for them. In that last example we assumed both parties were telling the truth about how much they value you; in reality we can’t ever be so sure. This strategic analysis of the problem leaves us with a better sense of why friendship relationships are different from exchange ones: while both involve exchanges, the exchanges do not serve the same signaling function, and so their form ends up looking different. People need to engage in proximately altruistic behaviors for which they don’t expect immediate or specific reciprocity in order to credibly signal their value as an ally. Without such credible signaling, I’d be left taking you at your word that you really have my interests at heart, and that system is way too open to manipulation.

Such considerations could help explain, in part, why people are opposed to things like selling organs or sex for money but have little problem with such things being given away for free. In the case of organ sales, for instance, there are a number of concerns which might crop up in people’s minds, one of the most prominent being that it puts an explicit dollar sign on human life. While we clearly need to do so implicitly (else we could, in principle, be willing to exhaust all worldly resources trying to prevent just one person from dying today), to make such an exchange explicit turns the relationship into an exchange one, sending a message along the lines of, “your life is not worth all that much to me”. Conversely, selling an organ could send a similar message: “my own life isn’t worth that much to me”. Both statements could have the effect of making one look like a worse social asset even if, practically, all such relationships are fundamentally based in exchanges; even if such a policy would have an overall positive effect on a group’s welfare.

References: Batson, C. (1993). Communal and exchange relationships: What is the difference? Personality & Social Psychology Bulletin, 19, 677-683.

DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE, 4(6): e5802. doi:10.1371/journal.pone.0005802

Perverse Punishment

There have been a variety of studies conducted in psychology examining what punishment is capable of doing; mathematical models have been constructed too. As it turns out, when you give people the option to inflict costs on others, the former group are pretty good at manipulating the behavior of the latter. The basic principle is, well, pretty basic: there are costs and benefits to acting in various fashions and, if you punish certain behaviors, you shift the plausible range of self-interested behaviors. Stealing might be profitable in some cases, unless I know that it will, say, land me in jail for 5 years. Since five years in jail is a larger cost than the benefit I might reap from stealing (provided I am detected, of course), the incentive not to steal is larger and people don’t take things that aren’t theirs. The power of punishment is such that, in theory, it is capable of making people behave in pretty much any conceivable fashion so long as they are making decisions on the basis of some kind of cost/benefit calculation. All you have to do is make the alternative courses of action costlier, and you can push people towards any particular path (though if people behave irrespective of the costs and benefits, punishment is no longer effective).
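To see that cost/benefit logic in miniature, here is a toy expected-value calculation; the dollar figures and detection probabilities are hypothetical numbers chosen purely for illustration.

```python
def expected_value_of_stealing(benefit, detection_prob, penalty_cost):
    """Toy expected-value calculation behind deterrence.

    Stealing pays `benefit` for sure, but with probability
    `detection_prob` it also incurs `penalty_cost`.
    """
    return benefit - detection_prob * penalty_cost

# Hypothetical numbers: a $1,000 theft against a punishment valued at a
# $50,000 loss (say, five years in jail) deters stealing once detection
# is even modestly likely.
print(expected_value_of_stealing(1_000, 0.05, 50_000))  # -1500.0: don't steal
print(expected_value_of_stealing(1_000, 0.01, 50_000))  # 500.0: punishment too unlikely
```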

Now, in most cases, the main focus of this research on punishment has been on what one might dub “normal” punishment. A case of normal punishment would involve, say, Person A defecting on Person B, followed by person B then punishing person A. So, someone behaves in an anti-social fashion and gets punished for it. This kind of punishment is great for maintaining cooperation and pointing out how altruistic people are. However, a good deal of punishment in these experiments is what one might dub “perverse”.

“Yes; quite perverse indeed…”

By perverse punishment, I am referring to instances of punishment where people are punished for giving up their own resources and benefiting others. That people are getting punished for behaving altruistically is rather interesting, as the pro-social behavior being targeted for punishment is, at least in the typical experiments, benefiting the people enacting the punishment. As we tend to punish behavior we want to see less of, and self-benefiting behavior is generally something we want more of, the punishment of others for benefiting the punisher appears to be rather strange. Now I think this strangeness can be resolved, but, before doing that, it is worthwhile to consider an experiment examining whether or not punishment is also capable of reducing perverse punishment.

The experiment – by Cinyabuguma, Page, & Putterman (2006) – began with a voluntary contribution game. In games like these (which are also known as public goods games), a number of players start off with a certain pool of resources. In the first stage of the game, each player has the option to contribute any amount of their resources towards the public pool. The resources in this pool get multiplied by some amount and then distributed equally among all the players. The payout of these games is such that everyone could do better if they all contributed, but at the individual level contributions make one worse off. So, in other words, you make the most money when everyone else contributes the most and you contribute nothing. In the second stage of the game, the amount that each player has donated to the public good becomes known to everyone else, and each person has the option to “punish” others, which involves giving up some of your own payment to reduce someone else’s payment by 4 times the amount you paid.
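As a quick illustration of that 1-for-4 punishment technology, here is a minimal sketch; the function and the example payoffs are my own, not the authors’:

```python
def punish(payoffs, punisher: int, target: int, spent: float, ratio: int = 4):
    """Apply one act of costly punishment in the contribution game.

    The punisher gives up `spent` from their own payoff to reduce the
    target's payoff by `ratio` times that amount (4x in this study).
    """
    payoffs = list(payoffs)  # copy so the original list is untouched
    payoffs[punisher] -= spent
    payoffs[target] -= ratio * spent
    return payoffs

# e.g., player 0 pays 1 unit to strip 4 units from free-riding player 2:
print(punish([12.5, 12.5, 17.5], punisher=0, target=2, spent=1))
# [11.5, 12.5, 13.5]
```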

The twist in this experiment is the addition of another condition. In that condition, after the first two steps (first, subjects contribute; second, subjects learn of the contributions of others and can punish them), there was then a round of second-order punishment. What this means is that, after people punished the first time, each participant got to see who punished whom, and could then punish each other again. Simply put: I could punish someone for either punishing me or for punishing someone else. So the first condition allowed for the punishment of contributions alone, whereas the second allowed for both the punishment of contributions and the punishment of punishment. The question of interest is whether or not perverse punishment and/or cooperation was any different between the two.

“It’s still looking pretty perverse to me”

The answer to that question is yes, but the differences are quite slight, and often not significant. When people could only punish contributions, the average contribution was 7.09 experimental dollars (each person could contribute up to 10); when punishment of punishment was also permitted, the average contribution rose ever so slightly to between 7.35 and 7.97 units. Similarly, earnings increased when people could punish punishment: when second-order punishment was an option, people earned more (about 13.35 units) relative to when it wasn’t (around 12.86 units). So, though these differences weren’t terribly large, allowing for the punishment of punishers tended to increase the overall amount of money people made slightly.

Also of interest, though, is the nature of the punishment itself. In particular, there are two findings I would like to draw attention to: the first of these is that if someone received punishment for punishing others, they tended to punish less during later periods. In other words, since punishing others was itself punished, less punishment took place (though this seemed to affect the perverse punishment more so than the normal type). This is a fairly expected result.

The second finding I would like to draw attention to concerns the matter of free-riders. Free-riders are individuals who benefit from the public good, but do not themselves contribute to it. Now, in the case of this economic game we’ve been discussing, there are two types of free-riders: the first are people who don’t contribute much to the public good and, accordingly, are targeted for “normal” punishment. However, there are also second-order free-riders; I know this must be getting awfully hard to keep track of, but these second-order free-riders are people who benefit from free-riders being punished, but do not themselves punish others. To put that in simple terms, I’m better off if anti-social people are punished and if I don’t have to be the one to punish them personally. What I find interesting in these results is that these second-order free-riders were not targeted for punishment; instead, those who punished – either normally or perversely – ended up getting punished more as revenge. Predictably, then, those who failed to punish ended up with an advantage over those who did punish. Not only did they not have to spend money on punishing others, but they also weren’t the target of revenge punishment.

So what does all this tell us when it comes to helping us understand perverse punishment, and punishment more generally? Well, part of that answer comes from considering the fact that it was predominantly people above and below the average contribution level of the group doing most of the punishing; relatedly, they were largely targeting each other. This suggests, to me anyway, that a good deal of “perverse” punishment is a kind of preemptive defense (or, as some might call it, an offense) against one’s probable rivals. Since low contributors likely have some inkling that those who contribute a lot will preferentially target them for punishment, this “perverse” punishment could simply reflect that knowledge. Such an explanation makes the “perverse” punishment seem a bit less perverse. Instead of reflecting people punishing against their interests, perverse punishment might work in their interests to some degree: they don’t want to be punished, and they are trying to inflict costs on those who would inflict costs on them.

Which at least makes more sense than the “He’s just an asshole” hypothesis…

I think it helps to also consider what patterns of punishment were not observed. As I mentioned initially, people’s payoffs in these games would be maximized if everyone else contributed the maximum and they personally contributed nothing. It follows, then, that one might be able to make oneself better off by punishing anyone else who contributes less than the maximal amount, irrespective of how much the punisher contributed. Yet this isn’t what we see. This raises the question of why average contributors don’t receive much punishment, despite them still contributing less than the highest donors. The answer no doubt lies, in part, in the fact that punishing others is costly, as previously mentioned. Thinking about when punishment becomes less costly should shed light on the matter, but since this has already gone a bit long, I’ll save that speculation for when my next paper gets published.

Reference: Cinyabuguma, M., Page, T., & Putterman, L. (2006). Can second-order punishment deter perverse punishment? Experimental Economics, 9, 265-279.

Some Free Consulting Advice For Improving Online Dating

I find many aspects of life today to be pretty neat, owing largely to the wide array of fun and functional gadgets we have at our disposal. While easy to lose sight of and take for granted, the radical improvements made to information technology over my lifetime have been astounding. For instance, I now carry around a powerful computer in my pocket that is user-friendly, capable of accessing more information than I could feasibly process in my entire lifetime, and it also allows me to communicate instantly with strangers and friends all over the globe; truly amazing stuff. Of course, being the particular species that we are, such technology was almost instantly recognized and adopted as an efficient way of sending and receiving naked pictures, as well as trying to initiate new sexual or dating relationships. While the former goal has been achieved with rousing success, the latter appears to still pose a few more complications, as evidenced by plenty of people complaining about online dating, but not about the ease by which they can send or receive naked pictures. As I’ve been turning my eye towards the job market these days, I decided it would be fun to try and focus on a more “applied” problem: specifically, how might online dating sites – like Tinder and OkCupid – be improved for their users?

Since the insertable portion of the internet market has been covered, I’ll stick to the psychological one.

The first question to consider is the matter of what problems people face when it comes to online dating: knowing what problems people are facing is obviously important if you want to make forward progress. Given that we are a species in which females tend to provide the brunt of the obligate parental investment, we can say that, in general, men and women will tend to face somewhat different problems when it comes to online dating; problems which mirror those faced in non-internet dating. In general, men are the more sexually-eager sex: accordingly, men tend to face the problem of drawing and retaining female interest, while women face the problem of selecting mates from among their options. In terms of the everyday realities of online dating, this translates into women receiving incredible amounts of undesirable male attention, while men waste similar amounts of time making passes that are unlikely to pan out.

To get a sense for the problems women face, all one has to do is make an online dating profile as a woman. There have been a number of documented attempts at this, and the results are often the same: before the profile has even been filled out, it attracts dozens of men within the first few minutes of its existence. Perhaps unsurprisingly, the quality of the messages these profiles receive can also liberally be considered less-than-optimal. While I have no data on the matter, almost every woman who has talked with me about their online dating experiences tends to remark, at some point, that they are rather afraid of meeting up with anyone from the site owing to a fear of being murdered by them. Now it is possible that such dangers are, for whatever reason, being overestimated by women, but it also seems likely that women’s experiences with the men on the site might be driving some of that fear. After all, many of those same women also tell me that they start off replying to all or most of the messages they receive when they set up a profile, only to quickly stop doing so owing to their sheer volume or unappealing content. There are also reports of men becoming verbally aggressive when turned down by women, so it seems likely some of these fears about meeting someone from the site are not entirely without merit (to be clear, I think women are probably no more likely to be murdered by someone they meet online relative to in person; it’s just that strangers online might be more likely to initiate contact than in person).

The problems that men face are a bit harder to appreciate for a number of reasons, one of which is that they take longer to become apparent. As I mentioned, women’s profiles attract attention within minutes of their creation; a rather dramatic effect. By contrast, were one to make a profile as a man, not much of anything would happen: you would be unlikely to receive messages or visitors for days, weeks, or months if you didn’t actively try to initiate such contact yourself. If you did try to initiate contact, you’d also find that most of it is not reciprocated and, of the replies you did receive, many would burn out before progressing into any real conversation. If men seem a bit overeager for contact and angry when it ceases, this might owe itself in part to the rarity with which such contact occurs. While being ignored might seem like a better problem to have than receiving a lot of unwanted attention (as the latter might involve aggression, whereas the former does not), one needs to bear in mind that without any attention there is no dating life. Women might be able to pull some desirable men from the pool of interested ones, even if most are undesirable; a man without an interested pool has no potential partners to draw from at all. Neither problem is necessarily better or worse than the other; they’re just different.

Relationships: Can’t live with them; can’t live without them

That said, bickering about whose problems are worse doesn’t actually solve any of them, so I don’t want to get mired in that debate. Instead, we want to ask: how do we devise a possible resolution to both sets of problems at the same time? At first glance, these problems might seem opposed to one another: men want more attention and women want less of it. How could we make both sides relatively better off than they were before? My suggestion for a potential remedy is to make messages substantially harder to send. There are two ways I envision this might be enacted: on the one hand, the number of messages a user could send (that were not replies to existing messages) could be limited to a certain number in a given time period (say, for instance, people could send 5 or 10 initial messages per week). Alternatively, people could set up a series of multiple-choice screening questions on their profile, and only those people who answered enough questions “correctly” (i.e., the answer the user specifies) would be allowed to send a message to the user. Since these aren’t mutually exclusive, both could be implemented; perhaps the former as a mandatory restriction and the latter as an optional one.
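For concreteness, here’s a toy sketch of both mechanisms; every name, limit, and question here is a hypothetical illustration of the proposal, not any existing site’s API:

```python
# Toy sketch of the two proposed frictions: a weekly cap on new (non-reply)
# messages, plus optional multiple-choice screening set by the recipient.
# All names and thresholds are hypothetical illustrations.
from collections import defaultdict

WEEKLY_LIMIT = 5                      # e.g., 5 initiating messages per week
initiations_this_week = defaultdict(int)

def may_initiate(sender_id):
    """Mandatory restriction: cap how many new conversations a user starts."""
    if initiations_this_week[sender_id] >= WEEKLY_LIMIT:
        return False
    initiations_this_week[sender_id] += 1
    return True

def passes_screening(answers, screening, threshold):
    """Optional restriction: the sender must match enough of the recipient's
    specified answers ('correct' meaning whatever the recipient chose)."""
    hits = sum(1 for q, a in answers.items() if screening.get(q) == a)
    return hits >= threshold

# A recipient requires 2 of 3 answers to match their stated preferences.
screening = {"smoker?": "no", "wants kids?": "yes", "dogs or cats?": "dogs"}
sender_answers = {"smoker?": "no", "wants kids?": "yes", "dogs or cats?": "cats"}
print(may_initiate("user_42") and
      passes_screening(sender_answers, screening, threshold=2))  # True
```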

Now, at first glance, these solutions might seem geared towards improving women’s experiences with online dating at the expense of men, since men are the ones sending most of the messages. If men aren’t allowed to send enough messages, how could they possibly garner attention, given that so many messages ultimately fail to capture any? The answer to that question comes in two parts, but it largely involves considering why so many messages don’t get responses. First, as it stands now, messaging is largely a costless endeavor. It can take someone all of 5 to 60 seconds to craft an opening message and send it, depending on how specific the sender wants to get with it. With such a low cost and a potentially high payoff (dates and/or sex), men are incentivized to send a great many of these messages. The problem is that every man is similarly incentivized. While it might be good for any one man to send more messages, when too many of them do it, women get buried beneath an avalanche of them. Since these messages are costless to send, they don’t necessarily carry any honest information about the man’s interest, so women might just start ignoring them altogether. There are, after all, non-negligible search costs for women to dig through and respond to all these messages – as evidenced by the many reports from women of starting out replying to all of them but quickly abandoning that idea – so the high volume of messages might actually make women less likely to respond in general, rather than more.

Indeed, judging by their profiles, many women pick up on this, explicitly stating that they won’t reply to messages that are little more than a “hey” or “what’s up?”. If messaging were restricted in some rather costly way, it would require men to be more judicious about both who they send messages to and the content of those messages; if you only have a certain number of opportunities, it’s best not to blow them, and that involves messaging people you’re more likely to be successful with, and in a less superficial way. So women, broadly speaking, would benefit by receiving a smaller number of higher-quality messages from men who are proportionately more interested in them. Since the messages are no longer costless to send, that a man chose to send his to that particular woman has some signal value; if the message is more personalized, the signal value increases. By contrast, men would, again broadly speaking, benefit by lowering the odds of their messages being buried beneath a tidal wave of other messages from other men, and would need to send proportionately fewer of them to receive responses. In other words, the relative level of competition for mates might remain constant, but the absolute level of competition might fall.

Or, phrased as a metaphor: no one is responding to all that mess, so it’s better to not make it in the first place

Now, it should go without saying that this change, however implemented, would be a far cry from fixing all the interactions on dating sites: some people are less attractive than others, have dismal personalities, demand too much, and so on. Some women would continue to receive too many unwanted messages and some men would continue to be all but nonexistent as far as women were concerned. There would also undoubtedly be some missed connections. However, it’s important to bear in mind that all of that happens already, and this solution might actually reduce the incidence of it. If everyone is willing to suffer a small cost (or the site administrators implement one), they can avoid proportionately larger ones. Further, if dating sites became more user-friendly, they could also begin to attract new users and retain existing ones, improving the overall dating pool available. If women are less afraid of being murdered on dates, they might be more likely to go on them; if women receive fewer messages, they might be more inclined to respond to them. As I see it, this is a relatively cheap idea to implement and it seems to have a great deal of theoretical plausibility. The specifics of the plan would need to be fleshed out more extensively and its plausibility tested empirically, but I think it’s a good starting point.

Having Their Cake And Eating It Too

Humans are a remarkably cooperative bunch of organisms. This is notable because cooperation can open the door wide to all manner of costly exploitation. While it can be a profitable strategy for all involved parties, cooperation requires a certain degree of vigilance and, at times, the credible threat of punishment in order to maintain its existence. Figuring out how people manage to solve these cooperative problems has generated no shortage of research and theorizing, some of which is altogether more plausible than the rest. Though I haven’t quite figured out the appeal yet, there are many thoughtful people who favor group-selection accounts for explaining why people cooperate. These accounts suggest that people will often cooperate in spite of the personal fitness costs because doing so serves to better the overall condition of the group to which they belong. While no useful predictions appear to have fallen out of such a model, there are those who are fairly certain it can at least account for some known, but ostensibly strange, findings.

That is a rather strange finding you got there. Thanks, Goodwill.

One human trait purported to require a group selection explanation is altruistic punishment and cooperation, especially in one-shot anonymous economic games. The basic logic goes as follows: in a prisoner’s dilemma game, so long as that game is a non-repeated event, there is really only one strategy, and that’s defection. This is because if you defect when your partner defects, you’re better off than if you had cooperated; if your partner cooperated, on the other hand, you’re still better off if you defect. Economists might thus call the strategy of “always defect” a “rational” one. Further, punishing a defector in such conditions is similarly considered irrational behavior, as it only results in a lower payment for the punisher than they would have otherwise had. As we know from decades of research using these games, however, people don’t always behave “rationally”: sometimes they’ll cooperate with the other people they’re playing with, and sometimes they’ll give up some of their own payment in order to punish someone who has either wronged them or, more importantly, wronged a stranger. This pattern of behavior – paying to be nice to people who are nice, and paying to punish those who are not – has been dubbed “strong reciprocity” (Fehr, Fischbacher, & Gachter, 2002).
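That dominance argument can be checked mechanically. Below is a minimal sketch using conventional illustrative payoffs (any numbers where temptation > reward > punishment > sucker’s payoff tell the same story):

```python
# One-shot prisoner's dilemma: my payoff given (my move, partner's move).
# Values are the conventional illustrative T=5 > R=3 > P=1 > S=0.
PAYOFF = {("D", "C"): 5, ("C", "C"): 3, ("D", "D"): 1, ("C", "D"): 0}

for partner in ("C", "D"):
    # Whatever the partner does, defecting pays strictly more than cooperating.
    assert PAYOFF[("D", partner)] > PAYOFF[("C", partner)]
print("'Always defect' strictly dominates in the one-shot game.")
```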

The general raison d’etre of strong reciprocity seems to be that groups of people with lots of individuals playing that strategy managed to out-compete groups of people without them. Even though strong reciprocity is costly at the individual level, the society at large reaps larger overall benefits, as cooperation has the highest overall payoff relative to any kind of defection. Strong reciprocity, then, helps to enforce cooperation by altering the costs and benefits of cooperation and defection at the individual level. There is a certain kind of unfairness inherent in this argument, though; a conceptual hypocrisy that can be summed up by the ever-popular phrase, “having one’s cake and eating it too”. To consider why, we need to understand the reason people engage in punishment in the first place. The likely, possibly-obvious candidate explanation is that punishment serves a deterrence function: by inflicting costs on those who engage in the punished behavior, the behavior ceases to pay, and those who engage in it stop doing so. This function, however, rests on a seemingly innocuous assumption: actors estimate the costs and benefits of acting, and only act when the expected benefits are sufficiently large relative to the costs.

The conceptual hypocrisy is that this kind of cost-benefit estimation is something that strong reciprocators are thought not to engage in. Specifically, they punish and cooperate regardless of the personal costs involved. We might say that a strong reciprocator’s behavior is inflexible with respect to their own payoffs. This is a bit like playing the game of “chicken”, where two cars face each other from a distance and start driving at one another in a straight line. The first driver to turn away loses the match. However, if both cars continue on their path, the end result is a much greater cost to both drivers than is suffered if either one turns. If a player in this game adopts an inflexible strategy, then, by doing something like disabling their car’s ability to steer, they can force the other player to make a certain choice. Faced with a driver who cannot turn, you really only have one choice to make: continue going straight and suffer a huge cost, or turn and suffer a smaller one. If you’re a “rational” being, then, you can be beaten by an “irrational” strategy.
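A sketch of that commitment logic, with chicken payoffs of my own choosing:

```python
# Game of chicken with illustrative payoffs: crashing is far worse than the
# small loss of face from swerving. Against a driver visibly unable to turn,
# the 'rational' player's best reply is to swerve.
CRASH, SWERVE = -100, -1

def best_reply_to_committed_driver():
    go_straight = CRASH   # they cannot turn, so going straight means a crash
    swerve = SWERVE       # turning away means a small, certain loss
    return "swerve" if swerve > go_straight else "go straight"

print(best_reply_to_committed_driver())  # "swerve": commitment wins the game
```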

Flawless victory. Fatality.

So what would be the outcome if other individuals started playing the ever-present “always defect” strategy in a similarly inflexible fashion? We’ll call those people “strong defectors” for the sake of contrast. No matter what their partner does in these interactions, the strong defectors will always play defect, regardless of the personal costs and benefits. By doing so, these strong defectors might manage to place themselves beyond the reach of punishment from strong reciprocators. Why? Well, any amount of costly punishment directed towards a strong defector would be a net fitness loss from the group’s perspective, as costly punishment is a fitness-reducing behavior: it reduces the fitness of the person engaging in it (in the form of whatever cost they suffer to deliver the punishment) and it reduces the fitness of the target of the punishment. Further, the costs to punishing the defectors could have been directed towards benefiting other people instead – which are net fitness gains for the group – so there are opportunity costs to engaging in punishment as well. These fitness costs would need to be made up for elsewhere, from the group selection perspective.

The problem is that, because the strong defectors are playing an inflexible strategy, the costs cannot be made up for elsewhere; no behavioral change can be effected. Extending the game-of-chicken analogy to the group level, let’s say that turning away is the “cooperative” option, and dilemmas like these were at least fairly regular. They might not have involved cars, but they did involve a similar kind of payoff matrix: there’s only one benefit available, but there are potential costs in attempting to achieve it. Keeping in line with the metaphor, it would be in the interests of the larger population if no one crashed. It follows that between-group selective pressures favor turning every time, since the costs are guaranteed to be smaller for the wider population, but the sum of the benefits doesn’t change; only who achieves them does. In order to force the cooperative option, a strong reciprocator might disable their ability to turn so as to alter the costs and benefits to others.

The strong reciprocators shouldn’t be expected to be unaffected by costs and benefits, however; they ought to be affected by such considerations, just at the group level rather than the individual one. Their strategy should be just as “rational” as any other, just with regard to a different variable. Accordingly, it can be beaten by other seemingly irrational strategies – like strong defection – that can’t be affected by the threat of costs. Strong defectors who refuse to turn will either force a behavioral change in the strong reciprocators or result in many serious crashes. In either case, the strong reciprocator strategy doesn’t seem to lead to benefits in that regard.

Now perhaps this example sounds a bit flawed. Specifically, one might wonder how appreciable portions of the population could come to develop an inflexible “always defect” strategy in the first place. The strategy appears costly to maintain at times: there are benefits to cooperation and to being able to alter one’s behavior in response to costs imposed through punishment, and people would be expected to be selected to achieve the former and avoid the latter. On top of that, there is also the distinct concern that repeated attempts at defection or exploitation can result in punishment severe enough to kill the defector. In other words, it seems that there are certain contexts in which strong defectors would be at a selective disadvantage, becoming less prevalent in the population over time. Indeed, such a criticism would be very reasonable, and that’s precisely because the always-defect population behaves without regard to its personal payoffs. Of course, such a criticism applies with just as much force to the strong reciprocators, and that’s the entire point: using a limited budget to affect the lives of others regardless of its effects on you isn’t the best way to make the most money.

The interest on “making it rain” doesn’t compete with an IRA.

The idea of strong defectors seems perverse precisely because they act without regard to what we might consider their own rational interests. Were we to replace “rational” with “fitness”, the evolutionary disadvantage of a strategy that behaves in such a manner seems remarkably clear. The point is that the idea of a strong reciprocator type of strategy should seem just as perverse. Those who put forth a strong reciprocator type of strategy as a plausible account of cooperation and punishment attempt to create a context that allows them to have their irrational-agent cake and eat it as well: strong reciprocators need not behave within their fitness interests, but all the other agents are expected to. This assumption needs to be at least implicit within the models, or else they make no sense. They don’t seem to make very much sense in general, though, so perhaps that assumption is the least of their problems.

Reference: Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1-25. DOI: 10.1007/s12110-002-1012-7

The Inferential Limits Of Economic Games

Having recently returned from the Human Behavior & Evolution Society’s (HBES) conference, I would like to take a moment to let everyone know what an excellent time I had there. Getting to meet some of my readers in person was a fantastic experience, as was the pleasure of being around the wider evolutionary research community and reconnecting with old friends. The only negative parts of the conference involved making my way through the flooded streets of Miami on the first two mornings (which very closely resembled this scene from the Simpsons) and the pool party at which I way over-indulged in drinking. Though there was a diverse array of research presented spanning many different areas, I ended up primarily in the seminars on cooperation, as that topic bears most directly on my current research projects. I would like to present two of my favorite findings from those seminars, which serve as excellent cautionary tales concerning what conclusions one can draw from economic games. Despite the popular impression, there’s a lot more to evolutionary psychology than sex research.

Though the Sperm-Sun HBES logo failed to adequately showcase that diversity.

The first game to be discussed is the classic dictator game. In this game, two participants are brought into the lab and assigned the role of either ‘dictator’ or ‘recipient’. The dictator is given a sum of money (say, $10) and the option to divide it however they want between the pair. If dictators were maximally selfish – as standard economic rationality might suggest – they would consistently keep all the money and give none to the recipient. Yet this is not what we frequently see: dictators tend to give at least some of the money to the other person, and an even split is often made. While granting participants anonymity from one another does tend to reduce offers, even ostensibly anonymous dictators continue to give. This result clashes somewhat with our everyday experiences: after all, provided we have money in our pocket, we’re faced with possible dictator-like opportunities every time we pass someone on the street, whether they’re homeless and begging for money or apparently well-off. Despite the near-constant opportunities during which we could transfer money to others, we frequently do not. So how do we reconcile the experimental and everyday results?

One possibility is to suggest that the giving in dictator games is largely induced by experimental demand effects: subjects are being placed into a relatively odd situation and are behaving rather oddly because of it (more specifically, because they are inferring what the experimenter “wants” them to do). Of course, it’s not so easy to replicate the context of the dictator game (a sudden windfall of a divisible asset and a potential partner to share it with) without subjects knowing they’re taking part in an experiment. Winking & Mizer (2013) managed to find a way around these problems in Las Vegas. In this field experiment, a confederate would be waiting at a bus stop when the unwitting subject approached. Once the subject was waiting for the bus as well, the confederate would pretend to take a phone call and move slightly away from the area with their back turned to the subject. It was at this point that the experimenter approached, on his cell phone and ostensibly in a hurry. As the experimenter passed the subject, he gave them $20 in poker chips, saying that he was late for his ride to the airport and didn’t have time to cash them in. These casino chips are an excellent stimulus, as they provided a good cover story for why they were being handed over: they only have value when cashed in, and the experimenter didn’t have time to do so. Using actual currency wouldn’t have worked as well, as it might have raised suspicions about the setup, since currency travels well from place to place.

In the first condition, the experimenter left and the confederate returned without further instruction; in the second condition, the experimenter said, “I don’t know. You can split them with that guy however you want” while gesturing at the confederate before he ran off. A third condition involved an explicit version of the dictator game experiment with poker chips, during which anonymity was granted. In the standard version of the experiment – when the subjects knew about the game explicitly – 83% of subjects offered at least some of the chips to other people, with a median offer around $5, resembling previous experimental results fairly well. How about the other two conditions? Well, of the 60 participants who were not told they were explicitly taking part in the game, all of them kept all the money. This suggests very strongly that all – or at least most – of the giving we observe in dictator games is grounded in the nature of the experiment itself. Indeed, many of the subjects in the first condition, where the instruction to split was not given, seemed rather perplexed by the purpose of the study during the debriefing. The subjects wondered precisely why in the world they would split the money with the confederate in the first place. Like all of us walking down the street with money on our person, the idea that they would just give that money to other people seemed rather strange.

“I’m still not following: you want to do what with all this money, again?”

The second paper of interest looked at behavior in another popular game: the public goods game. In these games, subjects are typically placed together in groups of four and are provided with a sum of money. During each round, players can invest any amount of their money in the public pot and keep the rest. All the money in the pot is then multiplied by some amount and then divided equally amongst all the participants. In this game, the rational economic move is typically to not put any money in, as for each dollar you put in, you receive less than a dollar back (since the multiplier is below the number of subjects in the group); not a great investment. On the other hand, the group-maximizing outcome is for all the subjects to donate all their money, so everyone ends up richer than when they started. Again, we find that subjects in these games tend to donate some of their money to the public pot, and many researchers have inferred from this giving that people have prosocial preferences (i.e. making other people better off per se increases my subjective welfare). If such an inference was correct, then we ought to expect that subjects should give more money to the public good provided they know how much good they’re doing for others.
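The arithmetic behind that “rational” benchmark is simple enough to spell out; the group size of four comes from the description above, while the 1.6 multiplier is purely an illustrative assumption:

```python
# Each unit donated to the pot is multiplied, then split evenly among the
# group, so the donor personally recoups only multiplier/n of it.
n = 4             # group size, as described above
multiplier = 1.6  # illustrative; the dilemma holds whenever 1 < multiplier < n

personal_return_per_unit = multiplier / n  # 0.4 units back per unit donated
print(personal_return_per_unit < 1)  # True: donating reduces your own payoff
print(multiplier > 1)                # True: yet every donated unit grows the pie
```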

Towards examining this inference, Burton-Chellew & West (2013) put subjects into a public goods game in three different conditions. First, there was the standard condition, described above. Second was a condition like the standard game, except subjects received an additional piece of information in the form of how much the other players in the game earned. Finally, there was a third condition in which subjects didn’t even know the game was being played with other people; subjects were merely told they could donate some fraction of their money (from 0 to 40 units) to a “black box” which would perform a transformation on the money received and give them a non-negative payoff (which was the same average benefit they received in the game when playing with other people, but they didn’t know that). In total, 236 subjects played in one of the first two conditions and also in the black box condition, counterbalancing the order of the games (they were informed the two were entirely different experiments).

How did contributions change between the standard condition and the black box condition over time? They didn’t. Subjects who knew they were playing a public goods game donated approximately as much during each round as the subjects who were just putting payments into the black box and getting some payment out: donations started out relatively high and declined over time (presumably because subjects were learning they tended to get less money by contributing). The one notable difference was in the additional-information condition: when subjects could see the earnings of others relative to their contributions, they started to contribute less money to the public good. As a control, all three of the above games were replicated with a multiplication rule that made the profit-maximizing strategy to donate all of one’s available money, rather than none. In these conditions, donations again failed to differ significantly between the standard and black box conditions, and contributions were still lower in the enhanced-information condition. Further, in all these games subjects tended to fail to make the profit-maximizing decision, irrespective of whether that decision was to donate all their money or none of it. Despite this strategy being deemed relatively “easy” to figure out by researchers, it apparently was not.

Other people not included, or required

Both of these experiments pose some rather stern warnings about the inferences we might draw from the behavior of people playing economic games. Some of our experiments might end up inducing certain behaviors and preferences, rather than revealing them. We’re putting people into evolutionarily-strange situations in these experiments, and so we might expect some evolutionarily-strange outcomes. It is also worth noting that just because you observe some prosocial outcome – like people giving money apparently altruistically or contributing to the good of others – it doesn’t follow that these outcomes are the direct result of cognitive modules designed to bring them about. Sure, my behavior in some of these games might end up reducing inequality, for instance, but it doesn’t follow that people’s psychology was selected to do such things. There are definite limits to how far these economic games can take us inferentially, and it’s important to be aware of them. Do these studies show that such games are worthless tools? I’d say certainly not, as behavior in them is certainly not random. We just need to be mindful of their limits when we try to draw conclusions from them.

References: Burton-Chellew, M., & West, S. (2013). Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences, 110, 216-221. PMID: 23248298

Winking, J., & Mizer, N. (2013). Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002

Equality-Seeking Can Lift (Or Sink) All Ships

There’s a saying in economics that goes, “A rising tide lifts all ships”. The basic idea behind the saying is that the benefits that accrue from people exchanging goods and services are good for everyone involved – and even for some who are not directly involved – in much the same way that all the boats in a body of water rise or fall together as the overall water level does. While there is an element of truth to the saying (trade can be good for everyone, and the resources available to the poor today can, in some cases, be better than those available to even the wealthy of generations past), economies, of course, are not like bodies of water that rise and fall uniformly; some people can end up radically better- or worse-off than others as economic conditions shift, and inequality is a persistent factor in human affairs. Inequality – or, more aptly, the perception of it – is also commonly used as a justification for furthering certain social or moral goals. There appears to be something (or somethings) about inequality that just doesn’t sit well with people.

And I would suggest that those people go and eat some cake.

People’s ostensible discomfort with inequality has not escaped the eyes of many psychological researchers. There are some who suggest that humans have a preference for avoiding inequality; an inequality aversion, if you will. Phrased slightly differently, there are some who suggest that humans have an egalitarian motive (Dawes et al, 2007) that is distinct from other motives, such as enforcing cooperation or gaining benefits. Provided I’m parsing the meaning of the phrase correctly, then, the suggestion is that people should be expected to dislike inequality per se, rather than dislike inequality for other, strategic reasons. Demonstrating evidence of such a distinct aversion, however, can be difficult. There are two reasons for this, I feel: the first is that inequality is often confounded with other factors (such as someone not cooperating or suffering losses). The second reason is that I think it’s the kind of preference we shouldn’t expect to exist in the first place.

Taking these two issues in order, let’s first consider the paper by Dawes et al (2007) that sought to disentangle some of these confounds. In their experiment, 120 subjects were brought into the lab in groups of 20. These groups were further divided into anonymous groups of 4, such that each participant played five rounds of the experiment, but never with the same people twice. The subjects also did not know about anyone’s past behavior in the experiment. At the beginning of each round, every subject in each group received a random number of payment units between some unspecified values, and everyone was aware of the payments of everyone else in their group. Naturally, this tended to create some inequality in payments. Subjects were given means by which to reduce this inequality, however: they could spend some of their payment points to either add to or subtract from other people’s payments at a ratio of 3 to 1 (in other words, I could spend one unit of my payment to either reduce your payment by three points or add three points to your payment). These additions and deductions were all decided on in private and enacted simultaneously, so as to avoid retribution and cooperation factors. It wasn’t until the end of each round that subjects saw how many additions and reductions they had received. In total, each subject had 15 chances to either add to or deduct from someone else’s payment (3 people per round over 5 rounds).

The results showed that most subjects paid to either add to or deduct from someone else’s payment at least once: 68% of people reduced the payment of someone else at least once, whereas 74% increased someone’s payment at least once. It wasn’t what one might consider a persistent habit, though: only 28% reduced people’s payments more than five times (while 33% added that often), and only 6% reduced more than 10 times (whereas 10% added). This, despite there being inequality to reduce in all cases. Further, an appreciable number of the modifications didn’t go in the inequality-reducing direction: 29% of reductions went to below-average earners, and 38% of the additions went to above-average earners. Of particular interest, however, is the precise way in which subjects ended up reducing inequality: the people who earned the least in each round tended to spend 96% more on deductions than top earners. In turn, top earners averaged spending 77% more on additions than the bottom earners. This point is of interest because positing a preference for avoiding inequality does not, by itself, predict the shape that equality-seeking will ultimately take.

You could also cut the legs off the taller boys in the left picture so no one gets to see.

The first thing worth pointing out here, then, is that about half of all the inequality-reducing behaviors that people engaged in ended up destroying overall welfare. These are behaviors through which no one is made materially better off. I’m reminded of part of a standup routine by Louis CK concerning that idea, in which he recounts the following story (starting at about 1:40):

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.”

It’s important to note this so as to point out that achieving equality itself doesn’t necessarily do anything useful. It is not as if equality automatically makes everyone – or anyone – better off. So what kind of useful outcomes might such spiteful behavior result in? To answer that question, we need to examine the ways people reduced inequality. Any player in this game could reduce the overall amount of inequality by either deducting from high earners’ payments or adding to low earners’. This holds for both the bottom and top earners. This means that there are several ways of reducing inequality available to all players. Low earners, for instance, could reduce inequality by engaging in spiteful reductions towards everyone above them until they’re all down at the same low level; they could also reduce the overall inequality by benefiting everyone above them, until everyone (but them) is at the same high level. Alternatively, they could engage in a mixture of these strategies, benefiting some people and harming others. The same holds for high earners, just in the opposite directions. Which path people take depends on what their set point for ‘equal’ is. Strictly speaking, then, a preference for equality doesn’t tell us which method people should opt for, nor does it tell us at what level of inequality efforts to achieve equality will cease.

There are, however, other possibilities for explaining these results beyond an aversion to inequality per se. One particularly strong alternative is that people use perceptions of inequality as inputs for social bargaining. Consider the following scenario: two people are working together to earn a joint prize, like a $10 reward. If they work together, they get the $10 to split; if they do not work together, neither will receive anything. Further, let’s assume one member of this pair is greedy and, in round one, after they cooperate, takes $9 of the pot for themselves. Now, strictly speaking, the person who received $1 is better off than if they had received nothing at all, but that doesn’t mean they ought to accept that distribution, and here’s why: if the person with $1 refuses to cooperate during the next round, they only lose that single dollar; the selfish player would lose nine times as much. This asymmetry in losses puts the poorer player in a stronger bargaining position, as they have far less to lose from not cooperating. It is from bargaining situations like this that our sense of fairness likely emerged.
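A couple of lines of arithmetic make that asymmetry explicit, using the $9/$1 split from the example:

```python
# Loss asymmetry from the $10 example: if the poorer player walks away,
# both players forfeit their share of the next round's prize.
selfish_share, poorer_share = 9, 1  # the greedy split described above

# The refusal threat costs the poorer player $1 but the selfish player $9,
# so the poorer player can credibly demand a better split.
print(selfish_share / poorer_share)  # 9.0: nine times more to lose
```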

So let’s apply this analysis back to the results of the experiment: people all start off with different amounts of money, and people are in positions to benefit or harm each other. Everyone wants to leave with as much benefit as possible, which means contributing nothing and getting additions from everyone else. However, since everyone is seeking this same outcome and they can’t all have it, certain compromises need to be reached. Those in high-earning positions face a different set of problems in that compromise than those in low-earning positions: while the high earners are doing something akin to trying to maintain cooperation by increasing the share of resources other people get (as in the previous example), low earners are faced with the problem of negotiating for a better payoff, threatening to cut off cooperation in the process. Both parties seem to anticipate this, with low earners disproportionately punishing high earners, and high earners disproportionately benefiting low earners. That there is no option for cooperation or bargaining present in this experiment is, I think, beside the point, as our minds were not designed to deal with the specific context presented in the experiment. Along those same lines, simply telling people that “you’re now anonymous” doesn’t mean that their mind will automatically function as if it were positive no one could observe its actions, and telling people their computer can’t understand their frustration won’t stop them from occasionally yelling at it.

“Listen only to my voice: you are now anonymous. You are now anonymous”

As a final note, one should be careful about inferring a motive or preference for equality just because inequality was sometimes reduced. A relatively simple example should demonstrate why: consider an armed burglar who enters a store, points their gun at the owner, and demands all the money in the register. If the owner hands over the money, they have delivered a benefit to the burglar at a cost to themselves, but most of us would not understand this as an act of altruism on the part of the owner; the owner’s main concern is not getting shot, and they are willing to pay a small cost (the loss of money) so as to avoid a larger one (possible death). Other research has found, for instance, that when given the option to pay a fixed cost (a dollar) to reduce another person’s payment by any amount (up to a total of $12), people who engage in reduction are highly likely to generate inequality that favors themselves (Houser & Xiao, 2010). It would be inappropriate to suggest that people are equality-averse from such an experiment, however, and, more to the point, doing so wouldn’t further our understanding of human behavior much, if at all. We want to understand why people do certain things; not simply that they do them.

References: Dawes, C., Fowler, J., Johnson, T., McElreath, R., & Smirnov, O. (2007). Egalitarian motives in humans. Nature, 446, 794-796. PMID: 17429399

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008