Who Deserves Healthcare And Unemployment Benefits?

As I find myself currently recovering from a cold, it’s a happy coincidence that I had planned to write about people’s intuitions about healthcare this week. In particular, a new paper by Jensen & Petersen (2016) attempted to demonstrate a fairly automatic cognitive link between the mental representation of someone as “sick” and of that same target as “deserving of help.” Sickness is fairly unique in this respect, it is argued, because of our evolutionary history with it: as compared with what many refer to as diseases of modern lifestyle (including those resulting from obesity and smoking), infections tended to strike people randomly; not randomly in the sense that everyone is equally likely to get sick, but in the sense that people often had little control over when they got sick. Infections were rarely the result of people intentionally seeking them out or behaving in particular ways. In essence, then, people view those who are sick as unlucky, and unlucky individuals are correspondingly viewed as more deserving of help than those who are responsible for their own situation.

…and more deserving of delicious, delicious pills

This cognitive link between luck and deservingness can be partially explained by examining expected returns on investment in the social world (Tooby & Cosmides, 1996). In brief, helping others takes time and energy, and it would only be adaptive for an organism to sacrifice resources to help another if doing so was beneficial to the helper in the long term. This is often achieved by me helping you at a time when you need it (when my investment is more valuable to you than it is to me), and then you helping me in the future when I need it (when your investment is more valuable to me than it is to you). This is reciprocal altruism, known by the phrase, “I scratch your back and you scratch mine.” Crucially, the probability of receiving reciprocation from the target you help should depend on why that target needed help in the first place: if the person you’re helping is needy because of their own behavior (i.e., they’re lazy), their need today is indicative of their need tomorrow. They won’t be able to help you later for the same reasons they need help now. By contrast, if someone is needy because they’re unlucky, their current need is not as diagnostic of their future need, and so it is more likely they will repay you later. Because the latter type is more likely to repay than the former, our intuitions about who deserves help shift accordingly.
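In expected-returns terms (my gloss on this logic, not a formula from either paper), helping pays, on average, when the cost of helping now is outweighed by the benefit of reciprocation later, discounted by the probability that reciprocation ever arrives:

```latex
% c: cost of helping now
% b: benefit of being helped in return later
% p: probability the recipient ever reciprocates
% Helping is favored, in expectation, when:
\[ c < p \cdot b \]
```

Luck-based need leaves p relatively high; character-based need (laziness) drags p down, which is why intuitions about deservingness track the cause of the need.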

As previously mentioned, infections tend to be distributed more randomly: my being sick today (generally) doesn’t tell you much about my ability to help you once I recover. Because of that, the need generated by infections tends to make sick individuals look like valuable targets of investment: their need state suggests they value your help and will be grateful for it, both of which likely translate into their helping you in the future. Moreover, the needs generated by illness are frequently severe, even to the point of death if assistance isn’t provided. The greater the need state being filled, the greater the potential for alliances to be formed, both with and against you. To place that point in a quick, if extreme, example: pulling someone from a burning building is more likely to endear you to them than helping them move; conversely, failing to save someone’s life when it’s well within your capabilities can set their existing allies against you.

The sum total of this reasoning is that people should intuitively perceive the sick as more deserving of help than those suffering from other problems that cause need. The particular problem that Jensen & Petersen (2016) contrast sickness with is unemployment, which they suggest is a fairly modern one. The conclusion the authors draw from these points is that the human mind – given its extensive history with infections and their random nature – should automatically tag sick individuals as deserving of assistance (translating, at the policy level, into broad support for government healthcare programs), while our intuitions about whether the unemployed deserve assistance should be much more varied, contingent on the extent to which unemployment is viewed as luck-based or character-based. This fits well with the initial data Jensen & Petersen (2016) present on cross-national support for government spending on healthcare and unemployment: not only is healthcare much more broadly supported than unemployment benefits (in the US, 90% vs 52% of the population support government assistance), but support for healthcare is also quite a bit less variable across countries.

Probably because the unemployed don’t have enough bake sales or ribbons

Some additional predictions drawn by the authors were examined across a number of studies in the paper, only two of which I will focus on here, owing to length constraints. The first of these studies presented 228 Danish participants with one of four scenarios: two in which the target was sick and two in which the target was unemployed. In each of these conditions, the target was also said to be lazy (hasn’t done much in life and only enjoys playing video games) or hardworking (is active and does volunteer work; of note, the authors label the lazy/hardworking conditions as high/low control, respectively, but I’m not sure that really captures the nature of the frame well). Participants were asked how much an individual like that deserved aid from the government when sick/unemployed on a 7-point scale (which was converted to a 0-1 scale for ease of interpretation).
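The paper doesn’t spell out the rescaling, but the standard linear conversion of a 1-7 response x onto the 0-1 interval would presumably be (my assumption, not the authors’ stated procedure):

```latex
\[ x' = \frac{x - 1}{6}, \qquad 1 \mapsto 0, \quad 7 \mapsto 1 \]
```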

Overall, support for government aid was lower in both conditions when the target was framed as lazy, but this effect was much larger in the case of unemployment. When it came to the sick individual, support for healthcare for the hardworking target sat at about 0.9, while support for the lazy one dipped to about 0.75; by contrast, the hardworking unemployed individual was supported with benefits at about 0.8, while the lazy one only received support around the 0.5 mark. The authors describe the deservingness information as being about 200% less influential in the sickness condition; put more plainly, the laziness penalty for the unemployed target was roughly twice the size of the penalty for the sick one.
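Working from the approximate means above, the arithmetic is straightforward:

```latex
\[ \Delta_{\text{sick}} = 0.90 - 0.75 = 0.15, \qquad
   \Delta_{\text{unemployed}} = 0.80 - 0.50 = 0.30 = 2 \times \Delta_{\text{sick}} \]
```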

There is an obvious shortcoming in that study, however: being lazy has quite a bit less to do with getting sick than it does with getting a job. This issue was addressed better in the third study, where the stimuli were tailored to each problem. The unemployed individual was described as an unskilled worker who was told by his union to get further training, with the union even offering to help. He either takes or does not take the additional training, but either way eventually ends up unemployed. In the healthcare case, the individual was described as a long-term smoker who was repeatedly told by his doctor to quit. He either eventually quits smoking or does not, but either way ends up getting lung cancer. The general pattern of results from study two replicated: for the smoker, support for government aid hovered around 0.8 when he quit and 0.7 when he did not; for the unemployed person, support was about 0.75 when he took the training and around 0.55 when he did not.

“He deserves all that healthcare for looking so cool while smoking”

While there does seem to be evidence for sicknesses being cognitively tagged as more deserving of assistance than unemployment (there were also some association studies I won’t cover in detail), there is a recurrent point in the paper that I am hesitant about endorsing fully. The first mention of this point is found early on in the manuscript, and reads:

“Citizens appear to reason as if exposure to health problems is randomly distributed across social strata, not noting or caring that this is not, in fact, the case…we argue that the deservingness heuristic is built to automatically tag sickness-based needs as random events…”

A similar theme is mentioned later in the paper as well:

“Even using extremely well-tailored stimuli, we find that subjects are reluctant to accept explicit information that suggests that sick people are undeserving.”

In general, I find the data they present to be fairly supportive of this idea, but I feel it could do with some additional precision. First and foremost, participants did utilize this information when determining deservingness. The dips might not have been as large as they were for unemployment (more on that later), but they were present. Second, participants were asked about helping one individual in particular. If, however, sickness is truly being automatically tagged as randomly distributed, then deservingness factors should not be expected to come into play when decisions involve making trade-offs between the welfare of two individuals. In a simple case, a hospital could be faced with a dilemma in which two patients need a lung transplant, but only a single lung is available. These two patients are otherwise identical, except one has lung cancer due to a long history of smoking, while the other has lung cancer due to a rare infection. If you were to ask people which patient should get the organ, a psychological system that was treating all illness as approximately random should be indifferent between giving it to the smoker or the non-smoker. A similar analysis could be undertaken when it comes to trading off spending on healthcare and non-healthcare items as well (such as making budget cuts to education or infrastructure in favor of healthcare).

Finally, there are two additional factors I would like to see explored by future research in this area. First, the costs of sickness and unemployment tend to be rather asymmetric in a number of ways: not only might sickness more often be life-threatening than unemployment (thus generating more need, which can swamp the effects of deservingness to some degree), but unemployment benefits might well need to be paid out over longer periods than medical ones (assuming sickness tends to be more transitory than unemployment). In fact, unemployment benefits might actively encourage people to remain unemployed, whereas medical benefits do not encourage people to remain sick. If these factors could somehow be held constant or removed, a different picture might begin to emerge: I could imagine deservingness information mattering more when a drug is required to alleviate discomfort rather than save a life. Second – though I don’t know to what extent this is likely to be relevant – the stimulus materials in this research all ask whether the government ought to be providing aid to sick or unemployed people. Somewhat different responses might have been obtained had some measure been taken of participants’ own willingness to provide that aid. After all, it is much less of a burden on me to insist that someone else ought to be taking care of a problem than to take care of it myself.

References: Jensen, C. & Petersen, M. (2016). The deservingness heuristic and the politics of health care. American Journal of Political Science, DOI: 10.1111/ajps.12251

Tooby, J. & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

Absolute Vs Relative Mate Preferences

As the comedian Louis CK quipped some time ago, “Everything is amazing right now and nobody is happy.” In that instance he was referring to the massive technological improvements of the fairly recent past that have made our lives easier and more comfortable. Reflecting on the benefits this technology has added to our lives (e.g., advanced medical treatments, the ability to communicate with people globally in an instant, or to travel the globe in a matter of hours), it might feel kind of silly that we aren’t content with the world; this kind of lifestyle sure beats living in the wilderness in a constant contest to find food, ward off predators and parasites, and endure the elements. So why aren’t we happy all the time? There are many ways to answer this question, but I want to focus on one in particular: given our nature as a social species, much of our happiness is determined by relative factors. If everyone is fairly well off in the absolute sense, your being well off doesn’t help you when it comes to being selected as a friend, cooperative partner, or mate, because it doesn’t signal anything special about your value to others. What you are looking for in that context is not to be doing well on an absolute level, but to be doing better than others.

 If everyone has an iPhone, no one has an iPhone

To place this in a simple example: if you want to get picked for the basketball team, you want to be taller than other people; increasing everyone’s height by 3 inches doesn’t uniquely benefit you, as your relative position and desirability remain the same. On a related note, if you are doing well on some absolute metric but could be doing better, remaining content with one’s lot in life and forgoing those additional benefits is not the type of psychology one would predict to have proven adaptive. All else being equal, the male satisfied with a single mate who forgoes an additional one will be out-reproduced by the male who takes the second as well. Examples like these help highlight the positional aspects of human satisfaction: even though our day-to-day lives are no doubt generally happier because people aren’t dying from smallpox and we have cell phones, people are often less happy than we might expect because so much of that happiness is not determined by one’s absolute state. Instead, our happiness is determined by our relative state: how well we could be doing relative to our current status, and how much we offer socially relative to others.

A similar logic was applied in a recent paper by Conroy-Beam, Goetz, & Buss (2016) examining relationship satisfaction. The researchers were interested in testing the hypothesis that relationship satisfaction is not about how well one’s partner matches one’s ideal preferences in some absolute sense; instead, satisfaction is more likely a product of (a) whether more attractive alternative partners are available and (b) whether one is desirable enough to attract one of them. One might say that people are less concerned with how much they like their spouse and more concerned with whether they could get a better possible spouse: if one can move up in the dating world, then satisfaction with the current partner should be relatively low; if one can’t move up, one ought to be satisfied with what one already has. After all, it makes little sense to abandon your mate for not meeting your preferences if your other options are worse.

These hypotheses were tested in a rather elegant and unique way across three studies, all of which utilized a broadly-similar methodology (though I’ll only be discussing two). The core of each involved participants who were currently in relationships completing four measures: one concerning how important 27 traits would be in an ideal mate (on a 7-point scale), another concerning how well those same traits described their current partner, a third regarding how those traits described themselves, and finally rating their relationship satisfaction.

To determine how well a participant’s current partner fulfilled their preferences, the squared difference between the participant’s ideal rating and their actual partner’s rating was summed across all 27 traits, and the square root of that sum was taken: in effect, the Euclidean distance between ideal and actual partner. This generated a single number indexing how far off from the ideal an actual partner was across a large number of traits: the larger the number, the worse the fit. The same distance was then computed between each participant’s ideal and every other participant’s actual partner; in other words, the authors calculated what percentage of other people’s actual mates fit the preferences of each participant better than their current partner did. Finally, the authors calculated the discrepancy in mate value between the participant and their partner. This was done in a three-step process, the gist of which is that they calculated how well the participant and their partner each met the average ideals of the opposite sex. If you are closer to the opposite sex’s average ideal than your partner is, you have the higher mate value (i.e., are more desirable to others); if you are further away, you have the lower mate value.
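To make that three-step computation concrete, here is a minimal sketch in Python. The randomly generated ratings and the single pooled average ideal (standing in for the sex-specific averages) are my simplifications for illustration, not the authors’ exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_traits = 260, 27
ideals = rng.uniform(1, 7, (n_participants, n_traits))    # ideal-partner ratings
partners = rng.uniform(1, 7, (n_participants, n_traits))  # actual-partner ratings
selves = rng.uniform(1, 7, (n_participants, n_traits))    # self ratings

# (1) Partner fit: Euclidean distance between ideal and actual partner
# across the 27 traits (larger = worse fit).
fit = np.linalg.norm(ideals - partners, axis=1)

# (2) Share of other participants' actual partners who sit closer to this
# participant's ideal than their own partner does.
better_fits = np.empty(n_participants)
for i in range(n_participants):
    others = np.delete(partners, i, axis=0)
    distances = np.linalg.norm(ideals[i] - others, axis=1)
    better_fits[i] = np.mean(distances < fit[i])

# (3) Mate value discrepancy: distance from the (pooled) average ideal,
# partner minus self; positive values mean you sit closer to the ideal
# than your partner does, i.e., you have the higher mate value.
avg_ideal = ideals.mean(axis=0)
mv_discrepancy = (np.linalg.norm(avg_ideal - partners, axis=1)
                  - np.linalg.norm(avg_ideal - selves, axis=1))

# These three values (plus the interaction of (2) and (3)) were then
# entered as predictors in a regression on relationship satisfaction.
```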

 It’s just that simple!

In the interest of cutting through the mathematical complexity: assuming you were taking the survey, three values were calculated for you, corresponding to (1) how well your actual partner matched your ideal, (2) what percentage of actual mates out in the world were better overall fits, and (3) how much more or less desirable you are to others relative to your partner. These values were then entered into a regression predicting relationship satisfaction. As it turned out, in the first study (N = 260), the first value – how well one’s partner matched one’s ideal – barely predicted relationship satisfaction at all (β = .06); by contrast, the number of other potential people who might make better fits was a much stronger predictor (β = -.53), as was the difference in relative mate value between the participant and their partner (β = .11). There was also an interaction between these latter two values (β = .21). As the authors summarized these results:

“Participants lower in mate value than their partners were generally satisfied regardless of the pool of potential mates; participants higher in mate value than their partners became increasingly dissatisfied with their relationships as better alternative partners became available”

So, if your partner is already more attractive than you, then you probably consider yourself pretty lucky. Even if there are a great number of better possible partners out there for you, you’re not likely to be able to attract them (you got lucky once dating up; better to not try your luck a second time). By contrast, if you are more attractive than your partner, then it might make sense to start looking around for better options. If few alternatives exist, you might want to stick around; if many do, then switching might be beneficial.

The second study addressed the point that partners in these relationships are not passive bystanders when it comes to being dumped; they’re wary of the possibility of their partner seeking greener pastures. For instance, if you understand that your partner is more attractive than you, you likely also understand (at least intuitively) that they might try to find someone who suits them better than you do (because they have that option). If you view being dumped as a bad thing (perhaps because you can’t do better than your current partner), you might try to do more to keep them around. Translating that into a survey, Conroy-Beam et al. (2016) asked participants to indicate how often they had engaged in each of 38 mate retention tactics over the past year. These covered a broad range of behaviors, including calling to check up on one’s partner, asking to deepen commitment to them, derogating potential alternative mates, buying gifts, and performing sexual favors, among others. Participants also filled out the mate preference measures as before.

The results from the first study regarding satisfaction were replicated. Additionally, as expected, there was a positive relationship between retention behaviors and relationship satisfaction (β = .20): the more satisfied one was with a partner, the more one behaved in ways that might help keep them around. There was also a negative relationship between trust and these mate retention behaviors (β = -.38): the less one trusted a partner, the more one behaved in ways that might discourage them from leaving. While that might sound strange at first – why encourage someone you don’t trust to stick around? – it is fairly easy to understand to the extent that perceptions of partner trust intuitively track the probability that your partner can do better than you: it’s easier to trust someone who doesn’t have alternatives than someone who might be tempted.

It’s much easier to avoid sinning when you don’t live around an orchard

Overall, I found this research an ingenious way to examine relationship satisfaction and partner fit across a wide range of traits. There are, of course, some shortcomings to the paper, which the authors do mention: all the traits were given equal weighting (meaning that the fit for “intelligent” counted as much as the fit for “dominant” when determining how well your partner suited you), and the pool of potential mates was not restricted to a local sample (that is, it matters less whether people across the country fit your ideal better than your current mate than whether people in your immediate vicinity do). However, given the fairly universal features of human mating psychology and the strength of the obtained results, these do not strike me as fatal to the design in any way; if anything, they raise the prospect that the predictive strength of this approach could be improved further by tailoring it to specific populations.

References: Conroy-Beam, D., Goetz, C., & Buss, D. (2016). What predicts romantic relationship satisfaction and mate retention intensity: mate preference fulfillment or mate value discrepancies? Evolution & Human Behavior, DOI: http://dx.doi.org/10.1016/j.evolhumbehav.2016.04.003

Morality, Alliances, And Altruism

Having one’s research ideas scooped is part of academic life. Today, for instance, I’d like to talk about some research quite similar in spirit to work I had intended to do as part of my dissertation (but did not, as it didn’t make the cut in the final approved package). Even if my name isn’t on it, it is still pleasing to see the results I had anticipated. The idea itself arose about four years ago, when I was discussing the curious case of Tucker Max’s donation to Planned Parenthood being (eventually) rejected by the organization. To quickly recap, Tucker was attempting to donate half a million dollars to the organization, essentially receiving little more than a plaque in return. The donation was rejected, it would seem, out of fear of building an association between the organization and Tucker, as some people perceived Tucker to be a less-than-desirable social asset. This, of course, is rather strange behavior, and we would recognize it as such if it were observed in any other species (e.g., “this cheetah refused a free meal for her and her cubs because the wrong cheetah was offering it”); refusing free benefits is just peculiar.

“Too rich for my blood…”

As it turns out, this pattern of behavior is not unique to the Tucker Max case (or the Kim Kardashian one…); it has recently been empirically demonstrated by Tasimi & Wynn (2016), who examined how children respond to altruistic offers from others, contingent on the moral character of said others. In their first experiment, 160 children between the ages of 5 and 8 were recruited to make an easy decision: they were shown pictures of two people, told that both people wanted to give them stickers, and asked to pick which one they wanted to receive the stickers from. In the baseline conditions, one person was offering 1 sticker, while the other was offering either 2, 4, 8, or 16 stickers. As such, it should come as no surprise that the person offering more stickers was almost universally preferred (71 of the 80 children in these conditions wanted the person offering more, regardless of how many more).

Now that we’ve established that more is better, we can consider what happened in the second condition where the children received character information about their benefactors. One of the individuals was said to always be mean, having hit someone the other day while playing; the other was said to always be nice, having hugged someone the other day instead. The mean person was always offering more stickers than the nice one. In this condition, the children tended to shun the larger quantity of stickers in most cases: when the sticker ratio was 2:1, less than 25% of children accepted the larger offer from the mean person; the 4:1 and 8:1 ratios were accepted about 40% of the time, and the 16:1 ratio 65% of the time. While more is better in general, it is apparently not better enough for children to overlook the character information at times. People appear willing to forgo receiving altruism when it’s coming from the wrong type of person. Fascinating stuff, especially when one considers that such refusals end up leaving the wrongdoers with more resources than they would otherwise have (if you think someone is mean, wouldn’t you be better off taking those resources from them, rather than letting them keep them?).

This pattern was replicated with 64 very young children (approximately one year old). In this experiment, the children observed a puppet show in which two puppets offered them crackers, one offering a single cracker and the other offering either 2 or 8. Again, unsurprisingly, the majority of children accepted the larger offer, regardless of how much larger it was (24 of 32 children). In the character-information condition, one puppet was shown to be a helper, assisting another puppet in retrieving a toy from a chest, whereas the other puppet was a hinderer, preventing another from retrieving a toy. The hindering puppet, as before, now offered the greater number of crackers, whereas the helper offered only one. When the hindering puppet was offering 8 crackers, his offer was accepted about 70% of the time, which did not differ from the baseline group. However, when the hindering puppet was offering only 2, the acceptance rate was a mere 19%. Even young children, it would seem, are willing to refuse altruism from wrongdoers, provided the difference in offers isn’t too large.

“He’s not such a bad guy once you get $10 from him”

While neat, these results beg for a deeper explanation as to why we should expect such altruism to be rejected. I believe hints of this explanation are provided by the way Tasimi & Wynn (2016) write about their results:

Taken together, these findings indicate that when the stakes are modest, children show a strong tendency to go against their baseline desire to optimize gain to avoid ‘‘doing business” with a wrongdoer; however, when the stakes are high, children show more willingness to ‘‘deal with the devil…”

What I find strange about that passage is that children in these experiments were not “doing business” or “making deals” with the altruists; there was no quid pro quo going on. The children were no more doing business with the others than they are doing business with a breastfeeding mother. Nevertheless, there appears to be an implicit assumption being made here: an individual who accepts altruism from another is expected to pay that altruism back in the future. In other words, merely receiving altruism from another generates the perception of a social association between donor and recipient.

This creates an uncomfortable situation for the recipient in cases where the donor has enemies. Those enemies are often interested in inflicting costs on the donor or, at the very least, withholding benefits from him. In the latter case, this makes a social association with the donor less beneficial than it otherwise might be, since the donor will have fewer expected future resources to invest in others if others don’t help him; in the former case, not only does the previous logic hold, but the enemies of your donor might begin to inflict costs on you as well, so as to dissuade you from helping him. To put this in a quick example: Jon – your friend – goes out and hurts Bob, say, by sleeping with Bob’s wife. Bob and his friends, in response, both withhold altruism from Jon (as punishment) and might even be inclined to attack him for his transgression. If they perceive you as helping Jon – either by providing him with benefits or by preventing them from hurting him – they might be inclined to withhold benefits from, or punish, you as well until you stop helping Jon, as a means of indirect punishment. To turn the classic phrase, the friend of my enemy is my enemy (just as the enemy of my enemy is my friend).

What cues might they use to determine whether you’re Jon’s ally? One likely useful cue is whether Jon directs altruism towards you. If you are accepting his altruism, this is probably a good indication that you will be inclined to reciprocate it later (else you risk being labeled a social cheater or free rider). If you wish to avoid condemnation and punishment by proxy, then, one route is to refuse benefits from questionable sources. This risk can be overcome, however, in cases where the morally-questionable donor provides a large enough benefit, which, indeed, was precisely the pattern of results observed here. What counts as “large enough” should be expected to vary as a function of a few things, most notably the size and nature of the transgressions, as well as the degree of expected reciprocity. For example, large donations from morally-questionable donors should be more acceptable to the extent the donation is made anonymously rather than publicly, as anonymity might reduce the perceived social association between donor and recipient.

You might also try only using “morally clean” money

Importantly (as far as I’m concerned), this data fits well within my theory of morality – where morality is hypothesized to function as an association-management mechanism – but not particularly well with other accounts: altruistic accounts of morality should predict that more altruism is still better; dynamic coordination says nothing about accepting altruism, as giving isn’t morally condemned; and self-interest/mutualistic accounts would, I think, also suggest that taking more money would be preferable, since you’re not trying to dissuade others from giving. While I can’t help but feel some disappointment that I didn’t carry this research out myself, I am both happy with the results that came of it and satisfied with the methods utilized by the authors. Getting research ideas scooped isn’t so bad when they turn out well anyway; I’m just happy to see my main theory supported.

References: Tasimi, A. & Wynn, K. (2016). Costly rejection of wrongdoers by infants and children. Cognition, 151, 76-79.

Morality, Empathy, And The Value Of Theory

Let’s solve a problem together: I have some raw ingredients that I would like to transform into my dinner. I’ve already managed to prepare and combine the ingredients, so all I have left to do is cook them. How am I to solve this problem of cooking my food? Well, I need a good source of heat. Right now, my best plan is to get in my car and drive around for a bit, as I have noticed that, after I have been driving for some time, the engine gets quite hot. I figure I can use the heat generated by driving to cook my food. It would come as no surprise if you had a couple of objections to my suggestion, mostly focused on the point that cars were never designed to solve the problems posed by cooking. Sure, they do generate heat, but that’s really a byproduct of their intended function. Further, the heat they produce isn’t particularly well controlled or evenly distributed. Depending on how I position my ingredients or the temperature they require, I might end up with a partially-burnt, partially-raw dinner that is likely also full of oil, gravel, and other debris kicked up into the engine. Not only is the car engine inefficient at cooking, then, it’s also unsanitary. You’d probably recommend I try using a stove or oven instead.

“I’m not convinced. Get me another pound of bacon; I’m going to try again”

Admittedly, this example is egregious in its silliness, but it does make its point well: while I noted that my car produces heat, I misunderstood the function of the device more generally and tried to use it to solve a problem inappropriately as a result. The same logic also holds in cases where you’re dealing with evolved cognitive mechanisms. I examined such an issue recently, noting that punishment doesn’t seem to do a good job as a mechanism for inspiring trust, at least not relative to its alternatives. Today I want to take another run at the underlying issue of matching proximate problem to adaptive function, this time examining a different context: directing aid to the great number of people around the world who need altruism to stave off death and non-lethal, but still quite severe, suffering (problems like malnutrition and infectious disease). If you want to inspire people to increase the amount of altruism directed towards these needy populations, you will need to appeal to some component parts of our psychology, so what parts should those be?

The first step in solving this problem is to think about what cognitive systems might increase the amount of altruism directed towards others, and then examine the adaptive function of each to determine whether they will solve the problem particularly efficiently. Paul Bloom attempted a similar analysis (about three years ago, but I’m just reading it now), arguing that empathetic cognitive systems seem like a poor fit for the global altruism problem. Specifically, Bloom makes the case that empathy seems more suited to dealing with single-target instances of altruism, rather than large-scale projects. Empathy, he writes, requires an identifiable victim, as people are giving (at least proximately) because they identify with the particular target and feel their pain. This becomes a problem, however, when you are talking about a population of 100 or 1000 people, since we simply can’t identify with that many targets at the same time. Our empathetic systems weren’t designed to work that way and, as such, augmenting their outputs somehow is unlikely to lead to a productive solution to the resource problems plaguing certain populations. Rather than cause us to give more effectively to those in need, these systems might instead lead us to over-invest further in a single target. Though Bloom isn’t explicit on this point, I feel he would likely agree that this has something to do with empathetic systems not having evolved because they solved the problems of others per se, but rather because they did things like help the empathetic person build relationships with specific targets, or signal their qualities as an associate to those observing the altruistic behavior.

Nothing about that analysis strikes me as distinctly wrong. However, provided I have understood his meaning properly, Bloom goes on to suggest that the matter of helping others involves the engagement of our moral systems instead (as he explains in this video, he believes empathy “fundamentally…makes the world worse,” in the moral sense of the term, and he also writes that there’s more to morality – in this case, helping others – than empathy). The real problem with this idea is that our moral systems are not altruistic systems, even if they contain altruistic components (in much the same way that my car is not a cooking mechanism even if it does generate heat). This can be summed up in a number of ways, but the simplest is a study by Kurzban, DeScioli, & Fein (2012) in which participants were presented with the footbridge dilemma (“Would you push one person in front of a train – killing them – to save five people from getting killed by it in turn?”). If one were interested in being an effective altruist in the sense of delivering the greatest number of benefits to others, pushing is definitely the way to go, under the simple logic that five lives saved is better than one life spared (assuming all lives have equal value). Our moral systems typically oppose this conclusion, however, suggesting that saving the lives of the five is impermissible if it means we need to kill the one. What is noteworthy about the Kurzban et al (2012) paper is that you can increase people’s willingness to push the one if the people in the dilemma (both being pushed and saved) are kin.

Family always has your back in that way…

The reason for this increase in pushing when dealing with kin, rather than strangers, seems to have something to do with our altruistic systems that evolved for delivering benefits to close genetic relatives; what we call kin-selected mechanisms (mammary glands being a prime example). This pattern of results from the footbridge dilemma suggests there is a distinction between our altruistic systems (that benefit others) and our moral ones; they function to do different things and, as it seems, our moral systems are not much better suited to dealing with the global altruism problem than empathetic ones. Indeed, one of the main features of our moral systems is nonconsequentialism: the idea that the moral value of an act depends on more than just the net consequences to others. If one is seeking to be an effective altruist, then, using the moral system to guide behavior seems to be a poor way to solve that problem because our moral system frequently focuses on behavior per se at the expense of its consequences. 

That’s not the only reason to be wary of the power of morality to solve effective altruism problems either. As I have argued elsewhere, our moral systems function to manage associations with others, most typically by strategically manipulating our side-taking behavior in conflicts (Marczyk, 2015). Provided this description of morality’s adaptive function is close to accurate, the metaphorical goal of the moral system is to generate and maintain partial social relationships. These partial relationships, by their very nature, oppose the goals of effective altruism, which are decidedly impartial in scope. The reasoning of effective altruism might, for instance, suggest that it would be better for parents to spend their money not on their child’s college tuition, but rather on relieving dehydration in a population across the world. Such a conclusion would conflict not only with the outputs of our kin-selected altruistic systems, but also with other aspects of our moral systems. As some of my own forthcoming research finds, people do not appear to perceive much of a moral obligation for strangers to direct altruism towards other strangers, but they do perceive something of an obligation for friends and family to help each other (specifically when threatened by outside harm). Our moral obligations towards existing associates make us worse effective altruists (and, in Bloom’s sense of the word, morally worse people in turn).

While Bloom does mention that no one wants to live in that kind of strictly utilitarian world – one in which the welfare of strangers is treated equally to the welfare of friends and kin – he does seem to be advocating we attempt something close to it when he writes:

Our best hope for the future is not to get people to think of all humanity as family—that’s impossible. It lies, instead, in an appreciation of the fact that, even if we don’t empathize with distant strangers, their lives have the same value as the lives of those we love.

Appreciation of the fact that the lives of others have value is decidedly not the same thing as behaving as if they have the same value as the lives of those we love. Like most everyone else in the world, I want my friends and family to value my welfare above the welfare of others; substantially so, in fact. There are obvious adaptive benefits to such relationships, such as knowing that I will be taken care of in times of need. By contrast, if others showed no particular care for my welfare, but rather just sought to relieve as much suffering as they could wherever it existed in the world, there would be no benefit to my retaining them as associates; they would provide me with assistance or they wouldn’t, regardless of the energy I spent (or didn’t) maintaining a social relationship with them. Asking the moral system to be a general-purpose altruism device is unlikely to be much more successful than asking my car to be an efficient oven, asking people to treat others the world over as if they were kin, or asking you to empathize with 1,000 people. It represents an incomplete view of the functions of our moral psychology. While morality might be impartial with respect to behavior, it is unlikely to be impartial with regard to the social value of others (which is why, also in my forthcoming research, I find that stealing to defend against an outside agent of harm is rated as more morally acceptable than stealing to buy recreational drugs).

“You have just as much value to me as anyone else; even people who aren’t alive yet”

To top this discussion off, it is also worth mentioning those pesky, unintended consequences that sometimes accompany even the best of intentions. By relieving deaths from dehydration, malaria, and starvation today, you might be ensuring greater harm for future generations in the form of accelerated climate change, species extinction, and habitat destruction brought about by sustaining larger global human populations. Assuming for the moment that were true, would feeding starving people and keeping them alive today be morally wrong? Both options – withholding altruism when it could be provided and ensuring harm for future generations – might get the moral stamp of disapproval, depending on the reference group (from the perspective of future generations dealing with global warming, it’s bad to feed; from the perspective of the starving people, it’s bad not to feed). This is why a slight majority of participants in Kurzban et al (2012) reported that pushing and not pushing can both be morally unacceptable courses of action. If we are relying on our moral sense to guide our behavior in this instance, then, we are unlikely to be very successful in our altruistic endeavors.

References: Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptation for moral judgment. Evolution & Human Behavior, 33, 323-333.

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.

Punishment Might Signal Trustworthiness, But Maybe…

As one well-known saying attributed to Maslow goes, “when all you have is a hammer, everything looks like a nail.” If you can only do one thing, you will often apply that thing as a solution to problems it doesn’t fit particularly well. For example, while a hammer makes for a poor cooking utensil, if you are tasked with cooking a meal and given only a hammer, you might try to make the best of a bad situation, using the hammer as an inefficient, makeshift knife, spoon, and spatula. That you might meet with some degree of success in doing so does not tell you that hammers function as cooking implements. Relatedly, if I then gave you a hammer and a knife, and tasked you with the same cooking jobs, I would likely observe that hammer use drops precipitously while knife use increases quite a bit. It is also worth bearing in mind that if the only task you have is cooking, the only conclusion I’m realistically capable of drawing concerns whether a tool is designed for cooking. That is, if I give you a hammer and a knife and tell you to cook something, I won’t be able to draw the inference that hammers are designed for dealing with nails, because nails just aren’t present in the task.

Unless one eats nails for breakfast, that is

While all that probably sounds pretty obvious in the cooking context, a very similar setup appears to have been used recently to study whether third-party punishment (the punishment of actors by people not directly affected by their behavior; hereafter TPP) functions to signal the trustworthiness of the punisher. In their study, Jordan et al (2016) had participants play a two-stage economic game. The first stage was a TPP game with three players: player A, the helper, is given 30 cents; player B, the recipient, is given nothing; and player C, the punisher, is given 20 cents. The helper can choose to either give the recipient 15 cents or nothing. If the helper gives nothing, the punisher then has the option to pay 5 cents to reduce the helper’s pay by 15 cents, or not do so. In this first stage, the first participant would either play one round as a helper or a punisher, or play two rounds: one in the role of the helper and another in the role of the punisher.

The second stage of this game involved a second participant. This participant observed the behavior of the people playing the first game, then played a trust game with the first participant. In this trust game, the second participant is given 30 cents and decides how much, if any, to send to the first participant. Any amount sent is tripled, and the first participant then decides how much of that amount, if any, to send back. The working hypothesis of Jordan et al (2016) is that TPP will be used as a signal of trustworthiness, but only when it is the only possible signal; when participants have the option to send better signals of trustworthiness – such as when they are in the role of the helper, rather than the punisher – punishment will lose its value as a signal of trust. By contrast, helping should always serve as a good signal of trustworthiness, regardless of whether punishment is an option.
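To keep the payoff structure of the two stages straight, here is a minimal sketch in Python (amounts in cents; the function names and structure are mine, for illustration, not the authors’ materials):

```python
def tpp_stage(helper_gives: bool, punisher_punishes: bool):
    """Stage 1, the TPP game: the helper starts with 30, the recipient
    with 0, and the punisher with 20. Helping transfers 15; if the helper
    gives nothing, the punisher may pay 5 to strip 15 from the helper."""
    helper, recipient, punisher = 30, 0, 20
    if helper_gives:
        helper -= 15
        recipient += 15
    elif punisher_punishes:
        punisher -= 5
        helper -= 15
    return helper, recipient, punisher


def trust_stage(sent: int, returned: int):
    """Stage 2, the trust game: the second participant starts with 30 and
    sends some amount, which is tripled before the first participant
    decides how much to send back."""
    assert 0 <= sent <= 30 and 0 <= returned <= 3 * sent
    second_participant = 30 - sent + returned
    first_participant = 3 * sent - returned
    return second_participant, first_participant
```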

Indeed, this is precisely what they found. When the first participant was only able to punish, the second participant tended to trust punishers more, sending them 16% more in the trust game than non-punishers; in turn, the punishers also tended to be slightly more trustworthy, sending back 8% more than non-punishers. So, the punishers were slightly, though not substantially, more trustworthy than the non-punishers when punishing was all they could do. However, when participants were in the helper role (and not the punisher role), those who transferred money to the recipient were in turn trusted more – being sent an average of 39% more in the trust game than non-helpers – and were, in fact, more trustworthy – returning an average of 25% more than non-helpers. Finally, when the first participant was in the role of both the punisher and the helper, punishment was less common (30% of participants in both roles punished, whereas 41% of participants who were only punishers did) and, controlling for helping, punishers were only trusted with 4% more in the second stage and actually returned 0.3% less.

The final task was less about trust and more about upper-body strength

To sum up, then, when people only had the option to punish others, punishment behavior was used by observers as a cue to trustworthiness. However, when helping was possible as well, punishment ceased to predict trustworthiness. From this set of findings, the authors make the rather strange conclusion that “clear support” was found for their model of punishment as signaling trustworthiness. My enthusiasm for that interpretation is a bit more tepid. To understand why, we can return to my initial example: you have given people a tool (a hammer/punishment) and a task (cooking/a trust game). When they use this tool in the task, you see some results, but they aren’t terribly efficient (16% more trusted and 8% more returned). Then, you give them a second tool (a knife/helping) to solve the same task. Now the results are much better (39% more trusted, 25% more returned). In fact, when they have both tools, they don’t seem to use the first one to accomplish the task as much (punishment falls 11%) and, when they do, they don’t end up with better outcomes (4% more trusted, 0.3% less returned). From that data alone, I would say that the evidence does not support the inference that punishment is a mechanism for signaling trustworthiness. People might try using it in a pinch, but its value seems greatly diminished compared to other behaviors.  

Further, the only tasks people were doing involved playing dictator and trust games. If punishment serves some other purpose beyond signaling trustworthiness, you wouldn’t be able to observe it there, because people aren’t in the right contexts for it to be observed. To make that point clear, we could consider other examples. First, let’s consider murder. If I condemn murder morally and, as a third party, punish someone for engaging in murder, does this tell you that I am more trustworthy than someone else who doesn’t punish it? Probably not; almost everyone condemns murder, at least in the abstract, but the costs of engaging in punishment aren’t the same for all people. Someone who is just as trustworthy might not be willing or able to suffer the associated costs. What about something a bit more controversial: let’s say that, as a third party, I punish people for obtaining or providing abortions. Does hearing about my punishment make me seem like a more trustworthy person? That probably depends on what side of the abortion issue you fall on.

To put this in more precise detail, here’s what I think is going on: the second participant – the one sending money in the trust game, so let’s call him the sender – primarily wants to get as much money back as possible in this context. Accordingly, he is looking for cues that the first participant – the one being trusted – is an altruist. One good cue for altruism is, well, altruism. If the sender sees that the first participant has behaved altruistically by giving someone else money, this is a pretty good cue for future altruism. Punishment, however, is not the same thing as altruism. From the point of view of the person benefiting from the punishment, TPP is indeed altruistic; from the point of view of the target of that TPP, the punishment is spiteful. While punishment can contain this altruistic component, it is more about trading off the welfare of others than providing benefits to people per se. That altruistic component of punishment can be used as a cue for trustworthiness in a pinch when no other information is available, but that does not suggest to me that sending such a signal is its only, or even its primary, function.

Sure, they can clean the floors, but that’s not really why I hired them

In the real world, people’s behaviors are never limited to just the punishment of perpetrators. If there are almost always better ways to signal one’s trustworthiness, then TPP’s role in that regard is likely quite small. For what it’s worth, I happen to think that the role of TPP has more to do with using transient states of need to manage associations (friendships) with others, as such an explanation works well outside the narrow boundaries of the present paper, where things other than unfairness are punished and people seek to do more than make as much money as possible. Finding a good friend is not the same thing as finding a good altruist, and friendships do not usually resemble trust games. However, when all you are observing is unfairness and cooperation, TPP might end up looking a little bit like a mechanism for building trust. Sometimes. If you sort of squint a bit.

References: Jordan, J., Hoffman, M., Bloom, P., & Rand, D. (2016). Third-party punishment as a costly signal of trustworthiness. Nature, 530, 473-476.

Thoughtful Suggestions For Communicating Sex Differences

Having spent quite a bit of time around the psychological literature – both academic and lay pieces alike – there are some words and phrases I can no longer read without an immediate, knee-jerk sense of skepticism, as if they taint everything that follows and precedes them. Included in this list are terms like bias, stereotype, discrimination, and, for present purposes, fallacy. These words elicit such skepticism on my end because of the repeated failure of people who use them to consistently produce high-quality work or convincing lines of reasoning. This is almost surely due to the perceived social stakes when such terms are used: if you can make members of a particular group appear uniquely talented, victimized, or otherwise valuable, you can subsequently direct social support towards or away from various ends. When the goal of argumentation becomes persuasion, truth is not a necessary component and can be pushed aside. Importantly, the people engaged in such persuasive endeavors do not usually recognize they are treating information or arguments differently, contingent on how it suits their ends.

“Of course I’m being fair about this”

There are few areas of research that engender as much conflict – philosophically and socially – as sex differences, and it is here those words appear regularly. As there are social reasons people might wish to emphasize or downplay sex differences, it has steadily become impossible for me to approach most writing on the topic assuming it is even somewhat unbiased. That’s not to say every paper is hopelessly mired in a particular worldview, rejecting all contrary data, mind you; just that I don’t expect them to reflect earnest examinations of the capital-T Truth. Speaking of which, a new paper by Maney (2016) recently crossed my desk; the paper concerns itself with how sex differences get reported and how they ought to be discussed. Maney (2016) appears to take a dim view of research on sex differences in general and attempts to highlight some perceived fallacies in people’s understanding of them. Unfortunately, for someone trying to educate people about issues surrounding the sex difference literature, the paper does not come off as one written by someone possessing a uniquely deep knowledge of the topic.

The first fallacy Maney (2016) seeks to highlight is the idea that the sexes form discrete groups. Her logic for why this is not the case revolves around the idea that while the sexes do indeed differ to some degree on a number of traits, they also often overlap a great deal on them. Instead, Maney (2016) argues that we ought not to be asking whether the sexes differ on a given trait, but rather by how much they do. Indeed, she even puts the word ‘differences’ in quotes, suggesting that these ‘differences’ between the sexes aren’t, in many cases, real. I like this brief section, as it highlights well why I have grown to distrust words like fallacy. Taking her points in reverse order: if one is interested in how much groups (in this case, sexes) differ, then one must have, at least implicitly, already answered the question of whether they do. After all, if the sexes did not differ, it would be pointless to talk about the extent of those non-differences; there simply wouldn’t be variation. Second, I know of zero researchers whose primary interest resides in answering the question of whether the sexes differ to the exclusion of the extent of those differences. As far as I can tell, Maney (2016) is condemning a strange class of imaginary researchers who are content to find that a difference exists and then never look into it further or provide more details. Finally, I see little value in noting that the sexes often overlap a great deal when it comes to explaining the areas in which they do not. In much the same way, if you were interested in understanding the differences between humans and chimpanzees, you would be unlikely to get very far by noting that we share a great deal of genes in common. Simply put, you can’t explain differences with similarities. If one’s goal is to minimize the perception of differences, though, this would be a helpful move.

The second fallacy that Maney (2016) seeks to tackle is the idea that a sex difference in behavior can be attributed to differing brain structures. Her argument on this front is that it is logically invalid to do the following: (1) note that some brain structure differs between men and women, (2) note that this brain structure is related to a given behavior on which they also differ, and so (3) conclude that the sex difference in brain structure is responsible for the sex difference in behavior. While this argument is true within the rules of formal logic, it is clear that differences in brain structure will result in differences in behavior; the only way that idea could be false would be if brain structure were not connected to behavior, and I don’t know of anyone crazy enough to make that argument. Researchers engaging in the ‘fallacy’ thus might not get the specifics right all the time, but their underlying approach is fine: if a difference exists in behavior (between sexes, species, or individuals), there will exist some corresponding structural differences in the brain. The tools we have for studying the matter are a far cry from perfect, making inquiry difficult, but that’s a different issue. Relatedly, noting that some formal bit of logic is invalid is assuredly not the same thing as demonstrating that a conclusion is incorrect or the general approach misguided. (Also worth noting: the validity issue stops being a problem when conclusions are probabilistic, rather than definitive.)
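Spelled out schematically (my notation, not Maney’s), the inference pattern at issue is:

```latex
% P1: brain structure S differs between the sexes.
% P2: S is related to behavior B, on which the sexes also differ.
% C:  the sex difference in S causes the sex difference in B.
\begin{align*}
P_1 &: S_M \neq S_F \\
P_2 &: S \text{ is related to } B \\
C   &: (S_M \neq S_F) \ \text{causes} \ (B_M \neq B_F)
\end{align*}
```

The conclusion doesn’t follow deductively (some other structure could be driving the behavioral difference), but as a probabilistic bet it is a reasonable one, which is the point made above.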

“Sorry, but it’s not logical to conclude his muscles might determine his strength”

The third fallacy Maney (2016) addresses is the idea that sex differences in the brain must be preprogrammed or fixed; she attempts to dispel the notion that sex differences are rooted in biology and thus impervious to experience. In short, she is arguing against the idea of hard genetic determinism. Oddly enough, I have never met a single genetic determinist in person; in fact, I’ve never even read an article that advanced such an argument (though maybe I’ve just been unusually lucky…). As every writer on the subject I have come across has emphasized – often in great detail – the interactive nature of genes and environments in determining the direction of development, it again seems like Maney (2016) is attacking philosophical enemies that are more imagined than real. She could have, for instance, quoted researchers who made claims along the lines of, “trait X is biologically determined and impervious to environmental inputs during development”; instead, it looks like everyone she cites for this fallacy is making a similar criticism of others, rather than making the claims being criticized (though I did not check those references myself, so I’m not 100% sure). Curiously, Maney (2016) doesn’t seem to be at all concerned about the people who, more or less, disregard the role of genetics or biology in understanding human behavior; at the very least, she doesn’t devote any portion of her paper to addressing that particular fallacy. That rather glaring omission – coupled with what she does present – could leave one with the impression that she isn’t really trying to present a balanced view of the issue.

With those ostensible fallacies out of the way, there are a few other claims worth mentioning in the paper. The first is that Maney (2016) seems to have a hard time reconciling the idea of sexual dimorphisms – traits that occur in one form typical of males and one typical of females – with the idea that the sexes overlap to varying degrees on many of them, such as height. While it’s true enough that you can’t tell someone’s sex for certain if you only know their height, that doesn’t mean you can’t make some good guesses that are liable to be right a lot more often than they’re wrong. Indeed, the only dimorphisms she mentions are the presence of sex chromosomes, external genitalia, and gonads, and she then continues to write as if these were of little to no consequence. Much like with height, however, there couldn’t be selection for any physical sex differences if the sexes did not behave differently. Since behavior is controlled by the brain, physical differences between the sexes, like height and genitalia, are usually also indicative of some structural differences in the brain. This is the case whether the dimorphism is one of degree (like height) or kind (like chromosomes).

Returning to the main point, outside of these all-or-none traits, it is unclear what Maney (2016) would consider a genuine difference, much less any clear justification for that standard. For example, she notes some research that found a 90% overlap in interhemispheric connectivity between the male and female distributions, but then seems to imply that the corresponding 10% non-overlap does not reflect a ‘real’ sex difference. We would surely notice a 10% difference in other traits, like height, IQ, or number of fingers, but I suppose, in the realm of the brain, 10% just doesn’t cut it.

Maney (2016) also seems to take an odd stance when it comes to explanations for these differences. In one instance, she writes about a study on multitasking that found a sex difference favoring men; a difference which, we are told, was explained by a ‘much larger difference in video game experience,’ rather than sex per se. Great, but what are we to make of that ‘much larger’ sex difference in video game experience? That finding, too, requires an explanation, and one is not present. Perhaps video game experience is explained more by, I don’t know, competitiveness than sex, but then what are we to explain competitiveness with? These kinds of explanations usually end up going nowhere in a hurry unless they eventually land on some kind of adaptive endpoint, as once a trait’s reproductive value is explained, you don’t need to go any further. Unfortunately, Maney (2016) seems to oppose evolutionary explanations for sex differences, scolding those who propose ‘questionable’ functional or evolutionary explanations as genetic determinists who see no role for sociocultural influences. In her rush to condemn those genetic determinists (whom, again, I have never met or read, apparently), Maney’s (2016) piece appears to fall victim to the warning laid out by Tinbergen (1963) several decades ago: rather than seeking to improve the shape and direction of evolutionary, functional analyses, Maney (2016) instead recommends that people simply avoid them altogether.

“Don’t ask people to think about these things; you’ll only hurt their unisex brains”

This is a real shame, as evolutionary theory is the only tool available for providing a deeper understanding of these sex differences (as well as our physical and psychological form more generally). Just as species will differ in morphology and behavior to the extent they have faced different adaptive problems, so too will the sexes within a species. By understanding the different challenges the sexes have faced historically, one can get a much clearer sense of where psychological and physical differences will – and will not – be expected to exist, as well as why (this extra level of ‘why’ is important, as it allows you to better figure out where an analysis has gone wrong if the predictions don’t work out). Maney (2016), it would seem, even missed a golden opportunity within her paper to explain to her readers that evolutionary explanations complement, rather than supplant, more proximate explanations when quoting an abstract that seemed to contrast the two. I suspect this opportunity was missed because she is either legitimately unaware of that point or does not understand it (judging from the tone of her paper), believing (incorrectly) instead that evolutionary means genetic, and therefore immutable. If that is the case, it would be rather ironic for someone who does not seem to have much understanding of the evolutionary literature to be lecturing others on how it ought to be reported.

References: Maney, D. (2016). Perils and pitfalls of reporting sex differences. Philosophical Transactions B, 371, 1-11.

Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20, 410-433.

 

Clues To The Function Of Moralistic Punishment

One of the major questions I’ve spent the better part of the last few years trying to work up an answer to is the matter of why – in the adaptive sense – people punish others as third parties moralistically. That is to ask: why do people condemn and punish others for engaging in behavior perceived to be “wrong”? This is not the same question as why people take revenge on others for their behavior; that kind of punishment isn’t terribly mysterious. Chimps, for instance, will punish those who cheated them, but they seem fairly uninterested in punishing those who cheated others (Riedl, Jensen, Call, & Tomasello, 2012). Humans are a bit different in that regard: not only do we involve ourselves in the disputes of others, but we do so on behalf of people who aren’t kin or even social allies. This is rather strange behavior, even though we do not usually perceive it to be so. I’m not the only one who has attempted an evolutionary explanation for moralistic punishment, of course; in my hubris, however, I do feel that the explanation I ended up floating tends to fit the existing data better than other, contemporary theories. Today, I wanted to try and demonstrate that fit by considering some relatively recent empirical research.

“I don’t know, man; I think it might be a little baggy. Try a size down”

The paper in question (Kurzban, DeScioli, & O’Brien, 2007) concerns audience effects on punishment. Specifically, it addresses the following question: do people punish others more (as third parties) when they are being watched and their decisions are being made public, relative to when they’re anonymous? The question was examined across two experiments. In the first of these, 43 participants viewed the outcome of a previous trust game in which one player had the choice between two different outcomes: $39 for himself and $1 for the other player, or $20 for both. In this case, the player had selected the former, selfish option. The participants in question were provided with $7 and told they could spend any amount of that money – from $0 to $7 – to deduct from the selfish player. For each dollar they spent on deductions, the selfish player would lose $3. Importantly, the participants were either doing so in conditions under which their spending would be unknown to anyone but themselves (n = 24), or conditions where the experimenter would know how much each particular participant spent (n = 19). In the anonymous condition, 38% of participants spent at least some money on punishment, whereas in the public condition, 47% opted to punish. Importantly, however, the amount punished varied between these conditions: those in the public condition tended to spend more on punishment than those in the anonymous condition when they did punish. For instance, though precise numbers are not reported, approximately 20% of the participants in the public condition paid the full $7 to punish the selfish player, as compared with 0% in the private condition.
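To make the deduction arithmetic concrete, here is a minimal sketch of the game’s payoff structure in Python (the function and variable names are mine, not the authors’; the default values follow the figures reported above):

```python
def third_party_punishment(spend, budget=7, selfish=39, victim=1, ratio=3):
    """Final payoffs when a third party spends `spend` dollars on
    punishment: each dollar spent deducts `ratio` dollars from the
    selfish player. Defaults follow the figures reported above."""
    assert 0 <= spend <= budget
    return {
        "punisher": budget - spend,         # the punisher bears the cost personally
        "selfish_player": selfish - ratio * spend,
        "victim": victim,                   # punishment does not compensate the victim
    }

# Maximal punishment, as roughly 20% of the public condition chose:
print(third_party_punishment(7))
# {'punisher': 0, 'selfish_player': 18, 'victim': 1}
```

Note that the victim’s payoff never changes: whatever the punisher buys with that $7, it isn’t compensation for the person harmed, which is part of what makes the behavior adaptively puzzling.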

The second experiment examined people’s responses to a different kind of economic game – a prisoner’s dilemma – where one player attempted to cooperate with their partner but was met with defection. In terms of the outcome, this means that rather than both players getting $25 (as they would have if both had cooperated), the cooperative partner got $0 while the defector got $30. The participants in this experiment were shown this outcome and given $10. As before, the participants could spend any amount of this $10 to deduct from the defecting player at the 3-to-1 rate. In this case, the punishment decisions would either be known by (a) no one else but the participant (n = 31), (b) the experimenter alone (n = 26), or (c) the entire group of participants (n = 30). Again, fewer participants opted to punish at all when punishment was anonymous (42%) relative to when punishment was known to either the experimenter (65%) or the whole group (67%); the amount spent on punishment increased as well (Ms = $1.06, $2.54, and $3.17, respectively). Further, those in the public conditions also tended to display more anger, disgust, and contempt at the cheating, suggesting that they were more upset by the transgression when other people were watching (or were at least pretending to be).

The existence of audiences seemed to have an important impact on determining moralistic punishment: not only did the presence of other people affect the percent of third parties willing to punish at all, but it also positively influenced how much they did punish. In a sentence, we could say that the presence of observers was being used as an input by the cognitive systems determining moralistic sentiments. While this may sound like a result that could have been derived without needing to run the experiments, the simplicity and predictability of these findings by no means makes them trivial on a theoretical level when it comes to answering the question, “what is the adaptive value of punishment?” Any theory seeking to explain morality in general – and moral punishment in particular – needs to be able to present a plausible explanation for why cues to anonymity (or lack thereof) are being used as inputs by our moral systems. What benefits arise from public punishment that fail to materialize in anonymous cases?

“If you’re good at something, never do it for free…or anonymously”

The first theoretical explanation for morality that these results cut against is the idea that our moral systems evolved to deliver benefits to others per se. One of the common forms of this argument is that our moral systems evolved because they delivered benefits to the wider group (in the form of maintaining beneficial cooperation between members), even if doing so was costly in terms of individual fitness. This argument clearly doesn’t work for explaining the present data, as the potential benefits that could be delivered to others by deterring cheating or selfishness do not (seem to) change contingent on anonymity, yet moral punishment does.

These results also cut against some aspects of mutualistic theories for morality. This class of theory suggests that, broadly speaking, our moral sense responds primarily to behavior perceived to be costly to the punisher’s personal interests. In short, third parties do not punish perpetrators because they have any interest in the welfare of the victim, but rather because punishers can enforce their own interests through that punishment, however indirectly. To place that idea into a quick example, I might want to see a thief punished not because I care about the people he harmed, but rather because I don’t want to be stolen from, and punishing the thief for their behavior reduces that probability for me. Since my interests in deterring certain behaviors do not change contingent on my anonymity, the mutualistic account might feel some degree of threat from the present data. As a rebuttal to that point, mutualistic theories could make the argument that my punishment being made public would deter others from stealing from me to a greater extent than if they did not know I was the one responsible for punishing. “Because I punished theft in a case where it didn’t affect me,” the rebuttal goes, “this is a good indication I would certainly punish theft which did affect me. Conversely, if I fail to punish transgressions against others, I might not punish them when I’m the victim.” While that argument seems plausible at face value, it’s not bulletproof either. Just because I might fail to go out of my way to punish someone else who was, say, unfaithful in their relationship, that does not necessarily mean I would tolerate infidelity in my own. This rebuttal would require an appreciable correspondence between my willingness to punish those who transgress against others and those who do so against me. As much of the data I’ve seen suggests a weak-to-absent link on that front, in both humans and non-humans, the argument might not hold much empirical water.

By contrast, the present evidence is perfectly consistent with the association-management explanation posited in my theory of morality. In brief, this theory suggests that our moral sense helps us navigate the social world, identifying good and bad targets of our limited social investment, and uses punishment to build and break relationships with them. Morality, essentially, is an ingratiation mechanism; it helps us make friends (or, alternatively, not alienate others). Under this perspective, the role of anonymity makes quite a bit of sense: if no one will know how much you punished, or whether you did at all, your ability to use punishment to manage your social associations is effectively compromised. Accordingly, third-party punishment drops off in a big way. On the other hand, when others will know about their punishment, participants become more willing to invest in it in the face of a better estimated social return. This social return need not necessarily reside with the actual person being harmed, either (who, in this case, was not present); it can also come from other observers of the punishment. The important part is that your value as an associate can be publicly demonstrated to others.

The first step isn’t to generate value; it’s to demonstrate it

The lines between these accounts can seem a bit fuzzy at times: good associates are often ones who share your values, providing some overlap between mutualistic and association accounts. Similarly, punishment, at least from the perspective of the punisher, is altruistic: they are suffering a cost to provide someone else with a benefit. This provides some overlap between the association and altruistic accounts as well. The important point for differentiating these accounts, then, is to look beyond their overlap into domains where they make different predictions about outcomes, or predict the same outcome will obtain, but for different reasons. I feel the results of the present research not only help do that (they are inconsistent with group-selection accounts), but also present opportunities for future research directions as well (such as investigating whether punishment as a third party appreciably predicts revenge).

References: Kurzban, R., DeScioli, P., & O’Brien, E. (2007). Audience effects on moralistic punishment. Evolution & Human Behavior, 28, 75-84.

Riedl, K., Jensen, K., Call, J., & Tomasello, M. (2012). No third-party punishment in chimpanzees. Proceedings of the National Academy of Sciences, 109, 14824-14829.

The Politics Of Fear

There’s an apparent order of operations frequently observed in human reasoning: politics first, facts second. People appear perfectly willing to accept flawed arguments or incorrect statistics they would otherwise immediately reject, just so long as they support the reasoner’s point of view; Greg Cochran documented a few such cases (in his simple and eloquent style) a few days ago on his blog. Such a bias in our reasoning ability is not only useful – inasmuch as persuading people to join your side of a dispute tends to carry benefits, regardless of whether you’re right or wrong – but it’s also common: we can see evidence of it in every group of people, from the uneducated to those with PhDs and decades of experience in their field. In my case, the most typical context in which I encounter examples of this facet of our psychology – like many of you, I would suspect – is through posts shared or liked by others on social media. Recently, these links have been cropping up concerning the topic of fear. More precisely, there are a number of writers who think that people (or at least those who disagree with them) are behaving irrationally regarding their fears of Islamic terrorism and the threat it poses to their life. My goal here is not to say that people are being rational or irrational about such things – I happen to have a hard time finding substance in such terms – but rather to provide a different perspective than the ones offered by the authors; one that is likely in the minority among my professional and social peers.

You can’t make an omelette without alienating important social relations 

The first article on the chopping block was published on the New York Times website in June of last year. The article is entitled, “Homegrown extremists tied to deadlier toll than Jihadists in U.S. since 9/11,” and it attempts to persuade the reader that we, as a nation, are all too worried about the threat Islamic terrorism poses. In other words, American fears of terrorism are wildly out of proportion to the actual threat it presents. This article attempted to highlight the fact that, in terms of the number of bodies, right-wing, anti-government violence was twice as dangerous as Jihadist attacks in the US since 9/11 (48 deaths from non-Muslims; 26 by Jihadists). Since we seem to dedicate more psychological worry to Islam, something was wrong there. There are three important parts of that claim to be considered. First, a very important word in that last sentence is “was,” as the body count evened out by early December of that year (currently at 48 to 45). This updated statistic yields some interesting questions: were those people who feared both types of attacks equally (if they existed) being rational or not on December 1st? Were those who feared right-wing attacks more than Muslim ones suddenly being irrational on the 2nd? The idea these questions are targeting is whether or not fears can only be viewed as proportionate (or rational) with the aid of hindsight. If that’s the case, rather than saying that some fears are overblown or irrational, a more accurate statement would be that such fears “have not yet been founded.” Unless those fears have a specific cut-off date (e.g., the fear of being killed in a terrorist attack during a given time period), making claims about their validity is something that one cannot do particularly well.

The second important point of the article to consider is that the count begins one day after a Muslim attack that killed over 3,000 people (immediately; that doesn’t count those who were injured or later died as a consequence of the events). Accordingly, if that count is set back just slightly, the fear of being killed in a Muslim terrorist attack would be much more statistically founded, at least in a very general sense. This naturally raises the question of why the count starts when it does. The first explanation that comes to mind is that the people doing the counting (and reporting about the counting) are interested in presenting a rather selective and limited view of the facts that support their case. They want to denigrate the viewpoints of their political rivals first, and so they select the information that helps them do that while subtly brushing aside the information that does not. That seems like a fairly straightforward case of motivated reasoning, but I’m open to someone presenting a viable alternative point of view as to why the count needs to start when it does (such as, “their primary interest is actually in ignoring outliers across the board”).

Saving the largest for last, the final important point of the article to consider is that it appears to neglect the matter of base rates entirely. The attacks labeled as “right-wing” left a greater absolute number of bodies (at least at the time the article was written), but that does not mean we learned that right-wing attacks (or individuals) are more dangerous. To see why, we need to consider another question: how many bodies should we have expected? The answer to that question is by no means simple, but we can do a (very) rough calculation. In the US, approximately 42% of the population self-identifies as Republican (our right-wing population), while about 1% identifies as Muslim. If both groups were equally likely to kill others, then we should expect right-wing terrorists to leave 42 bodies for every 1 that Muslim terrorists do. That ratio would reflect a genuine parity in threat. Given a count suggesting that this ratio was 2-to-1 at the time the article was written, and 1-to-1 later that same year, we might reasonably conclude that the Muslim population, per individual member, is actually quite a bit more prone to killing others in terrorist attacks; if we factor in the 9/11 number, that ratio becomes something closer to 0.01-to-1, which is a far cry from demographic expectations.
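To make that base-rate arithmetic explicit, here is the back-of-the-envelope version as a short Python sketch (the population shares and body counts are the approximate figures cited above; this is a crude illustration of the logic, not a formal risk analysis):

```python
# Approximate figures from the text above.
right_wing_share = 0.42   # ~42% of the US self-identifies as Republican
muslim_share = 0.01       # ~1% self-identifies as Muslim
deaths_right_wing = 48    # post-9/11 body counts cited above
deaths_muslim = 45

# If both groups killed at equal per-capita rates, the body counts
# should mirror the population shares:
expected_ratio = right_wing_share / muslim_share      # 42-to-1
observed_ratio = deaths_right_wing / deaths_muslim    # ~1.07-to-1

# Deaths per unit of population share (a crude per-capita rate):
rate_right_wing = deaths_right_wing / right_wing_share  # ~114
rate_muslim = deaths_muslim / muslim_share              # 4500

print(f"Expected under parity: {expected_ratio:.0f}-to-1")
print(f"Observed: {observed_ratio:.2f}-to-1")
print(f"Per-capita rate ratio (Muslim : right-wing): {rate_muslim / rate_right_wing:.0f}-to-1")
```

The point of the sketch is only this: equal body counts from groups of wildly unequal size imply wildly unequal per-capita rates.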

Thankfully, you don’t have to report inconvenient numbers

Another example comes from The New Yorker, published just the other day (perhaps it is something about New York that makes people publish these pieces), entitled, “Thinking rationally about terror.” The insinuation, as before, is that people’s fears about these issues do not correspond well to the reality. In order to make the case that people’s fears are wrongheaded, Lawrence Krauss leans on a few examples. One of these concerns the recent shootings in Paris. According to Lawrence, these attacks represented an effective doubling of the overall murder rate in Paris from the previous year (2.6 murders per 100,000 residents), but that’s really not too big of a deal because it just makes Paris as dangerous as New York City, and people aren’t that worried about being killed in NYC (or are they? No data on that point is mentioned). In fact, Lawrence goes on to say, the average Paris resident was about as likely to have been killed in a car accident during any given year as to have been killed during the mass shooting. This point is raised, presumably, to highlight an irrationality: people aren’t concerned about being killed by cars for the most part, so they should be just as unconcerned about being killed by a terrorist if they want to be rational.

This point about cars is yet another fine example of an author failing to account for base rates. Looking at the raw body count is not enough, as people in Paris likely interact with hundreds (or perhaps even thousands; I don’t have any real sense for that number) of cars every day for extended periods of time. By contrast, I would imagine Paris residents interact markedly less frequently with Muslim extremists. Per unit of time spent around them, then, cars likely pose a much, much lower threat of death than Muslim extremists do. Further, people do fear the harm caused by cars (we look both ways before crossing a street, we restrict licenses to individuals who demonstrate their competence to handle the equipment, we set speed limits, and so on), and it is likely that the harm they inflict would be much greater if such fears were not present. In much the same way, it is also possible that the harms caused by terrorist groups would be much higher if people decided that such things were not worth getting worked up about and took no steps to assure their safety early on. Do considerations of these base rates and future risks fall under the umbrella of “rational” thinking? I would like to think so, and yet they seemed so easily overlooked by someone chiding others for being irrational: Lawrence at least acknowledges that future terror risks might increase for places like Paris, but notes that that kind of life is pretty much normal for Israel; the base-rate problem is not even mentioned.
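The shape of that per-exposure argument can be shown with a toy calculation. Every number below is a made-up placeholder (as noted above, real exposure figures aren’t available to me); the sketch only illustrates how equal body counts can hide wildly unequal per-exposure risks:

```python
# All values are hypothetical placeholders, chosen only to show the logic.
deaths_by_cars = 130               # hypothetical annual traffic deaths in a city
deaths_by_attacks = 130            # hypothetical deaths from an attack that year

person_hours_near_cars = 2_000_000_000     # hypothetical: hours/day x millions of residents
person_hours_near_extremists = 1_000_000   # hypothetical: vastly rarer exposure

risk_per_hour_cars = deaths_by_cars / person_hours_near_cars
risk_per_hour_attacks = deaths_by_attacks / person_hours_near_extremists

# Identical body counts, but a 2000-fold difference in per-exposure risk:
print(risk_per_hour_attacks / risk_per_hour_cars)  # 2000.0
```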

While there’s more I could say on these topics, the major point I hope to get across is this: if you want to know why people experience fear about certain topics, it’s probably best not to start your analysis with the assumption that these people are wrong to feel the way they do. Letting one’s politics do the thinking is not a reliable way to get at a solid understanding of anything, even if it might help further your social goals. If we were interested in understanding the “why” behind such fears, we might begin, for instance, with the prospect that many people likely fear historically relevant, proximate cues of danger, including groups of young, violent males making threats to one’s life based on group membership, and cases where those threats are followed through on and made credible. Even if such individuals currently reside many miles away, and even if only a few such threats have been acted upon, and even if the dangerous ones represent a small minority of the population, fearing them for one’s own safety does not – by default – seem to be an unreasonable thing to do; neither does fearing them for the safety of one’s relatives, social relations, or wider group members.

“My odds of getting hurt were low, so this isn’t worth getting worked up over”

Now, as I mentioned, all of this is not to say that people ought to fear some particular group or not; my current interests do not reside in directing your fears or their scope. I have no desire to tell you that your fears are well founded or completely off base (in no small part because I earnestly don’t know if they are). My interests are much more general than that, as this kind of thinking is present in all kinds of different contexts. There’s a real problem in assuming the truth of your perspective and beginning your search for evidence only after the fact. The problem can run so deep that I actually find myself surprised to see someone take up the position that they were wrong after an earnest dig through the available evidence. Such an occurrence should be commonplace if rationality or truth were the goal in these debates, as people get things wrong (at least to some extent) all the time, especially when such opinions are formed in advance of such knowledge. Admitting to incorrect thinking does require, however, that one is willing to, at least occasionally, sacrifice a belief that used to be held quite dear; it requires looking like a fool publicly now and again; it even requires working against your own interests sometimes. These are things you will have to do; not just things that the opposition will. As such, I suspect these kinds of inadequate lines of reasoning will continue to pervade such discussions, which is a bit of a problem when the lives of others literally hang in the balance of the outcome.

Preferences For Equality?

People are social creatures. This is a statement that surprises no one, seeming trivial to the same degree it is widely recognized (which is to say, “very”). That many people will recognize such a statement in the abstract and nod their head in agreement when they hear it does not mean they will always apply it to their thinking in particular cases, though. Let’s start with a context in which people will readily apply this idea to their thinking about the world: a video in which pairs of friends watch porn together while being filmed by others who have the intention to put the video online for viewing by (at the time of writing) about 5,700,000 people worldwide. The video is designed to get people’s reactions to an awkward situation, but what precisely is it about that situation which causes the awkward reactions? As many of you will no doubt agree, I suspect that answer has to do with the aforementioned point that people are social creatures. Because we are social creatures, others in our environment will be relatively inclined (or disinclined) to associate with us contingent on, among other things, our preferences. If some preferences make us seem like a bad associate to others – such as, say, our preferences concerning what kind of pornography arouses us, or our interest in pornography more generally – we might try to conceal those preferences from public view. As people try to conceal their preferences, we likely observe a different pattern of reactions to – and searches for – pornography in the linked video, compared to what we might expect if those people were in the comfort and privacy of their own homes.

Or, in a pinch, in the privacy of an Apple store or Public Library 

Basically, we would be wrong to think we get a good sense for these people’s pornography preferences from their viewing habits in the video, as people’s behavior will not necessarily match their desires. With that in mind, we can turn to a rather social human behavior: punishment. Now, punishment might not be the first example of social behavior that pops into people’s heads when they think about social things, but make no mistake about it; punishment is quite social. A healthy degree of human gossip centers around what we believe ought to be and not be punished; a fact which, much to my dismay, seems to take up a majority of my social media feeds at times. More gossip still concerns details of who was punished, how much they were punished, why they were punished, and, sometimes, this information will lead to other people joining in the punishment themselves or trying to defend someone else from it. From this analysis, we can conclude a few things, chief among which are that (a) some portion of our value as an associate to others (what I would call our association value) will be determined by the perception of our punishment preferences, and (b) punishment can be made more or less costly, contingent on the degree of social support our punishment receives from others.

This large social component of punishment means that observing the results of people’s punishment decisions does not necessarily inform you as to their preferences for punishment; sometimes people might punish others more or less than they would prefer to, were it not for these public variables being a factor. With that in mind, I wanted to review two pieces of research to see what we can learn about human punishment preferences from people’s behavior. The first piece claims that human punishment mechanisms have – to some extent – evolved to seek equal outcomes between the punisher and the target of their punishment. In short, if someone does some harm to you, you will only desire to punish them to the extent that it will make you two “even” again. An eye for an eye, as the saying goes; not an eye for a head. The second piece makes a much different claim: that human punishment mechanisms are not designed for fairness at all, seeking instead to inflict large costs on others who harm you, so as to deter future exploitation. Though neither of these papers assesses punishment in a social context, I think they have something to tell us about that all the same. Before getting to that point, though, let’s start by considering the research in question.

The first of these papers is from Bone & Raihani (2015). Without getting too bogged down in the details, the general methods of the paper go as follows: two players enter into a game together. Player A begins the game with $1.10, while player B begins with a payment ranging from $0.60 to $1.10. Player B is then given a chance to “steal” some of player A’s money for himself. The important part about this stealing is that it would either leave player B (a) still worse off than A, (b) with an equal payment to A, or (c) with a better payment than A. After the stealing phase, player A has the chance to respond by “punishing” player B. This punishment was either efficient – where for each cent player A spent, player B would lose three – or inefficient – where for each cent player A spent, player B would only lose one. The results of this study turned up the following findings of interest: first, player As who were stolen from tended to punish the player Bs more, relative to when the As were not stolen from. Second, player As who had access to the more efficient punishment option tended to spend more on punishment than those who had access to the less efficient option. Third, those player As who had access to the efficient punishment option also punished player Bs more in cases where B ended up better off than them. Finally, when participants in that last case were punishing the player Bs, the most common amount of punishment they enacted was the amount which would leave both players A and B with the same payment. From these findings, Bone & Raihani (2015) conclude that:

Although many of our results support the idea that punishment was motivated primarily by a desire for revenge, we report two findings that support the hypothesis that punishment is motivated by a desire for equality (with an associated fitness-leveling function…)

In other words, the authors believe they have observed the output of two distinct preferences: one for punishing those who harm you (revenge), and one for creating equality (fitness leveling). But were people really that concerned with “being even” with their agent of harm? I take issue with that claim, and I don’t believe we can conclude that from the data. 

We’re working on preventing exploitation; not building a frame.

To see why I take issue with that claim, I want to consider an earlier paper by Houser & Xiao (2010). This study involves a slightly different setup. Again, two players are involved in a game: player A begins the game by receiving $8. Player A could then transfer some amount of that money (either $0, $2, $4, $6, or $8) to player B, and then keep whatever remained for himself (another condition existed in which this transfer amount was randomly determined). Following that transfer, both players received $2. Finally, player B was given the following option: to pay $1 for the option to reduce player A’s payment by as much as they wanted. The results showed the following pattern: first, when the allocations were random, player Bs rarely punished at all (under 20%) and, when they did punish, they tended to punish the other player irrespective of inequality. That is, they were equally likely to deduct at all, no matter the monetary difference, and the amount they deducted did not appear to be aimed at achieving equality. By contrast, of the player Bs that received $0 or $2 intentionally, 54% opted to punish player A and, when they did punish, were most likely to deduct so much from player A that they ended up better off than him (that outcome obtained between 66-73% of the time). When given free rein over the desired punishment amount, then, punishers did not appear to be seeking equality as an outcome. This finding, the authors conclude, is inconsistent with the idea that people are motivated to achieve equality per se.

What both of these studies do, then, is vary the cost of punishment. In the first, punishment is either inefficient (a 1-to-1 ratio) or quite efficient (a 3-to-1 ratio); in the second, punishment is unrestricted in its efficiency (an X-to-1 ratio). In all cases, as punishment becomes more efficient and less costly, we observe people engaging in more of it. What we learn about people’s preferences for punishment, then, is that they seem to be based, in some part, on how costly punishment is to enact. With those results, I can now turn to the matter of what they tell us about punishment in a social context. As I mentioned before, the costs of engaging in punishment can be augmented or reduced to the extent that other people join in your disputes. If your course of punishment is widely supported by others, it’s easier to enact; if your punishment is opposed by others, not only is it costlier to enact, but you might in turn get punished for your excessive punishment. This idea is fairly easy to wrap one’s mind around: stealing a piece of candy from a corner store does not usually warrant the death penalty, and people would likely oppose (or attack) the store owner or some government agency if they attempted to hand down such a draconian punishment for the offense.
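That shared cost structure can be summarized in a few lines of Python (my framing, not the authors’; the function name is hypothetical):

```python
def cost_to_punish(amount_removed, ratio):
    """Dollars a punisher must spend to remove `amount_removed`
    dollars from their target at a ratio-to-1 deduction rate."""
    return amount_removed / ratio

# Bone & Raihani (2015): inefficient vs. efficient conditions.
print(cost_to_punish(6, ratio=1))  # 6.0 -> removing $6 costs the punisher $6
print(cost_to_punish(6, ratio=3))  # 2.0 -> the same deduction at a third of the cost

# Houser & Xiao (2010): a flat $1 buys a deduction of any size, so the
# marginal cost of each extra dollar removed is effectively zero.
print(cost_to_punish(6, ratio=float("inf")))  # 0.0
```

Seen this way, the two sets of results line up on a single axis: the cheaper each deducted dollar becomes, the more punishment people buy.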

Now many of you might be thinking that third parties were not present in the studies I mentioned, so it would make no sense for people to be thinking about how these non-existent third parties might feel about their punishment decisions. Such an intuition, I feel, would be a mistake. This brings me back to the matter of pornography briefly. As I’ve written before, people’s minds tend to generate physiological arousal to pornography despite there being no current adaptive reason for that arousal. Instead, our minds – or, more precisely, specific cognitive modules – attend to particular proximate cues when generating arousal that historically correlated with opportunities to increase our genetic fitness. In modern environments, where that link between cue and fitness benefit is broken by digital media providing similar proximate cues, the result is maladaptive output: people get aroused by an image, which makes about as much adaptive sense as getting aroused by one’s chair.

The same logic can likely be applied to punishment here as well, I feel: the cognitive modules in our mind responsible for punishment decisions evolved in a world of social punishment. Not only would your punishment decisions become known to others, but those others might join in the conflict on your side or opposing you. As such, proximate cues that historically correlated with the degree of third party support are likely still being utilized by our brains in these modern experimental contexts where that link is being intentionally broken and interactions are anonymous and dyadic. What is likely being observed in these studies, then, is not an aversion to inequality as much as an aversion to the costs of punishment or, more specifically, the estimated social and personal costs of engaging in punishment in a world that other people exist in.

“We’re here about our concerns with your harsh punishment lately”

When punishment is rather cheap for the individual in question to enact – as it was in Houser & Xiao (2010) – the social factor probably plays less of a role in determining the amount of punishment enacted. You can think of that condition as one in which a king is punishing a subject who stole from him: while the king is still sensitive to the social costs of punishment (punish too harshly and the rabble will rise up and crush you…probably), he is free to punish someone who wronged him to a much greater degree than your average peasant on the street is. By contrast, in Bone & Raihani (2015), the punisher is substantially less powerful and, accordingly, more interested in the (estimated) social support factors. You can think of those conditions as ones in which a knight or a peasant is trying to punish another peasant. This could well yield inequality-seeking punishment in the former study and equality-seeking punishment in the latter, as different groups require different levels of social support, and so scale their punishment accordingly. Why third parties might be interested in inequality between the disputants is a different matter entirely, but recognizing the existence of that factor is important for understanding why inequality matters to second parties at all.

References: Bone, J. & Raihani, N. (2015). Human punishment is motivated both by a desire for revenge and a desire for equality. Evolution & Human Behavior, 36, 323-330.

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters, 109, 20-23.

Benefits To Bullying

When it comes to assessing hypotheses of evolutionary function, there is a troublesome pair of intuitions which frequently trips many people up. The first of these is commonly called the naturalistic fallacy, though it also goes by the name of an appeal to nature: the idea that because something is natural, it ought to be good. As a typical argument along this line might go, because having sex is natural, we ought to – morally and socially – approve of it. The corresponding intuition to this is known as the moralistic fallacy: if something is wrong, then it’s not natural (or, alternatively, if something is good, it is natural). An argument using this type of reasoning might (and has, more or less) gone: because rape is morally wrong, it cannot be a natural behavior. In both cases, ‘natural’ is a bit of a wiggle word but, in general, it seems to refer to whether or not a species possesses some biological tendency to engage in the behavior in question. Put another way, ‘natural’ refers to whether a species possesses an adaptation(s) that functions so as to bring about a particular outcome. Extending these examples a little further, we might come up with the arguments that, because humans possess cognitive mechanisms which motivate sexual behavior, sex must be a moral good; however, because rape is a moral wrong, humans must not contain any adaptations that were selected for because they promoted such behavior.

An argument with which many people appear to disagree, apparently

This type of thinking is, of course, fallacious, as per the namesakes of the two fallacies. It’s quite easy to think of many moral wrongs which might increase one’s reproductive fitness (and thus select for adaptations that produce them), just as it is easy to think of morally virtuous behaviors that could lower one’s fitness: infanticide is certainly among the things people would consider morally wrong, and yet there is often an adaptive logic to be found in the behavior; conversely, while the ideal of universal altruism is praised by many as morally virtuous, altruistic behavior is often limited to contexts in which it will later be reciprocated or channeled towards close kin. As such, it’s probably for the best to avoid tethering one’s system of moral approval to natural-ness, or vice versa; you end up in some weird places philosophically if you do. Now this type of thinking is not limited to any particular group of people: scientists and laypeople alike can make use of these naturalistic and moralistic intuitions (intentionally or not), leading to cases where hypotheses of function are violently rejected for even considering that certain condemned behaviors might be the result of an adaptation for generating them, or other cases where weak adaptive arguments are made in the service of making other behaviors of which the arguer approves seem more natural and, accordingly, more morally acceptable.

With that in mind, we can turn to the matter of bullying: aggression enacted by more powerful individuals against weaker ones, typically peaking in frequency during adolescence. Bullying is a candidate behavior that might fall prey to the aforementioned fallacies because, well, it tends to generate many consequences people find unpleasant: having their lunch money taken, being hit, being verbally mocked, having slanderous rumors about them spread, or other such nastiness. As bullying generates such proximately negative consequences for its victims, I suspect that many people would balk at the prospect that bullying might reflect a class of natural, adaptive behaviors, resulting in the bully gaining greater access to resources and reputation; in other words, doing evolutionarily useful things. Now that’s not to say that if you were to start bullying people you would suddenly find your lot in life improving, largely because bullying others tends to carry consequences; many people will not sit idly by and suffer the costs of your bullying; they will defend themselves. In order for bullying to be effective, then, the bully needs to possess certain traits that minimize, withstand, or remove the consequences of this retaliation, such as greater physical formidability than their victim, a stronger social circle willing to protect them, or other means of backing up their aggression.

Accordingly, only those in certain conditions and possessing particular traits are capable of effectively bullying others (inflicting costs without suffering them in turn). Provided that is the case, those who engage in bullying behaviors more often might be expected to achieve correspondingly greater reproductive success, as the same traits that make bullying an effective strategy also make the bully an attractive mating prospect. It’s probably worse to select a mate unable to defend themselves from aggression, relative to one able and willing to do so; not only would your mate (and perhaps you) be exploited more regularly, but such traits may well be passed on to your children in turn, leaving them open for exploitation as well. Conversely, the bully able to exploit others can likely gain access to more plentiful resources, protect you from exploitation, and pass such useful traits along to their children. That bullying might have an adaptive basis was the hypothesis examined in a recent paper by Volk et al (2015). As noted in their introduction, previous data on the subject are consistent with the possibility that bullies are actually in relatively better condition than their victims, with bullies displaying comparable or better mental and physical health, as well as improved social and leadership skills, setting the stage for the prospect of greater mating success (as all of those traits are valuable in the mating arena). Findings like those run counter to some other suggestions floating around the wider culture that people bully others precisely because they lack social skills, intelligence, or are unhappy with themselves. While I understand that no one is particularly keen to paint a flattering picture of people they don’t like and their motives for engaging in behavior they seek to condemn, it’s important not to lose sight of reality while you try to reduce the behavior and condemn its perpetrators.

“Sure, he does hit me regularly, but he’s a really great guy otherwise”

Volk et al (2015) examined the mating success of bullies by correlating people’s self-reports of their bullying behavior with their reports of dating and sexual behavior across two samples: 334 younger adolescents (11-18 years old) and 143 college freshmen, all drawn from Canada. Both groups answered questions concerning how often they engaged in, and were a victim of, bullying behaviors, whether they had had sex and, if so, how many partners they’d had, whether they had dated and, if so, how many people they’d dated, as well as how likable and attractive they found themselves to be. Self-reports are obviously not the ideal measures of such things, but at times they can be the best available option.

Focusing on the bullying results, Volk et al (2015) reported a positive relationship between bullying and engaging in dating and sexual relationships in both samples: controlling for age, sex, reported victimization, attractiveness, and likability, bullying not only emerged as a positive predictor of whether the adolescent had dated or had sex at all (bullies were about 1.3 to 2 times more likely to have done so), but also correlated with the number of sexual and, sometimes, dating partners; those who bullied people more frequently tended to have a greater number of sexual partners, though this effect was modest (bs ranging from 0.2 to 0.26). By contrast, being a victim of bullying did not consistently or appreciably affect the number of sexual partners one had (while victimization was positively correlated with participants’ number of dating partners, it was not correlated with their number of sexual partners; this might reflect the possibility that those who seek to date frequently are viewed as competitors by other same-sex individuals and bullied in order to prevent such behavior from taking place, though that much is only speculation).

While this data is by no means conclusive, it does present the possibility that bullying is not indicative of someone who is in poor shape physically, mentally, or socially; quite the opposite, in fact. Indeed, that is probably why bullying often appears to be so one-sided: those being victimized are not doing more to fight back because they are aware of how well that would turn out for them. Understanding this relationship between bullying and sexual success might prove rather important for anyone looking to reduce the prevalence of bullying. After all, if bullying is providing access to desirable social resources – including sexual partners – it will be hard to shift the cost/benefit analysis away from bullying being the more attractive option, barring the introduction of more attractive alternatives for achieving that goal. If, for instance, bullying serves as a cue that potential mates might use for assessing underlying characteristics that make the bully more attractive to others, finding new, less harmful ways of signaling those traits (and getting bullies to use those instead) could represent a viable anti-bullying technique.

But, until then, this kid is going to get so laid

As these relationships are merely correlational, however, there are other ways of interpreting them. It could be possible, for example, that the relationship between bullying and sexual success is accounted for by those who bully being more coercive towards their sexual partners as well as their victims, achieving a greater number of sexual partners, but not in the healthiest fashion. This interpretation would be somewhat complicated by the lack of sex differences in the current data, however, as it seems unlikely that women who bully are also more likely to coerce their male partners into sex they don’t really want. The only sex difference reported involved the relationship between bullying and dating, with women in the older sample who bullied people more often having a greater number of dating relationships (r = 0.5), relative to men (r = 0.13), as well as a difference in the younger sample with respect to desire for dating relationships (female r = 0.28, male r = 0.03). It is possible, then, that men and women might bully others, at least at times, to obtain different goals, which ought to be expected when the interests of each sex diverge. Understanding those adaptive goals should prove key for effectively reducing bullying; at least, I feel that understanding would be more profitable than positing that bullies are mean because they wish to make others as miserable as they are, crave attention, or other such implausible evolutionary functions.

References: Volk, A., Dane, A., Marini, Z., & Vaillancourt, T. (2015). Adolescent bullying, dating, and mating: Testing an evolutionary hypothesis. Evolutionary Psychology, DOI: 10.1177/1474704915613909