Getting Off Your Phone: Benefits?

The videos will almost be as good as being there in person

If you’ve been out to any sort of live event lately – a concert or some similar gathering – you’ll often find yourself looking out over a sea of camera phones in the audience (perhaps through a camera yourself). This has often given me a sense of general unease, for two reasons: first, I’ve taken such pictures before and, generally speaking, they come out like garbage. It turns out it’s not the easiest thing in the world to capture clear audio in a video at a loud concert, or even a good picture if you’re not right next to the stage. But, more importantly, I’ve found such activities detract from the experience: either because you’re spending time on your phone instead of just watching what you’re there to see, or because it signals an interest in showing other people what you’re doing rather than just doing it and enjoying yourself. Some might say all those people taking pictures aren’t quite living in the moment, so to speak.

In fact, it has been suggested (Soares & Storm, 2018) that the act of taking a picture can actually make your memory for the event worse. Why might this be? There are two candidate explanations that come to mind: first, and perhaps most intuitively, screwing around on your phone is a distraction. When you’re busy trying to work the camera and get the right shot, you’re simply not paying as much attention to what you’re photographing. It’s a boring explanation, but a perfectly plausible one, just as texting makes people worse drivers: their attention is simply elsewhere.

The other explanation is a bit more involved, but also plausible. The basics go like this: memory is a biologically-costly thing. You need to devote resources to attending to information, creating memories, maintaining them, and calling them to mind when appropriate. If we remembered everything we ever saw, for instance, we would likely be devoting lots of resources to ultimately irrelevant information (no one really cares how many windows each building you pass on your way home from work has, so why remember it?), and finding the relevant memory amidst a sea of irrelevant ones would take more time. Those who store memories efficiently might thus be favored by selection pressures as they can more quickly recall important information with less investment. What does that have to do with taking pictures? If you happen to snap a picture, you now have a resource you could later consult for details. Rather than store this information in your head, you can just store it in the picture and consult the picture when needed. In this sense, the act of taking a picture may serve as a proximate cue to the brain that information needs to be attended to less deeply and committed less firmly to memory.

Too bad it won’t help everyone else forget about your selfies

Worth noting is that these explanations aren’t mutually exclusive: it could be true both that taking a picture is a cue that you don’t need to remember the information as well and that taking pictures is distracting. Since both could explain the same phenomenon, if you want to test whether either is true, you need a way of differentiating them: a context in which the two make opposing predictions about what would happen. As a bit of a spoiler, the research I wanted to cover today tries to do that, but ultimately fails at the task. Nevertheless, the information is still interesting, and appreciating why the research failed at its goal is useful for future designs, some of which I will list at the end.

Let’s begin with what the researchers did: they followed a classic research paradigm in this realm and had participants take part in a memory task. Participants were shown a series of images and then given a test to see how much they remembered about them. The key manipulation was that participants would either study the images without taking pictures, take a picture of each target before studying it, or take a picture and then delete it before studying the target. The thinking here was that – if the efficiency explanation were true – participants who took pictures they knew they wouldn’t be able to consult later – such as when the pictures are Snapchatted or deleted – would instead commit more of the information to memory. If you can’t rely on the camera to have the pictures, it’s an unreliable vehicle for memory offloading (the official term), and so we shouldn’t offload. By contrast, if the mere act of taking the picture was distracting and interfered with memory in some way because of that, whether the picture was deleted or not shouldn’t matter. The simple act of taking the picture should be what causes the memory deficits, and similar deficits should be observed regardless of whether the picture was saved or deleted.

Without going too deeply into the specifics, this is basically what the researchers found: when participants had taken a picture – regardless of whether it was deleted or stored – the memory deficits were similar. People remembered the images better when they weren’t taking pictures. Does this suggest that taking pictures impairs memory formation through attention, rather than through offloading?

Maybe the trash can is still a reliable offloading device

Not quite, and here’s why: imagine an experiment in which you were measuring how much participants salivated. You think that the mere act of cooking will get people to salivate, and so construct two conditions: one in which hungry people cook and then get to eat the food after, and another in which hungry people cook the food and then throw it away before they get to eat (and they know in advance they will be throwing it away). What you’ll find in both cases is that people salivate when cooking, because the sights and smells of the food are proximate cues of getting to eat. Some part of their brain is responding to those cues that signal food availability, even if the cues do not ultimately correspond to an ability to eat the food. The part of the brain that consciously knows it won’t be getting food isn’t the same part responding to those proximate cues. While one part of you understands you’ll be throwing the food away, another part disagrees and thinks, “these cues mean food is coming,” and you start salivating anyway.

This is basically the same problem the present research ran into. Taking a picture may be a proximate cue that information is stored somewhere else and so you don’t need to remember it as well, even if that part of the brain that is instructed to delete the picture believes otherwise. We don’t have one mind, but rather a series of smaller minds that may all be working with different assumptions and sets of information. Like a lot of research, then, the design here focuses too heavily on what people are supposed to consciously understand, rather than on what cues the non-conscious parts of the brain are using to generate behavior.

Indeed, the authors seem to acknowledge as much in their discussion, writing the following:

“Although the present results are inconsistent with an “explicit” form of offloading, they cannot rule out the possibility that through learned experience, people develop a sort of implicit transactive memory system with cameras such that they automatically process information in a way that assumes photographed information is going to be offloaded and available later (even if they consciously know this to be untrue). Indeed, if this sort of automatic offloading does occur then it could be a mechanism by which photo-taking causes attentional disengagement.”

All things considered, that’s a good passage, but one might wonder why that passage was saved for the end of their paper, in the discussion section. Imagine instead that this passage appeared in the introduction:

“While it is possible that operating a camera to take a picture disrupts participants’ attention and results in a momentary encoding deficit, it is also entirely possible that the mere act of taking a picture is a proximate cue used by the brain to determine how thoroughly (largely irrelevant) information needs to be encoded. Thus, our experiment doesn’t actually differentiate between these alternative hypotheses, but here’s what we’re doing anyway…”

Does your interest in the results of the paper go up or down at that point? Because that would effectively be the same thing the discussion section said. As such, it seems probable that the discussion passage represents an addition made to the paper after the fact, per a reviewer’s request. In other words, the researchers probably didn’t think the idea through as fully as they might have liked. With that in mind, here are a few other experimental conditions they could have run that would have been better at separating the hypotheses:

  • Have participants do something distracting with a phone that isn’t taking a picture (like typing out a word) before studying the target. If the effect isn’t picture-specific – if people simply remember less when they’ve been messing around on a phone – then the attention hypothesis would look better, especially if the impairments to memory are effectively identical.
  • Have an experimenter take the pictures instead of the participant. That way participants would not be distracted by using a phone at all, but still have a cue that the information might be retrievable elsewhere. However, the experimenter could also be viewed as a source of information themselves, so there could be another condition where an experimenter is simply present doing something that isn’t taking a picture. If an experimenter taking a picture results in worse memory as well, then it might be something about the knowledge of a picture in general causing the effect.
  • Better yet, if messing around with the phone is only temporarily disrupting encoding, then having participants take a picture of the target briefly and then wait a period (say, a minute) before viewing the target for the 15 seconds proper should help differentiate the two hypotheses. If the mere act of taking a picture in the past (whether deleted or not) causes participants to encode information less thoroughly because of proximate cues for efficient offloading, then this minor time delay shouldn’t alleviate those memory deficits. By contrast, if messing with the phone is just distracting people momentarily, the time delay should help counteract the effect.

These are all productive avenues that could be explored in the future for creating conditions where these hypotheses make different predictions, especially the first and third ones. Again, both could be true, and that could show up in the data, but these designs give the opportunity for that to be observed.

And, until the research is conducted, do yourself a favor and enjoy your concerts instead of viewing them through a small phone screen. (The caveat here is that it’s unclear whether such results would generalize, as in real life people decide what to take pictures of, rather than taking pictures of things they probably don’t really care about).

References: Soares, J. & Storm, B. (2018). Forget in a flash: A further investigation of the photo-taking-impairment effect. Journal of Applied Research in Memory & Cognition, 7, 154-160.

The Connection Between Economics and Promiscuity

When it comes to mating, humans are a rather flexible species. In attempting to make sense of this variation, a natural starting point for many researchers is to try and tackle what might be seen as the largest question: why are some people more inclined to promiscuity or monogamy than others? Though many answers can be given to that question, a vital step in building towards a plausible and useful explanation of the variance is to consider the matter of function (as it always is). That is, we want to be asking ourselves the question, “what adaptive problems might be solved by people adopting long- or short-term mating strategies?” By providing answers to this question we can, in turn, develop expectations for what kind of psychological mechanisms exist to help solve these problems, explanations for how they could solve them, and then go examine the data more effectively for evidence of their presence or absence.

It will help until the research process is automated, anyway

The current research I wanted to talk about today begins to answer the question of function by considering (among other things) the matter of resource acquisition. Specifically, women face greater obligate biological costs in pregnancy than men do. Because of this, men tend to be the more eager sex when it comes to mating and are often willing to invest resources to gain favor with potential mates (i.e., men are willing to give up resources for sex). Now, if you’re a woman, receiving this investment is an adaptive benefit, as it can help ensure the survival and well-being of both yourself and your offspring. The question then becomes, “how can women most efficiently extract these resources from men?” As far as women are concerned, the best answer – in an ideal world – is to extract the maximum amount of investment from the maximum number of men.

However, men have their own interests too; while they might be willing to pay-to-play, as it were, the amount they’re willing to give up depends on what they’re getting in return. What men are looking for (metaphorically or literally speaking) is what women have: a guarantee of sharing genes with their offspring. In other words, men are looking for paternity certainty. Having sex with a woman a single time increases the odds of being the father of one of her children, but only by a small amount. As such, men should be expected to prefer extended sexual access over limited access. Paternity confidence can also be reduced if a woman is having sex with one or more other men at the same time. This leads us to expect that men adjust their willingness to invest in women upwards if that investment can help them obtain one or both of those valued outcomes.

This line of reasoning led the researchers to develop the following hypothesis: as female economic dependence on male investment increases, so too should anti-promiscuity moralization. That is, men and women should both increase their moral condemnation of short-term sex when male investment is more valuable to women. For women, this expectation arises because promiscuity threatens paternity confidence, and so engaging in mating with multiple males should make it more difficult for them to obtain substantial male investment. Moreover, other women engaging in short-term sex similarly makes it more difficult for even monogamous women to demand male investment, and so those women would be condemned for their behavior as well. Conversely, since men value paternity certainty, they too should condemn promiscuity to a greater degree when their investment is more valuable, as they are effectively in a better position to bargain for what they want.

In sum, the expectation in the present study was that as female economic dependence increases, men and women should become more opposed to promiscuous mating.

“Wanted: Looking for paternity certainty. Will pay in cash”

This was tested in two different ways: in the first study, 656 US residents answered questions about their perceptions of female economic dependence on male investment in their social network, as well as their attitudes about promiscuity and promiscuous people. The correlation between the measures ended up being r = .28, which is a good proof of concept, though not a tremendous relationship (which is perhaps to be expected, given that multiple factors likely impact attitudes towards promiscuity). When economic dependence was placed into a regression to predict this sexual moralization, controlling for age, sex, religiosity, and conservatism in the first step, it was found that female economic dependence accounted for approximately 2% of the remaining variance in the wrongness of promiscuity ratings. That’s not nothing, to be sure, but it’s not terribly substantial either.
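
To make the arithmetic concrete: a zero-order r of .28 corresponds to about 8% of variance explained on its own, yet only ~2% remained after the controls entered the model. That shrinkage happens whenever the predictor overlaps with the controls. The numbers below for the control variable and its overlap with dependence are purely hypothetical, chosen only to illustrate how an ~8% zero-order figure can shrink to roughly the reported increment:

```python
# Hypothetical correlations (only r = .28 comes from the study;
# the other two are assumed for illustration).
r_yd = 0.28   # moralization <-> economic dependence (study 1's zero-order r)
r_yc = 0.45   # moralization <-> a control such as religiosity (assumed)
r_dc = 0.35   # economic dependence <-> that control (assumed)

# Variance explained by dependence alone:
r2_alone = r_yd ** 2  # about 7.8%

# R^2 with both predictors (standard two-predictor formula):
r2_full = (r_yd**2 + r_yc**2 - 2 * r_yd * r_yc * r_dc) / (1 - r_dc**2)

# Incremental variance dependence explains beyond the control:
r2_increment = r2_full - r_yc ** 2  # shrinks to roughly 2%
```

The more the predictor shares variance with the controls, the smaller its unique increment; the study's 2% figure is the unique part, not the zero-order relationship.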

In the second study, 4,626 participants from across the country answered the same basic questions, along with additional ones, like their (and their partner’s) personal income. Again, there was a small correlation (r = .23) between female economic dependence and wrongness-of-promiscuity judgments. And again, when entered into a regression as before, economic dependence predicted an additional 2% of the variance in these wrongness judgments. However, the effect became more substantial when the analysis was conducted at the level of states, rather than individuals. At the state level, the correlation between female economic dependence and attitudes towards promiscuity rose to r = .66, with the dependence measure predicting 9% of the variance in promiscuity judgments in the regression with the other control factors.
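
One reason the state-level correlation can be so much larger than the individual-level one is statistical rather than psychological: averaging within groups washes out individual noise, so group-level correlations are routinely inflated relative to individual-level ones (the classic ecological-correlation caveat). A minimal sketch with simulated data – no real study values – shows the pattern:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n_states, n_per_state = 50, 100
dependence, moralization = [], []
for _ in range(n_states):
    base = random.gauss(0, 1)  # a state's underlying dependence level
    for _ in range(n_per_state):
        # Individuals vary noisily around their state's level;
        # attitudes track the state's level, plus their own noise.
        dependence.append(base + random.gauss(0, 2))
        moralization.append(0.5 * base + random.gauss(0, 2))

# Individual-level r: diluted by within-state noise.
r_individual = pearson_r(dependence, moralization)

# State-level r: averaging 100 people per state washes the noise out.
def state_means(xs):
    return [statistics.fmean(xs[s * n_per_state:(s + 1) * n_per_state])
            for s in range(n_states)]

r_state = pearson_r(state_means(dependence), state_means(moralization))
```

With the same underlying relationship, the aggregated correlation comes out far larger than the individual one. That doesn't mean the state-level r = .66 is wrong, just that it isn't directly comparable to the individual-level figure.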

Worth noting is that, though a woman’s personal income was modestly predictive of her attitudes towards promiscuity, it was not as good a predictor as her perception of the dependence of the women she knows. There are two ways to explain this, though they are not mutually exclusive: first, it’s possible that women are adjusting their attitudes so as to avoid condemnation from others. If lots of women rely on this kind of investment, then a woman could be punished for being promiscuous even if promiscuity was in her personal interests. As such, she adopts anti-promiscuity attitudes as a way of preemptively avoiding punishment. The second explanation is that, given our social nature, our allies are important to us, and adjusting our moral attitudes so as to gain and maintain social support is also a viable strategy. It’s the other side of the same social-support coin, and so both explanations can work together.

The dual-purpose marriage/friendship ring

Finally, I wanted to discuss a theoretical contradiction I find myself struggling to reconcile. Specifically, in the beginning of the paper, the authors mention that females will sometimes engage in promiscuous behavior in the service of obtaining resources from multiple males. A common example of this kind of behavior is prostitution, where a woman will engage in short-term intercourse with men in explicit exchange for money, though the strategy need not be that explicit or extreme. Rather than obtaining lots of investment from a single male, then, a viable female strategy should be to obtain several smaller investments from multiple males. Following this line of reasoning, then, we might end up predicting that female economic dependence on males might increase promiscuity and, accordingly, lower moral condemnation of it, at least in some scenarios.

If that were the case, the pattern of evidence we might predict is that, when female economic dependence is high, attitudes towards promiscuity should become more bimodal, with some women more strongly disapproving of it while others become more strongly approving. As such, looking at the mean impact of these economic factors might be something of a wash (as it kind of was at the individual level). Instead, one might look at deviations from the mean, and see whether areas in which female economic dependence is greatest show a larger standard deviation around the average moralization value than areas of lower dependence. Perhaps there are theoretical reasons this is implausible, but none are laid out in the paper.
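
That prediction is straightforward to check in principle: if high-dependence areas split into strong approvers and strong disapprovers, means stay flat while spread grows. A toy simulation – entirely made-up attitude scores, not the study’s data – of what such a signature would look like:

```python
import random
import statistics

random.seed(2)

# Made-up attitude scores on an approve(-) / disapprove(+) scale.
# Low-dependence area: attitudes cluster near the middle.
low_dep = [random.gauss(0, 1) for _ in range(1000)]

# High-dependence area: a bimodal split between strong approval
# and strong disapproval of promiscuity.
high_dep = [random.gauss(-3, 1) if random.random() < 0.5 else random.gauss(3, 1)
            for _ in range(1000)]

# The means are nearly identical -- a mean-level analysis looks like a wash...
mean_gap = abs(statistics.fmean(high_dep) - statistics.fmean(low_dep))

# ...but the standard deviation gives the bimodal split away.
sd_low = statistics.stdev(low_dep)
sd_high = statistics.stdev(high_dep)
```

A comparison of variances (or a formal bimodality test) across regions would thus detect the pattern that a comparison of means misses entirely.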

References: Price, M., Pound, N., & Scott, I. (2014). Female economic dependence and the morality of promiscuity. Archives of Sexual Behavior, 43, 1289-1301.

Why Non-Violent Protests Work

It’s a classic saying: The pen is mightier than the sword. While this saying communicates some valuable information, it needs to be qualified in a significant way to be true. Specifically, in a one-on-one fight, the metaphorical pens do not beat swords. Indeed, as another classic saying goes: Don’t bring a knife to a gun fight. If knives aren’t advisable against guns, then pens are probably even less advisable. This raises the question as to how – and why – pens can triumph over swords in conflicts. These questions are particularly relevant, given some recent happenings in California at Berkeley where a protest against a speaking engagement by Milo Yiannopoulos took a turn for the violent. While those who initiated the violence might not have been students of the school, and while many people who were protesting might not engage in such violence themselves when the opportunity arises, there does appear to be a sentiment among some people who dislike Milo (like those leaving comments over on the Huffington Post piece) that such violence is to be expected, is understandable, and sometimes even morally justified or praiseworthy. The Berkeley riot was not the only such incident lately, either.

The Nazis shooting guns is a very important detail here

So let’s discuss why such violent behavior is often counterproductive for the swords in achieving their goals. Non-violent political movements, like those associated with leaders like Martin Luther King Jr. and Gandhi, appear to yield results, at least according to the only bit of data on the matter I’ve come across (for the link-shy: nonviolent campaigns’ combined rate of complete and partial success was about 73%, while the comparable rate for violent campaigns was about 33%). I even came across a documentary recently, which I intend to watch, about a black man who purportedly got over 200 members of the KKK to leave the organization without force or the threat of it; he simply talked to them. That these nonviolent methods work at all seems rather unusual, at least if you frame it in terms of any nonhuman species. Imagine, for instance, a chimpanzee who doesn’t like how he is being treated by the resident dominant male (who is physically aggressive), and so attempts to dissuade that male from his behavior by nonviolently confronting him. No matter how many times the dominant male struck him, the protesting chimp would remain steadfastly nonviolent until he won over the other chimps in his group, and they all turned against the dominant male (or until the dominant male saw the error of his ways). As this would likely not work out well for our nonviolent chimp, hopefully nonviolent protests are sounding a little stranger to you now; yet they often seem to work better than violence, at least for humans. We want to know why.

The answer to that question involves turning our attention back to the foundation of our moral sense: why do we perceive a dimension of right and wrong in the world in the first place? The short answer to this question, I think, is that when a dispute arises, those involved in the dispute find themselves in a state of transient need for social support (since numbers can decide the outcome of the conflict). Third parties (those not initially involved in the dispute) can increase their value as a social asset to one of the disputants by filling that need and assisting them in the fight against the rival. This allows third parties to leverage the transient needs of the disputants to build future alliances or defend existing allies. However, not all behaviors generate the same degree of need: the theft of $10 generates less need than a physical assault. Accordingly, our moral psychology represents a cognitive mechanism for determining what degree of need tends to be generated by behaviors in the interests of guiding where one’s support can best be invested (you can find the longer answer here). That’s not to say our moral sense will be the only input for deciding what side we eventually take – factors like kinship and interaction history matter too – but it’s an important part of the decision.

The applications of this idea to nonviolent protest ought to be fairly apparent: when property is destroyed, people are attacked, and the ability of regular citizens to go about their lives is disrupted by violent protests, this generates a need for social support on the part of those targeted or affected by the violence. It also generates worries in those who feel they might be targeted by similar groups in the future. So, while the protesters might be rioting because they feel they have important needs that aren’t being met (seeking to achieve them via violence, or the threat of it), third parties might come to view the damage inflicted by the protest as being more important or harmful (as they generate a larger, or more legitimate need). The net result of that violence is now that third parties side against the protesters, rather than with them. By contrast, a nonviolent protest does not create as large a need on the part of those it targets; it doesn’t destroy property or harm people. If the protesters have needs they want to see met and they aren’t inflicting costs on others, this can yield more support for the protester’s side.

I’m sure the owner of that car really had this coming…

This brings us to our third classic saying of the post: While I disagree with what you have to say, I will defend to the death your right to say it. Though such a sentiment might be seldom expressed these days, it highlights another important point: even if third parties agree with the grievances of the protesters (or, in this case, disagree with the behavior of the people being protested), the protesters can make themselves seem like suitably poor social assets by inflicting inappropriately-large costs (as disagreeing with someone generates less harm than stifling their speech through violence). Violence can alienate existing social support (since they don’t want to have to defend you from future revenge, as people who pick fights tend to initiate and perpetuate conflicts, rather than end them) and make enemies of allies (as the opposition now offers a better target of social investment, given their relative need). The answer as to why pens can beat swords, then, is not that pens are actually mightier (i.e., capable of inflicting greater costs), but rather that pens tend to be better at recruiting other swords to do their fighting for them (or, in more mild cases, pens can remove the social support from the swords, making them less dangerous). The pen doesn’t actually beat the sword; it’s the two or more swords the pen has persuaded to fight for it – and not the opposing sword – that do.

Appreciating the power of social support helps bolster our understanding of other possible interactions between pens and swords. For instance, when groups are small, swords will likely tend to be more powerful than pens, as large numbers of third parties aren’t around to be persuaded. This is why our nonviolent chimp example didn’t work well: chimps don’t reliably join disputes as third parties on the basis of behavior the way humans do. Without that third-party support, non-violence will fail. The corollary point here is that pens might find themselves in a bit of a bind when it comes to confrontations with other pens. Put in plain terms: nonviolence is a useful rallying cry for drawing social support if the other side of the dispute is being violent. If both sides abstain from violence, however, nonviolence per se no longer persuades people. You can’t convince someone to join your side in a dispute by pointing out something your side shares with the other. This should result in the expectation that people will frequently over-represent the violence of the opposition, perhaps even fabricating it completely, in the interests of persuading others. 

Yet another point that can be drawn from this analysis is that even “bad” ideas or groups (whether labeled as such because of moral or factual reasons) can recruit swords to their side if they are targeted by violence. Returning to the cases we began with – the riot at UC Berkeley and the incident where Richard Spencer got punched – if you hope to exterminate people who hold disagreeable views, then violence might seem like the answer. However, as we have seen, violence against others, even disagreeable others, who are not themselves behaving violently can rally support from third parties, as they might begin to worry that threats to free speech (or other important issues) are more harmful than the opinions and words we find disagreeable (again, hitting someone creates more need than talking does). On the other hand, if you hope to persuade people to join your side (or at least not join the opposition), you will need to engage with arguments and reasoning. Importantly, you need to treat those you hope to persuade as people and engage with the ideas and values they actually hold. If the goal in these disputes really is to make allies, you need to convince others that you have their best interests at heart. Calling those who disagree “baskets of deplorables,” suggesting they’re too stupid to understand the world, or anything to that extent doesn’t tend to win their hearts and minds. If anything, it sends a signal to them that you do not value them, giving them all the more reason to not spend their time helping you achieve your goals.  

“Huh; I guess I really am a moron and you’re right. Well done,” said no one, ever.

As a final matter, we could also discuss the idea that violence is useful for snuffing out threats preemptively. In other words, better to stop someone before they can try to attack you, rather than after their knife is already in your back. There are several reasons preemptive defense is just as suspect, so let’s run through a few: first, there are different legal penalties for acts like murder and attempted murder, as attempted – but incomplete – acts generate less need than completed ones. As such, they garner less social support. Second, absent very strong evidence that the people targeted for violence would have eventually become violent themselves, the preemptive attacks will not look defensive; they will simply look aggressive, returning us to the initial problems violent protests face. Relatedly, preemptive violence is unlikely to ever make allies of enemies; if anything, it will make deeper enemies of existing ones and their allies. Remember: when you hurt someone, you indirectly inflict costs on their friends, families, and other relations as well. Finally, some people will likely develop reasonable concerns about the probability of being attacked for holding other opinions or engaging in behaviors people find unpleasant or dangerous. With speech already being equated to violence among certain groups, this concern doesn’t seem unfounded.

In the interests of persuading others – actors and third parties alike – nonviolence is usually the better first step. However, nonviolence alone is not enough, especially if your opposition is nonviolent as well. Not being violent does not mean you’ve already won the dispute; just that you haven’t lost it. It is at that point you need to persuade others that your needs are legitimate, your demands reasonable, and your position in their interests as well, all while your opposition attempts to be persuasive themselves. It’s not an easy task, to be sure, and it’s one many of us are worse at than we’d like to think; it’s just the best way forward.

What Might Research Ethics Teach Us About Effect Size?

Imagine for a moment that you’re in charge of overseeing medical research approval for ethical concerns. One day, a researcher approaches you with the following proposal: they are interested in testing whether a foodstuff that some portion of the population occasionally consumes for fun is actually quite toxic, like spicy chilies. They think that eating even small doses of this compound will cause mental disturbances in the short term – like paranoia and suicidal thoughts – and might even cause those negative changes permanently in the long term. As such, they intend to test their hypothesis by bringing otherwise-healthy participants into the lab, providing them with a dose of the possibly-toxic compound (either just once or several times over the course of a few days), and then seeing whether they observe any negative effects. What would your verdict on the ethical acceptability of this research be? If I had to guess, I suspect that many people would not allow the research to be conducted, because one of the major tenets of research ethics is that harm should not befall your participants, except when absolutely necessary. In fact, I suspect that were you the researcher – rather than the person overseeing the research – you probably wouldn’t even propose the project in the first place, because you might have some reservations about possibly poisoning people, either harming them directly and/or those around them indirectly.

“We’re curious if they make you a danger to yourself and others. Try some”

With that in mind, I want to examine a few other research hypotheses I have heard about over the years. The first of these is the idea that exposing men to pornography will cause a number of harmful consequences, such as increasing the appeal of rape fantasies, bolstering the belief that women would enjoy being raped, and decreasing the perceived seriousness of violence against women (as reviewed by Fisher et al., 2013). Presumably, the effect on those beliefs over time is serious, as it might lead men to actually rape women, or to approve of such acts on the part of others. Other, less-serious harms have also been proposed, such as the possibility that exposure to pornography might have harmful effects on the viewer’s relationship, reducing their commitment and making it more likely that they would do things like cheat on or abandon their partner. Now, if a researcher earnestly believed they would find such effects, that the effects would be appreciable in size to the point of being meaningful (i.e., large enough to be reliably detected by statistical tests in relatively small samples), and that their implications could be long-term in nature, could this researcher even ethically test such issues? Would it be ethically acceptable to bring people into the lab, randomly expose them to this kind of (in a manner of speaking) psychologically-toxic material, observe the negative effects, and then just let them go?

Let’s move on to another hypothesis that I’ve been talking a lot about lately: the effects of violent media on real-life aggression. I’ve been talking specifically about video game violence, but people have worried about violent themes in the context of TV, movies, comic books, and even music. Specifically, there are many researchers who believe that exposure to media violence will cause people to become more aggressive by making them perceive more hostility in the world, view violence as a more acceptable means of solving problems, or see violence as more rewarding. Again, presumably, changing these perceptions is thought to cause the harm of eventual, meaningful increases in real-life violence. Now, if a researcher earnestly believed they would find such effects, that the effects would be appreciable in size to the point of being meaningful, and that their implications could be long-term in nature, could this researcher even ethically test such issues? Would it be ethically acceptable to bring people into the lab, randomly expose them to this kind of (in a manner of speaking) psychologically-toxic material, observe the negative effects, and then just let them go?

Though I didn’t think much of it at first, the criticisms I have read of the classic Bobo doll experiment are actually kind of interesting in this regard. In particular, researchers were purposefully exposing young children to models of aggression, the hope being that the children would come to view violence as acceptable and engage in it themselves. The reason I didn’t pay the criticism much mind is that I didn’t view the experiment as causing any kind of meaningful, real-world, or lasting effects on the children’s aggression; I don’t think mere exposure to such behavior will have meaningful impacts. But if one truly believed that it would, I can see why that might cause some degree of ethical concern.

Since I’ve been talking about brief exposure, one might also worry about what would happen were researchers to expose participants to such material – pornographic or violent – for weeks, months, or even years on end. Imagine a study that asked people to smoke for 20 years to test the negative effects in humans; that’s probably not getting past the IRB. As an aside on that point, it’s worth noting that as pornography has become more widely available, rates of sexual offending have gone down (Fisher et al., 2013); as violent video games have become more available, rates of youth violent crime have gone down too (Ferguson & Kilburn, 2010). Admittedly, it is possible that such declines would be even steeper if such media wasn’t in the picture, but the effects of this media – if they cause violence at all – are clearly not large enough to reverse those trends.

I would have been violent, but then this art convinced me otherwise

So what are we to make of the fact that this research was proposed, approved, and conducted? There are a few possibilities to kick around. The first is that the research was proposed because the researchers themselves don’t give much thought to the ethical concerns, happy enough if it means they get a publication out of it regardless of the consequences; but that wouldn’t explain why it got approved by other bodies like IRBs. It is also possible that the researchers and those who approve the work believe it to be harmful, but view the benefits of such research as outstripping the costs, working under the assumption that once the harmful effects are established, further regulation of such products might follow, ultimately reducing the prevalence or use of such media (not unlike the warnings and restrictions placed on the sale of cigarettes). Since any declines in availability or censorship of such media have yet to manifest – especially given how access to the internet provides means for circumventing bans on the circulation of information – whatever practical benefits might have arisen from this research are hard to see (again, assuming that things like censorship would yield benefits at all).

There is another aspect to consider as well: during discussions of this research outside of academia – such as on social media – I have not noted a great deal of outrage expressed by consumers of these findings. Anecdotal as this is, when people discuss such research, they do not appear to be raising the concern that the research itself was unethical to conduct because it will do harm to people’s relationships or to women more generally (in the case of pornography), or because it will result in making people more violent and accepting of violence (in the video game studies). Perhaps those concerns exist en masse and I just haven’t seen them yet (always possible), but I see another possibility: people don’t really believe that the participants are being harmed in this case. People generally aren’t afraid that the participants in those experiments will dissolve their relationships or come to think rape is acceptable because they were exposed to pornography, or will get into fights because they played 20 minutes of a video game. In other words, they don’t think those negative effects are particularly large, if they even really believe they exist at all. While this point is a rather implicit one, the lack of consistent moral outrage expressed over the ethics of this kind of research does speak to the matter of how serious these effects are perceived to be: at least in the short term, not very.

What I find very curious about these ideas – pornography causes rape, video games cause violence, and their ilk – is that they all seem to share a certain assumption: that people are effectively acted upon by information, placing human psychology in a distinctly passive role while information takes the active one. Indeed, in many respects, this kind of research strikes me as remarkably similar to the underlying assumptions of the research on stereotype threat: the idea that you can, say, make women worse at math by telling them men tend to do better at it. All of these theories seem to posit a human psychology readily exploitable and manipulable by information, rather than a psychology which interacts with, evaluates, and transforms the information it receives.

For instance, a psychology capable of distinguishing between reality and fantasy can play a video game without thinking it is being threatened physically, just like it can watch pornography (or, indeed, any videos) without actually believing the people depicted are present in the room with them. Now clearly some part of our psychology does treat pornography as an opportunity to mate (else there would be no sexual arousal generated in response to it), but that part does not necessarily govern other behaviors (generating arousal is biologically cheap; aggressing against someone else is not). The adaptive nature of a behavior depends on context.

Early hypotheses of the visual-arousal link were less successful empirically

As such, expecting something like a depiction of violence to translate consistently into some general perception that violence is acceptable and useful in all sorts of interactions throughout life is inappropriate. Learning that you can beat up someone weaker than you doesn’t mean it’s suddenly advisable to challenge someone stronger than you; relatedly, seeing a depiction of people who are not you (or your future opponent) fighting shouldn’t make it advisable for you to change your behavior either. Whatever the effects of this media, they will ultimately be assessed and manipulated internally by psychological mechanisms and tested against reality, rather than just accepted as useful and universally applied.

I have seen similar thinking about information manipulating people in another context as well: during discussions of memes. Memes are posited to be similar to infectious agents that will reproduce themselves at the expense of their host’s fitness; information that literally hijacks people’s minds for its own reproductive benefits. I haven’t seen much in the way of productive and successful research flowing from that school of thought quite yet – which might be a sign of its ineffectiveness and inaccuracy – but maybe I’m just still in the dark there.

References: Ferguson, C. & Kilburn, J. (2010). Much ado about nothing: The misestimation and overinterpretation of violent video game effects in eastern and western nations: Comment on Anderson et al. (2010). Psychological Bulletin, 136, 174-178.

Fisher, W., Kohut, T., Di Gioacchino, L., & Fedoroff, P. (2013). Pornography, sex crime, and paraphilia. Current Psychiatry Reports, 15, 362.

The Fight Against Self-Improvement

In the abstract, almost everyone wants to be the best version of themselves they can be. A more attractive body, useful skills, a good education, career success; who doesn’t want those things? In practice, lots of people, apparently. While people might like the idea of improving various parts of their life, self-improvement takes time, energy, dedication, and restraint; it involves doing things that might not be pleasant in the short term in the hope that long-term rewards will follow. Those rewards are by no means guaranteed, though, either in terms of their happening at all or the degree to which they do. While people can usually improve various parts of their life, not everyone can achieve the levels of success they might prefer no matter how much time they devote to their craft. All of those are common reasons people will sometimes avoid improving themselves (it’s difficult and carries opportunity costs), but they do not straightforwardly explain why people sometimes fight against others improving themselves.

“How dare they try to make a better life for themselves!”

I was recently reading an article about the appeal of Trump and came across this passage concerning this fight against the self-improvement of others:

“Nearly everyone in my family who has achieved some financial success for themselves, from Mamaw to me, has been told that they’ve become “too big for their britches.”  I don’t think this value is all bad.  It forces us to stay grounded, reminds us that money and education are no substitute for common sense and humility. But, it does create a lot of pressure not to make a better life for yourself…”

At first blush, this seems like a rather strange idea: if people in your community – your friends and family – are struggling (or have yet to build a future for themselves), why would anyone object to the prospect of their achieving success and bettering their lot in life? Part of the answer is found a little further down:

“A lot of these [poor, struggling] people know nothing but judgment and condescension from those with financial and political power, and the thought of their children acquiring that same hostility is noxious.”

I wanted to explore this idea in a bit more depth to help explain why these feelings might rear their head when faced with the social or financial success of others, be they close or distant relations.

Understanding these feelings requires drawing on a concept my theory of morality leaned heavily on: association value. Association value refers to the abstract value that others in the social world have for each other; essentially, it asks the question, “how desirable of a friend would this person make for me (and vice versa)?” This value comes in two parts: first, there is the matter of how much value someone could add to your life. As an easy example, someone with a lot of money is more capable of adding value to your life than someone with less money; someone who is physically stronger tends to be able to provide benefits a weaker individual could not; the same goes for individuals who are more physically attractive or intelligent. It is for this reason that most people wish they could improve on some or all of these dimensions if doing so were possible and easy: you end up as a more desirable social asset to others.

The second part of that association value is a bit trickier, however, reflecting the crux of the problem: how willing someone is to add value to your life. Those who are unwilling to help me have a lower value than those willing to make the investment. Reliable friends are better than flaky ones, and charitable friends are better than stingy ones. As such, even if someone has a great potential value they could add to my life, they still might be unattractive as associates if they are not going to turn that potential into reality. An unachieved potential is effectively the same thing as having no potential value at all. Conversely, those who are very willing to add to my life but cannot actually do so in meaningful ways don’t make attractive options either. Simply put, eager but incompetent individuals wouldn’t make good hires for a job, but neither would competent yet absent ones.

“I could help you pay down your crippling debt. Won’t do it, though”

With this understanding of association value, there is only one piece left to add to the equation: the zero-sum nature of friendship. Friendship is a relative term; it means that someone values me more than they value others. If someone is a better friend to me, they are a worse friend to others; they would value my welfare over the welfare of others and, if a choice had to be made, would aid me rather than someone else. Having friends is also useful in the adaptive sense of the word: they help provide access to desirable mates, protection, and provisioning, and can even help you exploit others if you’re on the aggressive side of things. Putting all these pieces together, we end up with the following idea: people generally want access to the best friends possible. What makes a good friend is a combination of their ability and willingness to invest in you over others. However, their willingness to do so depends in turn on your association value to them: how willing and able you are to add things to their lives. If you aren’t able to help them out – now or in the future – why would they want to invest resources into benefiting you when they could instead put those resources into others who could?
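To make the two-part structure of association value concrete, here is a toy formalization of my own (purely illustrative; nothing in the research literature commits to this exact form): treating the value as the product of ability and willingness, so that a zero on either dimension zeroes out the whole.

```python
def association_value(ability: float, willingness: float) -> float:
    """Toy model: both inputs on a 0-1 scale. The product captures the
    idea that unachieved potential is the same as no potential at all."""
    return ability * willingness

# A wealthy but entirely unwilling associate is worth no more than anyone else:
print(association_value(1.0, 0.0))  # 0.0
# A modestly able but very willing one can come out ahead:
print(association_value(0.3, 0.9))
```

The multiplicative form, rather than a sum, is what encodes the claim in the text that neither eager-but-incompetent nor competent-but-absent associates make good hires.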

Now we can finally return to the matter of self-improvement. By increasing your association value through various forms of self-improvement (e.g., making yourself more physically attractive and stronger through exercise, improving your income by moving forward in your career, learning new things, etc.), you make yourself a more appealing friend to others. Crucially, this includes both existing friends and higher-status individuals who might not have been willing to invest in you before your ability to add value to their lives materialized. In other words, as your value as an associate rises, unless the value of your existing associates rises in turn, it is quite possible that you can now do better than them socially, so to speak. If you have more appealing social prospects, then, you might begin to neglect or break off existing contacts in favor of newer, more profitable friendships or mates. It is likely that your existing contacts understand this – implicitly or otherwise – and might seek to discourage you from improving your life, or preemptively break off contact with you if you do, under the assumption that you will do likewise to them in the future. After all, if you’re moving on eventually, they would be better off building new connections sooner rather than later. They don’t want to invest in failing relationships any more than you do.

In turn, those who are thinking about self-improvement might actually decide against pursuing their goals not necessarily because they wouldn’t be able to achieve them, but because they’re afraid that their existing friends might abandon them, or even that they themselves might be the ones who do the abandoning. Ironically, improving yourself can sometimes make you look like a worse social prospect.

To put that in a simple example, we could consider the world of fitness. The classic trope of the weak high-schooler being bullied by the strong jock type has been ingrained in many stories in our culture. For those doing the bullying, their targets don’t offer them much socially (the targets’ association value to others is low, while the bully’s is high) and are unable to effectively defend themselves, making exploitation appear an attractive option. In turn, those who are the targets of this bullying are, in some sense, wary of adopting some of the self-improvement behaviors the jocks engage in, such as working out, either because they don’t feel they can effectively compete against the jocks in that realm (e.g., they wouldn’t be able to get as strong, so why bother getting stronger?) or because they worry that improving their association value by working out will lead them to adopt a similar pattern of behavior to those they already dislike, resulting in their losing value to their current friends (usually those of similar, but relatively low, association value). The movie Mean Girls depicts this dynamic struggle in a different domain.

So many years later, and “Fetch” still never happened…

This line of thought has, as far as I can tell, also been leveraged (again, consciously or otherwise) by one brand within the fitness community: Planet Fitness. Last I heard an advertisement for their company on the radio, their slogan appeared to be, “we’re not a gym; we’re planet fitness.” An odd statement to be sure, because they are a gym, so what are we to make of it? Presumably that they are in some important respects different from their competition. How are they different from other gyms? The “About” section on their website lays their differences out in true, ironic form:

“Make yourself comfy. Because we’re Judgement Free…you deserve a little cred just for being here. We believe no one should ever feel Gymtimidated by Lunky behavior and that everyone should feel at ease in our gyms, no matter what his or her workout goals are…We’re fiercely protective of our Planet and the rights of our members to feel like they belong. So we create an environment where you can relax, go at your own pace and just do your own thing without ever having to worry about being judged.”

This marketing is fairly transparent pandering to those who currently do not feel they can compete with the very fit, or who are worried about becoming a “lunk” themselves (the gyms even have an alarm designed to be set off if someone is making too much noise while lifting, or wearing the wrong outfit). In doing so, however, they devalue those who are successful or passionate in their pursuit of self-improvement. I have never seen a gym more obsessed with judging its would-be members than Planet Fitness – so long as that judgment is pointed at the right targets – and through that judgment they appeal (presumably effectively) to portions of the population untapped by other gyms. Planet Fitness wants to be your friend; not the friend of those jerks who make you feel bad.

There is value in not letting success go to one’s head; no one wants a fair-weather friend who will leave the moment it’s expedient. Such an attitude undermines loyalty. The converse, however, is that using that as an excuse to avoid (or condemn) self-improvement will make you and others worse-off in the long term. A better solution to this dilemma is to improve yourself so you can improve those who matter the most to you, hoping they reciprocate in turn (or improve together for even better success).

Chivalry Isn’t Dead, But Men Are

In the somewhat-recent past, there was a vote held in the Senate on the matter of whether women in the US should be required to sign up for the Selective Service – the military draft – when they turn 18. Already accepted, of course, was the idea that men should be required to sign up; apparently the less controversial idea. This represents yet another erosion of male privilege in modern society; in this case, the privilege of being expected to fight and die in armed combat, should the need arise. Whether any conscription is likely to happen in the foreseeable future (hopefully not) is a somewhat different matter from whether women would be among the first drafted if it did (probably not), but the question remains as to how to explain this state of affairs. The issue, it seems, is not simply one of whether men or women are better able to shoulder the physical demands of combat; it extends beyond military service into intuitions about real and hypothetical harm befalling men and women in everyday life. When it comes to harm, people seem to generally care less about it happening to men.

Meh

One anecdotal example of these intuitions I’ve encountered during my own writing came when an editor at Psychology Today removed an image from one of my posts of a woman undergoing bodyguard training in China by having a bottle smashed over her head (which can be seen here; it’s by no means graphic). There was a concern expressed that the image was in some way inappropriate, despite my having posted other pictures of men being assaulted or otherwise harmed. As a research-minded individual, however, I want to go beyond simple anecdotes from my own life that confirm my intuitions and into the empirical world, where other people publish results that confirm my intuitions. While I’ve already written about this issue a number of times, it never hurts to pile on a little more. Recently, I came upon a paper by FeldmanHall et al. (2016) that examined these intuitions about harm directed towards men and women across a number of studies, and it can help me do just that.

The first of the studies in the paper was a straightforward task: fifty participants were recruited from Mturk to respond to a classic moral dilemma, the footbridge problem. Here, the lives of five people can be saved from a train by pushing one person in front of it. When these participants were asked whether they would push a man or a woman to their death (assuming, I think, that they were going to push one of them), 88% opted for killing the man. The second study expanded a bit on that finding using the same dilemma, but asked instead how willing participants would be (on a 1-10 scale) to push either a man, a woman, or a person of unspecified gender, with no other options available. The findings here with regard to gender were a bit less dramatic and clear-cut: participants were slightly more likely to indicate that they would push a man (M = 3.3) than a woman (M = 3.0), though female participants were nominally less likely to push a woman (roughly M = 2.3) than male participants were (roughly M = 3.8), perhaps counter to what might be predicted. That said, the sample size for this second study was fairly small (only about 25 per group), so that difference might not be worth making much of until more data is collected.
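For a sense of just how lopsided that first result is, a quick exact binomial test makes the point (this is my own back-of-the-envelope check, assuming the reported 88% corresponds to 44 of the 50 participants; the paper itself may report different statistics):

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: sums the probability of every
    outcome no more likely than the observed count k under chance p."""
    prob = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    observed = prob(k)
    return sum(prob(i) for i in range(n + 1) if prob(i) <= observed + 1e-12)

# 88% of 50 participants is 44 choosing to push the man over the woman;
# under indifference (p = 0.5) a split this extreme is vanishingly unlikely
print(binom_two_sided_p(44, 50))
```

The resulting p-value falls far below any conventional threshold, which is why the authors can treat the preference for sacrificing the man as overwhelming despite the modest sample.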

When faced with a direct and unavoidable trade-off between the welfare of men and women, then, the results overwhelmingly showed that women were favored; when it came to cases where a man or a woman could be harmed alone, however, there didn’t seem to be a marked difference between the two. That said, the moral dilemma alone can only take us so far in understanding people’s concern for the welfare of others, in no small part because its life-and-death nature potentially introduces ceiling effects (man or woman, very few people are willing to throw someone else in front of a train). In other instances where the degree of harm is lower – such as, say, male vs. female genital cutting – differences might begin to emerge. Thankfully, FeldmanHall et al. (2016) included an additional experiment that brought these intuitions out of the hypothetical and into reality while lowering the degree of harm. You can’t kill people to conduct psychological research, after all.

Yet…

In the next experiment, 57 participants were recruited and given £20. At the end of the experiment, any money they had would be multiplied by ten, meaning participants could leave with a total of £200 (which is awfully generous as far as these things go). As with most psychology research, however, there was a catch: the participants would be taking part in 20 trials where £1 was at stake. A target individual – either a man or a woman – would be receiving a painful electric shock, and the participants could give up some of that £1 to reduce its intensity, with the full £1 removing the shock entirely. To make the task a little less abstract, the participants were also forced to view videos of the target receiving the shocks (which, I think, were prerecorded videos of real shocks – rather than shocks in real time – but I’m not sure from my reading of the paper if that’s a completely accurate description).

In this study, another large difference emerged: as expected, participants interacting with female targets ended up keeping less money by the end (M = £8.76) than those interacting with male targets (M = £12.54; d = .82). In other words, the main finding of interest was that participants were willing to give up substantially more money to prevent women from receiving painful shocks than they were to help men. Interestingly, this was the case in spite of the facts that (a) the male target in the videos was rated more positively overall than the female target, and (b) in a follow-up study where participants provided emotional reactions to thinking about being a participant in the former study, the amount of reported aversion to letting the target suffer shocks was similar regardless of the target’s gender. As the authors conclude:

While it is equally emotionally aversive to hurt any individual—regardless of their gender—that society perceives harming women as more morally unacceptable, suggests that gender bias and harm considerations play a large role in shaping moral action.

So, even though people find harming others – or letting them suffer harm for a personal gain – to generally be an uncomfortable experience regardless of their gender, they are more willing to help/avoid harming women than they are men, sometimes by a rather substantial margin.
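For readers unfamiliar with the d = .82 figure reported above: Cohen’s d is simply the difference between two group means divided by the pooled standard deviation. As a back-of-the-envelope check (the pooled SD below is my own back-calculation from the reported means and d, not a value given in the paper):

```python
def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    """Cohen's d: standardized difference between two group means."""
    return (mean_a - mean_b) / pooled_sd

# Reported means: M = 12.54 (pounds kept, male targets) vs. M = 8.76 (female targets)
implied_sd = (12.54 - 8.76) / 0.82   # back out the pooled SD implied by d = .82
print(round(implied_sd, 2))          # ≈ 4.61
print(round(cohens_d(12.54, 8.76, implied_sd), 2))  # recovers d = 0.82
```

By the usual rough conventions, a d of .8 or above counts as a large effect, which is why a difference of a few pounds here represents a substantial gender gap in costly helping.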

Now onto the fun part: explaining these findings. It doesn’t go nearly far enough as an explanation to note that “society condones harming men more than women,” as that just restates the finding; likewise, we only get so far by mentioning that people perceive men to have a higher pain tolerance than women (because they do), as that only pushes the question back a step to the matter of why men tolerate more pain than women. As for my thoughts, first, I think these findings highlight the importance of a modular understanding of psychological systems: our altruistic and moral systems are made up of a number of component pieces, each with a distinct function, and the piece that is calculating how much harm is generated is, it would seem, not the same piece deciding whether or not to do something about it. The obvious reason for this distinction is that alleviating harm to others isn’t always adaptive to the same extent: it does me more adaptive good to help kin relative to non-kin, friends relative to strangers, and allies relative to enemies, all else being equal. 

“Just stay out of it; he’s bigger than you”

Second, it might well be the case that helping men, on average, tends to pay off less than helping women. Part of the reason for that state of affairs is that female reproductive potential cannot be replaced quite as easily as male potential; male reproductive success is constrained by the number of available women much more than female potential is by male availability (as Chris Rock put it, “any money spent on dick is a bad investment“). As such, men might become particularly inclined to invest in alleviating women’s pain as a form of mating effort. The story clearly doesn’t end there, however, or else we would predict men being uniquely likely to benefit women, rather than both sexes doing similarly. This raises two additional possibilities to me: one of these is that, if men value women highly as a form of mating effort, that increased social value could also make women more valuable to other women in turn. To place that in a Game of Thrones example, if a powerful house values their own children highly, non-relatives may come to value those same children highly as well in the hopes of ingratiating themselves to – or avoiding the wrath of – the child’s family.

The other idea that comes to mind is that men are less willing to reciprocate aid that alleviated their pain because to do so would be an admission of a degree of weakness; a signal that they honestly needed the help (and might in the future as well), which could lower their relative status. If men are less willing to reciprocate aid, that would make men worse investments for both sexes, all else being equal; better to help out the person who would experience more gratitude for your assistance and repay you in turn. While these explanations might or might not adequately explain these preferential altruistic behaviors directed towards women, I feel they’re worthwhile starting points.

References: FeldmanHall, O., Dalgleish, T., Evans, D., Navrady, L., Tedeschi, E., & Mobbs, D. (2016). Moral chivalry: Gender and harm sensitivity predict costly altruism. Social Psychological & Personality Science, DOI: 10.1177/1948550616647448

Morality, Alliances, And Altruism

Having one’s research ideas scooped is part of academic life. Today, for instance, I’d like to talk about some research quite similar in spirit to work I had intended to do as part of my dissertation (but did not, as it didn’t end up making the cut in the final approved package). Even if my name isn’t on it, it is still pleasing to see the results I had anticipated. The idea itself arose about four years ago, when I was discussing the curious case of Tucker Max’s donation to Planned Parenthood being (eventually) rejected by the organization. To quickly recap, Tucker was attempting to donate half a million dollars to the organization, essentially receiving little more than a plaque in return. However, the donation was rejected, it would seem, out of fear of building an association between the organization and Tucker, as some people perceived Tucker to be a less-than-desirable social asset. This, of course, is rather strange behavior, and we would recognize it as such if it were observed in any other species (e.g., “this cheetah refused a free meal for her and her cubs because the wrong cheetah was offering it”); refusing free benefits is just peculiar.

“Too rich for my blood…”

As it turns out, this pattern of behavior is not unique to the Tucker Max case (or the Kim Kardashian one…); it has recently been empirically demonstrated by Tasimi & Wynn (2016), who examined how children respond to altruistic offers from others, contingent on the moral character of said others. In their first experiment, 160 children between the ages of 5 and 8 were recruited to make an easy decision; they were shown two pictures of people and told that the people in the pictures wanted to give them stickers, and they had to pick which one they wanted to receive the stickers from. In the baseline conditions, one person was offering 1 sticker, while the other was offering either 2, 4, 8, or 16 stickers. As such, it should come as no surprise that the person offering more stickers was almost universally preferred (71 of the 80 children wanted the person offering more, regardless of how many more).

Now that we’ve established that more is better, we can consider what happened in the second condition, where the children received character information about their benefactors. One of the individuals was said to always be mean, having hit someone the other day while playing; the other was said to always be nice, having hugged someone the other day instead. The mean person was always offering more stickers than the nice one. In this condition, the children tended to shun the larger quantity of stickers in most cases: when the sticker ratio was 2:1, fewer than 25% of children accepted the larger offer from the mean person; the 4:1 and 8:1 ratios were accepted about 40% of the time, and the 16:1 ratio 65% of the time. While more is better in general, it is apparently not enough better for children to overlook the character information at times. People appear willing to forgo receiving altruism when it’s coming from the wrong type of person. Fascinating stuff, especially when one considers that such refusals end up leaving the wrongdoers with more resources than they would otherwise have (if you think someone is mean, wouldn’t you be better off taking those resources from them, rather than letting them keep them?).

This pattern was replicated in 64 very young children (approximately one year old). In this experiment, the children observed a puppet show in which two puppets offered them crackers, with one offering a single cracker and the other offering either 2 or 8. Again, unsurprisingly, the majority of children accepted the larger offer, regardless of how much larger it was (24 of 32 children). In the character information condition, one puppet was shown to be a helper, assisting another puppet in retrieving a toy from a chest, whereas the other puppet was a hinderer, preventing another from retrieving a toy. The hindering puppet, as before, now offered the greater number of crackers, whereas the helper only offered one cracker. When the hindering puppet was offering 8 crackers, his offer was accepted about 70% of the time, which did not differ from the baseline group. However, when the hindering puppet was only offering 2, the acceptance rate was a mere 19%. Even young children, it would seem, are willing to avoid accepting altruism from wrongdoers, assuming the difference in offers isn’t too large.

“He’s not such a bad guy once you get $10 from him”

While neat, these results call for a deeper explanation as to why we should expect such altruism to be rejected. I believe hints of this explanation are provided by the way Tasimi & Wynn (2016) write about their results:

Taken together, these findings indicate that when the stakes are modest, children show a strong tendency to go against their baseline desire to optimize gain to avoid ‘‘doing business” with a wrongdoer; however, when the stakes are high, children show more willingness to ‘‘deal with the devil…”

What I find strange about that passage is that children in the current experiments were not “doing business” or “making deals” with the altruists; there was no quid pro quo going on. The children were no more doing business with the others than a nursing infant is doing business with its mother. Nevertheless, there appears to be an implicit assumption being made here: an individual who accepts altruism from another is expected to pay that altruism back in the future. In other words, merely receiving altruism from another generates the perception of a social association between the donor and recipient.

This creates an uncomfortable situation for the recipient in cases where the donor has enemies. Those enemies are often interested in inflicting costs on the donor or, at the very least, withholding benefits from him. In the latter case, this makes that social association with the donor less beneficial than it otherwise might be, since the donor will have fewer expected future resources to invest in others if others don’t help him; in the former case, not only does the previous logic hold, but the enemies of your donor might begin to inflict costs on you as well, so as to dissuade you from helping him. To put this into a quick example: Jon – your friend – goes out and hurts Bob, say, by sleeping with Bob’s wife. Bob and his friends, in response, both withhold altruism from Jon (as punishment) and might even be inclined to attack him for his transgression. If they perceive you as helping Jon – either by providing him with benefits or by preventing them from hurting him – they might be inclined to withhold benefits from, or punish, you as well until you stop helping Jon; a form of indirect punishment. To turn the classic phrase, the friend of my enemy is my enemy (just as the enemy of my enemy is my friend).

What cues might they use to determine whether you’re Jon’s ally? Well, one likely useful cue is whether Jon directs altruism towards you. If you are accepting his altruism, this is probably a good indication that you will be inclined to reciprocate it later (else risk being labeled a social cheater or free rider). If you wish to avoid condemnation and punishment by proxy, then, one route to take is to refuse benefits from questionable sources. This risk can be overcome, however, in cases where the morally-questionable donor is providing you a large enough benefit – which, indeed, was precisely the pattern of results observed here. What counts as “large enough” should be expected to vary as a function of a few things, most notably the size and nature of the transgressions, as well as the degree of expected reciprocity. For example, receiving large donations from morally-questionable donors should be expected to be more acceptable to the extent the donation is made anonymously rather than publicly, as anonymity might reduce the perceived social association between donor and recipient.

You might also try only using “morally clean” money

Importantly (as far as I’m concerned), these data fit well within my theory of morality – where morality is hypothesized to function as an association-management mechanism – but not particularly well with other accounts: altruistic accounts of morality should predict that more altruism is still better; dynamic coordination says nothing about accepting altruism, as giving isn’t morally condemned; and self-interest/mutualistic accounts would, I think, also suggest that taking more money would still be preferable, since you’re not trying to dissuade others from giving. While I can’t help but feel some disappointment that I didn’t carry this research out myself, I am both happy with the results that came of it and satisfied with the methods utilized by the authors. Getting research ideas scooped isn’t so bad when they turn out well anyway; I’m just happy to see my main theory supported.

References: Tasimi, A. & Wynn, K. (2016). Costly rejection of wrongdoers by infants and children. Cognition, 151, 76-79.

Morality, Empathy, And The Value Of Theory

Let’s solve a problem together: I have some raw ingredients that I would like to transform into my dinner. I’ve already managed to prepare and combine the ingredients, so all I have left to do is cook them. How am I to solve this problem of cooking my food? Well, I need a good source of heat. Right now, my best plan is to get in my car and drive around for a bit, as I have noticed that, after I have been driving for some time, the engine in my car gets quite hot. I figure I can use the heat generated by driving to cook my food. It should come as no surprise if you have a couple of objections to my suggestion, mostly focused on the point that cars were never designed to solve the problems posed by cooking. Sure, they do generate heat, but that’s really more of a byproduct of their intended function. Further, the heat they do produce isn’t particularly well-controlled or evenly-distributed. Depending on how I position my ingredients or the temperature they require, I might end up with a partially-burnt, partially-raw dinner that is likely also full of oil, gravel, and other debris that has been kicked up into the engine. Not only is the car engine not very efficient at cooking, then, it’s also not very sanitary. You’d probably recommend that I try using a stove or oven instead.

“I’m not convinced. Get me another pound of bacon; I’m going to try again”

Admittedly, this example is egregious in its silliness, but it does make its point well: while I noted that my car produces heat, I misunderstood the function of the device more generally and tried to use it to solve a problem inappropriately as a result. The same logic also holds in cases where you’re dealing with evolved cognitive mechanisms. I examined such an issue recently, noting that punishment doesn’t seem to do a good job as a mechanism for inspiring trust, at least not relative to its alternatives. Today I wanted to take another run at the underlying issue of matching proximate problem to adaptive function, this time examining a different context: directing aid to the great number of people around the world who need altruism to stave off death and non-lethal, but still quite severe, suffering (issues like alleviating malnutrition and infectious diseases). If you want to inspire people to increase the amount of altruism directed towards these needy populations, you will need to appeal to some component parts of our psychology, so what parts should those be?

The first step in solving this problem is to think about what cognitive systems might increase the amount of altruism directed towards others, and then examine the adaptive function of each to determine whether they will solve the problem particularly efficiently. Paul Bloom attempted a similar analysis (about three years ago, but I’m just reading it now), arguing that empathetic cognitive systems seem like a poor fit for the global altruism problem. Specifically, Bloom makes the case that empathy seems more suited to dealing with single-target instances of altruism, rather than large-scale projects. Empathy, he writes, requires an identifiable victim, as people are giving (at least proximately) because they identify with the particular target and feel their pain. This becomes a problem, however, when you are talking about a population of 100 or 1000 people, since we simply can’t identify with that many targets at the same time. Our empathetic systems weren’t designed to work that way and, as such, augmenting their outputs somehow is unlikely to lead to a productive solution to the resource problems plaguing certain populations. Rather than cause us to give more effectively to those in need, these systems might instead lead us to over-invest further in a single target. Though Bloom isn’t explicit on this point, I feel he would likely agree that this has something to do with empathetic systems not having evolved because they solved the problems of others per se, but rather because they did things like help the empathetic person build relationships with specific targets, or signal their qualities as an associate to those observing the altruistic behavior.

Nothing about that analysis strikes me as distinctly wrong. However, provided I have understood his meaning properly, Bloom goes on to suggest that the matter of helping others involves the engagement of our moral systems instead (as he explains in this video, he believes empathy “fundamentally…makes the world worse,” in the moral sense of the term, and he also writes that there’s more to morality – in this case, helping others – than empathy). The real problem with this idea is that our moral systems are not altruistic systems, even if they do contain altruistic components (in much the same way that my car is not a cooking mechanism even if it does generate heat). This can be summed up in a number of ways, but the simplest is a study by Kurzban, DeScioli, & Fein (2012) in which participants were presented with the footbridge dilemma (“Would you push one person in front of a train – killing them – to save five people from getting killed by it in turn?”). If one was interested in being an effective altruist in the sense of delivering the greatest number of benefits to others, pushing is definitely the way to go under the simple logic that five lives saved is better than one life spared (assuming all lives have equal value). Our moral systems typically oppose this conclusion, however, suggesting that saving the lives of the five is impermissible if it means we need to kill the one. What is noteworthy about the Kurzban et al (2012) paper is that you can increase people’s willingness to push the one if the people in the dilemma (both being pushed and saved) are kin.

Family always has your back in that way…

The reason for this increase in pushing when dealing with kin, rather than strangers, seems to have something to do with our altruistic systems that evolved for delivering benefits to close genetic relatives; what we call kin-selected mechanisms (mammary glands being a prime example). This pattern of results from the footbridge dilemma suggests there is a distinction between our altruistic systems (that benefit others) and our moral ones; they function to do different things and, as it seems, our moral systems are not much better suited to dealing with the global altruism problem than empathetic ones. Indeed, one of the main features of our moral systems is nonconsequentialism: the idea that the moral value of an act depends on more than just the net consequences to others. If one is seeking to be an effective altruist, then, using the moral system to guide behavior seems to be a poor way to solve that problem because our moral system frequently focuses on behavior per se at the expense of its consequences. 

That’s not the only reason to be wary of the power of morality to solve effective altruism problems either. As I have argued elsewhere, our moral systems function to manage associations with others, most typically by strategically manipulating our side-taking behavior in conflicts (Marczyk, 2015). Provided this description of morality’s adaptive function is close to accurate, the metaphorical goal of the moral system is to generate and maintain partial social relationships. These partial relationships, by their very nature, oppose the goals of effective altruism, which are decidedly impartial in scope. The reasoning of effective altruism might, for instance, suggest that it would be better for parents to spend their money not on their child’s college tuition, but rather on relieving dehydration in a population across the world. Such a conclusion would conflict not only with the outputs of our kin-selected altruistic systems, but can also conflict with other aspects of our moral systems. As some of my own, forthcoming research finds, people do not appear to perceive much of a moral obligation for strangers to direct altruism towards other strangers, but they do perceive something of an obligation for friends and family to help each other (specifically when threatened by outside harm). Our moral obligations towards existing associates make us worse effective altruists (and, in Bloom’s sense of the word, morally worse people in turn).

While Bloom does mention that no one wants to live in that kind of strictly utilitarian world – one in which the welfare of strangers is treated equally to the welfare of friends and kin – he does seem to be advocating we attempt something close to it when he writes:

Our best hope for the future is not to get people to think of all humanity as family—that’s impossible. It lies, instead, in an appreciation of the fact that, even if we don’t empathize with distant strangers, their lives have the same value as the lives of those we love.

Appreciation of the fact that the lives of others have value is decidedly not the same thing as behaving as if they have the same value as the ones we love. Like most everyone else in the world, I want my friends and family to value my welfare above the welfare of others; substantially so, in fact. There are obvious adaptive benefits to such relationships, such as knowing that I will be taken care of in times of need. By contrast, if others showed no particular care for my welfare, but rather just sought to relieve as much suffering as they could wherever it existed in the world, there would be no benefit to my retaining them as associates; they would provide me with assistance or they wouldn’t, regardless of the energy I spent (or didn’t) maintaining a social relationship with them. Asking the moral system to be a general-purpose altruism device is unlikely to be much more successful than asking my car to be an efficient oven, asking people to treat others the world over as if they were kin, or asking someone to empathize with 1,000 people. It represents an incomplete view as to the functions of our moral psychology. While morality might be impartial with respect to behavior, it is unlikely to be impartial with regard to the social value of others (which is why, also in my forthcoming research, I find that stealing to defend against an outside agent of harm is rated as more morally acceptable than doing so to buy recreational drugs).

“You have just as much value to me as anyone else; even people who aren’t alive yet”

To top this discussion off, it is also worth mentioning those pesky, unintended consequences that sometimes accompany even the best of intentions. By relieving deaths from dehydration, malaria, and starvation today, you might be ensuring greater harm in future generations in the form of increasing the rate of climate change, species extinction, and habitat destruction brought about by sustaining larger global human populations. Assuming for the moment that were true, would that mean that feeding starving people and keeping them alive today would be morally wrong? Both options – withholding altruism when it could be provided and ensuring harm for future generations – might get the moral stamp of disapproval, depending on the reference group (from the perspective of future generations dealing with global warming, it’s bad to feed; from the perspective of the starving people, it’s bad to not feed). This is why the slight majority of participants in Kurzban et al (2012) reported that pushing and not pushing can both be morally unacceptable courses of action. If we are relying on our moral sense to guide our behavior in this instance, then, we would be unlikely to find much success in our altruistic endeavors.

References: Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptations for moral judgment. Evolution & Human Behavior, 33, 323-333.

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.

Punishment Might Signal Trustworthiness, But Maybe…

As one well-known saying attributed to Maslow goes, “when all you have is a hammer, everything looks like a nail.” If you can only do one thing, you will often apply that thing as a solution to a problem it doesn’t fit particularly well. For example, while a hammer might make for a poor cooking utensil in many cases, if you are tasked with cooking a meal and given only a hammer, you might try to make the best of a bad situation, using the hammer as an inefficient, makeshift knife, spoon, and spatula. That you might meet with some degree of success in doing so does not tell you that hammers function as cooking implements. Relatedly, if I then gave you a hammer and a knife, and tasked you with the same cooking jobs, I would likely observe that hammer use drops precipitously while knife use increases quite a bit. It is also worth bearing in mind that if the only task you have to do is cooking, the only conclusion I’m realistically capable of drawing concerns whether a tool is designed for cooking. That is, if I give you a hammer and a knife and tell you to cook something, I won’t be able to draw the inference that hammers are designed for dealing with nails, because nails just aren’t present in the task.

Unless one eats nails for breakfast, that is

While all that probably sounds pretty obvious in the cooking context, a very similar setup appears to have been used recently to study whether third-party punishment (the punishment of actors by people not directly affected by their behavior; hereafter TPP) functions to signal the trustworthiness of the punisher. In their study, Jordan et al (2016) had participants play a two-stage economic game. The first stage was a TPP game with three players: player A, the helper, was given 30 cents; player B, the recipient, was given nothing; and player C, the punisher, was given 20 cents. The helper could choose to either give the recipient 15 cents or nothing. If the helper decided to give nothing, the punisher then had the option to pay 5 cents to reduce the helper’s pay by 15 cents, or not do so. In this first stage, the first participant would either play one round as a helper or a punisher, or play two rounds: one in the role of the helper and another in the role of the punisher.

The second stage of this game involved a second participant. This participant observed the behavior of the people playing the first game, and then played a trust game with the first participant. In this trust game, the second participant is given 30 cents and decides how much, if any, to send to the first participant. Any amount sent is tripled, and then the first participant decides how much of that amount, if any, to send back. The working hypothesis of Jordan et al (2016) is that TPP will be used as a signal of trustworthiness, but only when it is the only possible signal; when participants have an option to send better signals of trustworthiness – such as when they are in the role of the helper, rather than the punisher – punishment will lose its value as a signal for trust. By contrast, helping should always serve as a good signal of trustworthiness, regardless of whether punishment is an option.
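To make the payoff structure of the two stages concrete, here is a minimal numeric sketch. The dollar (cent) amounts come from the description above; all function and variable names are my own, not the authors’:

```python
# Sketch of the two-stage game in Jordan et al. (2016), amounts in cents.
# Function and parameter names are mine, not from the paper.

def tpp_game(helper_gives: bool, punisher_punishes: bool):
    """Stage 1: the third-party punishment (TPP) game.
    Returns the payoffs of (helper, recipient, punisher)."""
    helper, recipient, punisher = 30, 0, 20
    if helper_gives:
        helper -= 15
        recipient += 15
    elif punisher_punishes:   # punishment is only an option after non-giving
        punisher -= 5         # punisher pays 5 cents...
        helper -= 15          # ...to deduct 15 cents from the selfish helper
    return helper, recipient, punisher

def trust_game(sent: int, returned: int):
    """Stage 2: the sender starts with 30 cents; anything sent is tripled,
    and the trusted player chooses how much of that to return."""
    assert 0 <= sent <= 30 and 0 <= returned <= 3 * sent
    sender = 30 - sent + returned
    trusted = 3 * sent - returned
    return sender, trusted

# A selfish helper who then gets punished ends with 15 cents:
print(tpp_game(helper_gives=False, punisher_punishes=True))  # (15, 0, 15)
# The sender trusts with 20 cents; the trusted player returns half of the 60:
print(trust_game(sent=20, returned=30))  # (40, 30)
```

Note that punishment is costly for the punisher and spiteful toward the helper; no money ever flows to the punisher for punishing, which is what makes any later trust placed in punishers an inference rather than a direct payoff calculation.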

Indeed, this is precisely what they found. When the first participant was only able to punish, the second participant tended to trust punishers more, sending them 16% more in the trust game than non-punishers; in turn, the punishers also tended to be slightly more trustworthy, sending back 8% more than non-punishers. So, the punishers were slightly, though not substantially, more trustworthy than the non-punishers when punishing was all they could do. However, when participants were in the helper role (and not the punisher role), those who transferred money to the recipient were in turn trusted more – being sent an average of 39% more in the trust game than non-helpers – and were, in fact, more trustworthy – returning an average of 25% more than non-helpers. Finally, when the first participant was in the role of both the punisher and the helper, punishment was less common (30% of participants in both roles punished, whereas 41% of participants who were only punishers did) and, controlling for helping, punishers were only trusted with 4% more in the second stage and actually returned 0.3% less.

The final task was less about trust and more about upper-body strength

To sum up, then, when people only had the option to punish others, punishment behavior was used by observers as a cue to trustworthiness. However, when helping was possible as well, punishment ceased to predict trustworthiness. From this set of findings, the authors make the rather strange conclusion that “clear support” was found for their model of punishment as signaling trustworthiness. My enthusiasm for that interpretation is a bit more tepid. To understand why, we can return to my initial example: you have given people a tool (a hammer/punishment) and a task (cooking/a trust game). When they use this tool in the task, you see some results, but they aren’t terribly efficient (16% more trusted and 8% more returned). Then, you give them a second tool (a knife/helping) to solve the same task. Now the results are much better (39% more trusted, 25% more returned). In fact, when they have both tools, they don’t seem to use the first one to accomplish the task as much (punishment falls 11%) and, when they do, they don’t end up with better outcomes (4% more trusted, 0.3% less returned). From that data alone, I would say that the evidence does not support the inference that punishment is a mechanism for signaling trustworthiness. People might try using it in a pinch, but its value seems greatly diminished compared to other behaviors.  

Further, the only tasks people were doing involved playing a dictator and trust game. If punishment serves some other purpose beyond signaling trustworthiness, you wouldn’t be able to observe it there because people aren’t in the right contexts for it to be observed. To make that point clear, we could consider other examples. First, let’s consider murder. If I condemn murder morally and, as a third party, punish someone for engaging in murder, does this tell you that I am more trustworthy than someone else who doesn’t punish it themselves? Probably not; almost everyone condemns murder, at least in the abstract, but the costs of engaging in punishment aren’t the same for all people. Someone who is just as trustworthy might not be willing or able to suffer the associated costs. What about something a bit more controversial: let’s say that, as a third party, I punish people for obtaining or providing abortions. Does hearing about my punishment make me seem like a more trustworthy person? That probably depends on what side of the abortion issue you fall on.

To put this in more precise detail, here’s what I think is going on: the second participant – the one sending money in the trust game, so let’s call him the sender – primarily wants to get as much money back as possible in this context. Accordingly, they are looking for cues that the first participant – the one they’re trusting, or the recipient – is an altruist. One good cue for altruism is, well, altruism. If the sender sees that the recipient has behaved altruistically by giving someone else money, this is a pretty good cue for future altruism. Punishment, however, is not the same thing as altruism. From the point of view of the person benefiting from the punishment, TPP is indeed altruistic; from the point of view of the target of that TPP, the punishment is spiteful. While punishment can contain this altruistic component, it is more about trading off the welfare of others, rather than providing benefits to people per se. While that altruistic component of punishment can be used as a cue for trustworthiness in a pinch when no other information is available, that does not suggest to me sending such a signal is its only, or even its primary, function.

Sure, they can clean the floors, but that’s not really why I hired them

In the real world, people’s behaviors are not ever limited to just the punishment of perpetrators. If there are almost always better ways to signal one’s trustworthiness, then TPP’s role in that regard is likely quite low. For what it’s worth, I happen to think that the role of TPP has more to do with using transient states of need to manage associations (friendships) with others, as such an explanation works well outside the narrow boundaries of the present paper, when things other than unfairness are being punished and people are seeking to do more than make as much money as possible. Finding a good friend is not the same thing as finding a good altruist, and friendships do not usually resemble trust games. However, when all you are observing is unfairness and cooperation, TPP might end up looking a little bit like a mechanism for building trust. Sometimes. If you sort of squint a bit.

References: Jordan, J., Hoffman, M., Bloom, P., & Rand, D. (2016). Third-party punishment as a costly signal of trustworthiness. Nature, 530, 473-476.

Clues To The Function Of Moralistic Punishment

One of the major questions I’ve spent the better part of the last few years trying to work up an answer to is the matter of why – in the adaptive sense – people punish others as third parties moralistically. That is to ask why people condemn and punish others for engaging in behavior perceived to be “wrong”. This is not the same question as why people take revenge on others for their behavior; that kind of punishment isn’t terribly mysterious. Chimps, for instance, will punish those who cheated them, but they seem fairly uninterested in punishing those who cheated others (Riedl, Jensen, Call, & Tomasello, 2012). Humans are a bit different in that regard: not only do we involve ourselves in the disputes of others, but we do so on behalf of people who aren’t kin or even social allies. This is rather strange behavior, even though we do not usually perceive it to be so. I’m not the only one who has attempted an evolutionary explanation for moralistic punishment, of course; in my hubris, however, I do feel that the explanation I ended up floating tends to fit the existing data better than other, contemporary theories. Today, I wanted to try to demonstrate that fit by considering some relatively recent empirical research.

“I don’t know, man; I think it might be a little baggy. Try a size down”

The paper in question (Kurzban, DeScioli, & O’Brien, 2007) concerns audience effects on punishment. Specifically, it addresses the following question: do people punish others more (as third parties) when they are being watched and their decisions are being made public, relative to when they’re anonymous? The question was examined across two experiments. In the first of these, 43 participants viewed the outcome of a previous trust game in which one player had the choice between two different outcomes: $39 for himself and $1 for the other player, or $20 for both. In this case, the player had selected the former, selfish option. The participants in question were provided with $7 and told they could spend any amount of that money – from $0 to $7 – to deduct from the selfish player. For each dollar they spent on deductions, the selfish player would lose $3. Importantly, the participants were either doing so in conditions under which their spending would be unknown to anyone but themselves (n = 24), or conditions where the experimenter would know how much each particular participant spent (n = 19). In the anonymous condition, 38% of participants spent at least some money on punishment, whereas in the public condition, 47% opted to punish something. Importantly, however, the amount punished varied between these conditions: those who did punish tended to spend more in the public condition than in the anonymous one. For instance, though precise numbers are not reported, approximately 20% of the participants in the public condition paid the full $7 to punish the selfish player, as compared with 0% in the private condition.
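The punishment technology here is a simple 3-to-1 fine, and its arithmetic can be sketched as follows. The dollar amounts come from the first experiment described above; the function and parameter names are my own, not the authors’:

```python
# Sketch of the 3-to-1 deduction arithmetic in Kurzban et al. (2007).
# Names are mine; the dollar amounts come from the first experiment.

def third_party_punish(spend: float, endowment: float = 7.0,
                       target_payoff: float = 39.0, rate: float = 3.0):
    """Each dollar the third party spends removes `rate` dollars from the
    selfish player. Returns (punisher's payoff, selfish player's payoff)."""
    assert 0 <= spend <= endowment
    punisher = endowment - spend
    target = max(target_payoff - rate * spend, 0.0)  # payoffs can't go negative
    return punisher, target

# Spending the full $7 strips $21 from the selfish player's $39:
print(third_party_punish(7.0))  # (0.0, 18.0)
# Spending nothing leaves both endowments untouched:
print(third_party_punish(0.0))  # (7.0, 39.0)
```

The point worth noticing is that punishment is strictly costly to the punisher in both conditions; only the audience changes, so any difference in spending between conditions has to be attributed to the observers rather than the payoffs.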

The second experiment examined people’s responses to a different kind of economic game – a prisoner’s dilemma – where one player attempted to cooperate with their partner, but was met with defection. In terms of the outcome, this means that rather than both players getting $25 (if both had cooperated), the cooperative partner got $0 while the defector got $30. The participants in this experiment were shown this outcome and given $10. As before, the participants could spend any amount of this $10 to deduct from the defecting player at the 3-to-1 rate. In this case, the punishment decisions would either be known by (a) no one else but the participant (n = 31), (b) the experimenter alone (n = 26), or (c) the entire group of participants (n = 30). Again, fewer participants opted to punish at all when punishment was anonymous (42%) relative to when punishment was known to either the experimenter (65%) or the whole group (67%); the amount spent on punishment increased as well (Ms = $1.06, $2.54, and $3.17, respectively). Further, those in the public conditions also tended to display more anger, disgust, and contempt at the cheating, suggesting that they were more upset by the transgression when other people were watching (or they were at least pretending to be).

The existence of audiences seemed to have an important impact on moralistic punishment: not only did the presence of other people affect the percentage of third parties willing to punish at all, but it also increased how much they punished. In a sentence, we could say that the presence of observers was being used as an input by the cognitive systems determining moralistic sentiments. While this may sound like a result that could have been derived without running the experiments, the simplicity and predictability of these findings by no means makes them trivial on a theoretical level when it comes to answering the question, “what is the adaptive value of punishment?” Any theory seeking to explain morality in general – and moral punishment in particular – needs to present a plausible explanation for why cues to anonymity (or the lack thereof) are being used as inputs by our moral systems. What benefits arise from public punishment that fail to materialize in anonymous cases?

“If you’re good at something, never do it for free…or anonymously”

The first theoretical explanation for morality that these results cut against is the idea that our moral systems evolved to deliver benefits to others per se. One common form of this argument is that our moral systems evolved because they delivered benefits to the wider group (in the form of maintaining beneficial cooperation between members) even if doing so was costly in terms of individual fitness. This argument clearly doesn’t work for explaining the present data, as the potential benefits that could be delivered to others by deterring cheating or selfishness do not (seem to) change contingent on anonymity, yet moral punishment does.

These results also cut against some aspects of mutualistic theories of morality. This class of theory suggests that, broadly speaking, our moral sense responds primarily to behavior perceived to be costly to the punisher’s personal interests. In short, third parties do not punish perpetrators because they have any interest in the welfare of the victim, but rather because punishers can enforce their own interests through that punishment, however indirectly. To put the idea into a quick example, I might want to see a thief punished not because I care about the people he harmed, but because I don’t want to be stolen from, and punishing the thief reduces that probability for me. Since my interest in deterring certain behaviors does not change contingent on my anonymity, the mutualistic account might feel some degree of threat from the present data. As a rebuttal, mutualistic theories could argue that my punishment being made public deters others from stealing from me to a greater extent than if no one knew I was the one responsible for punishing. “Because I punished theft in a case where it didn’t affect me,” the rebuttal goes, “this is a good indication I would punish theft that did affect me. Conversely, if I fail to punish transgressions against others, I might not punish them when I’m the victim.” While that argument seems plausible at face value, it’s not bulletproof either. Just because I might fail to go out of my way to punish someone else who was, say, unfaithful in their relationship, that does not necessarily mean I would tolerate infidelity in my own. This rebuttal requires an appreciable correspondence between my willingness to punish those who transgress against others and my willingness to punish those who transgress against me. As much of the data I’ve seen suggests a weak-to-absent link on that front in both humans and non-humans, the argument might not hold much empirical water.

By contrast, the present evidence is perfectly consistent with the association-management explanation posited in my theory of morality. In brief, this theory suggests that our moral sense helps us navigate the social world, identifying good and bad targets for our limited social investment, and uses punishment to build and break relationships with them. Morality, essentially, is an ingratiation mechanism; it helps us make friends (or, alternatively, avoid alienating others). Under this perspective, the role of anonymity makes quite a bit of sense: if no one will know how much you punished, or whether you punished at all, your ability to use punishment to manage your social associations is effectively compromised. Accordingly, third-party punishment drops off in a big way. On the other hand, when people will know about their punishment, participants become more willing to invest in it, given the better estimated social returns. This social return need not reside with the actual person being harmed, either (who, in this case, was not present); it can also come from other observers of the punishment. The important part is that your value as an associate can be publicly demonstrated to others.

The first step isn’t to generate value; it’s to demonstrate it

The lines between these accounts can seem a bit fuzzy at times: good associates are often ones who share your values, providing some overlap between the mutualistic and association accounts. Similarly, punishment, at least from the perspective of the punisher, is altruistic: they are suffering a cost to provide someone else with a benefit. This provides some overlap between the association and altruistic accounts as well. The important step in differentiating these accounts, then, is to look beyond their overlap to domains where they make different predictions about outcomes, or predict the same outcome will obtain, but for different reasons. I feel the results of the present research not only help do that (the findings being inconsistent with group-selection accounts), but also present opportunities for future research (such as examining whether punishment as a third party appreciably predicts revenge).

References: Kurzban, R., DeScioli, P., & O’Brien, E. (2007). Audience effects on moralistic punishment. Evolution & Human Behavior, 28, 75-84.

Riedl, K., Jensen, K., Call, J., & Tomasello, M. (2012). No third-party punishment in chimpanzees. Proceedings of the National Academy of Sciences, 109, 14824-14829.