Moral Stupefaction

I’m going to paint a picture of loss. Here’s a spoiler alert for you: this story will be a sad one.

Mark is sitting in a room with his cat, Tigger. Mark is a 23-year-old man who has lived most of his life as a social outcast. He never really fit in at school and he didn’t have any major accomplishments to his name. What Mark did have was Tigger. While Mark had lived a lonely life in his younger years, that loneliness had been kept at bay when, at the age of 12, he adopted Tigger. The two had been inseparable ever since, with Mark taking care of the cat with all of his heart. This night, as the two lay together, Tigger’s breathing was labored. Having recently become infected with a deadly parasite, Tigger was dying. Mark was set on keeping his beloved pet company in its last moments, hoping to chase away any fear or pain that Tigger might be feeling. Mark held Tigger close, petting him as he felt each breath grow shallower. Then the breaths stopped coming altogether. The cat’s body went limp, and Mark watched the life of the only thing he had loved, and that had loved him, fade away.

As the cat was now dead and beyond experiencing any sensations of harm, Mark promptly got up to toss the cat’s body into the dumpster behind his apartment. On his way, Mark passed a homeless man who seemed hungry. Mark handed the man Tigger’s body, suggesting he eat it (the parasite which had killed Tigger was not transmittable to humans). After all, it seemed like a perfectly good meal shouldn’t go to waste. Mark even offered to cook the cat’s body thoroughly.

Now, the psychologist in me wants to know: Do you think what Mark did was wrong? Why do you think that? 

Also, I think we figured out the reason no one else liked Mark

If you answered “yes” to the first of those questions, chances are that at least some psychologists would call you morally dumbfounded. That is to say you are holding moral positions that you do not have good reasons for holding; you are struck dumb with confusion as to why you feel the way you do. Why might they call you this, you ask? Well, chances are because they would find your reasons for the wrongness of Mark’s behavior unpersuasive. You see, the above story has been carefully crafted to try and nullify any objections about proximate harms you might have. As the cat is dead, Mark isn’t hurting it by carelessly disposing of the body or even by suggesting that others eat it. As the parasite is not transmittable to humans, no harm would come of consuming the cat’s body. Maybe you find Mark’s behavior at the end disgusting or offensive for some reason, but your disgust and offense don’t make something morally wrong, the psychologists would tell you. After hearing these counterarguments, are you suddenly persuaded that Mark didn’t do something wrong? If you still feel he did, well, consider yourself morally dumbfounded as, chances are, you don’t have any more arguments to fall back on. You might even end up saying, “It’s wrong, but I don’t know why.”

The above scenario is quite similar to the ones presented to 31 undergraduate subjects in the now-classic paper on moral dumbfounding by Haidt, Bjorklund, & Murphy (2000). In the paper, subjects were presented with one reasoning task (the Heinz dilemma, asking whether a man should steal to help his dying wife) that involves trading off the welfare of one individual for another, and four other scenarios, each designed to be “harmless, yet disgusting”: a case of mutually-consensual incest between a brother and sister where pregnancy was precluded (due to birth control and condom use); a case where a medical student cuts a piece of flesh from a cadaver to eat (the cadaver was about to be cremated and had been donated for medical research); a chance to drink juice that had a dead, sterilized cockroach stirred in for a few seconds and then removed; and a case where participants would be paid a small sum to sign and then destroy a non-binding contract that gave their soul to the experimenter. In the former two cases – incest and cannibalism – participants were asked whether they thought the act was wrong and, if they did, to try and provide reasons for why; in the latter two cases – roach and soul – participants were asked if they would perform the task and, if they would not, why. After the participants stated their reasons, the experimenter would challenge their arguments in a devil’s-advocate type of way to try and get them to change their minds.

As a brief summary of the results: the large majority of participants reported that having consensual incest and removing flesh from a human cadaver to eat were wrong (in the latter case, I imagine they would similarly rate the removal of flesh as wrong even if it were not eaten, but that’s beside the point), and a similarly-large majority were also unwilling to drink the roach juice or to sign the soul contract. On average, the experimenter was able to change about 16% of the participants’ initial stances by countering their stated arguments. The finding of note that got this paper its recognition, however, is that, in many cases, participants would state reasons for their decisions that contradicted the story (i.e., that a child born of incest might have birth defects, though no child was born due to the contraceptives) and, when those concerns had been answered by the experimenter, that they still believed these acts to be wrong even if they could no longer think of any reasons for that judgment. In other words, participants appeared to generate their judgments of an act first (their intuitions), with the explicit verbal reasoning for their judgments being generated after the fact and, in some cases, seemingly disconnected from the scenarios themselves. Indeed, in all cases except the Heinz dilemma, participants rated their judgments as arising more from “gut feelings” than reasoning.

“fMRI scans revealed activation of the ascending colon for moral judgments…”

A number of facets of this work on moral dumbfounding are curious to me, though. One thing that has always struck me as dissatisfying is that the moral dumbfounding claims being made here are not what I would call positive claims (i.e., “people are using variable X as an input for determining moral perceptions”), but rather negative ones (“people aren’t using conscious reasoning, or at least the parts of the brain doing the talking aren’t able to adequately articulate the reasoning”). While there’s nothing wrong with negative claims per se, I just happen to find them less satisfying than positive ones. I feel that this dissatisfaction owes its existence to the notion that positive claims help guide and frame future research to a greater extent than negative ones (but that could just be some part of my brain confabulating my intuitions).

My main issue with the paper, however, hinges on the notion that the acts in question were “harmless.” A lot is going to turn on what is meant by that term. An excellent analysis of this matter is put forth in a paper by Jacobson (2012), in which he notes that there are perfectly good, harm-based reasons as to why one might oppose, say, consensual incest. Specifically, what participants might be responding to was not the harm generated by the act in a particular instance so much as the expected value of the act. One example offered to help make that point concerns gambling:

Compare a scenario I’ll call Gamble, in which Mike and Judy—who have no creditors or dependents, but have been diligently saving for their retirement—take their nest egg, head to Vegas, and put it all on one spin of the roulette wheel. And they win! Suddenly their retirement becomes about 40 times more comfortable. Having gotten lucky once, they decide that they will never do anything like that again. Was what Mike and Judy did prudent?

The answer, of course, is a resounding “no.” While the winning game of roulette might have been “harmless” in the proximate sense of the word, such an analysis would ignore risk. The expected value of the act was, on the whole, rather negative. Jacobson (2012) goes on to expand the example, asking now whether it would have been OK for the gambling couple to have used their child’s college savings instead. The point here is that consensual incest can be considered similarly dangerous. Just because things turned out well in that instance, it doesn’t mean that harm-based justifications for the condemnation are discountable ones; it could instead suggest that there exists a distinction between harm and risk that undergraduate subjects are not able to articulate well when being challenged by a researcher. Like Jacobson (2012), I would condemn drunk driving as well, even if it didn’t result in an accident.
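To put a number on that intuition, assume – and this is my assumption, since the example doesn’t specify the bet – that the couple put their nest egg on a single number on an American wheel (38 pockets, paying 35:1). Staking an amount $s$ then has an expected value of

$$EV = \frac{1}{38}(35s) + \frac{37}{38}(-s) = -\frac{2s}{38} \approx -0.053s,$$

an expected loss of about 5% of the nest egg, to say nothing of the variance. The act can be imprudent – and condemnable as such – even on the one run where it happens to pay out.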

To bolster that case, I would also like to draw attention to one of the findings of the moral dumbfounding paper I mentioned before: about 16% of participants reversed their moral judgments when their harm-based reasoning was challenged. Though this finding is not often the one people focus on when considering the moral dumbfounding paper, I think it helps demonstrate the importance of this harm dimension. If participants were not using harm (or risk of harm) as an input for their moral perceptions, but rather only a post-hoc justification, these reversals of opinion in the wake of reduced welfare concerns would seem rather strange. Granted, not every participant changed their mind – in fact, many did not – but that any of them did requires an explanation. If judgments of harm (or risk) are coming after the fact and not being used as inputs, why would they subsequently have any impact whatsoever?

“I have revised my nonconsequentialist position in light of those consequences”

Jacobson (2012) makes the point that perhaps there’s a case to be made that the subjects were not necessarily morally dumbfounded as much as the researchers looking at the data were morally stupefied. That is to say, it’s not that the participants didn’t have reasons for their judgments (whether or not they were able to articulate them well) so much as the researchers didn’t accept their viability or weren’t able to see their validity owing to their own theoretical blinders. If participants did not want to drink juice that had a sterilized cockroach dunked in it because they found it disgusting, they are not dumbfounded as to why they don’t want to drink it; the researchers just aren’t accepting the subject’s reasons (it’s disgusting) as valid. If, returning to the initial story in this post, people oppose treating a beloved (but dead) pet in ways more consistent with indifference or contempt because such treatment is offensive, that seems like a fine reason for their judgment. Whether or not offense is classified as a harm by a stupefied researcher is another matter entirely.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

Jacobson, D. (2012). Moral dumbfounding and moral stupefaction. Oxford Studies in Normative Ethics, 2. DOI: 10.1093/acprof:oso/9780199662951.003.0012

Are Consequences Of No Consequence?

Some recent events have led me back to considering the topic of moral nonconsequentialism. I’ve touched on the topic a few times before (here and here). Here’s a quick summary of the idea: we perceive the behaviors of others along some kind of moral dimension, ranging from morally condemnable (wrong) to neutral (right) to virtuous (praiseworthy). To translate those into everyday examples, we might have murder, painting, and jumping on a bomb to save the lives of others. The question of interest is what factors our minds use as inputs to move our perceptions along that moral spectrum; what things make an act appear more condemnable or praiseworthy? According to a consequentialist view, what moves our moral perceptions should be what results (or consequences) an act brings about. Is lying morally wrong? Well, that depends on what things happened because you lied. By contrast, the nonconsequentialist view suggests that some acts are wrong due to their intrinsic properties, no matter what consequences arise from them.

  “Since it’d be wrong to lie, the guy you’re trying to kill went that way”

Now, at first glance, both views seem unsatisfactory. Consequentialism’s weakness can be seen in the responses of people to what is known as the footbridge dilemma: in this dilemma, the lives of five people can be saved from a train by pushing another person in front of it. Around 90% of the time, people judge the pushing to be impermissible, even though there’s a net welfare benefit that arises from the pushing (+4 net lives). Just because more people are better off, it doesn’t mean an act will be viewed as moral. On the other hand, nonconsequentialism doesn’t prove wholly satisfying either. For starters, it doesn’t convincingly outline what kind of thing(s) make an act immoral and why they might do so; just that it’s not all in the consequences. Referencing the “intrinsic wrongness” of an act to explain why it is wrong doesn’t get us very far, so we’d need further specification. Further, consequences clearly do matter when it comes to making moral judgments. If – as a Kantian categorical imperative might suggest – lying is wrong per se, then we should consider it immoral for a family in 1940s Germany to lie to the Nazis about hiding a Jewish family in their attic (and something tells me we don’t). Finally, we also tend to view acts not just as wrong or right, but wrong to differing degrees. As far as I can tell, the nonconsequentialist view doesn’t tell us much about why, say, murder is viewed as worse than lying. As a theory of psychological functioning, nonconsequentialism doesn’t seem to make good predictions.

This tension between moral consequentialism and nonconsequentialism can be resolved, I think, so long as we are clear about what consequences we are discussing. The most typical type of consequentialism I have come across defines positive consequences in a rather specific way: the most amount of good (i.e., generating happiness or minimizing suffering) for people (or other living things) on the whole. This kind of consequentialism clearly doesn’t describe how human moral psychology functions very well, as it would predict people would say that killing one person to save five is the moral thing to do; since we don’t tend to make such judgments, something must be wrong. If we jettison this view that increasing aggregate welfare is something our psychology was selected to do and replace it instead with the idea that our moral psychology functions to strategically increase the welfare of certain parties at the expense of others, then the problem largely dissolves. Explaining that last part requires more space than I have here (which I will happily make public once my paper is accepted for publication), but I can at least provide an empirical example of what I’m talking about now.

This example will make use of the act of lying. If I have understood the Kantian version of nonconsequentialism correctly, then lying should be immoral regardless of why it was done. Phrased in terms of a research hypothesis concerning human psychology, people should rate lying as immoral, regardless of what consequences accrued from the lie. If we’re trying to derive predictions from the welfare maximization type of consequentialism, we should predict that people will rate lying as immoral only when the negative consequences of lying outweigh the positive ones. At this point, I imagine you can all already think of cases where both of those predictions won’t work out, so I’m probably not spoiling much by telling you that they don’t seem to work out in the current paper either.

Spoiler alert: you probably don’t need that spoiler

The paper, by Brown, Trafimow, & Gregory (2005), contained three experiments, though I’m only going to focus on the two involving lying for the sake of consistency. In the first of these experiments, 52 participants read about a person – Joe – who had engaged in a dishonest behavior for one of five reasons: (1) for fun, (2) to gain $1,000,000, (3) to avoid losing $1,000,000, (4) to save his own life, or (5) to save someone else’s life. The subjects were then asked to, among other things, rate Joe on how moral they thought he was from -3 (extremely immoral) to +3 (extremely moral). Now a benefit of $1,000,000 should, under the consequentialist view, make lying more acceptable than when it was done just for fun, as there is a benefit to the liar to take into account; the nonconsequentialist account, however, suggests that people should discount the million when making their judgments of morality.

Round 1, in this case, went to the nonconsequentialists: when it came to lying just for fun, Joe was rated at a -1.33 on average; lying for money didn’t seem to budge the matter much, with a -1.73 rating for gaining a million and a -0.6 for avoiding the loss of a million. Statistical analysis found no significant differences between the two money conditions and no difference between the combined money conditions and the “for fun” category. Round 2 went to the consequentialists, however: when it came to the saving lives category, lying to save one’s own life was rated as slightly morally positive (M = 0.81), as was lying to save someone else’s (M = 1.36). While the difference was not significant between the two life-saving groups, the two were different from the “for fun” group. That last finding required a little bit of qualification, though, as the situation being posed to the subjects was too vague. Specifically, the question had read “Joe was dishonest to a friend to save his life”, which could be interpreted as suggesting that either Joe was saving his own life or his friend’s life. The wording was amended in the next experiment to read that “…was dishonest to a friend to save his own life”. The “for fun” condition was also removed, leaving the dishonest behavior without any qualification in the control group.

With the new wording, 96 participants were recruited and given one of three contexts: George being dishonest for no stated reason, to save his own life, or to save his friend’s life. This time, when participants were asked about the morality of George’s behavior, a new result popped up: being dishonest for no reason was rated somewhat negatively (M = -0.5) as before but, this time, being dishonest to save one’s own life was similarly negative (M = -0.4). Now saving a life is arguably more of a positive consequence than being dishonest is negative when considered in a vacuum, so the consequentialist account doesn’t seem to be faring so well. However, when George was being dishonest to save his friend’s life, the positive assessments returned (M = 1.03). So while there was no statistical difference between George lying for no reason and to save his own life, both conditions were different from George lying to save the life of another. Framed in terms of the Nazi analogy, I don’t see many people condemning the family for hiding Anne Frank.

 The jury is still out on publishing her private diary without permission though…

So what’s going on here? One possibility that immediately comes to mind from looking at these results is that consequences matter, but not in the average-welfare-maximization sense. In both of these experiments, lying was deemed to be OK so long as someone other than the liar was benefiting. When someone was lying to benefit himself – even when that benefit was large – it was deemed unacceptable. So it’s not just that the consequences, in the absolute sense, matter; their distribution appears to be important. Why should we expect this pattern of results? My suggestion is that it has to do with the signal that is sent by the behavior in question regarding one’s value as a social asset. Lying to benefit yourself demonstrates a willingness to trade off the welfare of others for your own, which we want to minimize in our social allies; lying to benefit others sends a different signal.

Of course, it’s not just that benefiting others is morally acceptable or praiseworthy: lying to benefit a socially-undesirable party is unlikely to see much moral leniency. There’s a reason the example people use for thinking about the morality of lying involves hiding Jews from the Nazis, rather than lying to Jews to benefit the Nazis. Perhaps the lesson here is that trying to universalize morality doesn’t do us much good when it comes to understanding it, despite our natural inclination to view morality as something more than a matter of personal preference.

References: Brown, J., Trafimow, D., & Gregory, W. (2005). The generality of negative hierarchically restrictive behaviors. British Journal of Social Psychology, 44, 3-13.

Lots And Lots Of Hand-Wringing

There’s a well-known quote that was said to be uttered when someone heard about Darwin’s theory of evolution for the first time: “Let us hope it is not true, but if it is, let us pray it does not become widely known”. Darwinian theory was certainly not the first theory that got people worrying about the implications of its being true, nor will it be the last. Pascal’s wager, for instance, attempted to suggest that belief in a deity would be a fine one to adopt, as the implications for being wrong about the belief might involve spending an eternity in torment (depending on which version of said deity we’re talking about), while believing incorrectly that a god exists doesn’t carry nearly as many potential costs. More recent worries have suggested that if global warming is real and caused by human activity, then we might want to knock it off with all the fossil fuel burning before we do (any more) serious damage to the planet; others worry about the implications of that belief being wrong, suggesting it might harm the economy to impose new regulations on business owners over nothing. While we could document a seemingly-endless list of examples of people worrying about the implications of this or that idea, today we actually get a rare chance to examine whether some of those worries about the implications of an idea are grounded in reality.

“I don’t believe in your academic work…because of the implication”

Now, of course, the implications which flow from a belief if it were true in no way affect whether or not the belief happens to be true. Our Victorian woman fretting over what might happen if evolution is true in no way changed the truth value of the claim. Given that the truth value isn’t affected, and that we here in the academic portion of the world might fancy ourselves as fighters over the truth of a claim, the implications which flow from an idea can be shrugged off as matters that don’t concern us. Still, one might wonder what precisely our Victorian was wringing her hands about; what consequences the world might suffer if people began to believe evolution was true and behaved accordingly. If she’s anything like some of the more contemporary critics of evolutionary theory in general – and evolutionary psychology in particular – she might have been worried that if people believe that the theory is true, then people have no reason to avoid being amoral psychopaths, killing and raping their way through life. The argument, I think, is that people might begin to justify things like rape and murder as natural if [behavior is genetically determined in some sense/God didn't create people and care very deeply about what they do], and therefore justifiable. If one is interested in avoiding the nasty consequences of beliefs, well, all that rape and murder might be a good place to start.

On a philosophical level, I happen to think that such a concern is rather strange. This strangeness arises from the fact that if, say, rape and murder are natural (and therefore justifiable, according to the argument), condemnation of such acts is, well, also natural and therefore acceptable. I’m not sure that this line of argument really gets anyone anywhere. It’s the same kind of reasoning that crops up concerning the issue of free will and morality: in short, when confronted with the idea of determinism, people seem to feel that acts like murder don’t require a justification, but acts like morally condemning others for murder do, leaving us with the rather odd situation where people feel it wouldn’t be justifiable to condemn someone for killing another person, but the killing itself is fine. Why people attach so much importance to trying to justify their moral judgments like that is certainly an interesting topic, but I wanted to bring the focus away from philosophy and back to the implications of evolutionary theories.

Some people have made the argument that if people believe in an evolutionary theory, then they will subsequently fail to condemn something the arguer would like to condemn. Foregoing the matter of whether the evolutionary theory in question is true, we can consider whether the concern about its implications is warranted. This is precisely what Dar-Nimrod et al (2011) set out to do, examining whether exposing male participants to different explanations for a behavior – specifically, an evolutionary explanation and a social-constructivist one – led to any changes in their condemnation of sex crimes, relative to a control condition. If evolutionary theories are used, even non-consciously, to justify certain behaviors morally (what the authors call a “get-out-of-jail-free card”), we should expect that evolutionary explanations will lead people to be less punitive of the sexual crimes. In the first experiment, the authors examined men’s reactions to an instance of a man soliciting a prostitute for sex; in the second, they examined men’s reactions to an instance of rape.

“Participants were subsequently followed to see if they sought out prostitutes”

The first study only made use of 58 participants (two of whom were dropped) across three conditions, which makes me a little wary owing to small-sample-size concerns. Nevertheless, the participants either read about a social-constructivist theory (stressing power structures between men and women in relation to sexual behavior), an evolutionary theory (stressing parental investment and reproductive potential), or neither. They were subsequently asked to suggest how much bail a man (John) should have to pay for attempting to solicit a prostitute who was actually an undercover policewoman (anywhere from $50-1000). After controlling for how much bail the participants set for a shoplifter, the results showed a significant difference between the conditions: in the control condition, men set an average bail of $267 for John. In the evolutionary condition, the bail was set around $301, and it was around $461 in the social-constructivist condition. The difference was significant between the social-constructivist and evolutionary conditions, but not between the evolutionary and control conditions.

In the next study, the setup was largely similar. Sixty-seven participants read about an evolutionary argument concerning why rape might have been adaptive, a social-constructivist argument about how more porn in circulation was correlated with more rape, or a control condition about, I think, sexual relationships between older people. They were asked to assess the scientific significance of the evidence they read about, and then asked about the acceptability of the behavior of a man (“Thomas”) who persisted in asserting his sexual desires on a woman who willingly kissed him but explicitly objected to anything further (date rape). The results showed that men rated the scientific significance of each theory to be comparable (which, I should note, is funny, given that the relationship between porn and rape goes in the opposite direction). Additionally, those reading the social explanation thought men had more control over their sexual urges (M = 5.4), relative to the control condition (M = 4.6) or evolutionary condition (M = 4.2). Similarly, those in the social condition rated sexual aggression less positively (M = 3.0) relative to the evolutionary (M = 3.8) and control (M = 3.6) conditions. Finally, the same pattern held for punitive judgments.

Summarizing the results, then, we get the following pattern: while exposure to certain social theories enhanced people’s moral condemnation of particular criminal sex acts, relative to the control condition, the evolutionary theories didn’t have any effect in particular. They certainly didn’t seem to justify sexual assault, as some feared they might. Precisely why the social theories enhanced condemnation is a separate matter, with the authors postulating that it might have something to do with the language and variable-focus that they used, noting that, with different phrasing, it might be possible to eliminate that difference. The important point as far as I’m concerned, though, is that evolutionary explanations (at least these ones) didn’t seem to lead to any of the horrific consequences detractors of the field sometimes imagine they would. In other words, if the evolutionary theories are true, we need not pray they do not become widely known.

So we can all safely move onto the next moral panic

Given that many critics of evolutionary psychology have made reference to this get-out-of-jail-free concern, it seems plausible that their worries are based on some misunderstandings of, or misinformation about, the field (or, more generously, a concern that other people will generate such misunderstandings intuitively, even though the critic is in masterful command of the subject himself). That is to say, roughly, someone says “evolutionary” and the receiver hears “genetic”, “predetermined”, and/or fears that other people will. However, if that explanation is true, I would find it curious that it didn’t seem to show up in the results. More precisely, if people substitute “genetic” for “evolutionary”, we might have expected to see the evolutionary explanations reduce judgments of condemnation, rather than do nothing to them*. It is possible that the effect could be witnessed in topics other than sex, perhaps owing to people treating sex differences in behavior as intuitively genetically based, but I suppose only future research will shed light on the issue.

*(For some reason, genetic explanations seem to reduce the severity of moral judgments. I would be interested to see if participants reading about how moral condemnation is genetically determined subsequently condemn more or less than others)

References: Dar-Nimrod, I., Heine, S., Cheung, B., & Schaller, M. (2011). Do scientific theories affect men’s evaluations of sex crimes? Aggressive Behavior, 37, 440-449.

“There Are No Girls On The Internet”

“I’ve discovered through the internet you can do anything you want so long as no one sees your face; it’s like the wild west over here” -Carl

Today is another leisurely day for me, so I’ll be writing about something less research-based and more in the realm of argumentative fun. Many people have recently become aware of the site 4chan, owing to the site being the platform for the recent massive leak of celebrity nude photos acquired from breaches of their accounts on iCloud servers. The leak has been dubbed “The Fappening”, which seamlessly combines the internet’s collective love of both masturbation and M. Night Shyamalan puns. In any case, as anyone remotely familiar with 4chan should know, the users, at least some and perhaps most of them, pride themselves on the fact that the site is widely considered to be a cesspool of the internet’s waste. This allows them a certain leisure in expressing views which are, shall we say, less than orthodox. There is a saying originating from the site that goes, “There are no girls on the internet”, though most of you have probably heard it by another name: “Tits or gtfo”. Examining this phrase in somewhat greater detail provides us with an interesting window into men and women’s psychology: both in terms of how we tend to perceive the world, and how others in the world tend to perceive and react to us in turn. Buckle up, because today should be fun.

Always take proper precautions when venturing into the internet

So let’s start with a quick breakdown of the phrase, “There are no girls on the internet”. One 4chan user helpfully provides the meaning of the phrase here, and the heart of the idea is as follows: in offline life, people tend to respond to women in certain, positive ways simply because they are women, rather than because of anything else particularly noteworthy about them. By contrast, the user implies that life on the internet is more of a meritocracy where gender should play no particular role in how people respond to you. Accordingly, when women try to draw attention to their gender online, they are trying to cheat the system and receive a certain type of preferential treatment on that basis alone; the implication is that people online don’t, or shouldn’t, take kindly to that kind of behavior. This idea of “there are no girls on the internet” was then morphed into the phrase “tits or gtfo”, with the latter phrase suggesting that if women want to call attention to their gender, they should just post a naked picture of themselves as an admission that there is nothing else interesting about them and they can’t stand on their own personality and intellect without relying on their gender to support them.

Now this sentiment might strike some people as profoundly misogynistic, perhaps owing to the manner in which it is expressed. At its core, though, it seems to be a rather egalitarian idea: gender shouldn’t matter when it comes to how people interact with each other, and preferential treatment on that basis should be done away with. The reason I’m discussing this sentiment is to contrast it with another perspective I’ve come across recently; one that suggests women aren’t welcome on the internet. This perspective holds that women online – and offline, for that matter – are subject to disproportionate amounts of harassment simply because they are women, rather than owing to any kind of behavior they enact or things they say. These two perspectives seem to be at substantial odds with one another with respect to one critical detail: do people like women for being women, or do people hate women for being women?

Obviously, the question is too simplistic and paints the issues with far too broad of a brush to be a meaningful one, but let’s try to answer it anyway; just for fun. To answer such a question one needs to begin with some kind of standard as to what counts as appropriate or inappropriate treatment. Let’s return to the Fappening as an example. Some posts on Jezebel.com find it appalling that certain sites won’t take down the nude pictures of Jennifer Lawrence, citing concerns for the privacy and sensitivity of the women in question as the justification for their being removed. Other posts suggest that it is good that charitable donations motivated by the Fappening are being refused, because the money isn’t coming from the right places. Now that we know Jezebel’s stance on the matter of respecting people’s privacy, we can turn to their sister site, Gawker.com (both sites are owned by Gawker Media). Gawker seemed more than happy to take a stand against respecting people’s privacy by previously posting links to the Hulk Hogan sex tape, suggesting that “we love to watch famous people have sex” and are not terribly troubled by the fact that Hogan was secretly filmed and did not want the video to be released; they were so unimpressed by Hogan’s complaints, in fact, that they initially refused to comply with his request to have it removed.

Sure, the situations are a bit different: Jennifer had her privacy breached by people breaking into her online account, whereas Hogan was covertly and unknowingly filmed. While I can’t say for certain whether the writers at Jezebel would be totally happy with someone filming himself having sex with Jennifer without her knowledge and releasing the tape despite her protests, my inclination is to think they would condemn such actions. Also, to the best of my knowledge, no one has claimed that whoever released the Hulk Hogan tape “loathes men” and “wants to punish men just for existing”, though some have suggested this as being the motivation for the Fappening pictures being stolen, just substituting “women” for “men”.

“The only conceivable reason to want to see her naked is because you hate women…”

Though not conclusive by any means, these two cases suggest that it’s plausible that the same behavior directed at, or enacted by, men and women will not always be met with a uniform response. Maddox, over at The Best Page In The Universe, recently put out a new article and video outlining other instances of this kind of double standard with respect to comic book characters; a topic which I have touched on before myself with respect to both superheroes and Rolling Stone covers. In the video, Maddox shows, quite clearly, that Spiderman and Spiderwoman have been depicted in almost identical poses on the covers of comics, but the female version was apparently perceived to be overly sexualized and an embarrassment by some, while the male version apparently went unnoticed. There’s also the research I’ve covered before suggesting that women appear to get reduced sentences for similar crimes, relative to men, if you’re looking for something less anecdotal.

Now none of this is intended to generate some kind of competition over whether men or women, as groups, have it worse. Rather, the point of this analysis is to suggest that men and women, on the whole, tend to have it differently. There are relatively-unique adaptive problems that each sex has tended to face over our evolutionary history, and, as such, we should expect some differences in the psychological modules possessed by men and women. This can cause something of a problem when it comes to discourse regarding whether, say, women are facing a disproportionate amount of harassment online, because what counts as harassment in the first place might be perceived differently; we are all swimming in seas of subjective perceptions that our minds create, rather than bringing them in from the outside world in some kind of objective fashion. What is “threatening” to one person might be innocuous to another, depending on the precise nature of the stimulus and of the mind perceiving it.

For instance, Amanda Hess references an uncited study that found “accounts with feminine usernames incurred an average of 100 sexually explicit or threatening messages a day. Masculine names received 3.7.” Why are “sexually explicit” and “threatening” messages grouped together in that sentence? Well, if the results of Clark and Hatfield’s classic 1989 study are any indication, it’s likely because women might perceive a good deal of unsolicited sexual attention as unpleasant, or as harassment. However, men might perceive that same sexual attention as pleasant and welcome. It is also likely that women will receive a lot more unsolicited sexual attention from men than men will from women, owing to the minimum requisite biological costs of reproduction. Grouping “threatening” or “harassment” in the same category as “propositioning” strikes me as precisely the type of thing that can lead to disagreement over how much harassment is going on online. (I think this study is what Amanda is referring to, in which case “feeling horny?” counts as a threatening or sexually explicit message; it’s certainly one of those things, anyway…)

This is a somewhat long-winded way of suggesting that men and women might, and likely do, tend to both perceive the world differently and expect to be treated in particular fashions. If people expect some standard of treatment they are not receiving, they might come to perceive the treatment they get as being overly hostile, unwelcoming, or unfair even if they receive the same treatment as everyone else. This point works just as well for people reacting to the treatment of others: if I expect you to get a certain level of treatment and you don’t, I might try to come to your aid and condemn others for how they behaved on your behalf. That’s not to say that people are, in fact, getting equal treatment in all cases regardless of gender (they often don’t); just to point out that our perceptions of it might differ even if they were.

I’m not saying that such treatment isn’t hostile either; plenty of treatment people receive online is downright threatening, from death threats, to abuse fantasies, to plain old public shaming and ridicule. I’ve received a series of what one might consider abusive messages from strangers online after winning a game we were playing, and that was only after 30 seconds to five minutes of interacting without even talking in a recreational activity; an experience not unique to me by any means. One could imagine that the frequency and intensity of this abuse increases substantially as one becomes more publicly known or begins voicing controversial opinions widely (like calling an entire subculture bigoted or not supporting dedicated servers for your FPS).

“Thanks for your thoughtful message, XxXx420NoScopeFgtxXxX”

In fact, one very reasonable suggestion is that the vitriol present in some of the harassment people receive online is designed specifically to get a rise out of the person receiving it; it’s the M.O. of the internet troll. When it comes to women receiving harassment, for instance, we might expect that women receive particular types of abuse because women tend to be most bothered by it, but they do not receive abuse because they are women. The goal of those sending the abuse is not to make some kind of social or political statement about an issue or express contempt for an entire gender; it’s just to get under someone’s skin. However, when a different group is being targeted for harassment, the content of the harassment should be expected to shift accordingly.

A good example of this would be 4chan’s trolling of the MMA fighter War Machine (which, I am told, is now his legal name): when users on 4chan found out that War Machine’s father had died after his son’s unsuccessful CPR attempt, they began to tell War Machine he had killed his father (on the anniversary of the death, I would add). This harassment didn’t take that form because people hate those who perform CPR, fathers, MMA fighters, or men more generally; it only took that form because it was what people thought would get the best rise out of him. Judging from the subsequent self-inflicted injuries War Machine documented publicly, the attempt was pretty successful.

“That’ll show ‘em…”

However, just like the immediate point of many trolling comments is to upset others, rather than to make some honest statement, the reactions people have to online harassment should be expected to be every bit as strategic as the trolls themselves, even if not consciously so. Just like the Gawker sites don’t appear to be consistently concerned with privacy (“Yes” with respect to Jennifer, “No” with respect to Hogan), and just like people don’t perceive Spiderman and Spiderwoman to be equally sexualized despite near identical poses on their covers, so too might outrage over online harassment not be evenly spread between targets, even if the harassment itself is quite similar. So, whether the internet is a place of general equality with respect to gender or hostility towards women depends, in no small part, on what kind of treatment people are expecting each gender to receive.

That said, I wouldn’t want to accuse any person or group of over-reacting to the harassment they receive just for being them; I’m sure that harassment is particularly unique, and evidence of a widespread bias against you and your friends.

Perverse Punishment

There have been a variety of studies conducted in psychology examining what punishment is capable of doing; mathematical models have been constructed too. As it turns out, when you give some people the option to inflict costs on others, the former are pretty good at manipulating the behavior of the latter. The basic principle is, well, pretty basic: there are costs and benefits to acting in various fashions and, if you punish certain behaviors, you shift the plausible range of self-interested behaviors. Stealing might be profitable in some cases, unless I know that it will, say, land me in jail for five years. Since five years in jail is a larger cost than the benefit I might reap from stealing (provided I am detected, of course), the incentive not to steal is larger, and people don’t take things which aren’t theirs. The power of punishment is such that, in theory, it is capable of making people behave in pretty much any conceivable fashion so long as they are making decisions on the basis of some kind of cost/benefit calculation. All you have to do is make the alternative courses of action costlier, and you can push people towards any particular path (though if people behave irrespective of the costs and benefits, punishment is no longer effective).
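To make that calculation explicit (a standard expected-value framing on my part, not anything specific to the study below): if an act yields a benefit $b$, is detected with probability $p$, and detection brings a punishment cost $c$, then a cost/benefit calculator takes the act only when

$$b - pc > 0,$$

which means a punisher can, in principle, deter any such act by raising $c$ (or $p$) until $pc > b$.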

Now, in most cases, the main focus of this research on punishment has been on what one might dub “normal” punishment. A case of normal punishment would involve, say, Person A defecting on Person B, followed by person B then punishing person A. So, someone behaves in an anti-social fashion and gets punished for it. This kind of punishment is great for maintaining cooperation and pointing out how altruistic people are. However, a good deal of punishment in these experiments is what one might dub “perverse”.

“Yes; quite perverse indeed…”

By perverse punishment, I am referring to instances of punishment where people are punished for giving up their own resources and benefiting others. That people are getting punished for behaving altruistically is rather interesting, as the pro-social behavior being targeted for punishment is, at least in the typical experiments, benefiting the people enacting the punishment. As we tend to punish behavior we want to see less of, and self-benefiting behavior is generally something we want more of, the punishment of others for benefiting the punisher appears to be rather strange. Now I think this strangeness can be resolved, but, before doing that, it is worthwhile to consider an experiment examining whether or not punishment is also capable of reducing perverse punishment.

The experiment – by Cinyabuguma, Page, & Putterman (2006) – began with a voluntary contribution game. In games like these (which are also known as public goods games), a number of players start off with a certain pool of resources. In the first stage of the game, each player has the option to contribute any amount of their resources towards the public pool. The resources in this pool get multiplied by some amount and then distributed equally among all the players. The payoffs of these games are such that everyone could do better if they all contributed, but at the individual level contributions make one worse off. So, in other words, you make the most money when everyone else contributes the most and you contribute nothing. In the second stage of the game, the amount that each player has donated to the public good becomes known to everyone else, and each person has the option to “punish” others, which involves giving up some of your own payment to reduce someone else’s payment by 4 times the amount you paid.
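To make that payoff structure concrete, here’s a minimal sketch of one round in Python. The 10-unit endowment and the 4:1 punishment ratio come from the study as described here; the 1.5 multiplier and the three-player group are my own illustrative assumptions.

```python
ENDOWMENT = 10     # each player can contribute up to 10 (as in the study)
MULTIPLIER = 1.5   # assumed; the dilemma requires 1 < multiplier < group size
PUNISH_RATIO = 4   # each unit spent on punishment removes 4 from the target

def stage_one(contributions):
    """Contributions are pooled, multiplied, and split equally."""
    share = sum(contributions) * MULTIPLIER / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

def stage_two(payoffs, punishments):
    """punishments[i][j] is the amount player i spends punishing player j."""
    out = list(payoffs)
    for i, row in enumerate(punishments):
        for j, spent in enumerate(row):
            out[i] -= spent                 # costly to the punisher...
            out[j] -= spent * PUNISH_RATIO  # ...and costlier to the target
    return out

base = stage_one([10, 10, 0])  # two full contributors and one free-rider
print(base)                    # [10.0, 10.0, 20.0] -- the free-rider earns most
# Player 0 spends 1 unit punishing the free-rider (player 2):
print(stage_two(base, [[0, 0, 1], [0, 0, 0], [0, 0, 0]]))
# [9.0, 10.0, 16.0] -- player 1 benefits from the punishment without
# paying for it, a wrinkle that will come up again below
```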

The twist in this experiment is the addition of another condition. In that condition, after the first two steps (first, subjects contribute; second, subjects learn of the contributions of others and can punish them), there was then a round of second-order punishment. What this means is that, after people punished the first time, each participant got to see who punished whom, and could then punish each other again. Simply put: I could punish someone for either punishing me or for punishing someone else. So the first condition allowed for the punishment of contributions alone, whereas the second allowed for both the punishment of contributions and the punishment of punishment. The question of interest is whether or not perverse punishment and/or cooperation was any different between the two.

“It’s still looking pretty perverse to me”

The answer to that question is yes, but the differences are quite slight, and often not significant. When people could only punish contributions, the average contribution was 7.09 experimental dollars (each person could contribute up to 10); when punishment of punishment was also permitted, the average contribution rose ever so slightly, to between 7.35 and 7.97 units. Similarly, earnings increased when people could punish punishment: when second-order punishment was an option, people earned more (about 13.35 units) relative to when it wasn’t (around 12.86 units). So, though these differences weren’t terribly significant, allowing for the punishment of punishers tended to increase the overall amount of money people made slightly.

Also of interest, though, is the nature of the punishment itself. In particular, there are two findings I would like to draw attention to: the first of these is that if someone received punishment for punishing others, they tended to punish less during later periods. In other words, since punishing others was itself punished, less punishment took place (though this seemed to affect the perverse punishment more so than the normal type). This is a fairly expected result.

The second finding I would like to draw attention to concerns the matter of free-riders. Free-riders are individuals who benefit from the public good, but do not themselves contribute to it. Now, in the case of this economic game we’ve been discussing, there are two types of free-riders: the first are people who don’t contribute much to the public good and, accordingly, are targeted for “normal” punishment. However, there are also second-order free-riders; I know this must be getting awfully hard to keep track of, but these second-order free-riders are people who benefit from free-riders being punished, but do not themselves punish others. To put that in simple terms, I’m better off if anti-social people are punished and if I don’t have to be the one to punish them personally. What I find interesting in these results is that these second-order free-riders were not targeted for punishment; instead, those who punished – either normally or perversely – ended up getting punished more as revenge. Predictably, then, those who failed to punish ended up with an advantage over those who did punish. Not only did they not have to spend money on punishing others, but they also weren’t the target of revenge punishment.

So what does all this tell us when it comes to helping us understand perverse punishment, and punishment more generally? Well, part of that answer comes from considering the fact that it was predominantly people who were above/below the average contribution level of the group doing most of the punishing; relatedly, they were largely targeting each other. This suggests, to me anyway, that a good deal of “perverse” punishment is a kind of preemptive defense (or, as some might call it, an offense) against one’s probable rivals. Since low contributors likely have some inkling that those who contribute a lot will preferentially target them for punishment, this “perverse” punishment could simply reflect that knowledge. Such an explanation makes the “perverse” punishment seem a bit less perverse. Instead of reflecting people punishing against their interests, perverse punishment might work in their interests to some degree. They don’t want to be punished, and they are trying to inflict costs on those who would inflict costs on them.

Which at least makes more sense than the “He’s just an asshole” hypothesis…

I think it also helps to consider what patterns of punishment were not observed. As I mentioned initially, people’s payoffs in these games would be maximized if everyone else contributed the maximum and they personally contributed nothing. It follows, then, that one might be able to make himself better off by punishing anyone else who contributes less than the maximal amount, irrespective of how much the punisher contributed. Yet this isn’t what we see. This raises the question, then, of why average contributors don’t receive much punishment, despite them still contributing less than the highest donors. The answer to these questions no doubt lies, in part, in the fact that punishing others is costly, as previously mentioned. Thinking about when punishment becomes less costly should shed light on the matter but, since this has already gone a bit long, I’ll save that speculation for when my next paper gets published.

Reference: Cinyabuguma, M., Page, T., & Putterman, L. (2006). Can second-order punishment deter perverse punishment? Experimental Economics, 9, 265-279.

Is Morality All About Being Fair?

What makes humans moral beings? This is the question that leads off the abstract of a paper by Baumard et al (2013), and it is certainly one worth considering. However, before one can begin to answer that question, one should have a pretty good idea in mind as to what precisely one means by the term ‘moral’. On that front, there appears to be little in the way of consensus: some have equated morality with things like empathy, altruism, impartiality, condemnation, conscience, welfare gains, or fairness. While all of these can be features of moral judgments, none of these intuitions about what morality is tends to differentiate it from the non-moral domain. For instance, mammary glands are adaptations for altruism, but not necessarily adaptations for morality; people can empathize with the plight of sick individuals without feeling that the issue is a moral one. If one wishes to have a productive discussion of what makes humans moral beings, it would seem beneficial to begin from some solid conceptualization of what morality is and what it has evolved to do. If you don’t start from that point, there’s a good chance you’ll end up talking about a different topic than morality.

Thankfully, academia is no place for productivity.

The current paper up for examination by Baumard et al (2013) is a bit of an offender in that regard: their account explicitly mentions that a definition for the term is hard to agree upon, and they use the word “moral” to mean “fair”. To understand this issue, first consider the model that the authors put forth: their account attempts to explain moral sentiments by suggesting that selection pressures might have been expected to shape people to seek out the best possible social deals they could get. In simple terms, the idea contains the following points: (1) people are generally better off cooperating than not, but (2) some individuals are better cooperative partners than others. Since (3) people only have a limited budget of time and energy to spend on these cooperative interactions and can’t cooperate with everyone, we should expect that (4) so long as people have a choice as to whom they cooperate with, people will tend to choose to spend their limited time with the most productive partners. The result is that overly-selfish or unfair individuals will not be selected as partners, resulting in selection pressures generating cognitive mechanisms concerned with fairness or altruism. Their model, in other words, centers around managing the costs and benefits of cooperative interactions. People are moral (fair) because it leads to them being preferred as interaction partners.
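As a toy illustration of that four-point logic – every number here is my own invention, not a parameter from Baumard et al (2013) – consider a tiny partner-choice market in which agents differ in how much of a joint surplus they keep for themselves, and prospective partners are chosen by how much they offer:

```python
import random

SURPLUS = 10   # the joint gain from each cooperative interaction (assumed)
random.seed(1)

# Agents differ in how much of the surplus they keep (2 = over-generous,
# 5 = fair, 8 = selfish); the remainder is offered to whoever picks them.
agents = [{"keep": k, "earnings": 0.0} for k in (2, 5, 8)]

for _ in range(10_000):
    chooser = random.choice(agents)
    candidates = [a for a in agents if a is not chooser]
    # Point (4): choose the candidate offering the largest share
    partner = max(candidates, key=lambda a: SURPLUS - a["keep"])
    chooser["earnings"] += SURPLUS - partner["keep"]
    partner["earnings"] += partner["keep"]

for a in agents:
    print(a["keep"], round(a["earnings"], 1))
# The selfish agent (keep = 8) is never chosen as a partner and finishes
# last, which is the exclusion dynamic described above. Note also that the
# fair agent (keep = 5) out-earns the over-generous one (keep = 2): in a
# partner-choice market, what gets rewarded is being a good enough deal
# to attract partners, not maximal self-sacrifice.
```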

Now that all sounds well and good – and I would agree with each of the points in the line of thought – but it doesn’t sound a whole lot like a discussion about what makes people moral. One way of conceptualizing the idea is to think about a simple context: shopping. If I’m in the market for, say, a new pair of shoes, I have a number of different stores I might buy my shoes from and a number of potential shoes in each store. Shopping around for the shoe I like most at a reasonable price fits all the above criteria in some sense, but shoe-shopping is not itself often a moral task. That a shoe I like is priced at a range higher than I am willing to pay does not necessarily mean I will say that such pricing is wrong the way I might say stealing is wrong. Baumard et al (2013) recognize this issue, noting that a challenge is explaining why people don’t just have selfish motives, but also moral motives that lead them to respect other people’s interests per se.

Now, again, this would be an excellent time to have some kind of working definition of what precisely morality is because, if one doesn’t, it might seem a bit peculiar to contrast moral and selfish motivations – which the authors do – as if the two are opposite ends of some spectrum. I say that because Baumard et al (2013) go on to discuss how people who have truly moral concerns for the welfare of others might be chosen as cooperative partners more often because they’re more altruistic, building up a reputation as a good cooperator, and this is, I think, supposed to explain why we have said moral concerns. So the first problem here is that the authors are no longer explaining morality per se, but rather altruistic behaviors. As I mentioned in the first paragraph, mechanisms for altruism need not be moral mechanisms. The second problem I see is that, provided their reasoning about reputation is accurate (and I think it is), it seems perfectly plausible for non-moral mechanisms to do that job as well: I could simply be selfishly interested in being altruistic (that is to say, I would care about your interests out of my own interests, the same way people might not murder each other because they’re afraid of going to jail or possibly being killed in the process themselves). The authors never address that point, which bodes poorly for their preferred explanation.

“It’s a great fit if you can just look past all the holes…”

More troublingly for the partner-choice model of morality, it doesn’t seem to explain why people punish others for acts deemed immoral. The only type of punishment it seems to account for would be, essentially, revenge, where an individual punishes another to secure their own self-interest and defend against future aggression; it might also be able to explain why someone might not wish to continue working in an unfair relationship. This would leave the model unable to explain any kind of moral condemnation from third parties (those not initially involved in the dispute). It would seem to have little to say about why, for instance, an American might care about the woes suffered by North Korean citizens under the current dictatorship. As far as I can tell, this is because the partner-choice account of morality is a conscience-centric account, and conscience does not explain condemnation; that I might wish to cooperate with ‘fair’ people doesn’t explain why I think someone should be punished for behaving unfairly towards a stranger. The model at least posits that moral condemnation ought to be proportional to the offense (i.e. an eye for an eye), seeking to restore fairness, but not only is this insight not a unique prediction, it’s also contradicted by some data on drunk driving I covered before (that is, unless a drunk man hitting a woman with his car is somehow more “unfair” than a drunk woman hitting a man).

Though I don’t have time to cover every issue I see with the paper in depth (in large part owing to its length), the main issue I see with the account is that Baumard et al (2013) never really define what it is they mean by morality in the first place. As a result, the authors appear to just substitute “altruism” or “fairness” for morality instead. Now if they want to explain either of those topics, they’re more than welcome to; it’s just that calling them morality instead of what they actually mean (fairness) tends to generate quite a bit of well-deserved confusion. In the interests of progress, then, let’s return to the concern I raised about the opening question. When we are asking about what makes people moral, we need to start by considering what morality is. The short answer to that question is that morality is, roughly, a perception: at a basic level, it’s the ability to perceive acts in, or states of, the world along a dimension of “right” or “wrong” in much the same way we might perceive sensations as painful or pleasurable. This spectrum seems to range from the morally-praiseworthy at one end to the morally-condemnable at the other, with a neutral point somewhere in the middle.

Framed in this light, we can see a few rather large problems with conflating morality with things like fairness. The first of these is that perceiving an outcome as immoral would require that one first perceives it as unfair and only then as immoral, as neither the reverse ordering nor one in which both perceptions arise simultaneously makes much sense. If one can have a perception of fairness divorced from a moral perception, then, it seems that perception could do the behavioral heavy lifting when it comes to partner choice. Again, people could be selfishly fair. The second problem becomes apparent when we consider whether perceptions of immorality can be generated in response to acts that do not appear to deal with fairness or altruism. As sexual and solitary behaviors (like incest or drug use) are moralized with some frequency, the fairness account seems to be lacking. In fact, there are even cases where altruistic behavior has been morally condemned by others, which is precisely the opposite of what the Baumard et al (2013) model would seem to predict.

If we reconceptualize such behaviors properly, though…

Instead of titling their paper “A mutualistic approach to morality”, the authors might have been better served by the title “A mutualistic approach to fairness”. Then again, this would only go so far when it comes to remedying the issue, as Baumard et al (2013) never really define what they mean by “fair” either. Since people seem to disagree on that issue with some frequency, we’re still left with more than a bit of a puzzle. Is it fair that very few people in the world hold so much wealth? Would it be fair for that wealth to be taken from them and given to others? People likely have different answers to those questions.

Now the authors argue that this isn’t really that large of a problem for their account, as people might, for instance, disagree as to the truth of a matter while all holding the same concept of truth. Accordingly, Baumard et al (2013) posit that people can disagree about what is fair even if they hold the same concept of fairness. The problem with that analogy, as far as I see it, is that people don’t seem to have competing senses of the word “truth” while they do have different senses of the word “fair”: fairness based on outcome (everyone gets the same amount), based on effort (everyone gets in proportion to what they put in), based on need (those who need the most get the most), and perhaps others still. Which of these concepts people favor is likely going to be context-specific. However, I don’t know that the same can be said of different senses of the word “true”. Are there multiple senses in which something might or might not be true? Are these senses favored contextually? Perhaps there are different senses of the word, but none come to mind as readily.

Baumard et al (2013) might also suggest that by “fair” they actually mean “mutually beneficial” (writing, “Ultimately, the mutualistic approach considers that all moral decisions should be grounded in consideration of mutual advantage”), but we’d still be left with the same basic set of problems. Bouncing interchangeably between three different terms (moral, fair, and mutually-beneficial) is likely to generate more confusion than clarity. It is better to ensure one has a clear idea of what one is trying to explain before one sets out to explain it.

References: Baumard, N., André, J. B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral & Brain Sciences, 36, 59-122.

 

Making Your Business My Business (Part 2)

Social little creatures that we happen to be, people frequently, and often unpleasantly, involve themselves in the affairs of others. The unpleasantness seems to result from the fact that people’s involvement in these affairs is hardly ever selfless or unattached. After all, we wouldn’t tend to get involved in regulating the behaviors of others if there were no benefit to doing so. Sometimes people’s motives can be fairly transparent: for instance, a single guy who’s sexually interested in a girl will probably give her somewhat different relationship advice than that girl’s female friends with no such interest. That example happens to be one where a consciously-held proximate motive lines up fairly well with the ultimate reason for that motive’s existence (i.e. the man is sexually interested in the girl, and the advice he gives might have functioned to secure additional mating opportunities for himself). Other cases are less transparent, owing to people not having much conscious insight into the reasons they have certain thoughts or feelings: as another example, people might be interested in legislating the sexual behavior of others in order to improve the viability of their own sexual strategy. However, on a conscious level, all they might experience is a feeling of “that’s gross” or “that’s not right”.

  “I hate mustard. You shouldn’t be allowed to eat it”

These less-transparent cases pose us with some rather interesting mysteries to solve, mostly because explaining why I might not wish to do something is a much different task than explaining why I might want to stop other people from doing that thing. One such mystery has recently landed at my feet, and I wanted to explore it today in order to try and figure out my own thoughts on the matter (despite the Onion recently telling me such a feat is folly). I came across this example owing to my new-found love for an online card game called Hearthstone. The game operates similarly to other existing card games: you have a pool of cards from which you can select a certain number to build your deck. Every player starts out with a free beginning pool of cards, which they can then expand and improve by opening online packs or unlocking certain class-specific cards through playing that class enough. If you’re really into showing off, there are also golden versions of the cards, which function precisely as the normal ones do, but are much rarer and harder to get. They’re basically the spinning rims of the game.

Now, having access to all of the cards in the game provides an additional degree of flexibility in terms of what decks you can make. Accordingly, many people would prefer to get full collections. Owing to the free-to-play setup of the game, this either takes a lot of time (spent earning in-game packs through play) or a lot of money (you can also buy packs). For the most part, I have taken the former route, and during my time not spent playing, when I should be doing other things, I sometimes like to watch YouTube videos of other people playing to learn and improve (and put off meaningful work). If you’re really looking to kill some time without deluding yourself into thinking you’re spending it productively, though, there are also videos of people just opening online packs. One of those videos is what drew my attention. In the video (which can be viewed here if you have over 2 hours to waste), a man has bought 600 of these online packs for his three-year-old son’s account, which he records opening. In real money, that works out to roughly $750.

What drew my interest was not the video itself; that part is actually quite dull. Instead, I found my reaction to it rather noteworthy: when I saw the video, the first thought to cross my mind was, in essence, “This guy is an asshole for spending so much money on those packs”. This sentiment was apparently not unique to me, judging by some comments. Choice selections include: he’s an idiot (“Blizzard won the battle. Got this idiot to spend more than $750 for his son…“), he’s neglectful of his children (“This isn’t a good dad in any way…I’d take love and care above 600 hearthstone packs anyday“), or just an all-around bad person (“Who is this twat?”). I find my initial reaction, as well as the ones in the comments section, rather puzzling. Specifically, why do such harsh, negative reactions arise to someone spending his money on, at worst, something he likes, and, at best, a gift for his child? At first glance, it certainly doesn’t seem like this purchase is doing any harm to the world.

“Who does that bitch think she is, buying gifts for her child?!”

There are a number of candidate explanations for this apparent moral outrage we might explore. One of these explanations is that perhaps people were outraged because they thought this man was buying a competitive advantage for himself or his child. This is a common complaint leveled by gamers against ostensibly “free-to-play” games: while the game might be “free” to play, one might need to “pay-to-win”, barring an incredible investment of time and energy which very few people with real responsibilities can make. In one respect, then, buying all those packs might have been condemned for the same reason that people condemned wealthy individuals for “renting” disabled tour guides to skip the lines at busy theme parks. If the father is paying to make his child better off relative to other people, then those other people might understandably get a bit salty about their newly-found competitive disadvantage. Without going too in-depth into the mechanics of the game, however, this explanation seems to hinge on the idea that having access to all those cards provides a competitive edge one could not get without them, and there’s a good deal of evidence to suggest that the advantage of the additional cards, if it exists, might be relatively minimal in some cases. In any case, I wouldn’t rule that explanation out as part of the overall picture.

Nevertheless, I don’t think this could possibly be the whole story. I say that for a number of reasons. First, there was a story that I discussed semi-recently: the reactions another comments section had to Kim Kardashian donating 10% of her eBay auction proceeds to charity. In that case, she was not disadvantaging anyone – quite the opposite – but was still branded as a horrible person by many commenters. Second, I tried to imagine whether people would have the same reaction if the father in question had bought 600 packs not for his son, but for a stranger. Though I have no data to support this intuition at the moment, I would predict that you’d see a marked decrease in the amount of condemnation the act would receive if it were directed towards non-kin, despite the fact that the competitive edge it provides would not change. By contrast, I think people would still condemn my buying a handicapped tour guide for someone else to skip the lines at Disney World. I also get the sense that if the father bought his child a $750 toy (or $750 worth of them) instead, people would similarly condemn him for doing so, even though toys provide no real competitive edge in any sense.

Here’s an alternative explanation, and what I think is going on here: in brief, I think the ultimate function of morality, broadly defined, is to manage association values (how valuable you are to me as an ally relative to others, all things considered). The reason acts like Kim’s or this father’s receive moral condemnation is that they evidence a very low welfare-tradeoff-ratio towards others (which is just what it sounds like: how willing I am to trade my welfare off for yours). To use a simple example, the world is full of people who need various things; let’s use food in this instance. As I acquire food, the marginal value of each additional bit of it drops. That is to say, the first bite of food on an empty stomach is worth more than the second bite, which is worth more than the third, and so on. So, as I consume food, the value of additional food drops until it hits the point where I am, on a practical level, not getting any real benefit from having more.

However, as I consume food, other people’s desire for it does not change. The value of the resource for me has thus hit negligible returns at a certain point, whereas the value of that resource for others has not diminished. By continuing to acquire resources for myself at that point, my actions would, implicitly, be suggesting that I value my having a relatively small benefit over someone else having a relatively large one. Given that kin share our genes, acquiring those resources for closely-related others likely sends the same message: I care about my own interests much more than I care about others’. This, in many instances, can make me seem like a poor associate. A very selfish friend doesn’t tend to be a very good one. It’s for this reason that taking care of an orphaned child receives more praise than caring for one’s own child. If one instead acquires a vast amount of resources for someone else, however, there is still that diminishing-returns issue, but you would instead be demonstrating a rather high welfare-tradeoff-ratio towards others, rather than yourself (“I value someone else getting this resource much more than I value myself getting it”). Accordingly, people should probably condemn it less, if not outright praise it.
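Since the argument is basically arithmetic, it can be written down. The concave utility function and the threshold rule below are my own illustrative assumptions – nothing here is a published model, and “welfare-tradeoff ratio” is used informally, as in the text above:

```python
# Toy illustration of the diminishing-returns argument. The utility
# function and the decision rule are illustrative assumptions only.
import math

def utility(units):
    """Concave utility: each additional unit is worth less than the last."""
    return math.log(1.0 + units)

def marginal_benefit(units_held):
    """Value of acquiring one more unit, given what's already held."""
    return utility(units_held + 1) - utility(units_held)

def keeps_resource(my_units, your_units, wtr):
    """A WTR of w means I weight your marginal gain at w times my own.

    I keep the next unit only if my gain outweighs your discounted gain.
    """
    return marginal_benefit(my_units) > wtr * marginal_benefit(your_units)

# A well-stocked actor (10 units) deciding against a needy other (0 units):
for wtr in (0.1, 0.5, 1.0):
    print(f"WTR = {wtr}: keeps the resource -> {keeps_resource(10, 0, wtr)}")
```

Run under these assumptions, only the near-zero WTR still says “keep it”; continuing to hoard once one’s own marginal benefit is negligible is precisely what advertises a low WTR to observers.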

“Sure; someone has to buy it, and they’re going to hell, but at least you’re good”

While such ultimate considerations might be responsible for shaping the cognitive mechanisms that generated the feelings people expressed in these comments sections, that’s no guarantee that people will understand them as such consciously. All that’s required is that people take the appropriate behavioral actions, such as morally condemning and insulting those who appear to behave in overly selfish ways, or refusing to associate themselves with such individuals. Proximately, all people might experience is a feeling of moral outrage or disgust. They might not be able to tell you why they experience those things, but that’s probably because understanding the why is much less relevant. In much the same way, animals don’t need to know that sex leads to reproduction in order to have it feel good and thus be motivated to engage in it. If the moral condemnation leads people to make different, other-benefiting choices in the future, so much the better for those others. It doesn’t hurt that by expressing how selfish we think other people are we might be able to make ourselves look like better associates in the process (i.e. “I find your behavior overly selfish, so I must not be similarly self-interested…”).

Punch-Ups In Bars

For those of you unfamiliar with the literature in economics, there is a type of experimental paradigm called the dictator game. In this game, there are two players, one of whom is given a sum of money and told they can do whatever they want with it. They could keep it all for themselves, or they could divide it however they want between themselves and the other player. In general, you often find that many dictators – the ones in charge of dividing the money – give at least some of the wealth to the other player, with many people sharing it evenly. Some people have taken that finding to suggest there is something intrinsically altruistic about human behavior towards others, even strangers. There are, however, some concerns regarding what those results actually tell us. For instance, when you take the game out of the lab and into a more naturalistic setting, dictators don’t really tend to give other people any money at all, suggesting that most, or perhaps all, of the giving we see in these experiments is being driven by the demand characteristics of the experiment, rather than altruism per se. This should ring true to anyone who has ever had a wallet full of money and not given some of it away to a stranger for no reason. Real life, it would seem, is quite unlike dictator games in many respects.
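For concreteness, the entire structure of the game fits in a few lines. This sketch is just my own placeholder (the function name and example splits are invented for illustration), but it shows how little strategy the paradigm actually contains:

```python
# Minimal sketch of the dictator game's payoff structure. The function
# name and the example allocations are illustrative placeholders.

def dictator_game(endowment, amount_given):
    """Return (dictator_payoff, recipient_payoff) for a given split."""
    assert 0 <= amount_given <= endowment, "can only give what you have"
    return endowment - amount_given, amount_given

# A lab-typical even split versus the more naturalistic "give nothing":
print(dictator_game(10.0, 5.0))  # -> (5.0, 5.0)
print(dictator_game(10.0, 0.0))  # -> (10.0, 0.0)
```

The recipient has no move at all, which is why any giving tends to be read as altruism rather than strategy – and why demand characteristics are such a live concern.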

Dictators are not historically known for their benevolence.

Relatedly, around two years ago, Rob Kurzban wondered to what extent the role of ostensibly altruistic punishment had been overstated by laboratory experiments. Altruistic punishment refers to cases in which someone – the punisher – will incur costs themselves (typically by paying a sum of money in these experiments) to inflict costs on others (typically by deducting a sum of money from another person). What inspired this wondering was a video entitled “bike thief“, where a man tries to steal his own bike, using a crowbar, hacksaw, and power tool to cut the locks securing the bike to various objects. Though many people pass by the man as he tries to “steal” his bike, almost no one intervenes to try and determine what’s going on. This video appears to show the same pattern of results as a previous one also dealing with bike theft: in that earlier video, third parties were somewhat more likely to intervene when a white man tried to steal the bike than they were in the first video, but, in general, they didn’t tend to involve themselves much, if at all. They were more likely to intervene if the ostensible thief was black or a woman: in the former case, people were more likely to confront him or call the police; in the latter case, some people intervened to help the woman, not to condemn her.

I have long found these videos fascinating, in that I feel they raise a lot of questions worthy of further consideration. The first of these is: how do people decide when to become involved in the affairs of others? The act itself (sawing through a bike lock) is inherently ambiguous: is the person trying to steal the bike, or is the bike theirs but they have lost the key? Further, even if the person is stealing the bike, there are certain potential risks to confronting them about it that might be better avoided. The second question is: given someone has decided to become involved, what do they do? Do they help or hinder the thief? Indeed, when the “thief” suggests that he lost the key, the third parties passing by seem willing to help, even when the thief is black; similarly, even when the woman all but says she is stealing the bike, people (typically men) continue to help her out. When third parties opt instead to punish someone, do they do so themselves, or do they try to enlist others to do the punishing (like police and additional third parties)? These two questions get at the matter of how prevalent and important third-party punishment is outside of the lab, and under what circumstances that importance might be modified.

Though one faces a lack of control when moving outside of the lab into naturalistic field studies, the value of these studies for understanding punishment would be hard to overstate. As we saw initially with the dictator games, it is possible that all the altruistic behavior we observe in the lab is due to experimental demand characteristics; the same might be true of third-party moral condemnation. Admittedly, naturalistic observations of third-party involvement in conflicts are rare, likely owing to how difficult it is to get good observations of immoral acts that people might prefer you didn’t see (i.e. real bike thieves likely take some pains not to be seen, so others might be unlikely to become involved, unlike with the actors in the videos). One particularly useful context for gathering these observations, then, is one in which the immoral act is unlikely to be planned and people’s inhibitions are reduced: in this case, when people are drinking at bars. As almost anyone who has been out to a bar can tell you, when people are drinking, tempers can flare, people overstep boundaries, and conflicts break out. When that happens, there often tends to be a number of uninvolved third parties who might intervene, making it a fairly ideal context for studying the matter.

“No one else wears this shirt on my watch. No one”

A 2013 paper by Parks et al examined around 800 such incidents of what was deemed to be verbal or physical aggression to determine what kinds of conflicts arose, what types of people tended to get involved in them, and how they became involved. As an initial note – and this will become relevant shortly – aggression was defined in a particular way that I find troublesome: specifically, there was physical aggression (like hitting or pushing), verbal aggression (like insults), and unwanted or persistent sexual overtures. The problem here is that, though failed or crude attempts at flirting might be unwanted, they are by no means aggressive in the same sense that hitting someone is, so aggression might have been defined too broadly here. That said, the “aggressive” acts were coded for severity and intent; third-party intervention was coded as present or absent and, when present, whether it was an aggressive or non-aggressive intervention; and all interactions were coded for the sex of the parties and their level of intoxication.

The first question is obviously: how often did third parties become involved in an aggressive encounter? The answer is around a third of the time on average, so third-party involvement in disputes is by no means an infrequent occurrence. Around 80% of the third parties that intervened were also male. Further, when third parties did become involved, they were about twice as likely to become involved in a non-aggressive fashion relative to an aggressive one (so they were more often trying to defuse the situation, rather than escalate it). Perhaps unsurprisingly, most disputes tended to be initiated by people who appeared to be relatively more intoxicated, and the aggressive third parties tended to be drunker than the non-aggressive ones. So, as is well known, being drunk tended to lead to people being more aggressive, whether it came to initiating conflicts or joining them. Third parties also tended to become more likely to get involved in disputes as the severity of the disputes rose: minor insults might not lead to much involvement on the parts of others, while throwing a punch or pulling out a knife will. This also meant that mutually-aggressive encounters – ones that are likely to escalate – tended to draw more third-party involvement than one-sided aggression.

Of note is that the degree of third party involvement did fluctuate markedly: the disputes that drew the most third-party involvement were the male-on-male mutually-aggressive encounters. In those cases, third parties got involved around 70% of the time; more than double the average involvement level. By contrast, male-on-female aggression drew the least amount of third-party intervention; only around 17% of the time. This is, at first, a very surprising finding, given that women tend to receive lighter sentences for similar crimes, and violence against women appears to be less condoned than violence against men. So why would women garner less support when men are aggressing against them? Well, likely because unwanted sexual attention falls under the umbrella term of aggression in this study. Because “aggressive” does not equate to “violent” in the paper, all of the mixed-sex instances of “aggression” need to be interpreted quite cautiously. The authors note as much, wondering if male-on-female aggression generated less third-party involvement because it was perceived as being less severe. I think that speculation is on the right track, but I would take it further: most of the mixed-sex “aggression” might have not been aggressive at all. By contrast, when it was female-female mutual aggression (less likely to be sexual in nature, likely involving a fight or the threat of one), third parties intervened around 60% of the time. In other words, people were perfectly happy to intervene on behalf of either sex, so long as the situation was deemed to be dangerous.

“It’s only one bottle; let’s not be too hasty to get involved yet…”

Another important caveat to this research is that the relationship of the third parties that became involved to the initial aggressors was not known. That is, there was no differentiation between a friend or a stranger coming to someone’s aid when aggression broke out. If I had to venture a guess – and this is probably a safe one – I would assume that most of the third parties likely had some kind of relationship to the people in the initial dispute. I would also guess that non-violent involvement (defusing the situation) would be more common when the third parties had some relationship to both of the people involved in the initial dispute, relative to when it was their friend against a stranger. I happen to feel that the relationship between the parties who become involved in these disputes has some rather large implications for understanding morality more generally but, since that data isn’t available, I won’t speculate too much more about it here. What I will say is that the focus on how strangers behave towards one another in the lab – as is standard for most research on moral condemnation – is likely missing a large part of how morality works, just like how experimental demand characteristics seemed to make people look more altruistic than they are in naturalistic settings. Getting friends together for research poses all sorts of logistical issues, but it is a valuable source of information to start considering.

References: Parks, M., Osgood, D., Felson, R., Wells, S., & Graham, K. (2013). Third party involvement in barroom conflicts. Aggressive Behavior, 39, 257-268.

Dante’s Inferno

I’d like to begin with a quick apology for the tardiness of this latest update. It’s not that there’s anyone holding me to my usual weekly schedule except myself, but I am disappointed I haven’t gotten around to updating sooner. I had planned to take a week to myself to enjoy a new game (and enjoy it thoroughly I did, so mission accomplished there), but that week ended with my getting sick for another one, and I haven’t been able to concentrate on much as a result. With those excuses out of the way, I’m going to start off today like most middle- and high-school students: by summarizing part of a book (or epic poem, really) that I haven’t personally read. Instead, I’ll be summarizing it – or at least part of it – using the Wikipedia cliff notes. That story, as the title suggests, is Dante’s Inferno. What I really like about the Wikipedia page describing the story is that Inferno is kind enough to order the circles of hell for the reader with respect to increasing wickedness; the deeper one goes, the worse the sins needed to get there. The reason I like this neat and tidy ordering is that it gives us some insight into the author’s moral sense.

I might not have read the book, but I did play the video game. Close enough.

As a quick rundown of the circles of hell, from least bad to worst, there’s: Limbo, Lust, Gluttony, Greed, Wrath, Heresy, Violence, Fraud, and finally Treachery. Now what’s particularly interesting here is that, according to Dante, it would seem to be worse to be a flatterer or a corrupt politician than a murderer. Misrepresenting your stances about people or politics is bad bad bad in Dante’s book. More interesting still is the inner-most circle: Treachery. Treachery seems to represent a particular kind of fraud: one in which the victim is expected to have some special relationship to the perpetrator. For instance, family members betraying each other seems to be worse than strangers doing similar harms. In general, kin are expected to behave more altruistically towards each other, owing in no small part to the fact that they share genes in common with one another. Helping one’s kin, in the evolutionary-sense of things, is quite literally like helping (part of) yourself. So if kin are expected to trade off their own welfare for family members at a higher rate than they would for strangers, but instead display the opposite tendency, this makes kin-directed immoral acts appear particularly heinous.

Now, of course, Dante’s take on things isn’t the only game in town. A paper which I have repeatedly discussed (DeScioli & Kurzban, 2013) has a different take on the issue of morality. That take is that morality serves, more or less, a coordination function for punishers: the goal is to get most people in agreement about who should be punished in order to avoid the fighting costs that are associated with disagreement in that realm. In order for this coordination function to work, however, the pair suggest that morality needs to function on the basis of acts; not the identity of the actors. As DeScioli & Kurzban (2013) put it:

“The dynamic coordination theory of morality holds that evolution favored individuals equipped with moral intuitions who choose sides in conflicts based, in part, on “morality” rather than relationship or status”

Identity shouldn’t come into play when it comes to moral condemnation, then; it is “[crucial that the signal] must not be tied to individual identity”. As Monty Python put it, “let’s not bicker and argue about who killed who“, and let’s not do that because killing should be equally wrong no matter who does it and who ends up on the receiving end.

Now, in fairness to DeScioli & Kurzban (2013), they also hedge their theoretical bets, suggesting that identity also ought to matter when it comes to picking sides in disputes. However, it seems that, according to the dynamic coordination model, anyway, when people do take sides on the basis of loyalty to their friends or family, they should be motivated by systems that do not deal with morality. This suggestion seems to be at odds with Dante’s less-formalized take on the importance of identity in the realm of morality, which would appear to hypothesize, at least implicitly, that the identity of the actors ought to matter a great deal. So let’s take a look at some research bearing on the matter.

And let’s do so quickly, before I get back to being addicted to this game.

The first piece of research comes to us from Lieberman & Linke (2007), who examined whether the identity of an individual (either a foreigner, schoolmate, or family member) mattered when it came to the wrongness of an act and the amount of punishment deemed appropriate for it (in this case, stealing $1500). When the individual in question was the perpetrator, participants (N = 268) suggested that the foreigner deserved more punishment than the schoolmate, and that the schoolmate deserved more punishment than the family member. Family members were also perceived to be more remorseful about their act than schoolmates, who were in turn perceived as more remorseful than foreigners. On the other hand, people’s ratings of the immorality of the act did not vary as a function of the actor’s identity; no matter who one was, the act was rated as just as morally wrong (though ratings in all cases were close to ceiling levels here).

The next experiment (N = 288) examined essentially the same question, but this time the individual in question was the victim of the offense, rather than the perpetrator. When the offense was committed against a family member, people tended to be more punitive towards the perpetrator than when it was committed against a schoolmate or foreigner. Again, however, moral judgments remained uniformly at ceiling levels in all cases. In a final experiment (N = 78) participants were asked about how much they would be willing to personally invest in order to track down the perpetrator of various deeds. As before, people reported being willing to take more days off from work without pay to try and find the thief when a family member had been robbed (M = 12.85 days), relative to a schoolmate (M = 2.24) or a foreigner (M = 2.10). Now whether or not people would actually do these things (I don’t recall many people taking time off work to play Batman and help strangers track down thieves), people are at least expressing sentiments indicating that they think people should be punished to a greater degree for victimizing their kin, and that their kin deserve less punishment.

The results could be taken to favor either account – Dante’s or DeScioli & Kurzban’s – I feel. On the one hand, ratings of morality appeared stubbornly impartial: the act was rated as being just as morally wrong no matter the identity of the perpetrator or victim. This might suggest people were coordinating around the behavior, and not the identity of the actors. On the other hand, people were not coordinating their behavior in the sense that what they actually wanted to see done, after they had decided the act was morally wrong, varied on the basis of identity. To express this tension in a different context, we might consider the following: imagine that most people agree with the statement, “freedom is a good thing”; good for America. However, that certainly does not mean that most people would be in agreement when it came to what precisely that sentence is supposed to mean: that is, what limits are to be put in place, and how should those limits be enacted?

Just exercising his freedom to pepper-spray protesters.

That said, the paper by Lieberman & Linke (2007) doesn’t exactly get at what Dante was proposing: Dante didn’t, as far as I know, anyway, say that lust was any better or worse when a family member does it. After all, everyone is someone’s family member, or friend, or foreigner. Instead, what Dante appeared to be proposing is that the relationship of the perpetrator to the victim is the crucial variable. As I’ve discussed previously, some initial research has tentatively borne out Dante’s hypothesis: while acts are rated as morally worse than omissions between strangers, this difference is reduced when the interaction occurs between friends, and the act is rated as more morally wrong overall. A more formal test of these competing hypotheses appears to await data. I’ll be sure to get right on that personally; just as soon as I’m done being addicted to this game in the next three or four years.

References: DeScioli, P. & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139, 477-496.

Lieberman, D. & Linke, L. (2007). The effect of social category on third party punishment. Evolutionary Psychology, 5, 289-305.

When Are Equivalent Acts Not Equal?

There’s been an ongoing debate in the philosophical literature on morality for some time. That debate focuses on whether the morality of an act is determined by (a) the act’s outcome, in terms of its net effects on people’s welfare, or (b) …something else: intuitions, feelings, or what have you (i.e. “Incest is just wrong, even if nothing but good were to come of it”). These stances can be called the consequentialist and nonconsequentialist stances, respectively, and it’s a topic I’ve touched upon before. When I touched on the issue, I had this to say:

There are more ways of being consequentialist than with respect to the total amount of welfare increase. It would be beneficial to turn our eye towards considering strategic welfare consequences that are likely to accrue to actors, second parties, and third parties as a result of these behaviors.

In other words, moral judgments might focus not only on the acts per se (the nonconsequentialist aspects) or their net welfare outcomes (the consequences), but also on the distribution of those consequences. Well, I’m happy to report that some very new, very cool research speaks to that issue and appears to confirm my intuition. I happen to know the authors of this paper personally, and let me tell you this: the only thing about the authors that is more noteworthy than their good looks and charm is how humble one of them happens to be.

Guess which of us is the humble one?

The research (Marczyk & Marks, in press) examined responses to the classic trolley dilemma and a variant of it. For those not well-versed in the trolley dilemma, here’s the setup: there’s an out-of-control train heading towards five hikers who cannot get out of the way in time. If the train continues on its path, then all five will surely die. However, there’s a lever which can be pulled to redirect the train onto a side track where a single hiker is stuck. If the lever is pulled, the five will live, but the one will die (pictured here). Typically, when asked whether it would be acceptable for someone to pull the switch, the majority of people will say that it is. However, in past research examining the issue, the person pulling the switch has been a third party; that is, the puller was not directly involved in the situation, and didn’t stand to personally benefit or suffer because of the decision. But what would happen if the person pulling the switch was one of the hikers on one of the tracks, either on the side track (self-sacrifice) or the main track (self-saving)? Would it make a difference in terms of people’s moral judgments?

Well, the nonconsequentialist account would say, “no; it shouldn’t matter”, because the behavior itself (redirecting a train onto a side track where it will kill one) remains constant; the welfare-maximizing consequentialist account would also say, “no; it shouldn’t matter”, because the welfare calculations haven’t changed (five live; one dies). However, this is not what we observe. When asked about how immoral it was for the puller to redirect the train, ratings were lowest in the self-sacrifice condition (M = 1.40/1.16 on a 1-to-5 scale in the international and US samples, respectively), in the middle for the standard third-party context (M = 2.02/1.95), and highest in the self-saving condition (M = 2.52/2.10). In terms of whether or not it was morally acceptable to redirect the train, similar judgments cropped up: the percentage of US participants who said it was acceptable dropped as self-interested reasons began to enter into the question (the international sample wasn’t asked this question). In the self-sacrifice condition, these judgments of acceptability were highest (98%), followed by the third-party condition (84%), with the self-saving condition being the lowest (77%).

Participants also viewed the intentions of the pullers to be different, contingent on their location in the dilemma: specifically, the more one could benefit him- or herself by pulling, the more people assumed that was the motivation for doing so (as compared with the puller’s motivations to help others: the more they could help themselves, the less they were viewed as intending to help others). Now that might seem unsurprising – “of course people should be motivated to help themselves”, you might say. However, nothing in the dilemma itself spoke directly to the puller’s intentions. For instance, we could consider the case where a puller just happens to be saving their own life by redirecting the train away from others. From that act alone, we learn nothing about whether or not they would sacrifice their own life to save the lives of others. That is, one’s position in the self-beneficial context might simply be incidental; their primary motivation might have been to save the largest number of lives, and that just so happens to mean saving their own in the process. However, this was not the conclusion people seemed to be drawing.

*Side effects of saving yourself include increased moral condemnation.

Next, we examined a variant of the trolley dilemma that contained three tracks: again, there were five people on the main track and one person on each side track. As before, we varied who was pulling the switch: either the hiker on the main track (self-saving) or the hiker on the side track. However, we now varied what the options of the hiker on the side track were: specifically, he could direct the train away from the five on the main track, but either send the train towards or away from himself (the self-sacrifice and other-killing conditions, respectively). The intentions of the hiker on the side track, now, should have been disambiguated to some degree: if he intended to save the lives of others with no regard for his own, he would send the train towards himself; if he intended to save the lives of the hikers on the main track while not harming himself, he would send the train towards another individual. The intentions of the hiker on the main track, by contrast, should be just as ambiguous as before; we shouldn’t know whether that hiker would or would not sacrifice himself, given the chance.

What is particularly interesting about the results from this experiment is how closely the ratings of the self-saving and other-killing actors matched up. Whether in terms of how immoral it was to direct the train, whether the puller should be punished, how much they should be punished, or how much they intended to help themselves and others, ratings were similar across the board in both US and international samples. Even more curious is that the self-saving puller – the one whose intentions should be the most ambiguous – was typically rated as behaving more immorally and self-interestedly – not less – though this difference wasn’t often significant. Being in a position to benefit yourself from acting in this context seems to do people no favors in terms of escaping moral condemnation, even if alternative courses of actions aren’t available and the act is morally acceptable otherwise.

One final very interesting result of this experiment concerned the responses participants gave to the open-ended questions, “How many people [died/lived] because the lever was pulled?” On a factual level, these answers should be “1” and “5”, respectively. However, our participants had a somewhat different sense of things. In the self-saving condition, 35% of the international sample and 12% of the US sample suggested that only 4 people were saved (in the other-killing condition, these percentages were 1% and 9%, and in the self-sacrifice condition they were 1.9% and 0%, respectively). Other people said 6 lives had been saved: 23% and 50% in the self-sacrifice condition, 1.7% and 36% in the self-saving condition, and 13% and 31% in the other-killing condition (international and US samples, respectively). Finally, a minority of participants suggested that 0 people died because the train was redirected (13% and 11%), and these responses were almost exclusively found in the self-sacrifice conditions. These results suggest that our participants were treating the welfare of the puller in a distinct manner from the welfare of others in the dilemma. The consequences of acting, it would seem, were not judged to be equivalent across scenarios, even though the same number of people actually lived and died in each.

“Thanks to the guy who was hit by the train, no one had to die!”

In sum, the experiments seemed to demonstrate that these questions of morality are not limited to considerations of just actions and net consequences: to whom those consequences accrue seems to matter as well. Phrased more simply, in terms of moral judgments, the identity of actors seems to matter: my benefiting myself at someone else’s expense seems to have a much different moral feel than someone else benefiting me by doing exactly the same thing. Additionally, the inferences we draw about why people did what they did – what their intentions were – appear to be strongly affected by whether that person is perceived to have benefited as a result of their actions. Importantly, this appears to be true regardless of whether that person even had any alternative courses of action available to them. That latter finding is particularly noteworthy, as it might imply that moral judgments are, at least occasionally, driving judgments of intentions, rather than the typically-assumed reverse (that intentions determine moral judgments). Now if only there were a humble and certainly not self-promoting psychologist who would propose some theory for figuring out how and why the identity of actors and victims tends to matter…

References: Marczyk, J. & Marks, M. (in press). Does it matter who pulls the switch? Perceptions of intentions in the trolley dilemma. Evolution and Human Behavior.