Are Consequences Of No Consequence?

Some recent events have led me back to considering the topic of moral nonconsequentialism. I’ve touched on the topic a few times before (here and here). Here’s a quick summary of the idea: we perceive the behaviors of others along some kind of moral dimension, ranging from morally condemnable (wrong) to neutral (right) to virtuous (praiseworthy). To translate those into everyday examples, we might have murder, painting, and jumping on a bomb to save the lives of others. The question of interest is what factors our minds use as inputs to move our perceptions along that moral spectrum; what things make an act appear more condemnable or praiseworthy? According to a consequentialist view, what moves our moral perceptions should be the results (or consequences) an act brings about. Is lying morally wrong? Well, that depends on what happened because you lied. By contrast, the nonconsequentialist view suggests that some acts are wrong due to their intrinsic properties, no matter what consequences arise from them.

  “Since it’d be wrong to lie, the guy you’re trying to kill went that way”

Now, at first glance, both views seem unsatisfactory. Consequentialism’s weakness can be seen in people’s responses to what is known as the footbridge dilemma: in this dilemma, the lives of five people can be saved from a train by pushing another person in front of it. Around 90% of the time, people judge the pushing to be morally impermissible, even though there’s a net welfare benefit that arises from the pushing (+4 net lives). Just because more people are better off, it doesn’t mean an act will be viewed as moral. On the other hand, nonconsequentialism doesn’t prove wholly satisfying either. For starters, it doesn’t convincingly outline what kind of thing(s) make an act immoral, or why they might do so; just that it’s not all in the consequences. Referencing the “intrinsic wrongness” of an act to explain why it is wrong doesn’t get us very far, so we’d need further specification. Further, consequences clearly do matter when it comes to making moral judgments. If – as a Kantian categorical imperative might suggest – lying is wrong per se, then we should consider it immoral for a family in 1940s Germany to lie to the Nazis about hiding a Jewish family in their attic (and something tells me we don’t). Finally, we also tend to view acts not just as wrong or right, but as wrong to differing degrees. As far as I can tell, the nonconsequentialist view doesn’t tell us much about why, say, murder is viewed as worse than lying. As a theory of psychological functioning, nonconsequentialism doesn’t seem to make good predictions.

This tension between moral consequentialism and nonconsequentialism can be resolved, I think, so long as we are clear about what consequences we are discussing. The most typical type of consequentialism I have come across defines positive consequences in a rather specific way: the greatest amount of good (i.e., generating happiness or minimizing suffering) for people (or other living things) on the whole. This kind of consequentialism clearly doesn’t describe how human moral psychology actually functions, as it would predict that people would say killing one person to save five is the moral thing to do; since we don’t tend to make such judgments, something must be wrong. If we jettison the view that increasing aggregate welfare is something our psychology was selected to do and replace it instead with the idea that our moral psychology functions to strategically increase the welfare of certain parties at the expense of others, then the problem largely dissolves. Explaining that last part requires more space than I have here (which I will happily make public once my paper is accepted for publication), but I can at least provide an empirical example of what I’m talking about now.

This example will make use of the act of lying. If I have understood the Kantian version of nonconsequentialism correctly, then lying should be immoral regardless of why it was done. Phrased in terms of a research hypothesis concerning human psychology, people should rate lying as immoral, regardless of what consequences accrued from the lie. If we’re trying to derive predictions from the welfare maximization type of consequentialism, we should predict that people will rate lying as immoral only when the negative consequences of lying outweigh the positive ones. At this point, I imagine you can all already think of cases where both of those predictions won’t work out, so I’m probably not spoiling much by telling you that they don’t seem to work out in the current paper either.
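To make those two rival predictions concrete, here’s a minimal sketch in Python; the net-benefit numbers are invented by me purely for illustration and do not come from the paper:

```python
# Toy verdicts from the two accounts; the net-benefit values below are
# invented for illustration, not taken from Brown et al. (2005).

def kantian_verdict(net_benefit: float) -> str:
    """Strict nonconsequentialism: lying is wrong regardless of its payoff."""
    return "immoral"

def welfare_verdict(net_benefit: float) -> str:
    """Aggregate-welfare consequentialism: a lie is wrong only when its
    negative consequences outweigh its positive ones."""
    return "immoral" if net_benefit < 0 else "not immoral"

for scenario, net in [("lie for fun", -1), ("lie for $1M", 5), ("lie to save a life", 50)]:
    print(f"{scenario}: Kantian = {kantian_verdict(net)}, welfare = {welfare_verdict(net)}")
```

The two accounts only come apart when a lie produces a net benefit, which is exactly where the experiments below probe.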

Spoiler alert: you probably don’t need that spoiler

The paper, by Brown, Trafimow, and Gregory (2005), contained three experiments, though I’m only going to focus on the two involving lying for the sake of consistency. In the first of these experiments, 52 participants read about a person – Joe – who had engaged in a dishonest behavior for one of five reasons: (1) for fun, (2) to gain $1,000,000, (3) to avoid losing $1,000,000, (4) to save his own life, or (5) to save someone else’s life. The subjects were then asked to, among other things, rate Joe on how moral they thought he was, from -3 (extremely immoral) to +3 (extremely moral). Now, a benefit of $1,000,000 should, under the consequentialist view, make lying more acceptable than lying just for fun, as there is a benefit to the liar to take into account; the nonconsequentialist account, however, suggests that people should discount the million when making their judgments of morality.

Round 1, in this case, went to the nonconsequentialists: when it came to lying just for fun, Joe received an average morality rating of -1.33; lying for money didn’t seem to budge the matter much, with a rating of -1.73 for gaining a million and -0.60 for avoiding the loss of one. Statistical analysis found no significant difference between the two money conditions, and no difference between the combined money conditions and the “for fun” condition. Round 2 went to the consequentialists, however: when it came to saving lives, lying to save one’s own life was rated as slightly morally positive (M = 0.81), as was lying to save someone else’s (M = 1.36). While the difference between the two life-saving groups was not significant, both differed from the “for fun” group. That last finding required a bit of qualification, though, as the situation posed to the subjects was too vague. Specifically, the question had read “Joe was dishonest to a friend to save his life”, which could be interpreted as meaning either that Joe was saving his own life or his friend’s life. The wording was amended in the next experiment to read that the actor “…was dishonest to a friend to save his own life”. The “for fun” condition was also removed, leaving the dishonest behavior without any stated reason in the control group.
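For anyone curious what contrasts like these look like in practice, here’s a minimal sketch using scipy’s independent-samples t-test; the rating arrays are fabricated stand-ins, since the paper reports only means and significance levels, not raw data:

```python
# Sketch of the kind of between-condition contrast described above.
# The rating arrays are fabricated stand-ins on the -3..+3 scale;
# the paper's raw data are not available.
from scipy import stats

for_fun    = [-1, -2, -1, -2, 0, -2, -1, -1, -2, -1]
for_money  = [-2, -1, -2, -1, -1, -2, 0, -2, -2, -1]
save_other = [1, 2, 1, 2, 0, 2, 1, 2, 1, 2]

print(stats.ttest_ind(for_fun, for_money))    # expect no significant difference
print(stats.ttest_ind(for_fun, save_other))   # expect a significant difference
```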

With the new wording, 96 participants were recruited and given one of three contexts: George being dishonest for no stated reason, to save his own life, or to save his friend’s life. This time, when participants were asked about the morality of George’s behavior, a new result popped up: being dishonest for no reason was rated somewhat negatively (M = -0.5), as before, but this time, being dishonest to save one’s own life was similarly negative (M = -0.4). Now, saving a life is arguably more of a positive consequence than being dishonest is a negative one when considered in a vacuum, so the consequentialist account doesn’t seem to be faring so well. However, when George was being dishonest to save his friend’s life, the positive assessments returned (M = 1.03). So while there was no statistical difference between George lying for no reason and lying to save his own life, both conditions differed from George lying to save the life of another. Framed in terms of the Nazi analogy, I don’t see many people condemning the family for hiding Anne Frank.

 The jury is still out on publishing her private diary without permission though…

So what’s going on here? One possibility that immediately comes to mind from these results is that consequences matter, but not in the average-welfare-maximization sense. In both of these experiments, lying was deemed acceptable so long as someone other than the liar was benefiting. When someone was lying to benefit himself – even when that benefit was large – it was deemed unacceptable. So it’s not just the consequences, in the absolute sense, that matter; their distribution appears to be important. Why should we expect this pattern of results? My suggestion is that it has to do with the signal the behavior in question sends about one’s value as a social asset. Lying to benefit yourself demonstrates a willingness to trade off the welfare of others for your own – something we want to minimize in our social allies; lying to benefit others sends a different signal.
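If you wanted to caricature that idea in code, it might look something like the following sketch; the rule and its labels are my illustration of the observed pattern, not a model proposed in the paper:

```python
# A caricature of the 'distribution of consequences' idea: the verdict on a
# lie tracks who benefits from it, not how large the benefit is. The rule
# below is my illustration, loosely echoing the two experiments' mean ratings.

def predicted_verdict(beneficiary: str, benefit: float) -> str:
    if beneficiary == "other":
        return "praiseworthy"   # e.g., lying to save a friend's life
    return "condemnable"        # self-serving (or pointless) lies, however large the gain

print(predicted_verdict("self", benefit=1_000_000))  # condemnable
print(predicted_verdict("other", benefit=1))         # praiseworthy
```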

Of course, it’s not simply that benefiting others is morally acceptable or praiseworthy: lying to benefit a socially undesirable party is unlikely to see much moral leniency. There’s a reason the example people use for thinking about the morality of lying involves hiding Jews from the Nazis, rather than lying to Jews to benefit the Nazis. Perhaps the lesson here is that trying to universalize morality doesn’t do us much good when it comes to understanding it, despite our natural inclination to view morality as something more than a matter of personal preference.

References: Brown, J., Trafimow, D., & Gregory, W. (2005). The generality of negative hierarchically restrictive behaviours. British Journal of Social Psychology, 44, 3-13.
