In my last post, I discussed the matter of nonconsequentialism: the idea that, when determining the moral value of an action, the consequences of that action are, in some sense, beside the point; instead, some acts are just wrong regardless of their consequences. The thrust of my argument there was that those arguing that moral cognitions are nonconsequentialist in nature seem to have a rather restricted view of precisely how consequences should matter. Typically, that view consists of whether aggregate welfare was increased or decreased by the act in question. My argument was that we need to consider other factors, such as the distribution of those welfare gains and losses. Today, I want to expand on that point a bit by quickly considering three other papers examining how people respond to moral violations.
Turning the other cheek when you’re being hit helps to even out the scars
The first of these papers comes from DeScioli, Gilbert, & Kurzban (2012), and it examines people’s perceptions of victims in response to moral transgressions. Their research question concerns the temporal ordering of things: do people need to first perceive a victim in order to perceive an immoral behavior, or do people perceive an immoral behavior and then look for potential victims? If the former idea is true, then people should not rate acts without apparent victims as wrong; if the latter is true, then people might be inclined to, essentially, invent victims (i.e. mentally represent them) when none are readily available. There is, of course, another way people might see things if they were nonconsequentialists: they might perceive an act as wrong without representing a victim at all. After all, if negative consequences from an act aren’t necessary for perceiving something as wrong, then there would be no need to perceive a victim.
To test these competing alternatives, DeScioli, Gilbert, & Kurzban (2012) presented 65 subjects with a number of ostensibly “victimless” offenses (including things like suicide, grave desecration, prostitution, and mutually-consensual incest). The results showed that when people perceived an act as wrong, they represented a victim for that act around 90% of the time; when acts were perceived as not wrong, victims were only represented 15% of the time. While it’s true enough that many of the victims people nominated – like “society” or “the self” – were vague or unverifiable, the fact remains that they did represent victims. From a nonconsequentialist standpoint, representing ambiguous or unverifiable victims seems like a rather peculiar thing to do; better to just call the act wrong regardless of what welfare implications it might have. The authors suggest that a possible function of such victim representation would be to recruit other people to the side of the condemners but, absent the additional argument that people respond to consequences suffered by victims (i.e., that people are consequentialists), this explanation would be incompatible with the nonconsequentialist view.
The next paper I wanted to review comes from Trafimow & Ishikawa (2012). This paper is a direct follow-up to the paper I discussed in my last post. Here, the authors were examining what kind of attributions people made about others who lied: specifically, whether people who lied were judged to be honest or dishonest. Now this sounds like a fairly straightforward kind of question: someone who lies should, by definition, be rated as dishonest, yet that’s not quite what ended up happening. In this experiment, 151 subjects were given one of four stories in which someone either did or did not lie. When the story did not provide any reason for the honesty or dishonesty, those who lied were rated as relatively dishonest, whereas those who told the truth were rated as relatively honest, as one might expect. However, there was a second condition in which a reason for the lie was provided: the person was lying to help someone else. In this case, if the person told the truth, that someone else would suffer a cost. Here, an interesting effect emerged: in terms of their rated honesty, the liars who were helping someone else were rated as being just as honest as those who told the truth and harmed someone else because of it.
“I only lied to make my girlfriend better off…”
In the words of the authors, “participants who lie when lying helps another person are absolved, whereas truth tellers do not get credit for telling the truth when a lie would have helped another person”. Now, in the interests of beating this point to death, a nonconsequentialist moral psychology should not be expected to generate that output, as that output is contingent on consequences. Despite that, honesty which harmed was rated no differently than dishonesty which helped. Nevertheless, these judgments were ostensibly about honesty – not morality – so the fact that lying and truth-telling were rated comparably does require some explanation.
While I can’t say for certain what that explanation is, my suspicion is that the mind typically represents some acts – like lying – as wrong because, historically, they did tend to reliably carry costs. In this case, the cost is that behaving on the basis of incorrect information typically leads to worse fitness outcomes than behaving on the basis of accurate information; conversely, receiving new, true information can help improve decision making. Because people want to condemn those who inflict costs, they typically represent lying as wrong, and those whom people wish to condemn for their lying get labeled dishonest. In other words, “dishonest” doesn’t refer to someone who fails to tell the truth so much as it refers to someone people wish to condemn for failing to tell the truth. However, when considering a context in which lying provides benefits, people don’t wish to condemn the liars – at least not as strongly – so they don’t use the label. Similarly, they don’t want to praise truth-tellers who harm others, and so avoid the honest label as well. While necessarily speculative, my analysis is also ruthlessly consequentialist, as any strategic explanation would need to be.
The final paper I wanted to discuss can be discussed quickly. In this last paper, Reeder et al (2002) examined the matter of whether situational characteristics can make morally unacceptable acts more acceptable. These immoral acts included driving cleat spikes into a player during a sports game, administering a shock to another person, or shaking someone off a ladder. The short version of the results is that when the person being harmed had instigated the conflict in some way – either through insults or previous physical harm – it became more acceptable (though not necessarily very acceptable) to harm them. However, when someone harmed another person for their own financial gain, it was rated as less acceptable regardless of the size of that gain. At the risk of not saying this enough, a nonconsequentialist moral psychology should output the decision that harming people is equally wrong regardless of what they might or might not have done to you beforehand because, well, it’s only attending to the acts in question; not their precursors or consequences.
I could have sworn I just saw it move…
Now, as I mentioned above, people will tend to represent lying as morally wrong across a wide range of scenarios because lying tends to inflict costs. The frequency with which people do that could provide the facade of moral nonconsequentialism. However, even in cases where lying is benefiting one person, as in Trafimow & Ishikawa (2012), it is likely harming another. To the extent that people don’t tend to advocate for harming others, they would prefer that one both (a) avoid the costs inflicted by truth-telling and (b) avoid the costs inflicted by lying. This is likely why some Kantians (from what I have seen) seem to advocate for simply failing to provide a response in certain moral dilemmas, rather than lying, as the morally acceptable (though not necessarily desirable) option. That said, even the Kantians seem to respond to the consequences of the actions by and large; if they didn’t, they wouldn’t see any dilemma when it came to lying about Jews in the attic to Nazis during the 1940s which, as far as I can tell, they seem to. Then again, I don’t suppose many people see lying to Nazis to save lives as much of a dilemma; I imagine that has something to do with the consequences…
References: DeScioli, P., Gilbert, S., & Kurzban, R. (2012). Indelible victims and persistent punishers in moral cognition. Psychological Inquiry, 23, 143-149.
Reeder, G., Kumar, S., Hesson-McInnis, M., & Trafimow, D. (2002). Inferences about the morality of an aggressor: The role of perceived motive. Journal of Personality & Social Psychology, 83, 789-803.
Trafimow, D. & Ishikawa, Y. (2012). When violations of perfect duties do not cause strong trait attributions. The American Journal of Psychology, 125, 51-60.