Moral Stupefaction

I’m going to paint a picture of loss. Here’s a spoiler alert for you: this story will be a sad one.

Mark is sitting in a room with his cat, Tigger. Mark is a 23-year-old man who has lived most of his life as a social outcast. He never really fit in at school and he didn’t have any major accomplishments to his name. What Mark did have was Tigger. While Mark had lived a lonely life in his younger years, that loneliness had been kept at bay when, at the age of 12, he adopted Tigger. The two had been inseparable ever since, with Mark taking care of the cat with all of his heart. This night, as the two lay together, Tigger’s breathing was labored. Having recently become infected with a deadly parasite, Tigger was dying. Mark was set on keeping his beloved pet company in his last moments, hoping to chase away any fear or pain that Tigger might be feeling. Mark held Tigger close, petting him as he felt each breath grow shallower. Then the breaths stopped coming altogether. The cat’s body went limp, and Mark watched the life of the only thing he had loved, and that had loved him, fade away.

As the cat was now dead and beyond experiencing any sensations of harm, Mark promptly got up to toss the cat’s body into the dumpster behind his apartment. On his way, Mark passed a homeless man who seemed hungry. Mark handed the man Tigger’s body, suggesting he eat it (the parasite which had killed Tigger was not transmittable to humans). After all, it seemed like a perfectly good meal shouldn’t go to waste. Mark even offered to cook the cat’s body thoroughly.

Now, the psychologist in me wants to know: Do you think what Mark did was wrong? Why do you think that? 

Also, I think we figured out the reason no one else liked Mark.

If you answered “yes” to that question, chances are that at least some psychologists would call you morally dumbfounded. That is to say, you are holding moral positions that you do not have good reasons for holding; you are struck dumb with confusion as to why you feel the way you do. Why might they call you this, you ask? Well, chances are because they would find your reasons for the wrongness of Mark’s behavior unpersuasive. You see, the above story has been carefully crafted to try and nullify any objections about proximate harms you might have. As the cat is dead, Mark isn’t hurting it by carelessly disposing of the body or even by suggesting that others eat it. As the parasite is not transmittable to humans, no harm would come of consuming the cat’s body. Maybe you find Mark’s behavior at the end disgusting or offensive for some reason, but your disgust and offense don’t make something morally wrong, the psychologists would tell you. After hearing these counterarguments, are you suddenly persuaded that Mark didn’t do something wrong? If you still feel he did, well, consider yourself morally dumbfounded as, chances are, you don’t have any more arguments to fall back on. You might even end up saying, “It’s wrong but I don’t know why.”

The above scenario is quite similar to the ones presented to 31 undergraduate subjects in the now-classic paper on moral dumbfounding by Haidt, Bjorklund, & Murphy (2000). In the paper, subjects are presented with one reasoning task (the Heinz dilemma, asking whether a man should steal to help his dying wife) that involves trading off the welfare of one individual for another, and four other scenarios, each designed to be “harmless, yet disgusting:” a case of mutually-consensual incest between a brother and sister where pregnancy was precluded (due to birth control and condom use); a case where a medical student cuts a piece of flesh from a cadaver to eat (the cadaver is about to be cremated and had been donated for medical research); a chance to drink juice that had a dead, sterilized cockroach stirred in for a few seconds and then removed; and a case where participants would be paid a small sum to sign and then destroy a non-binding contract that gave their soul to the experimenter. In the former two cases – incest and cannibalism – participants were asked whether they thought the act was wrong and, if they did, to try and provide reasons for why; in the latter two cases – roach and soul – participants were asked if they would perform the task and, if they would not, why. After the participants stated their reasons, the experimenter would challenge their arguments in a devil’s-advocate type of way to try and get them to change their minds.

As a brief summary of the results: the large majority of participants reported that having consensual incest and removing flesh from a human cadaver to eat were wrong (in the latter case, I imagine they would similarly rate the removal of flesh as wrong even if it were not eaten, but that’s beside the point), and a similarly-large majority were also unwilling to drink the roach juice or to sign the soul contract. On average, the experimenter was able to change about 16% of the participants’ initial stances by countering their stated arguments. The finding of note that got this paper its recognition, however, is that, in many cases, participants would state reasons for their decisions that contradicted the story (i.e., that a child born of incest might have birth defects, though no child was born due to the contraceptives) and, when those concerns had been answered by the experimenter, that they still believed these acts to be wrong even if they could no longer think of any reasons for that judgment. In other words, participants appeared to generate their judgments of an act first (their intuitions), with the explicit verbal reasoning for their judgments being generated after the fact and, in some cases, seemingly disconnected from the scenarios themselves. Indeed, in all cases except the Heinz dilemma, participants rated their judgments as arising more from “gut feelings” than reasoning.

“fMRI scans revealed activation of the ascending colon for moral judgments…”

A number of facets of this work on moral dumbfounding are curious to me, though. One thing that has always stood out to me as dissatisfying is that the moral dumbfounding claims being made here are not what I would call positive claims (i.e., “people are using variable X as an input for determining moral perceptions”), but rather they seem to be negative ones (“people aren’t using conscious reasoning, or at least the parts of the brain doing the talking aren’t able to adequately articulate the reasoning”). While there’s nothing wrong with negative claims per se, I just happen to find them less satisfying than positive ones. I feel that this dissatisfaction owes its existence to the notion that positive claims help guide and frame future research to a greater extent than negative ones (but that could just be some part of my brain confabulating my intuitions).

My main issue with the paper, however, hinges on the notion that the acts in question were “harmless.” A lot is going to turn on what is meant by that term. An excellent analysis of this matter is put forth in a paper by Jacobson (2012), in which he notes that there are perfectly good, harm-based reasons as to why one might oppose, say, consensual incest. Specifically, what participants might be responding to was not the harm generated by the act in a particular instance so much as the expected value of the act. One example offered to help make that point concerns gambling:

Compare a scenario I’ll call Gamble, in which Mike and Judy—who have no creditors or dependents, but have been diligently saving for their retirement—take their nest egg, head to Vegas, and put it all on one spin of the roulette wheel. And they win! Suddenly their retirement becomes about 40 times more comfortable. Having gotten lucky once, they decide that they will never do anything like that again. Was what Mike and Judy did prudent?

The answer, of course, is a resounding “no.” While the winning game of roulette might have been “harmless” in the proximate sense of the word, such an analysis would ignore risk. The expected value of the act was, on the whole, rather negative. Jacobson (2012) goes on to expand the example, asking now whether it would have been OK for the gambling couple to have used their child’s college savings instead. The point here is that consensual incest can be considered similarly dangerous. Just because things turned out well in a given instance doesn’t mean that harm-based justifications for the condemnation are discountable; it could instead suggest that there exists a distinction between harm and risk that undergraduate subjects are not able to articulate well when being challenged by a researcher. Like Jacobson (2012), I would condemn drunk driving as well, even if it didn’t result in an accident.
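The harm-versus-risk distinction at work here can be made concrete with a little arithmetic. As a sketch, assume an American roulette wheel (38 pockets) and a straight-up single-number bet paying 35 to 1 – assumptions of mine, not details given in Jacobson’s example – and the expected outcome of the “harmless” winning bet turns out to be negative:

```python
# Expected value of a bet: probability-weighted net gain.
# Assumes an American roulette wheel (38 pockets) and a straight-up
# bet paying 35:1 -- illustrative numbers, not from the original example.
def expected_value(p_win: float, payout: float, stake: float = 1.0) -> float:
    """Return the expected net gain of a bet per unit stake."""
    return p_win * payout * stake - (1 - p_win) * stake

p_win = 1 / 38    # one winning pocket out of 38
payout = 35       # straight-up bet pays 35 to 1

ev = expected_value(p_win, payout)
print(round(ev, 4))  # -0.0526: the couple loses about 5.3 cents per dollar in expectation
```

The bet is only “harmless” after the fact; evaluated before the spin, as participants may be evaluating risky acts, it is a losing proposition.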

To bolster that case, I would also like to draw attention to one of the findings of the moral dumbfounding paper I mentioned before: about 16% of participants reversed their moral judgments when their harm-based reasoning was challenged. Though this finding is not often the one people focus on when considering the moral dumbfounding paper, I think it helps demonstrate the importance of this harm dimension. If participants were not using harm (or risk of harm) as an input for their moral perceptions, but rather only as a post-hoc justification, these reversals of opinion in the wake of reduced welfare concerns would seem rather strange. Granted, not every participant changed their mind – in fact, many did not – but that any of them did requires an explanation. If judgments of harm (or risk) are coming after the fact and not being used as inputs, why would they subsequently have any impact whatsoever?

“I have revised my nonconsequentialist position in light of those consequences”

Jacobson (2012) makes the point that perhaps there’s a case to be made that the subjects were not necessarily morally dumbfounded as much as the researchers looking at the data were morally stupefied. That is to say, it’s not that the participants didn’t have reasons for their judgments (whether or not they were able to articulate them well) so much as the researchers didn’t accept their viability or weren’t able to see their validity owing to their own theoretical blinders. If participants did not want to drink juice that had a sterilized cockroach dunked in it because they found it disgusting, they are not dumbfounded as to why they don’t want to drink it; the researchers just aren’t accepting the subjects’ reason (it’s disgusting) as valid. If, returning to the initial story in this post, people oppose treating beloved (but dead) pets in ways more consistent with indifference or contempt because they find it offensive, that seems like a fine reason for doing so. Whether or not offense is classified as a harm by a stupefied researcher is another matter entirely.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript.

Jacobson, D. (2012). Moral dumbfounding and moral stupefaction. Oxford Studies in Normative Ethics, 2. DOI: 10.1093/acprof:oso/9780199662951.003.0012
