Dinner, With A Side Of Moral Stances

One night, let’s say you’re out to dinner with your friends (assuming, of course, that you’re the type with friends). One of these friends orders a delightful medium-rare steak with a side of steamed carrots. By the time the orders arrive, however, some mistake in the kitchen has led said friend to receive the salmon special instead. Now, if you’ve ever been out to dinner when this has happened, one of two things probably followed: (1) your friend doesn’t react, eats the new dish as if they had ordered it, and then goes on about what a good decision it was to order the salmon, or (2) they grab the waiter and yell a string of profanities at him until he breaks down in tears.

OK; maybe a bit of an exaggeration, but the behavior we see in the event of a mixed-up order at a restaurant typically more closely resembles the latter. Given that most people can recognize when they didn’t receive the order they actually placed, what are we to make of the proposition that people seem to have trouble recognizing moral principles they just endorsed?

“I’ll endorse what she’s endorsing…”

A new study by Hall et al. (2012) examined what they call “choice blindness”, which is, apparently, quite a lot like “change blindness”, except with decisions instead of people. In this experiment, a researcher approached 160 strangers walking through a park with a survey about general moral principles or moral stances on specific issues. Once the subjects had filled out the first page of the survey and flipped the paper over the clipboard to move on to the second, an adhesive on the back of the clipboard held onto and removed the lightly-attached top portion of the survey, revealing a new set of questions. The twist is that the new questions stated the opposite moral stances: if a subject had agreed that the government shouldn’t be monitoring emails, the new question implied that the subject felt the government should be monitoring emails.

Overall, only about a third to a half of the subjects appeared to catch that the questions had been altered, a number very similar to the results found in the change blindness research. Further, many of the subjects who missed the deception went on to give verbal justifications for their ‘decisions’ that stood in opposition to their initial choice on the survey. That said, only about a third of the subjects who expressed extremely polarized scores (a 1 or a 9) failed to catch the manipulation, and the authors also found that those who rated themselves as more politically involved were likewise more likely to detect the change.

So what are we to make of these findings? The authors suggest there is no straightforward interpretation, but they also float the possibility that choice blindness disqualifies vast swaths of survey research from being useful, as the results would suggest that people don’t have “real” opinions. Though they say they are hesitant to suggest such an interpretation, Hall et al. (2012) feel those interpretations need to be taken seriously as well, so perhaps they aren’t so hesitant after all. It might almost seem ironic that Hall et al. (2012) appear “blind” to the opinion they had just expressed (they don’t want to suggest such alternatives, but they also do want to suggest such alternatives), despite that opinion being in print, with both opinions residing within the same sentence.

“Alright, alright; I’ll get the coin…”

It seems plausible that the authors have no solid explanation for their results because they appear to have gone into the study without any clearly stated theory. Such is the unfortunate state of much of the research in psychology; a dead-horse issue I will continue to beat. Describing an effect as a psychological “blindness” does not, on its own, tell us anything; it merely restates the finding, and restatements of findings without additional explanations are not terribly useful for understanding what we’re seeing.

There are a number of points to consider regarding these results, so let’s start with the obvious: these subjects were not seeking to express their opinions so much as they were approached by a stranger with a survey. It seems plausible that at least some of these subjects weren’t paying much attention to what they were doing, or weren’t really engaged in the task at hand. I can’t say to what extent this was a problem, but it’s at least worth keeping in mind. One possible way of remedying it might be to have subjects not only mark their agreement with an issue on the scale, but also briefly justify that opinion. If you then got subjects to argue against their previously stated justifications moments later, that might be a touch more interesting.

Given that there’s no strategic context in which these moral stances are being taken in this experiment, some random fluctuation in answers might be expected. In fact, that lack of context might be the very reason some subjects weren’t particularly engaged in the task in the first place, as evidenced by the finding that people with more extreme scores, or who were more involved in politics, were more attentive to the changes. Accordingly, another potential issue here concerns the mere expectation of consistency in responses: research has already shown that people don’t hold universally to one set of moral principles or moral stances (e.g. the results from various versions of the trolley and footbridge dilemmas, among others). Indeed, we should expect moral judgments (and justifications for those judgments) to be made strategically, not universally, for the very simple reason that universal behaviors will not always lead to useful outcomes. For instance, eating when you’re hungry is a good idea; continuing to eat at all points, even when you aren’t hungry, is generally not. What this is all getting at is that justifying a moral stance is a different task from generating one, and if memory fails to retain what you wrote on a survey some strange researcher just handed you while you were trying to get through the park, you’re perfectly capable of reasoning about why some other moral stance is acceptable.

“I could have sworn I was against gay marriage. Ah well”

Phrased in those terms (“when people don’t remember what stance they just endorsed – after being approached by a stranger asking them to endorse some stance they might not have given any thought to until moments prior – they’re capable of articulating supportive arguments for an opposing stance”), the results of this study are not terribly strange. People often have to reason differently about whether a moral act is acceptable, contingent on where they currently stand in a given moral interaction. For example, deciding whether an instance of murder was morally acceptable will probably depend, in large part, on which side of that murder you happen to stand: did you just kill someone you don’t like, or did someone else just kill someone you did like? An individual who stated that murder is always wrong in all contexts might be at something of a disadvantage, relative to one with a bit more flexibility in their moral justifications (to the extent that those justifications will persuade others about whether to punish the act, of course).

One could worry about what people’s “real” opinions are, then, but doing so seems to fundamentally misstate the question. Holding that something bad is wrong when it happens to you, and that the same something bad is right when it happens to someone you dislike, represents two real opinions; they’re just not universal opinions, they’re context-specific ones. Asking about “real” universal moral opinions would be like asking about “real” universal emotions or states (“Ah, but how happy is he really? He might be happy now, but he won’t be tomorrow, so he’s not actually happy, is he?”). Now, of course, some opinions might be more stable than others, but that will likely be the case only insofar as the contexts surrounding those judgments don’t tend to change.

References: Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS ONE, 7(9), e45457.
