Quid Pro Quo

Managing relationships is a task that most people perform fairly adeptly. That’s not to say that we do so flawlessly – we certainly don’t – but we manage to avoid most major faux pas with regularity. Despite that ability, many of us would not be able to provide compelling answers that help others understand why we do what we do. Here’s a frequently referenced example: if you invited a friend over for dinner, many of you would likely find it rather strange – perhaps even insulting – if, after the meal, your friend pulled out his wallet and asked how much he owed you for the food. Though we would find such behavior strange or rude, when asked to explain what is rude about it, most people would verbally stumble. It’s not that the exchange of money for food is strange; that part is quite normal. We don’t expect to go into a restaurant, be served, eat, and then leave without paying. There are also certain goods and services – such as sex and organs – that people often do see something wrong with exchanging for money, at least so long as the exchange is explicit; despite that, we often have less of a problem with people giving such resources away.

Alright; not quite implicit enough, but good try

This raises all sorts of interesting questions, such as why it is acceptable for people to give certain things away but not accept money for them. Why would it be unacceptable for a host to expect his guests to pay, or for the guests to offer? The most straightforward answer is that the nature of these relationships is different: two friends have different expectations of each other than two strangers, for instance. While such an answer is true enough, it doesn’t really deepen our understanding of the matter; it just notes the difference. One might go a bit further and begin to document some of the ways in which these relationships differ, but without a guiding functional analysis of why they differ we would be stuck at the level of just noting differences. We could learn not only that business associates treat each other differently than friends do (which we knew already), but also some of the ways in which they do. While documenting such things has value, it would be nice to place such facts in a broader framework. On that note, I’d like to briefly consider one such descriptive answer to the question of why these relationships differ before moving on to the latter point: the distinction between what have been labeled exchange relationships and communal relationships.

Exchange relationships are said to be those in which one party provides a good or service to the other in the hope of receiving a comparable benefit in return; the giving thus creates an obligation for reciprocity. This is the typical consumer relationship that we have with businesses as customers: I give you money, you give me groceries. Communal relationships, by contrast, do not carry similar expectations; instead, these are relationships in which each party cares about the welfare of the other, for lack of a better word, intrinsically. This is more typical of, say, mother-daughter relationships, where the mother provisions her daughter not in the hope of her daughter one day provisioning her, but rather because she earnestly wishes to deliver those benefits to her daughter. On the descriptive level, then, this difference in expectations of quid pro quo is supposed to differentiate the two types of relationships. Friends offering to pay for dinner are viewed as odd because they’re treating a communal relationship as an exchange one.

Many other social disasters might arise from treating one type of social relationship as if it were another. One of the most notable examples in this regard is the ongoing dispute over “nice guys”, nice guys, and the women they seek to become intimate with. To oversimplify the details substantially, many men will lament that women do not seem to be interested in guys who care about their well-being, instead seeking men who offer resources or who treat them as less valuable. The men feel they are offering a communal relationship, but women opt for the exchange kind. Many women return the volley, suggesting instead that many of the “nice guys” are actually entitled creeps who think women are machines you put niceness coins into to get them to dispense sex. Now it’s the men seeking the exchange relationships (i.e., “I give you dinner dates and you give me affection”), whereas the women are looking for the communal ones. But are these two types of relationships – exchange and communal – really that different? Are communal relationships, especially those between friends and couples, free of the quid-pro-quo style of reciprocity? There are good reasons to think they are not different in kind, but rather different with respect to the details of the quids and quos.

A subject our good friend Dr. Lecter is quite familiar with

To demonstrate this point, I would invite you to engage in a little thought experiment: imagine that your friend or your partner decided one day to behave as if you didn’t exist: they stopped returning your messages, they stopped caring about whether they saw you, they stopped coming to your aid when you needed them, and so on. Further, suppose this new-found cold and callous attitude wouldn’t change in the future. About how long would it take you to break off your relationship with them and move on to greener pastures? If your answer to that question was any amount of time whatsoever, then I think we have demonstrated that the quid-pro-quo style of exchange still holds in such relationships (and if you believe that no amount of that behavior on another’s part would ever change how much you care about that person, I congratulate you on the depths of your sunny optimism and view of yourself as an altruist; it would also be great if you could prove it by buying me things I want for as long as you live while I ignore you). The difference, then, is not so much whether there are expectations of exchange in these relationships, but rather the details of precisely what is being exchanged for what, the time frame in which those exchanges take place, and the explicitness of those exchanges.

(As an aside, kin relationships can be free of expectations of reciprocity. This is because, owing to the genetic relatedness between the parties, helping them can be viewed – in the ultimate, fitness sense of the word – as helping yourself to some degree. The question is whether this distinction also holds for non-relatives.)

Taking those matters in order, what gets exchanged in communal relationships is, I think, something that many people would explicitly deny is getting exchanged: altruism for friendship. That is to say, people are using behavior typical of communal relationships as an ingratiation device (Batson, 1993): if I am kind to you today, you will repay me with [friendship/altruism/sex/etc] at some point in the future; not necessarily immediately or at some dedicated point. These types of exchange, as one can imagine, might get a little messy to the extent that the parties are interested in exchanging different resources. Returning to our initial dinner example, if your guest offers to compensate you for dinner explicitly, it could mean that he considers the debt between you paid in full and, accordingly, is not interested in exchanging the resource you would prefer to receive (perhaps gratitude, complete with the possibility that he will be inclined to benefit you later if need be). In terms of the men and women example from before, men often attempt to exchange kindness for sex, but instead receive non-sexual friendship, which was not the intended goal. Many women, by contrast, feel that men should value the friendship…unless of course it’s their partner building friendship with another woman, in which case it’s clearly not just about friendship between them.

But why aren’t these exchanges explicit? It seems that one could, at least in principle, tell other people that you will invite them over for dinner if they will be your friend, in much the same way that a bank might extend a loan to a person and ask that it be repaid over time. If the implicit nature of these exchanges were removed, it seems that lots of people could be spared a lot of headaches. The reason such exchanges cannot be made explicit, I think, has to do with the signal value of the exchange. Consider two possible friends: one of them tells you they will be your friend and support you so long as you don’t need too much help; the other tells you they will support you no matter what. Assuming both are telling the truth, the latter individual would make the better friend for you because they have a greater vested interest in your well-being: they will be less likely to abandon you in times of need, less likely to take better social deals elsewhere, less likely to betray you, and the like. In turn, that fact should incline you to help the latter more than the former individual. After all, it’s better for you to have your very valuable allies alive and well-provisioned if you want them to be able to continue helping you to their fullest when you need it. The mere fact that you are valuable to them makes them valuable to you.

“Also, your leaving would literally kill me, so…motivation?”

This leaves people trying to walk a fine line between making friendships valuable in the exchange sense of the word (friendships need to return more than they cost, else they could not have been selected for), while publicly maintaining the representation that they are not grounded in explicit exchanges, so as to make themselves appear to be better partners. In turn, this creates the need for people to distinguish between what we might call “true friends” – those who have your interests in mind – and “fair-weather friends” – those who will only behave as your friend so long as it’s convenient for them. In that last example we assumed both parties were telling the truth about how much they value you; in reality we can never be so sure. This strategic analysis of the problem leaves us with a better sense as to why friendship relationships differ from exchange ones: while both involve exchanges, the exchanges do not serve the same signaling function, and so their form ends up looking different. People need to engage in proximately altruistic behaviors for which they don’t expect immediate or specific reciprocity in order to credibly signal their value as allies. Without such credible signaling, I’d be left taking you at your word that you really have my interests at heart, and that system is far too open to manipulation.

Such considerations could help explain, in part, why people are opposed to selling things like organs or sex but have little problem with such things being given away for free. In the case of organ sales, for instance, there are a number of concerns which might crop up in people’s minds, one of the most prominent being that it puts an explicit dollar sign on human life. While we clearly need to do so implicitly (else we could, in principle, be willing to exhaust all worldly resources trying to prevent just one person from dying today), to make such an exchange explicit turns the relationship into an exchange one, sending a message along the lines of, “your life is not worth all that much to me”. Conversely, selling an organ could send a similar message: “my own life isn’t worth that much to me”. Both statements could have the effect of making one look like a worse social asset even if, practically, all such relationships are fundamentally based in exchanges, and even if such a policy would have an overall positive effect on a group’s welfare.

References:
Batson, C. (1993). Communal and exchange relationships: What is the difference? Personality & Social Psychology Bulletin, 19, 677-683.

DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE, 4(6): e5802. doi:10.1371/journal.pone.0005802

Some Thoughts On Side-Taking

Humans have a habit of inserting themselves into the disputes of other people. We often care deeply about what other people do to each other and, occasionally, will even involve ourselves in disputes that previously had nothing to do with us, at least not directly. Though there are many examples of this kind of behavior, one of the most recent concerned the fatal shooting of a teen in Ferguson, Missouri, by a police officer. People from all over the country and, in some cases, other countries were quick to weigh in on the issue, noting who they thought was wrong, what they think happened, and what punishment, if any, should be doled out. Phenomena like that one are so commonplace in human interactions that their strangeness often goes almost entirely unappreciated. What makes the behavior strange? Well, the fact that intervening in other people’s affairs – attempting to control their behavior or inflict costs on them for what they did – tends to be costly. As it turns out, people aren’t exactly keen on having their behavior controlled by others and will, in many cases, aggressively resist those attempts.

Not unlike the free-spirited house cat

Let’s say, for instance, that you have a keen interest in killing someone. One day, you decide to translate that interest into action, attacking your target with a knife. If I were to attempt to intervene in that little dispute to try to help your target, there’s a very real possibility that some portion of your aggression might become directed at me instead. It seems I would be altogether safer if I minded my own business and let you get on with yours. In order for there to be selection for any psychological mechanisms that predispose me to become involved in other people’s disputes, then, there need to be some fitness benefits that outweigh the potential costs I might suffer. Alternatively, there might also be costs to me for not becoming involved. If the costs of non-involvement are greater than the costs of involvement, then there can also be selection for my side-taking mechanisms even if becoming involved is costly. So what might some of those benefits or costs be?

One obvious candidate is mutual self-interest. Though that term could cover a broad swath of meanings, I intend it in the proximate sense of the word for the moment. If you and I both desire that outcome X occur, and someone else is going to prevent that outcome if either of us attempts to achieve it, then it would be in our interests to join forces – at least temporarily – to remove the obstacle in both of our paths. Translating this into a concrete example, you and I might be faced by an enemy who wishes to kill both of us, so by working together to kill him first, we can both achieve an end we desire. In another, less direct case, if my friend became involved in a bar fight, it would be in my best interests to avoid seeing him harmed, as an injured (or dead) friend is less effective at providing me benefits than a healthy one. In such cases, I might preferentially side with my friend so as to avoid seeing costs inflicted on him. In both cases, the other party and I share a vested interest in the same outcome obtaining (here, the removal of a mutual threat).

Related to that last example is another candidate explanation: kin selection. As it is adaptive for copies of my genes to reproduce themselves regardless of which bodies they happen to be located in, assisting genetic relatives in disputes could similarly prove to be useful. A partially-overlapping set of genetic interests, then, could (and likely does) account for a certain degree of side-taking behavior, just as overlapping proximate interests might. By helping my kin, we are achieving a mutually-beneficial (ultimate-level) goal: the propagation of common genes.

A third possible explanation is grounded in reciprocal altruism, or long-term alliances. If I take your side today to help you achieve your goals, this might prove beneficial in the long term to the extent that it encourages you to take my side in the future. This explanation would work even in the absence of overlapping proximate or genetic interests: maybe I want to build my house where others would prefer I did not, and maybe you want to get warning labels attached to ketchup bottles. You don’t really care about my problem and I don’t really care about yours but, so long as you’re willing to scratch my back on my problem, I might also be willing to scratch yours.

Also not unlike the free-spirited house cat

There is, however, another prominent reason we might take the side of another individual in a dispute: moral concerns. That is, people could take sides on the basis of whether they perceive someone did something “wrong”. This strategy, then, relies on using people’s behavior to take sides. In that domain, locating the benefits to involvement or the costs to non-involvement becomes a little trickier. Using behavior to pick sides can carry some costs: you will occasionally side against your interests, friends, and family by doing so (to the extent that those groups behave in immoral ways towards others). Nevertheless, the relative upsides to involvement in disputes on the basis of morality need to exist in some form for the mechanisms generating that behavior to have been selected for. As moral psychology likely serves the function of picking sides in disputes, we could consider how well the previous explanations for side taking fare for explaining moral side taking.

We can rule out the kin selection hypothesis immediately as explaining the relative benefits to moral side taking, as taking someone’s side in a dispute will not increase your genetic relatedness to them. Further, a mechanism that took sides on the basis of kinship should be primarily using genetic relatedness as an input for side-taking behavior; a mechanism that uses moral perceptions should be relatively insensitive to kinship cues. Relatedness is out.

A mutualistic account of morality could certainly explain some of the variance we see in moral side-taking. If both you and I want to see a cost inflicted on an individual or group of people because their existence presents us with costs, then we might side against people who engage in behaviors that benefit them, representing such behavior as immoral. This type of argument has been leveraged to understand why people often oppose recreational drug use: the opposition might help people with long-term sexual strategies inflict costs on the more promiscuous members of a population. The complication that mutualism runs into, though, is that certain behaviors might be evaluated inconsistently in that respect. As an example, murder might be in my interests when in the service of removing my enemies or the enemies of my allies; however, murder is not in my interests when used against me or my allies. If you side against those who murder people, you might also end up siding against people who share your interests and murder people (who might, in fact, further your interests by murdering others who oppose them).

While one could make the argument that we also don’t want to be murdered ourselves – accounting for some or all of that moral representation of murder as wrong – something about that line doesn’t sit right with me: it seems to conceive of the mutual interest in an overly broad manner. Here’s an example of what I mean: let’s say that I don’t want to be murdered and you don’t want to be murdered. In some sense, we share an interest in common when it comes to preventing murder; it’s an outcome we both want to avoid. So let’s say one day I see you being attacked by someone who intends to murder you. If I were to come to your aid and prevent you from being killed, I have not necessarily achieved my goal (“I don’t want to be murdered”); I’ve just helped you achieve yours (“you don’t want to be murdered”). To use an even simpler example, if both you and I are hungry, we both share an interest in obtaining food; that doesn’t mean that my helping you get food fills my interests or my stomach. Thus, the interest in the above example is not necessarily a mutual one. As I noted previously, in the case of friends or kin it can be a mutual interest; it just doesn’t seem to be one when thinking about the behavior per se. My preventing your murder is only useful (in the fitness sense of the word) to the extent that doing so helps me in some way in the future.

Another account of morality, which differs from the above positions, posits that side-taking on the basis of behavior could help reduce the costs of becoming involved in the disputes of others. Specifically, if all (or at least a sizable majority of) third parties took the same side in a dispute, one side would back down without the need for fights to be escalated to determine the winner (as more evenly-matched fights might require increased fighting costs to determine a winner, whereas lopsided ones often do not). This is something of a cost-reduction model. While the idea that morality functions as a coordination device – the same way, say, a traffic light does – raises an interesting possibility, it too comes with a number of complications. Chief among those complications is that coordination need not require a focus on the behavior of the disputants. In much the same way that the color of a traffic light bears no intrinsic relationship to driving behavior but is publicly observable, coordination in the moral domain need not bear any resemblance to the behavior of the disputants either. Third parties could, for instance, coordinate around the flip of a coin rather than the behavior of the disputants. If anything, coin flips might be better tools than disputants’ behavior as, unlike behavior, the outcome of a coin flip is easily observable. Most immoral behavior is notably not publicly observable, making coordination around it something of a hassle.

 And also making trials a thing…

What about the alliance-building idea? At first blush, taking sides on the basis of behavior seems like a much different type of strategy than siding on the basis of existing friendships. With some deeper consideration, though, I think there’s a lot of merit to the idea. Might behavior work as a cue for who would make a good alliance partner for you? After all, friendships have to start somewhere, and someone who was just stolen from might have a sudden need for partial partners that you might fill by punishing the perpetrator. Need provides a catalyst for new relationships to form. On the reverse end, that friend of yours who happens to be killing other people is probably going to end up racking up more than a few enemies: both the ones he directly impacted and the new ones who are trying to help his victims. If these enemies take a keen interest in harming him, he’s a riskier investment as costs are likely coming his way. The friendship itself might even become a liability to the extent that the people he put off are interested in harming you because you’re helping him, even if your help is unrelated to his acts. At such a point, his behavior might be a good indication that his value as a friend has gone down and, accordingly, it might be time to dump your friend from your life to avoid those association costs; it might even pay to jump on the punishing bandwagon. Even though you’re seeking partial relationships, you need impartial moral mechanisms to manage that task effectively.

This could explain why strangers become involved in disputes (they’re trying to build friendships and taking advantage of a temporary state of need to do so) and why side-taking on the basis of behavior rather than identity is useful at times (your friends might generate more hassle than they’re worth due to their behavior, especially since all the people they’re harming look like good social investments to others). It’s certainly an idea that deserves more thought.

Moral Stupefaction

I’m going to paint a picture of loss. Here’s a spoiler alert for you: this story will be a sad one.

Mark is sitting in a room with his cat, Tigger. Mark is a 23-year-old man who has lived most of his life as a social outcast. He never really fit in at school and he didn’t have any major accomplishments to his name. What Mark did have was Tigger. While Mark had lived a lonely life in his younger years, that loneliness had been kept at bay when, at the age of 12, he adopted Tigger. The two had been inseparable ever since, with Mark taking care of the cat with all of his heart. This night, as the two lay together, Tigger’s breathing was labored. Having recently become infected with a deadly parasite, Tigger was dying. Mark was set on keeping his beloved pet company in its last moments, hoping to chase away any fear or pain that Tigger might be feeling. Mark held Tigger close, petting him as he felt each breath grow shallower. Then the breaths stopped coming altogether. The cat’s body went limp, and Mark watched the life of the only thing he had loved, and that had loved him, fade away.

As the cat was now dead and beyond experiencing any sensations of harm, Mark promptly got up to toss the cat’s body into the dumpster behind his apartment. On his way, Mark passed a homeless man who seemed hungry. Mark handed the man Tigger’s body, suggesting he eat it (the parasite which had killed Tigger was not transmittable to humans). After all, it seemed like a perfectly good meal shouldn’t go to waste. Mark even offered to cook the cat’s body thoroughly.

Now, the psychologist in me wants to know: Do you think what Mark did was wrong? Why do you think that? 

Also, I think we figured out the reason no one else liked Mark

If you answered “yes” to that question, chances are that at least some psychologists would call you morally dumbfounded. That is to say, you are holding moral positions that you do not have good reasons for holding; you are struck dumb with confusion as to why you feel the way you do. Why might they call you this, you ask? Well, chances are because they would find your reasons for the wrongness of Mark’s behavior unpersuasive. You see, the above story has been carefully crafted to try to nullify any objections about proximate harms you might have. As the cat is dead, Mark isn’t hurting it by carelessly disposing of the body or even by suggesting that others eat it. As the parasite is not transmittable to humans, no harm would come of consuming the cat’s body. Maybe you find Mark’s behavior at the end disgusting or offensive for some reason, but your disgust and offense don’t make something morally wrong, the psychologists would tell you. After hearing these counterarguments, are you suddenly persuaded that Mark didn’t do something wrong? If you still feel he did, well, consider yourself morally dumbfounded as, chances are, you don’t have any more arguments to fall back on. You might even end up saying, “It’s wrong, but I don’t know why.”

The above scenario is quite similar to the ones presented to 31 undergraduate subjects in the now-classic paper on moral dumbfounding by Haidt, Bjorklund, & Murphy (2000). In the paper, subjects are presented with one reasoning task (the Heinz dilemma, asking whether a man should steal to help his dying wife) that involves trading off the welfare of one individual for another, and four other scenarios, each designed to be “harmless, yet disgusting”: a case of mutually-consensual incest between a brother and sister where pregnancy was precluded (due to birth control and condom use); a case where a medical student cuts a piece of flesh from a cadaver to eat (the cadaver is about to be cremated and had been donated for medical research); a chance to drink juice that had a dead, sterilized cockroach stirred in for a few seconds and then removed; and a case where participants would be paid a small sum to sign and then destroy a non-binding contract that gave their soul to the experimenter. In the former two cases – incest and cannibalism – participants were asked whether they thought the act was wrong and, if they did, to try to provide reasons why; in the latter two cases – roach and soul – participants were asked if they would perform the task and, if they would not, why. After the participants stated their reasons, the experimenter would challenge their arguments in a devil’s-advocate fashion to try to get them to change their minds.

As a brief summary of the results: the large majority of participants reported that having consensual incest and removing flesh from a human cadaver to eat were wrong (in the latter case, I imagine they would similarly rate the removal of flesh as wrong even if it were not eaten, but that’s beside the point), and a similarly large majority were also unwilling to drink the roach-dunked juice or sign the soul contract. On average, the experimenter was able to change about 16% of the participants’ initial stances by countering their stated arguments. The finding that got this paper its recognition, however, is that, in many cases, participants would state reasons for their decisions that contradicted the story (e.g., that a child born of incest might have birth defects, though no child was born due to the contraceptives) and, when those concerns had been answered by the experimenter, would maintain that they still believed these acts to be wrong even if they could no longer think of any reasons for that judgment. In other words, participants appeared to generate their judgments of an act first (their intuitions), with the explicit verbal reasoning for their judgments being generated after the fact and, in some cases, seemingly disconnected from the scenarios themselves. Indeed, in all cases except the Heinz dilemma, participants rated their judgments as arising more from “gut feelings” than from reasoning.

“fMRI scans revealed activation of the ascending colon for moral judgments…”

A number of facets of this work on moral dumbfounding are curious to me, though. One thing that has always stood out to me as dissatisfying is that the moral dumbfounding claims being made here are not what I would call positive claims (i.e., “people are using variable X as an input for determining moral perceptions”); rather, they seem to be negative ones (“people aren’t using conscious reasoning, or at least the parts of the brain doing the talking aren’t able to adequately articulate the reasoning”). While there’s nothing wrong with negative claims per se, I just happen to find them less satisfying than positive ones. I feel this dissatisfaction owes its existence to the notion that positive claims help guide and frame future research to a greater extent than negative ones (but that could just be some part of my brain confabulating my intuitions).

My main issue with the paper, however, hinges on the notion that the acts in question were “harmless.” A lot is going to turn on what is meant by that term. An excellent analysis of this matter is put forth in a paper by Jacobson (2012), in which he notes that there are perfectly good, harm-based reasons as to why one might oppose, say, consensual incest. Specifically, what participants might be responding to was not the harm generated by the act in a particular instance so much as the expected value of the act. One example offered to help make that point concerns gambling:

Compare a scenario I’ll call Gamble, in which Mike and Judy—who have no creditors or dependents, but have been diligently saving for their retirement—take their nest egg, head to Vegas, and put it all on one spin of the roulette wheel. And they win! Suddenly their retirement becomes about 40 times more comfortable. Having gotten lucky once, they decide that they will never do anything like that again. Was what Mike and Judy did prudent?

The answer, of course, is a resounding “no.” While the winning spin of roulette might have been “harmless” in the proximate sense of the word, such an analysis would ignore risk: the expected value of the act was, on the whole, rather negative. Jacobson (2012) goes on to expand the example, asking whether it would have been OK for the gambling couple to have used their child’s college savings instead. The point here is that consensual incest can be considered similarly dangerous. Just because things turned out well in a particular instance doesn’t mean that harm-based justifications for the condemnation are discountable; it could instead suggest that there exists a distinction between harm and risk that 30 undergraduate subjects are not able to articulate well while being challenged by a researcher. Like Jacobson (2012), I would condemn drunk driving as well, even if it didn’t result in an accident.
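The distinction between realized harm and expected value can be made concrete with a quick calculation. A minimal sketch, assuming the couple placed a single-number bet on an American roulette wheel (35:1 payout, 38 pockets) – the quoted scenario doesn’t actually specify the bet, though a roughly 36-fold win is consistent with that payout:

```python
# Expected value of a single-number bet in American roulette.
# Assumptions (not stated in the quoted scenario): the bet is on one
# number, paying 35:1, on a wheel with 38 pockets (1-36, 0, and 00).
def expected_value(stake, payout_ratio=35, pockets=38):
    p_win = 1 / pockets
    win_outcome = stake * payout_ratio   # net gain if the number hits
    lose_outcome = -stake                # entire stake lost otherwise
    return p_win * win_outcome + (1 - p_win) * lose_outcome

# For every $100 wagered, the bettor expects to lose about $5.26,
# even though the rare winning spin multiplies the stake 36-fold.
print(round(expected_value(100), 2))  # -5.26
```

The point of the sketch is simply that the bet is imprudent *before* the wheel stops spinning: the act had negative expected value regardless of how this one instance happened to turn out.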

To bolster that case, I would also like to draw attention to one of the findings of the moral dumbfounding paper I mentioned before: about 16% of participants reversed their moral judgments when their harm-based reasoning was challenged. Though this finding is not often the one people focus on when considering the moral dumbfounding paper, I think it helps demonstrate the importance of this harm dimension. If participants were not using harm (or risk of harm) as an input for their moral perceptions, but rather only as a post-hoc justification, these reversals of opinion in the wake of reduced welfare concerns would seem rather strange. Granted, not every participant changed their mind – in fact, many did not – but that any of them did requires an explanation. If judgments of harm (or risk) are coming after the fact and not being used as inputs, why would they subsequently have any impact whatsoever?

“I have revised my nonconsequentialist position in light of those consequences”

Jacobson (2012) suggests that perhaps the subjects were not so much morally dumbfounded as the researchers looking at the data were morally stupefied. That is to say, it’s not that the participants didn’t have reasons for their judgments (whether or not they were able to articulate them well) so much as the researchers didn’t accept their viability or weren’t able to see their validity owing to their own theoretical blinders. If participants did not want to drink juice that had a sterilized cockroach dunked in it because they found it disgusting, they are not dumbfounded as to why they don’t want to drink it; the researchers just aren’t accepting the subjects’ reason (it’s disgusting) as valid. Returning to the initial story in this post: if people oppose treating a beloved (but dead) pet in ways more consistent with indifference or contempt because doing so is offensive, that seems like a fine reason for their judgment. Whether or not offense is classified as a harm by a stupefied researcher is another matter entirely.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript.

Jacobson, D. (2012). Moral dumbfounding and moral stupefaction. Oxford Studies in Normative Ethics, 2. DOI:10.1093/acprof:oso/9780199662951.003.0012

Charitable Interpretations Were Never My Strong Suit

Never attribute to malice what is adequately explained by stupidity – Hanlon’s Razor

Disagreement and dispute are pervasive parts of human life, arising for a number of reasons. As Hanlon’s razor suggests, the charitable response to disagreement would be to just call someone stupid for disagreeing, rather than evil. Thankfully, these are not either/or types of aspersion we can cast, and we’re free to consider those who disagree with us both stupid and evil if we so desire. Being the occasional participant in discussions – both in the academic and online worlds – I’m no stranger to either of those labels. The question of the accuracy of the aspersions remains, however: calling someone ignorant or evil could serve the function of spreading accurate information; then again, it could also serve the function of persuading others to not listen to what the target has to say.

“The other side doesn’t have the best interests of the Empire in mind like I do”

When persuasion gets involved, we are entering the realm where perceptions can be inaccurate, yet still be adaptive. Usually being wrong about the world carries costs, as incorrect information yields worse decision making. Believing inaccurately that cigarettes don’t increase the probability of developing lung cancer will not alter the probability of developing a tumor after picking up a pack-a-day habit. If, however, my beliefs can cause other people to behave differently, then being wrong isn’t quite as bad, and I could even do myself some good. For instance, even if my motives in a debate are purely and ruthlessly selfish, I might be able to persuade other people to support my side anyway through both (1) suggesting that my point of view is not being driven by my underlying biases – but rather by the facts of the matter and my altruistic tendencies – and (2) suggesting that my opponent’s perspective is not to be trusted (usually for the opposite set of reasons). The explanation for why people frequently accuse others in debates of not understanding their perspective, or of sporting particular sets of biases, then, might have little to do with accuracy and more to do with convincing other people to not listen; the extent to which such accusations happen to be accurate might be more accidental than anything.

One example I discussed last year concerned the curious case of Kim Kardashian. Kim had donated 10% of some eBay sales to disaster relief, prompting many people to deride Kim’s behavior as selfishly motivated (even evil) and, in turn, also suggest that her donation be refused by aid organizations or the people in need themselves. It seemed to me that people were more interested in condemning Kim because they had something against her in particular, rather than because any of what she did was traditionally wrong or otherwise evil. It also seemed to me that, putting it lightly, Kim’s detractors might have been exaggerating her predilection towards evil by just a little bit. Maybe they were completely accurate – it’s possible, I suppose – it just didn’t seem particularly likely, especially given that many of the people condemning her probably knew very little about Kim on a personal level. If you want to watch other people make uncharitable interpretations of other people’s motives, I would encourage you to go observe a debate between people passionately arguing over an issue you couldn’t care less about. If you do, I suspect you will be struck by a sense that both sides of the dispute are, at least occasionally, being a little less than accurate when it comes to pinning motives and views on the other.

Alternatively, you could just observe the opposite side of a dispute you actually are invested in; chances are you will see your detractors as being dishonest and malicious, at least if the results obtained by Reeder et al (2005) are generalizable. In their paper, the researchers sought to examine whether one’s own stance on an issue tended to color one’s perceptions of the opposition’s motives. In their first study, Reeder et al (2005) presented about 100 American undergraduates with a survey asking them both about their perceptions of the US war in Iraq (concerning matters such as what motivated Bush to undertake the conflict and how likely particular motives were to be part of that reason), as well as whether they supported the war personally and what their political affiliation was. How charitable were the undergraduates when it came to assessing the motives for other people’s behavior?

“Don’t spend it all in one place”

The answer, predictably, tended to hinge on whether or not the participants favored the war themselves. In the open-ended responses, the two most common motives listed for going to war were self-defense and bringing benefits to the Iraqi people by freeing them from a dictatorship; the next two most common were proactive aggression and hidden motives (like trying to take US citizens’ minds off other issues, such as the economy). Among those who favored the war, 73% listed self-defense as a motive, compared to just 39% of those who opposed it; conversely, proactive aggression was listed by 30% of those who supported the war, relative to 73% of those who opposed it. The findings were similar for ratings of self-serving motives: on a 1-7 scale (from being motivated by ethical principles to selfishness), those in favor of the war gave Bush a mean of 2.81; those opposed gave him a 6.07. It’s worth noting at this point that (assuming the scale is, in fact, measuring two opposite ends of a spectrum) both groups cannot be accurate in their perceptions of Bush’s motives. Given that those neither opposed to nor supportive of the war tended to fall in between those two groups in their attributions of motives, it is also possible that both sides could well be wrong.

Interestingly – though not surprisingly – political affiliation per se did not have much predictive value for determining what people thought of Bush’s motives for the war when one’s own support for the war was entered into a regression model with it. What predicted people’s motive attributions was largely their own view about the war. In other words, Republicans who opposed the war tended to view Bush largely the same as Democrats opposed to the war, just as Democrats supportive of the war viewed Bush the same as Republicans in favor of it. Reeder et al (2005) subsequently replicated these findings in a sample of Canadian undergraduates who, at the time, were far less supportive of the war on the whole than the American sample. Additionally, this pattern of results was also replicated when asking about the motives of other people who supported or opposed the war, rather than asking about Bush specifically. Further, when issues other than the war (in this case, abortion and gay marriage) were used, the same pattern of results was obtained. In general, opposing an issue made those who supported it look more self-serving and biased, and vice versa.

The last set of findings – concerning abortion and gay marriage – was particularly noteworthy because of an addition to the survey: a measure of personal involvement in the issue. Rather than just being asked whether they supported or opposed one side of the issue, participants were also asked how important the issue was to them and how likely they were to change their mind about their stance. As one might expect, this tendency to see your opposition as selfish, biased, close-minded, and ignorant was magnified by the extent to which one found the issue personally important. Though I can’t say for certain, I would venture a guess that, in general, how important an issue is to me is fairly uncorrelated with how much other people actually know about it. In fact, if these judgments of other people’s motives and knowledge were driven by the facts of the matter, then the authors should not have observed this effect of issue importance. That line of reasoning, again, suggests that these perceptions are probably aimed more at persuasion than accuracy. The extent to which they’re accurate is likely beside the point.

“Damn it all; I was aiming for the man”

While I find this research interesting, I do wish that it had been grounded in the theory I initially mentioned, concerning persuasion and accuracy. Instead, Reeder et al (2005) ground their account in naive realism, the tenets of which seem to be (roughly) that (a) people believe they are objective observers and (b) other objective observers will see the world as they do, so (c) anyone who doesn’t agree must be ignorant or biased. Naive realism looks more like a description of the results they found than an explanation for them. In the interests of completeness, the authors also ground their research in self-categorization theory, which states that people seek to differentiate their group from other groups in terms of values, with the goal of making their own group look better. Again, this sounds like a description of behavior, rather than an explanation for it. As the authors don’t seem to share my taste for a particular type of theoretical explanation grounded in considerations of evolutionary function (at least in terms of what they wrote), I am forced to conclude that they’re at least ignorant, if not downright evil.*

References: Reeder, G., Pryor, J., Wohl, M., & Griswell, M. (2005). On attributing negative motives to others who disagree with our opinions. Personality & Social Psychology Bulletin, 31, 1498-1510.

*Not really