Proximate And Ultimate Moral Culpability

Back in September, I floated an idea about our moral judgments: that intervening causes between an action and its outcome could serve to partially mitigate the severity of those judgments. This would owe itself to the potential each intervening cause has for presenting a new target of moral responsibility and blame (i.e. "if only the parents had properly locked up their liquor cabinet, then their son wouldn't have gotten drunk and wrecked their car"). As the number of these intervening causes increases, the number of potential blameable targets increases, which should be expected to diminish the ability of third-party condemners to achieve any kind of coordination in their decisions. Without coordination, enacting moral punishment becomes costlier, all else being equal, and thus we might expect people to condemn others less harshly in such situations. Well, as it turns out, there's some research on this topic that was conducted a mere four decades ago that I was unaware of at the time. Someone call Cold Stone, because it seems I've been scooped again.

To get your mind off that stupid pun, here’s another.

One of these studies comes from Brickman et al (1975), and involved examining how people assign responsibility for a car accident that had more than one potential cause. Since there are a number of comparisons and causes I'll be discussing, I've labeled them for ease of following along. The first of these causes were proximate in nature: internal alone (1. a man hit a tree because he wasn't looking at the road) or external alone (2. a man hit a tree because his steering failed). However, there were also ultimate causes behind these proximate causes, leading to four additional scenarios: both causes internal (3. a man hit a tree because he wasn't looking at the road; he wasn't looking at the road because he was daydreaming), both external (4. a man hit a tree because his steering failed; his steering failed because the mechanic had assembled it poorly when repairing it), or a mix of the two. The first of these mixes (5) was a man hitting a tree because his steering failed, but his steering failed because he had neglected to get it checked in over a year; the second (6) concerned a man hitting a tree because he wasn't paying attention to the road, and he wasn't paying attention because someone on the side of the road was yelling.

After the participants had read about one of these scenarios, they were asked to indicate how responsible the driver was for the accident, how foreseeable the accident was, and how much control the driver had in the situation. Internal causes for the accident resulted in higher scores on all these variables relative to external ones (1 vs. 2). There's nothing too surprising there: people get blamed less for their steering failing than for not paying attention to the road. The next analysis compared the presence of one type of cause alone to that type of cause paired with an ultimate cause of the same type (1 vs. 3, and 2 vs. 4). When both proximate and ultimate causes were internal (1 vs. 3), no difference was observed in the judgments of responsibility. However, when both proximate and ultimate causes were external (2 vs. 4), moral condemnation appeared to be softened by the presence of an ultimate explanation. Two internal causes didn't budge judgments relative to a single cause, but two external causes diminished perceptions of responsibility beyond a single one.

Next, Brickman et al (1975) turned to the matter of what happens when the proximate and ultimate causes were of different types (1 vs. 6 and 2 vs. 5). When the proximate cause was internal but the ultimate cause was external (1 vs. 6), there was a drop in judgments of moral responsibility (from 5.4 to 3.7 on a 0-to-6 scale), foreseeability (from 3.7 to 2.4), and control (from 3.4 to 2.7). The exact opposite trend was observed when the proximate cause was external but the ultimate cause was internal (2 vs. 5). In that case, there was an increase in judgments of responsibility (from 2.3 to 4.1), foreseeability (from 2.3 to 3.4), and control (from 2.6 to 3.4). As Brickman et al (1975) put it:

“…the nature of prior cause eliminated the effects of the immediate cause on attributions of foreseeability and control, although a main effect of immediate cause remained for attributions of responsibility,”
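For readers who, like me, find it easier to keep the conditions and numbers straight when they're laid out explicitly, here's a minimal sketch (in Python) organizing the four conditions whose means are quoted above. Treating all three measures as 0-to-6 scales is my assumption; the text only specifies that scale for responsibility.

```python
# A minimal sketch organizing the conditions and means quoted above. Only the
# four conditions with numbers reported in the text are included; treating all
# three measures as 0-to-6 scales is an assumption.

conditions = {
    1: "internal proximate only (not looking at the road)",
    2: "external proximate only (steering failed)",
    5: "external proximate + internal ultimate (steering failed; never had it checked)",
    6: "internal proximate + external ultimate (not looking; distracted by yelling)",
}

# condition: (responsibility, foreseeability, control)
means = {
    1: (5.4, 3.7, 3.4),
    2: (2.3, 2.3, 2.6),
    5: (4.1, 3.4, 3.4),
    6: (3.7, 2.4, 2.7),
}

# The reversal described above: adding an external ultimate cause to an internal
# proximate one lowers judgments (1 -> 6); adding an internal ultimate cause to
# an external proximate one raises them (2 -> 5).
for base, mixed in [(1, 6), (2, 5)]:
    deltas = [round(m - b, 1) for b, m in zip(means[base], means[mixed])]
    print(f"{base} -> {mixed}: change in (resp., foresee., control) = {deltas}")
```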

So that’s some pretty neat stuff and, despite the research not being specifically about the topic, I think these findings might have some broader implications for understanding the opposition to evolutionary psychology more generally.

They’re so broad people with higher BMIs might call the suggestion insensitive.

As a fair warning, this section will contain a fair bit of speculation, since there isn't much data (that I know of, anyway) bearing on people's opposition towards evolutionary explanations. That said, let's talk about what anecdata we do have. The first curious thing that has struck me about the opposition to certain evolutionary hypotheses is that it tends to focus exclusively, or nearly exclusively, on topics that have some moral relevance. I've seen fairly common complaints about evolutionary hypotheses that concern moralized topics like violence, sexual behavior, sexual orientation, and male/female differences. What you don't tend to see are complaints about research in areas that do not tend to be moralized, like vision, language, or taste preference. That's not to say that such objections don't ever crop up, of course; just that complaints about the latter do not appear to be as frequent or protracted as complaints about the former. Further, when objections to the latter topics do appear, it's typically in the middle of some other moral issue surrounding the topic.

This piece of anecdata ties in with another, related piece: one of the more common complaints against evolutionary explanations is that people perceive evolutionary researchers as trying to justify some particular morally-blameworthy behavior. The criticism, misguided as it is, tends to go something like this: "if [Behavior X] is the product of selection, then we can't hold people accountable for what they do. Further, we can't hope to do much to change people's behavior, so why bother?". As the old saying goes, if some behavior is the product of selection, we might as well just lie back and think of England. Since people don't want to just accept these behaviors (and because they note, correctly, that behavior is modifiable), they go on to suggest that it's the ultimate explanation that must be wrong, rather than their assessment of its implications.

“Whatever; go ahead and kill people, I guess. I don’t care…”

The similarities between these criticisms of evolutionary hypotheses and the current study are particularly striking: if selection is responsible for people's behavior, then the people themselves seem to be less responsible for, and less in control of, their behavior. Since people want to condemn others for this behavior, they have a strategic interest in downplaying the role of other causes in generating it. The fewer potential causes for a behavior there are, the more easily moral condemnation can be targeted, and the more likely others are to join in the punishment. It doesn't hurt that the ultimate explanations that do get invoked – patriarchy being the most common, in my experience – are also things that these people are interested in morally condemning.

What's interesting – and perhaps ironic – about the whole issue to me is that there are also parallels to the debates people have about free will and moral responsibility. Let's grant that the aforementioned criticisms were accurate and that evolutionary explanations offered some kind of justification for things like murder, rape, and the like. It would seem, then, that such evolutionary explanations could justify the moral condemnation and punishment of such behaviors just as well. Surely, we possess adaptations for avoiding outcomes like being killed, and we also possess adaptations capable of condemning such behavior. We wouldn't need to justify our condemnation of those acts any more than people would need to justify committing the acts themselves. If murder could be justified, then surely punishing murderers could be as well.

References: Brickman, P., Ryan, K., & Wortman, C. (1975). Causal chains: Attribution of responsibility as a function of immediate and prior causes. Journal of Personality and Social Psychology, 32, 1060-1067.

Begging Questions About Sexualization

There's an old joke that goes something like this: If a man wants to make a woman happy, it's really quite simple. All he has to do is be a chef, a carpenter, brave, a friend, a good listener, responsible, clean, warm, athletic, attractive, tender, strong, tolerant, understanding, stable, ambitious, and compassionate. Men should also not forget to compliment a woman frequently, give her attention while expecting little in return, give her freedom to do what she wants without asking too many questions, and love to go shopping with her, or at least support the habit. So long as a man does/is all those things, and he manages to never forget birthdays, anniversaries, or other important dates, he should easily be able to make a woman happy. Women, on the other hand, can also make men happy with a few simple steps: show up naked and bring beer. (For the unabridged list, see here). While this joke, like many great jokes, contains an exaggeration, it also manages to capture a certain truth: the qualities that make a man attractive to a woman seem to be a bit more varied than the qualities that make a woman attractive to a man.

“Yeah; he’s alright, I guess. Could be a bit taller and richer…”

Even if men did value the same number of traits in women that women value in men, the two sexes do not necessarily value the same kinds of traits, or value them to the same degree (though there is, of course, frequently some amount of overlap). Given that men and women tend to value different qualities in one another, what should this tell us about the signals each sex sends to appeal to the other? The likely answer is that men and women might end up behaving or altering their appearance in different ways when it comes to appealing to the opposite sex. As a highly-simplified example, men might tend to value looks and women might tend to value status. If a man is trying to appeal to women under such circumstances, it does him little good to signal his good looks, just as it does a woman no favors to try and signal her high status to men.

So when people start making claims about how one sex – typically women – is being "sexualized" to a much greater extent than the other, we should be very specific about what we mean by the term. A recent paper by Hatton & Trautner (2011) set out to examine (a) how sexualized men and women tend to be in American culture and (b) whether that sexualization has risen over time. The proxy measure they made use of for their analysis was about four decades' worth of Rolling Stone covers, spanning from 1967 to 2009, as these covers contain pictures of various male and female cultural figures. The authors suggest that this research has value because of various other lines of research suggesting that these depictions might damage women's body satisfaction, foster negative attitudes about women among men, and threaten to increase the amount of sexual harassment that women face. Somewhat surprisingly, in the laundry list of references attesting to these negative effects on women, there is no explicit mention of any possible negative effects on men. I find that interesting. Anyway…

As for the research itself, Hatton & Trautner (2011) examined approximately 1000 covers of Rolling Stone, of which 720 focused on men and 380 focused on women. The pictures were coded with respect to (a) the degree of nudity, from unrevealing to naked, on a 6-point scale, (b) whether there was touching, from none to explicitly sexual, on a 4-point scale, (c) pose, from standing upright to explicitly sexual, on a 3-point scale, (d) mouth…pose (I guess), from not sexual to sexual, on a 3-point scale, (e) whether breasts/chest, genitals, or buttocks were exposed and/or the focal point of the image, all on 3-point scales, (f) whether the text on the cover line related to sex, (g) whether the shot focused on the head or the body, (h) whether the model was engaged in a sex act or not, and finally (i) whether there were hints of sexual role play. So, on the one hand, it seems like these pictures were analyzed thoroughly. On the other, however, consider this list of variables and compare it to the initial joke. By my count, all of them appear to fall more on the end of "what makes men happy" rather than "what makes women happy".

Which might cause a problem in translation from one sex to the other

Images were considered to be "hypersexualized" if they scored 10 or more points (out of the possible 23), but only regular "sexualized" if they scored from 5 to 9 points. In terms of sexualization, the authors found that it appeared to be increasing over time: in the '60s, 11% of men and 44% of women were sexualized; by the '00s these figures had risen to 17% and 89%, respectively. So Hatton & Trautner (2011) concluded that men were being sexualized less than women overall, which is reasonable given their criteria. However, those percentages captured both the "sexualized" and the "hypersexualized" pictures. Examining the two groups separately, the authors found that around 1-3% of men on the covers were hypersexualized in any given decade, whereas the comparable range for women was 6% to 61%. Not only did women tend to be sexualized more often, they also tended to be sexualized to a greater degree. The authors go so far as to suggest that the only appropriate label for such depictions of women was as sex objects.
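To make the scoring logic concrete, here's a rough sketch of how a cover's total might be tallied and binned. The variable names and per-variable point ranges are my guesses (and they need not reconcile exactly with the stated 23-point maximum); only the 5-9 and 10-or-more cutoffs come from the paper as summarized above.

```python
# A rough sketch of the scoring logic described above, not the authors' actual
# coding sheet. The variable names and per-variable point ranges are my guesses;
# only the 5-9 and 10-or-more cutoffs come from the summary above.

def classify_cover(scores: dict) -> str:
    """Sum the coded variables for one cover and bin the total."""
    total = sum(scores.values())
    if total >= 10:
        return "hypersexualized"
    if total >= 5:
        return "sexualized"
    return "nonsexualized"

# Hypothetical coded values for a single cover, loosely following the scales above.
example_cover = {
    "nudity": 3, "touching": 2, "pose": 1, "mouth": 1,
    "breasts_chest": 2, "genitals": 0, "buttocks": 1,
    "sexual_cover_text": 1, "body_focus": 1, "sex_act": 0, "role_play": 0,
}

print(classify_cover(example_cover))  # -> "hypersexualized" (total of 12 points)
```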

The major interpretative problem that is left unaddressed by Hatton & Trautner (2011) and their "high-powered sociological lens", of course, is that they fail to consider whether the same kinds of displays make men and women equally sexually appealing. As the initial joke might suggest, a man is unlikely to win many brownie points with a prospective date if he shows up naked with beer; he might win a place on some sex-offender list, though, which falls short of the happy ending he would have liked. Indeed, many of the characteristics highlighted in the list of ways to make a woman happy – such as warmth, emotional stability, and listening skills – are not quite as easily captured by a picture, relative to physical appearance. To make matters even more challenging for the authors' interpretation, there is the looming fact that men tend to be far more open to offers of casual sex in the first place. In other words, there might be about as much value in signaling that a man is "ready for sex" as there is in signaling that a starving child is "ready for food". It's something that is liable to be assumed already.

To put this study in context, imagine I were to run an analysis similar to the authors', but started my study with the following rationale: "It's well known that women tend to value the financial prospects of their sexual partners. Accordingly, we should be able to measure the degree of sexualization on Rolling Stone covers by assessing the net wealth of the people being photographed". All I would have to do is add in some moralizing about how the depiction of rich men is bad for poorer men's self-esteem and for women's preferences in relationships, and that paper would be a reasonable facsimile of the current one. If this analysis found that the depicted men tended to be wealthier than the depicted women, this would not necessarily indicate that the men, rather than the women, were being depicted as more attractive mates. This is due to the simple, aforementioned fact that we should expect an interaction between signalers and receivers. It doesn't pay for a signaler to send a signal that the intended receiver is all but oblivious to; rather, we should expect signals to be tailored to the details of the receptive systems they are attempting to influence.

The sexualization of images like this might go otherwise unnoticed.

It seems that the assumptions made by the authors stacked the deck in favor of them finding what they thought they would. By defining sexualization in a particular way, they partially begged their way to their conclusion. If we instead defined sexualization in ways that considered variables beyond how much or what kind of skin was showing, we'd likely come to different conclusions about the degree of sexualization. That's not to say that we would find an equal degree of it between the sexes, mind you, but it would be a recognition that there are many factors that can go into making someone sexually attractive which are not always able to be captured in a photo. We've seen complaints of sexualization like these leveled against the costumes that superheroes of each sex tend to wear, and the same oversight is present there as well. Unless the initial joke would work just as well with the sexes reversed, these discussions will require more nuance concerning sexualization to be of much use.

References: Hatton E. & Trautner, M. (2011). Equal opportunity objectification? The sexualization of men and women on the cover of Rolling Stone. Sexuality and Culture, 15, 256-278.

Truth And Non-Consequences

A topic I've been giving some thought to lately concerns the following question: are our moral judgments consequentialist or nonconsequentialist? As the words might suggest, the question concerns the extent to which our moral judgments are based on the consequences that result from an action versus the behavior per se that people engage in. We frequently see a healthy degree of inconsistency around the issue. Today I'd like to highlight a case I came across while rereading The Blank Slate, by Steven Pinker. Here's part of what Steven had to say about whether any biological differences between groups could justify racism or sexism:

“So could discoveries in biology turn out to justify racism and sexism? Absolutely not! The case against bigotry is not a factual claim that humans are biologically indistinguishable. It is a moral stance that condemns judging an individual according to the average traits of certain groups to which the individual belongs.”

This seems like a reasonable statement on the face of it. Differences between groups, on the whole, do not necessarily mean differences on the same trait between any two given individuals. If a job calls for a certain height, in other words, we should not discriminate against women just because men tend to be taller. That average difference does not mean that many men and women are not the same height, or that the reverse relationship never holds.

Even if it generally does…

Nevertheless, there is something not entirely satisfying about Steven's position, namely that people are not generally content to say "discrimination is just wrong". People like to try and justify their stance that it is wrong, lest the proposition be taken to be an arbitrary statement with no more intrinsic appeal than "trimming your beard is just wrong". Steven, like the rest of us, thus tries to justify his moral stance on the issue of discrimination:

"Regardless of IQ or physical strength or any other trait that can vary, all humans can be assumed to have certain traits in common. No one likes being enslaved. No one likes being humiliated. No one likes being treated unfairly, that is, according to traits that the person cannot control. The revulsion we feel toward discrimination and slavery comes from a conviction that however much people vary on some traits, they do not vary on these."

Here, Steven seems to be trying to have his nonconsequentialist cake and eat it too*. If the case against bigotry is "absolutely not" based on discoveries in biology or a claim that people are biologically indistinguishable, then it seems peculiar to reference biological facts concerning certain universal traits to try and justify one's stance. Would the discovery that certain people dislike being treated unfairly to different degrees justify treating them unfairly, all else being equal? If it would, the first quoted idea is wrong; if it would not, the second statement doesn't make much sense. What is also notable about these two quotes is that they are not cherry-picked from different sections of the book; the second quote comes from the paragraph immediately following the first. I found their juxtaposition rather striking.

With respect to the consequentialism debate, the fact that people try to justify their moral stances in the first place seems strange from a nonconsequentialist perspective: if a behavior is just wrong, regardless of the consequences, then it needs no explanation or justification. Stealing, in that view, should be just wrong; it shouldn't matter who stole from whom, or the value of the stolen goods. A child stealing a piece of candy from a corner store should be just as wrong as an adult stealing a TV from Best Buy; it shouldn't matter that Robin Hood stole from the rich and gave to the poor, because stealing is wrong no matter the consequences, and he should be condemned for it. Many people would, I imagine, agree that not all acts of theft are created equal, though. On the topic of severity, many people would also agree that murder is generally worse than theft. Again, from a nonconsequentialist perspective, this should only be the case for arbitrary reasons, or at least reasons that have nothing at all to do with the fact that murder and theft have different consequences. I have tried to think of what those other, nonconsequentialist reasons might be, but I appear to suffer from a failure of imagination in that respect.

Might there be some findings that ostensibly support the notion that moral judgments are, at least in certain respects, nonconsequentialist? Yes; in fact, there are. The first of these comes from a pair of related dilemmas known as the trolley and footbridge dilemmas. In both contexts one life can be sacrificed so that five lives are saved. In the former dilemma, a train heading towards five hikers can be diverted to a side track where there is only a single hiker; in the latter, a train heading towards five hikers can be stopped by pushing a person in front of it. In both cases the welfare outcomes are identical (one dead; five not), so it seems that if moral judgments only track welfare outcomes, there should be no difference between these scenarios. Yet there is: about 90% of people will support diverting the train, while only 10% tend to support pushing (Mikhail, 2007). This would certainly be a problem for any theory of morality claiming that the function of moral judgments more broadly is to make people better off on the whole. Moral judgments that fail to maximize welfare would be indicative of poor design for such a function.

Like how this bathroom was poorly optimized for personal comfort.

There are concerns with the idea that this finding supports moral nonconsequentialism, however: namely, the judgments of moral wrongness for pushing or redirecting are not definitively nonconsequentialist. People oppose pushing others in front of trains, I would imagine, because of the costs that pushing inflicts on the individual being pushed. If the dilemma were reworded to one in which acting on a person would not harm them but would save the lives of others, you'd likely find very little opposition to it (i.e. pushing someone in front of a train in order to send a signal to the driver, but with enough time for the pushed individual to exit the track and escape harm safely). This relationship holds in the trolley dilemma: when an empty side track is available, redirection to said track is almost universally preferred, as might be expected (Huebner & Hauser, 2011). One who favors the nonconsequentialist account might suggest that such a manipulation is missing the point: after all, it's not that pushing someone in front of a train is immoral, but rather that killing someone is immoral. This rejoinder would seem to blur the issue, as it suggests, somewhat confusingly, that people might judge certain consequences non-consequentially. Intentionally shooting someone in the head, in this line of reasoning, would be wrong not because it results in death, but because killing is wrong; death just so happens to be a necessary consequence of killing. Either I'm missing some crucial detail or the distinction is unhelpful, so I won't spend any more time on it.

Another line of evidence touted as support for moral nonconsequentialism is the research done on moral dumbfounding (Haidt et al, 2000). In brief, this research has found that, when presented with cases where objective harms are absent, many people continue to insist that certain acts are wrong. The most well-known of these involves a brother-sister case of consensual incest on a single occasion. The sister is using birth control and the brother wears a condom; they keep their behavior a secret and feel closer because of it. Many subjects (about 80%) insisted that the act was wrong. When pressed for an explanation, many initially referenced harms that might occur as a result, though these harms were always countered by the context (no pregnancy, no emotional harm, no social stigma, etc.). From this, it was concluded that conscious concerns for harm appear to represent post hoc justifications for an underlying moral intuition.

One needs to be cautious in interpreting these results as evidence of moral nonconsequentialism, though, and a simple example explains why. Imagine that in that experiment what was being asked about was not whether the incest itself was wrong, but instead why the brother and sister pair had sex in the first place. Due to the dual contraceptive use, there was no probability of conception. Therefore, a similar interpretation might say, this shows that people are not consciously motivated to have sex because of children. While it's true enough that most acts of intercourse might not be motivated by the conscious desire for children, and while the part of the brain that's talking might not have access to information concerning how other cognitive decision rules are enacted, it doesn't mean the probability of conception plays no role in shaping the decision to engage in intercourse; despite what others have suggested, sexual pleasure per se is not adaptive. In fact, I would go so far as to say that the moral dumbfounding results are only particularly interesting because, most of the time, harm is expected to play a major role in our moral judgments. Pornography manages to "trick" our evolved sexual motivation systems by providing them with inputs similar to those that reliably correlate with the potential for conception; perhaps certain experimental designs – like the case of brother-sister incest – manage to similarly "trick" our evolved moral systems by providing them with inputs similar to those that reliably correlated with harm.

Or illusions; whatever your preferred term is.

In terms of making progress in the consequentialism debate, it seems useful to do away with the idea that moral condemnation functions to increase welfare in general: not only are such claims clearly empirically falsified, they could only even be plausible in the realm of group selection, which is a topic we should have all stopped bothering with long ago. Just because moral judgments fail the test of group welfare improvement, however, it does not suddenly make the nonconsequentialist position tenable. There are more ways of being consequentialist than with respect to the total amount of welfare increase. It would be beneficial to turn our eye towards considering the strategic welfare consequences that are likely to accrue to actors, second parties, and third parties as a result of these behaviors. In fact, we should be able to use such considerations to predict contexts under which people should flip back and forth between consciously favoring consequentialist and nonconsequentialist kinds of moral reasoning. Evolution is a consequentialist process, and we should expect it to produce consequentialist mechanisms. To the extent we are not finding them, the problem might owe itself more to a failure of our expectations about the shape of those consequences than to an actual nonconsequentialist mechanism.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

Huebner, B. & Hauser, M. (2011). Moral judgments about altruistic self-sacrifice: When philosophical and folk intuitions clash. Philosophical Psychology, 24, 73-94.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Science, 11, 143-151.

 

*Later, Steven writes:

“Acknowledging the naturalistic fallacy does not mean that facts about human nature are irrelevant to our choices…Acknowledging the naturalistic fallacy implies only that discoveries about human nature do not, by themselves, dictate our choices…”

I am certainly sympathetic to such arguments and, as usual, Steven's views on the topic are more nuanced than these quotes alone are capable of displaying. Steven does, in fact, suggest that all good justifications for moral stances concern harms and benefits. Those two particular quotes are only used to highlight the frequent inconsistencies in people's stated views.

Towards Understanding The Action-Omission Distinction

In moral psychology, one of the most well-known methods of parsing the reasons outcomes obtain involves the categories of actions and omissions. Actions are intuitively understandable: they are behaviors which bring about certain consequences directly. By contrast, omissions represent failures to act that result in certain consequences. As a quick example, a man who steals your wallet commits an act; a man who finds your lost wallet, keeps it for himself, and says nothing to you commits an omission. Though actions and omissions might result in precisely the same consequences (in either case, you end up with less money and the man ends up with more), they do not tend to be judged the same way. Specifically, actions tend to be judged as more morally wrong than comparable omissions and more deserving of punishment. While this state of affairs might seem perfectly normal to you or me, a deeper understanding of it requires us to take a step back and consider why it is, in fact, rather strange.

And so long as I omit the intellectual source of that strategy, I sound more creative.

From an evolutionary standpoint this action-omission distinction is strange for a clear reason: evolution is a consequentialist process. If I'm worse off because you stole from me or because you failed to return my wallet when you could have, I'm still worse off. Organisms should be expected to avoid costs, regardless of their origin. Importantly, costs need not only be conceptualized as what one might typically envision them to be, like inflictions of physical damage or thefts of resources; they can also be understood as failures to deliver benefits. Consider a new mother: though the mother might not kill her child directly, if she fails to provision the infant with food, the infant will die all the same. From the perspective of the child, the failure of the mother to provide food could well be considered a cost inflicted by negligence. So, if someone could avoid harming me – or could provide me with some benefit – but does not, why should it matter whether that outcome obtained because of an action or an omission?

The first part of that answer concerns a concept I mentioned in my last post: the welfare tradeoff ratio. Omissions are, generally speaking, less indicative of one’s underlying WTR than acts. Let’s consider the wallet example again: when a wallet is stolen, this act expresses that one is willing to make me suffer a cost so they can benefit; when the wallet is found and not returned, this represents a failure of an individual to deliver a benefit to me at some cost to themselves (the time required to track me down and forgoing the money in my wallet). While the former expresses a negative WTR, the latter simply fails to express an overtly-positive one. To the extent that moral punishment is designed to recalibrate WTRs, then, acts provide us with more accurate estimates of WTRs, and might subsequently tend to recruit those cognitive moral systems to a greater degree. Unfortunately, this explanation is not entirely fulfilling yet, owing to the consequentialist facts of the matter: it can be as good, from my perspective, to increase the WTR of the thief towards me as it is for me to increase the omitter’s WTR. Doing either means I would have more money than if I had not, which is a useful outcome. Costs and benefits, in this world, are tallied on the same score board.

The second part of the answer, then, needs to invoke the costs inherent in enacting this modification of WTRs through moral punishment. Just as it's good for me if others hold a high WTR with respect to me, it's similarly good for others if I hold a high WTR with respect to them. This means that people, unsurprisingly, are often less-than-accommodating when it comes to giving up their welfare for another without the proper persuasion; persuasion which happens to take time and energy to enact, and which comes with certain risks of retaliation. Accordingly, we ought to expect mechanisms that enact moral condemnation strategically: when the costs of doing so are sufficiently low or the benefits of doing so are sufficiently high. After all, every living person right now could, in principle, increase their WTR towards you, but trying to morally condemn every living person for not doing so is unlikely to be a productive strategy. Not only would such a strategy result in the condemner undertaking many endeavors that are unlikely to be successful relative to the invested effort, but someone increasing their WTR towards you requires that they lower their WTR towards someone else, and those someone elses would typically not be tickled by the prospect.

“You want my friend’s investment? Then come and take it, tough guy”
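To make the "condemn strategically" logic a little more concrete, here's a toy expected-value sketch. The functional form and all of the numbers are illustrative assumptions on my part, not anything drawn from the punishment literature.

```python
# A toy expected-value framing of the "condemn strategically" idea above; the
# functional form and the numbers are illustrative assumptions, not a model
# taken from the literature.

def worth_condemning(recalibration_benefit, base_cost, n_co_punishers, p_success=0.5):
    """Condemn only when the expected benefit of shifting the target's WTR
    exceeds the per-punisher cost, which shrinks as more third parties join."""
    my_share_of_cost = base_cost / (1 + n_co_punishers)
    return p_success * recalibration_benefit > my_share_of_cost

# Punishing alone is often not worth the effort and retaliation risk...
print(worth_condemning(recalibration_benefit=3.0, base_cost=5.0, n_co_punishers=0))  # False
# ...but the same offense becomes worth condemning once others coordinate.
print(worth_condemning(recalibration_benefit=3.0, base_cost=5.0, n_co_punishers=4))  # True
```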

Given the costs involved in indiscriminately condemning non-maximal WTRs, we can narrow the action-omission distinction down to the following question: what is it about punishing omissions that tends to be less productive than punishing actions? One possible explanation comes from DeScioli, Bruening, & Kurzban (2011). The trio posit that omissions are judged less harshly than actions because omissions tend to leave less overt evidence of wrongdoing. As punishment costs tend to decrease as the number of punishers increases, if third-party punishers make use of evidence in deciding whether or not to become involved, then material evidence should make punishment easier to enact. Unfortunately, the design the researchers used in their experiments does not appear to definitively speak to their hypothesis. Specifically, they found the effect they were looking for – namely, a reduction of the action-omission effect – but they only managed to do so by reframing an omission (failing to turn a train or stop a demolition) as an action (pressing a button that failed to turn a train or stop a demolition). It is not clear that such a manipulation solely varied the evidence available without fundamentally altering other morally-relevant factors.

There is another experiment that did manage to substantially reduce the action-omission effect without introducing such a confound, however: Haidt & Baron (1996). In this paper, the authors presented subjects with a story about a person selling his car. The seller knows there is a 1/3 chance the car contains a manufacturing defect that will cause it to fall apart soon; a potential defect specific to the year the car was made. When a buyer inquires about the year of the manufacturing defect, the seller either (a) lies about it or (b) doesn't correct the buyer, who has suddenly exclaimed that he remembers which year it was, though he is incorrect. When asked how wrong it was for the seller to do (or fail to do) what he did, the action-omission effect was observed when the buyer was not personally known to the seller. However, if the seller happened to be good friends with the buyer, the size of the effect was reduced by almost half. In other words, when the buyer and seller were good friends, it mattered less whether the seller cheated the buyer through action or omission; both were deemed relatively unacceptable (and, interestingly, both were deemed more wrong overall as well). When the buyer and the seller were all but strangers, by contrast, people rated the cheat via omission as relatively less wrong than the action. Moral judgments in close relationships appeared to generally become more consequentialist.

If evidence were the deciding factor in the action-omission distinction, then the closeness of the relationship between the actor or omitter and the target should not be expected to have any effect on moral judgments (as the nature of the relationship does not itself generate any additional observable evidence). While this finding does not rule out the role of evidence in the action-omission distinction altogether, it does suggest that evidence concerns alone are insufficient for understanding the distinction. The nature of the relationship between the actor and victim is, however, predicted to have an effect when considering the WTR model. We expect our friends, especially our close friends, to have relatively high WTRs with respect to us; we might even expect them to go out of their way to suffer costs to help us if necessary. Indications that they are unwilling to do so – whether through action or omission – represent betrayals of that friendship. Further, when a friend behaves in a manner indicating a negative WTR towards us, the gulf between the expected (highly positive) and actual (negative) WTR is far greater than if a stranger behaved comparably (as we might expect a neutral starting point for strangers).

“I hate when girls lie online about having a torso!”
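Put in toy numbers (all of them made up purely for illustration), the point about the gulf between expected and displayed WTRs looks something like this:

```python
# An illustrative sketch of the expectation gulf described above; every number
# here is made up purely for illustration.

def wtr_violation(expected_wtr, displayed_wtr):
    """How far short of expectations did the behavior fall?"""
    return expected_wtr - displayed_wtr

displayed = -0.2  # an act implying a mildly negative weighting of my welfare

print(wtr_violation(expected_wtr=0.8, displayed_wtr=displayed))  # close friend: 1.0
print(wtr_violation(expected_wtr=0.0, displayed_wtr=displayed))  # stranger:     0.2
```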

Though this analysis does not provide a complete explanation of the action-omission distinction by any means, it does point us in the right direction. It would seem that actions actively advertise WTRs, whereas omissions do not necessarily do likewise. Morally condemning all those who fail to display positive WTRs per se does not make much sense, as the costs involved in doing so are so high as to preclude efficiency. Further, those who simply fail to express a positive WTR towards you might be less liable to inflict future costs, relative to those who express a negative one (i.e. the man who fails to return your wallet is not necessarily as liable to hurt you in the future as the one who directly steals from you). Selectively directing that condemnation at those who display appreciably low or negative WTRs, then, appears to be a more viable strategy: it could help direct condemnation towards where it's liable to do the most good. This basic premise should hold especially given a close relationship with the perpetrator: such relationships entail more frequent contact and, accordingly, more opportunities for one's WTR towards you to matter.

References: DeScioli, P., Bruening, R., & Kurzban. R. (2011). The omission effect in moral cognition: Toward a functional explanation. Evolution and Human Behavior, 32, 204-215.

Haidt, J. & Baron, J. (1996). Social roles and the moral judgment of acts and omissions. European Journal of Social Psychology, 26, 201-218.

When Giving To Disaster Victims Is Morally Wrong

Here's a curious story: Kim Kardashian recently decided to sell some personal items on eBay. She also mentioned that 10% of the proceeds would be donated to typhoon relief in the Philippines. On the face of it, there doesn't appear to be anything morally objectionable going on here: Kim is selling items on eBay (not an immoral behavior) and then giving some of her money freely to charity (not immoral). Further, she made this information publicly available, so she's not lying or being deceitful about how much money she intends to keep and how much she intends to give (also not immoral). If the coverage of the story and the comments about it are any indication, however, Kim has done something morally condemnable. To select a few choice quotes, Kim is, apparently, "the centre of all evil in the universe", is "insulting" and "degrading" people, is "greedy" and "vile". She's also a "horrible bitch" and anyone who takes part in the auction is "retarded". One of the authors expressed the hope that "…[the disaster victims] give you back your insulting 'portion of the proceeds' which is a measly 10% back to you so you can choke on it". Yikes.

Just shred the money, add some chicken and ranch, and you’re good to go.

Now one could wonder whether the victims of this disaster would actually care that some of the money being used to help them came from someone who only donated 10% of her eBay sales. Sure; I'd bet the victims would prefer to have more money donated from every donor (and non-donor), but I think just about everyone in the world would rather have more money than they currently do. Though I might be mistaken, I don't think there are many victims who would insist that the money be sent back because there wasn't enough of it. I would also guess that, in terms of the actual dollar amount provided, Kim's auctions probably resulted in more giving than many or most other actual donors, and definitely more than anyone lambasting Kim who did not personally give (of which I assume there are many). Besides the elements of hypocrisy that are typical of disputes of this nature, there is one facet of this condemnation that really caught my attention: people are saying Kim is a bad person for doing this not because she did anything immoral per se, but because she failed to do something laudable to a great-enough degree. This is akin to suggesting someone should be punished for only holding a door open for five people, despite them not being required to hold it open for anyone.

Now one might suggest that what Kim did wasn't actually praiseworthy because she made money off of it: Kim is self-interested and is using this tragedy to advance her personal interests, or so the argument goes. Perhaps Kim was banking on the idea that giving 10% to charity would result in people paying more for the items themselves, offsetting the cost. Even if that were the case, however, it still wouldn't make what she was doing wrong, for two reasons: first, people profit from selling goods or services continuously, and, most of the time, we don't deem those acts morally wrong. For instance, I just bought groceries, but I didn't feel moral outrage that the store I bought them from profited off me. Secondly, it would seem that even if Kim did benefit by doing this, it's a win-win situation for her and the typhoon victims. While mutual benefit may make gauging Kim's altruistic intentions difficult, it would not make the act immoral per se. Furthermore, it's not as if Kim's charity auction coerced anyone into paying more than they otherwise would have; how much to pay was the decision of the buyers, whom Kim could not directly control. If Kim ended up making more money off those auctions than she otherwise would have, it's only because other people willingly gave her more. So why are people attempting to morally condemn her? She wasn't dishonest, she didn't do anyone direct harm, she didn't engage in any behavior that is typically deemed "immoral", and the result of her actions was that people were better off. If one wants to locate the focal point of people's moral outrage about Kim's auction, then, it will involve digging a little deeper psychologically.

One promising avenue to begin our exploration of the matter is a chapter by Petersen, Sell, Tooby, & Cosmides (2010) that discussed our evolved intuitions about criminal justice. In it, they discuss the concept of a welfare tradeoff ratio (WTR). A WTR is, essentially, one person’s willingness to give up some amount of personal welfare to deliver some amount of welfare to another. For instance, if you were given the choice between $6 for yourself and $1 for someone else or $5 for both of you, choosing the latter would represent a higher WTR: you would be willing to forgo $1 so that another individual could have an additional $4. Obviously, it would be good for you if other people maintained a high WTR towards you, but others are not so willing to give up their own welfare without some persuasion. One way (among many) of persuading someone to put more stock in your welfare is to make yourself look like a good social investment. If benefiting you will benefit the giver in the long run – perhaps because you are currently experiencing the bad luck of a typhoon destroying your home, but you can return to being a productive associate in the future if you get help – then we should expect people to up-regulate their WTR towards you.

Some other pleas for assistance are less liable to net good payoffs.
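One common way of formalizing a WTR is as the weight one places on another person's welfare relative to one's own. Under that (assumed) linear framing, the $6/$1 versus $5/$5 choice above pins down a lower bound on the chooser's WTR:

```python
# Backing out the WTR implied by the allocation choice described above: choosing
# ($5 self, $5 other) over ($6 self, $1 other) means giving up $1 so the other
# person gains $4. Under a simple linear formalization (payoff = own + w * other,
# which is my assumption here), that choice implies w > 1/4.

def implied_wtr_lower_bound(chosen, rejected):
    """Minimum weight on the other person's payoff consistent with the choice."""
    own_sacrifice = rejected[0] - chosen[0]  # what I gave up by choosing this option
    other_gain = chosen[1] - rejected[1]     # what the other person gained
    return own_sacrifice / other_gain

print(implied_wtr_lower_bound(chosen=(5, 5), rejected=(6, 1)))  # 0.25
```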

The intuition that Kim's moral detractors appear to be expressing, then, is not that Kim is wrong for displaying a mildly positive WTR per se, but that the WTR she displayed was not sufficiently high, given her relative wealth and the disaster victims' relative need. This makes her appear to be a potentially poor social investment, as she is relatively unwilling to give up much of her own welfare to help others, even when they are in desperate need. Framing the discussion in this light is useful insomuch as it points us in the right direction, but it only gets us so far. We are left with the matter of figuring out why, for instance, most other people who were giving to charity were not condemned for not giving as much as they realistically could have, even if that meant forgoing some personal items or pleasurable experiences themselves (i.e. "if you ate out less this week, or sold some of your clothing, you too could have contributed more to the aid efforts; you're a greedy bitch for not doing so").

It also doesn't explain why anyone would suggest that it would have been better for Kim to have given nothing at all instead of what she did give. Though we see that kind of rejection of low offers in bargaining contexts – like ultimatum games – we typically don't see as much of it in altruistic ones. This is because rejecting the money in bargaining contexts has an effect on the proposer's payoff; in altruistic contexts, rejection has no negative effect on the giver and should affect their behavior far less. Even more curious, though: if the function of such moral condemnation is to increase one's WTR towards others more generally, suggesting that Kim giving nothing would have somehow been better than what she did give is exceedingly counterproductive. If increasing WTRs were the primary function of moral condemnation, it seems like the more appropriate strategy would be to start by condemning those people – rich or not – who contributed nothing, rather than something (as those who gave nothing, arguably, displayed a lower WTR towards the typhoon victims than Kim did). Despite that, I have yet to come across any articles berating specific individuals or groups for not giving at all; they might be out there, but they generated much less publicity if so. We need something else to complete the account of why people seem to hate Kim Kardashian for not giving more.

Perhaps that something more is that the other people who did not donate were also not trying to suggest they were behaving altruistically; that is, they were not trying to reap the benefits of being known as an altruist, whereas Kim was, but only halfheartedly. This would mean Kim was sending a less-than-honest signal. A major complication with that account, however, is that Kim was, for all intents and purposes, acting altruistically; she could simply have been praised less for what she did, rather than condemned. Thankfully, the condemnation towards Kim is not the only example of this we have to draw upon. These kinds of claims have been advanced before, as when Tucker Max tried to donate $500,000 to Planned Parenthood, only to be rejected because some people didn't want to associate with him. The arguments being made against accepting that sizable donation centered around (a) the notion that he was giving for selfish reasons and (b) the worry that others would stop supporting Planned Parenthood if Tucker became associated with them. My guess is that something similar is at play here. Celebrities can be polarizing figures (for reasons which I won't speculate about here), drawing overly hostile or positive reactions from people who are not affected by them personally. For whatever reason, there are many people who dislike Kim and would like either to avoid being associated with her altogether or to see her fall from her current position in society. This no doubt has an effect on how they view her behavior. If Kim weren't Kim, there's a good chance no one would care about this kind of charity-involving auction.

Much better; now giving only 10% is laudable.

As I mentioned in my last post, children appear to condone harming others with whom they do not share a common interest. The same behavior – in this case, giving 10% of your sales to help others – is likely to be judged substantially differently contingent on who precisely is enacting the behavior. Understanding why people express moral outrage at welfare-increasing behaviors requires a deeper examination of their personal strategic interests in the matter. We should expect that state of affairs for a simple reason: benefiting others more generally is not universally useful, in the evolutionary sense of the word. Sometimes it's good for you if certain other people are worse off (though this argument is seldom made explicitly). That does mean, of course, that people will at times ostensibly advocate for helping a group of needy people, but then shun help, even substantial amounts of help, when it comes from the "wrong" sources. They likely do so either because such condemnation will harm those "wrong" sources directly or because allowing the association could harm the condemner in some way. Yes; that does mean the behavior of these condemners has a self-interested component; the very thing they criticized Kim for. Without considering these strategic, self-interested motivations, we'd be at a loss for understanding why giving to typhoon victims is sometimes morally wrong.

References: Petersen, M.B., Sell, A., Tooby, J., & Cosmides, L. (2010). Evolutionary psychology and criminal justice: A recalibration theory of punishment and reconciliation. In Human Morality & Sociality: Evolutionary & Comparative Perspectives, edited by Høgh-Olesen, H., Palgrave Macmillan, New York.