Health Food Nazis

“Hitler was a vegetarian. Just goes to show, vegetarianism, not always a good thing. Can, in some extreme cases, lead to genocide.” – Bill Bailey

There’s a burgeoning new field of research in psychology known as health licensing*. Health licensing is the idea that once people do something health-promoting, they subsequently give themselves psychological license to do other, unhealthy things. A classic example of this kind of research might go something like this: an experimenter will give participants a chance to do something healthy, like go on a jog or eat a nutritious lunch. After participants engage in this healthy behavior, they are then given a chance to do something unhealthy, like break their own legs. Typical results show that once people have engaged in these otherwise healthy behaviors, they are significantly more likely to engage in self-destructive ones, like leg-breaking, in order to achieve a balance between their healthy and unhealthy behaviors. This is just one more cognitive quirk to add to the ever-lengthening list of human psychological foibles.

Now that you’ve engaged in hospital-visiting behavior, feel free to burn yourself to even it out.

Now many of you are probably thinking one or both of two things: “that sounds strange” and “that’s not true”. If you are thinking those things, I’m happy that we’re on the same page so far. The problems with the above hypothetical area of research are clear. First, it seems strange that people would go do something unhealthy and harmful because they had previously done something which was good for them; it’s not like healthy and unhealthy behaviors need to be intrinsically balanced out for any reason, at least not one that readily comes to mind. Second, it seems strange that people would want to engage in the harmful behaviors at all. Just because an option to do something unhealthy is presented, it doesn’t mean people are going to want to take it, as it might have little appeal to them. When people typically engage in behaviors which are deemed harmful in the long-term – such as smoking, overeating junk food, or other such acts which are said to be psychologically ‘licensed’ by healthy behaviors – they do so because of the perceived short-term benefits of such things. People certainly don’t drink for the hangover; they drink for the pleasant feelings induced by the booze.

So, with that in mind, what are we to make of a study that suggests doing something healthy can give people a psychological license to adopt immoral political stances? In case that sounds too abstract, the research on the table today examines whether drinking sauerkraut juice makes people more likely to endorse Nazi-like politics, and no; I’m not kidding (as much as I wish I was). The paper (Messner & Brugger, 2015) itself leans heavily on moral licensing: the idea that engaging in moral behaviors activates compensating psychological mechanisms that encourage the actor to engage in immoral ones. So, if you told the truth today, you get to lie tomorrow to balance things out. Before moving further into the details of the paper, it’s worth mentioning that the authors have already bumped up against one of the problems from my initial example: I cannot think of a reason that ‘moral’ and ‘immoral’ behaviors need to be “balanced out” psychologically (whatever that even means), and none is provided. Indeed, as some people continuously refrain from immoral (or unhealthy) behaviors, whereas others continuously indulge in them, compensation or balance doesn’t seem to factor into the equation in the same way (or at all) for everyone.

Messner & Brugger (2015) try to draw on a banking analogy, whereby moral behavior gives one “credit” into their account that can be “spent” on immoral behavior. However, this analogy is largely unhelpful as you cannot spend money you do not have, but you can engage in immoral behaviors even if you have no morally-good “credit”. It’s also unhelpful in that it presumes immoral behavior is something one wants to spend their moral credit on; the type of immoral behavior seems to be beside the point, as we will soon see. Much like my leg-breaking example, this too seems to make little sense: people don’t seem to want to engage in immoral behavior because it is immoral. As the bank account analogy is not at all helpful for understanding the phenomenon in question, it seems better to drop it altogether, since it’s only likely to sow confusion in the minds of anyone trying to really figure out what’s going on here. Then again, perhaps the confusion is only present in the paper to compensate for all the useful understanding the researchers are going to provide us later.

“We broke half the lights to compensate for the fact that the other half work”

Moving forward, the authors argue that, because health-relevant behavior is moralized, engaging in some kind of health-promoting behavior – in this case, drinking sauerkraut juice (high in fiber and vitamin C, we are told) – ought to give people good moral “credit” which they will subsequently spend on immoral behavior (in much the same way buying eco-friendly products leads to people giving themselves a moral license to steal, we are also told). Accordingly, the authors first asked 128 Swiss students to indicate who was more moral: someone who drinks sauerkraut juice or someone who drinks Nestea. As predicted, 78% agreed that the sauerkraut-juice drinker was more moral, though whether a “neither, and this question is silly” option existed is not mentioned. The students also indicated how morally acceptable and right-wing a number of attitudes were: statements which related, according to the authors, to a number of nasty topics like devaluing the culture of others (i.e., seeing a woman wearing a burka making someone uncomfortable), devaluing other nations (viewing foreign nationals as a burden on the state), affirming antisemitism (disliking some aspects of Israeli politics), devaluing the humanity of others (not agreeing that all public buildings ought to be modified for handicapped access), and a few others. Now all of these statements were rated as immoral by the students, but whether they represent what the authors think they do (Nazi-like politics) is up for interpretation.

In any case, another 111 participants were then collected and assigned to drink sauerkraut juice, Nestea, or nothing. Those who drank the sauerkraut juice rated it as healthier than those who drank the Nestea and, correspondingly, were also more likely to endorse the Nazi-like statements (M = 4.46 on a 10-point scale) than those who drank Nestea (M = 3.82) or nothing (M = 3.73). Neat. There are, however, a few other major issues to address. The first of these is that, depending on who you sample, you’re going to get different answers to the “are these attitudes morally acceptable?” questions. Since it’s Swiss students being assessed in both cases, I’ll let that issue slide for the more pressing, theoretical one: the authors’ interpretation of the results would imply that the students who indicated that such attitudes are immoral also wished to express them. That is to say, because they just did something healthy (drank sauerkraut juice), they now want to engage in immoral behavior. They don’t seem too picky about what immoral behavior they engage in, either, as they’re apparently more willing to adopt political stances they would otherwise oppose, were it not for the disgusting, yet healthy, sauerkraut juice.

This strikes me very much as the kind of metaphorical leg-breaking I mentioned earlier. When people engage in immoral (or unhealthy) behaviors, they typically do so because of some associated benefit: stealing grants you access to resources you otherwise wouldn’t obtain; eating that Twinkie gives you the pleasant taste and the quick burst of calories, even if they make you fat when you do that too much. What benefits are being obtained by the Swiss students who are now (slightly) more likely to endorse right-wing, Nazi-like politics? None are made clear in the paper and I’m having a hard time thinking up any myself. This seems to be a case of immoral behavior for the sake of it, which could only arise from a rather strange psychology. Perhaps there is something worth noting going on here that isn’t being highlighted well; perhaps the authors just stumbled on a statistical fluke (which does happen regularly). In either case, the idea of moral licensing doesn’t seem to help us understand what’s happening at all, and the banking metaphors and references to “balancing” and “compensation” seem similarly impotent to move us forward.

“Just give him the money; he eats well, so it’s OK”

The moral licensing idea is even worse than all that, though, as it doesn’t engage with the main adaptive reason people avoid self-beneficial but immoral behaviors: other people will punish you for them. If I steal from someone else, they or their allies might well take revenge on me; assuring them of my healthy diet will likely do little to deter the punishment I would soon receive. If that is the case – and I suspect it is – then this self-granted “moral license” would be about as useful as my simply believing that stealing from others isn’t wrong and won’t be punished (which is to say, “not at all”). Any type of moral license needs to be granted by potential condemners in order to be of any practical use in that regard, and the current research does not assess whether that is the case. This limited focus on conscience – rather than condemnation – complete with the suggestion that people are likely to adopt social politics they would otherwise oppose for the sake of achieving some kind of moral balance after drinking 100 ml of gross sauerkraut juice, makes for a very strange paper indeed.

References: Messner, C. & Brugger, A. (2015). Nazis by Kraut: A playful application of moral self-licensing. Psychology, 6, http://dx.doi.org/10.4236/psych.2015.69112

*This statement has not been evaluated by the FDA or any such governmental body; the field doesn’t actually exist to the best of my knowledge, but I’ll tell you it does anyway.

 

How Many Foundations Of Morality Are There?

If you want to understand and explain morality, the first useful step is to be sure you’re clear about what kind of thing morality is. This first step has, unfortunately, been a stumbling point for many researchers and philosophers. Many writers on the topic of morality, for example, have primarily discussed (and subsequently tried to explain) altruism: behaviors which involve actors suffering costs to benefit someone else. While altruistic behavior can often be moralized, altruism and morality are not the same thing; a mother breastfeeding her child is engaged in altruistic behavior, but this behavior does not appear to be driven by moral mechanisms. Other writers (as well as many of the same ones) have also discussed morality in conscience-centric terms. Conscience refers to self-regulatory cognitive mechanisms that use moral inputs to influence one’s own behavior. As a result of that focus, many moral theories have not adequately been able to explain moral condemnation: the belief that others ought to be punished for behaving immorally (DeScioli & Kurzban, 2009). While the value of being clear about what one is actually discussing is large, it is often, and sadly, not the case that treatises on morality begin by being clear about what they think morality is, nor is it the case that they tend to avoid conflating morality with other things, like altruism.

“It is our goal to explain the function of this device”

When one is not quite clear on what morality happens to be, one can end up at a loss when trying to explain it. For instance, Graham et al (2012), in their discussion of how many moral foundations there are, write:

We don’t know how many moral foundations there really are. There may be 74, or perhaps 122, or 27, or maybe only five, but certainly more than one.

Sentiments like these suggest a lack of focus on what it is precisely the authors are trying to understand. If you are unsure whether the thing you are trying to explain is 2, 5, or over 100 things, then it is likely time to take a step back and refine your thinking a bit. As Graham et al (2012) do not begin their paper with a mention of what kind of thing morality is, they leave me wondering what precisely it is they are trying to explain with 5 or 122 parts. What they do posit is that morality is innate (organized in advance of experience), modified by culture, the result of intuitions first and reasoning second, and that it has multiple foundations; none of that, however, resolves my wondering about what precisely they mean when they write “morality”.

The five moral foundations discussed by Graham et al (2012) include kin-directed altruism (what they call the harm foundation), mechanisms for dealing with cheaters (fairness), mechanisms for forming coalitions (loyalty), mechanisms for managing coalitions (authority), and disgust (sanctity). While I would agree that navigating each of these different adaptive problems is important for meeting the challenges of survival and reproduction, there seems to be little indication that these represent different domains of moral functioning, rather than simply different domains upon which a single, underlying moral psychology might act (in much the same way, a kitchen knife is capable of cutting a variety of foods, so one need not carry a potato knife, a tomato knife, a celery knife, and so on). In the interests of being clear where others are not, by morality I am referring to the existence of the moral dimension itself: the ability to perceive “right” and “wrong” in the first place and generate the associated judgments that people who engage in immoral behaviors ought to be condemned and/or punished (DeScioli & Kurzban, 2009). This distinction is important because it would appear that species are capable of navigating the above five problems without requiring the moral psychology humans possess. Indeed, as Graham et al (2012) mention, many non-human species share one or many of these problems, yet whether those species possess a moral psychology is debatable. Chimps, for instance, do not appear to punish others for engaging in harmful behavior if said behavior has no effect on them directly (though chimps do take revenge for personal slights). Why, then, might a human moral psychology lead us to condemn others whereas it does not seem to exist in chimps, despite us sharing most of those moral foundations? That answer is not provided, or even discussed, throughout the length of the moral foundations paper.

To summarize up to this point: the moral foundations piece is not at all clear on what type of thing morality is, which leaves its case that many – not one – distinct moral mechanisms exist similarly unclear. It does not necessarily tackle how many of these distinct mechanisms might exist, and it does not address the matter of why human morality appears to differ from whatever nonhuman morality there might – or might not – be. Importantly, the matter of what adaptive function morality has – what adaptive problems it solved and how it solved them – is left all but untouched. Graham et al (2012) seem to fall into the same pit trap that so many before them have: believing they have explained the adaptive value of morality because they outline an adaptive value for things like kin-directed altruism, reciprocal altruism, and disgust, despite these concepts not being the same thing as morality per se.

Such pit traps often prove fatal for theories

Making explicit hypotheses of function for understanding morality – as with all of psychology – is crucial. While Graham et al (2012) try to compare these different hypothetical domains of morality to different types of taste receptors on our tongues (one for sweet, bitter, sour, salt, and umami), that analogy glosses over the fact that these different taste receptors serve entirely separate functions by solving unique adaptive problems related to food consumption. Without any analysis of which unique adaptive problems are solved by morality in the domain of disgust, as opposed to, say, harm-based morality, as opposed to fairness-based morality – and so on – the analogy does not work. The question of importance in this case is what function(s) these moral perceptions serve and whether that (or those) function(s) vary when our moral perceptions are raised in the realm of harm or disgust. If that function is consistent across domains, then it is likely handled by a single moral mechanism; not many of them.

However, one thing Graham et al (2012) appear sure about is that morality cannot be understood through a single dimension, meaning they are putting their eggs in the many-different-functions basket; a claim with which I take issue. A prediction that this multiple-morality hypothesis put forth by moral foundations theory might make, if I am understanding it correctly, would be that you ought to be able to selectively impair people’s moral cognitions via brain damage. For example, were you to lesion some hypothetical area of the brain, you would be able to remove a person’s ability to process harm-based morality while leaving their disgust-based morality otherwise unaffected (likewise for fairness, sanctity, and loyalty). Now I know of no data bearing on this point, and none is mentioned in the paper, but it seems that, were such an effect possible, it likely would have been noticed by now.

Such a prediction also seems unlikely to hold true in light of a particular finding: one curious facet of moral judgments is that, given someone perceives an act to be immoral, they almost universally perceive (or rather, nominate) someone – or a group of someones – to have been harmed by it. That is to say they perceive one or more victims when they perceive wrongness. If morality, at least in some domains, was not fundamentally concerned with harm, this would be a very strange finding indeed; people ought not need to perceive a victim at all for certain offenses. Nevertheless, it seems that people do not perceive victimless moral wrongs (despite their inability to always consciously articulate such perceptions), and will occasionally update their moral stances when their perceptions of harms are successfully challenged by others. The idea of victimless moral wrongs, then, appears to originate much more from researchers claiming that an act is without a victim than from their subjects’ perceptions.

Pictured: a PhD, out for an evening of question begging

There’s a very real value to being precise about what one is discussing if you hope to make any forward momentum in a conversation. It’s not good enough for a researcher to use the word morality when it’s not at all clear to what that word is referring. When such specifications are not made, people seem to end up doing all sorts of things, like explaining altruism, or disgust, or social status, rather than achieving their intended goal. A similar problem was encountered when another recent paper on morality attempted to define “moral” as “fair”, and then not really define what they meant by “fair”: the predictable result was a discussion of why people are altruistic, rather than why they are moral. Moral foundations theory seems to only offer a collection of topics about which people hold moral opinions; not a deeper understanding of how our morality functions.

References: DeScioli, P. & Kurzban, R. (2009) Mysteries of morality. Cognition, 112, 281-299.

Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. (2012). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130.

Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the art of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on an engagement ring, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such outrageous behavior is, “why do people apparently waste money like this?” A second, related question that might be asked once we have an answer to the first question (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is how we can understand who is most likely to spend money in a conspicuous fashion. Alternatively, this question could be framed by asking about what contexts tend to favor conspicuous consuming behavior. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders or, if you’re a bit strange, to anyone looking to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth instead?

The first question – why do people conspicuously consume – is perhaps the easier question to initially answer, as it’s been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing the tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there was some kind of survival benefit to those long, colorful tail feathers, we would expect that both sexes would develop them; not just the males.

However, it is because these feathers are costly that they are useful signals, since males in relatively poor condition could not shoulder their costs effectively. It takes a healthy, well-developed male to be able to survive and thrive in spite of carrying these trains of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues for such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to just say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.

Costly displays, then, owe their existence to the honesty they impart on a signal. Human consumption patterns should be expected to follow a similar pattern: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we would need to begin by thinking about matters such as what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Towards that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups: specifically, the paper examined racial patterns of spending on what was dubbed visible goods: objects which are conspicuous in anonymous interactions and portable, such as jewelry, clothing, and cars. These are goods designed to be luxury items which others will frequently see, relative to other, less-visible luxury items, such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al (2009), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than Whites (about 40% and 25% less, respectively), but this difference needs to be viewed with respect to each group’s relative income. After all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated through their reported overall spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of their relative spending on a number of different categories emerged. Specifically, it was found that Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size, as well. The average White household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household would spend approximately $9,000 on such purchases. These purchases come at the expense of all other categories (which should be expected, as the money has to come from somewhere), meaning that more money spent on visible goods often means less spent on education, health care, and entertainment.
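To make the size of that gap concrete, here’s a minimal sketch of the arithmetic in Python. The 30% figure is just one point picked from the 20-30% range reported above, purely for illustration.

```python
# A minimal sketch of the arithmetic behind the visible-spending comparison.
# The 20-30% gap is the range reported by Charles et al (2009); picking a
# single value within it (here 30%) is purely an assumption for illustration.

white_visible_spending = 7000   # approximate annual visible-goods spending, White households
visible_gap = 0.30              # assumed point within the reported 20-30% range

comparable_household_estimate = white_visible_spending * (1 + visible_gap)
print(f"Comparably-wealthy Black/Hispanic household estimate: ${comparable_household_estimate:,.0f}")
# -> roughly $9,100, in line with the ~$9,000 figure mentioned above
```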

There are some other interesting findings to mention. One – which I find rather notable, but the authors don’t seem to spend any time discussing – is that racial differences in consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible consumption gap appears to decline as one goes from single to married. The numbers Charles et al (2009) mention estimate that the average percentage of budgets used on visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that it represents both. Getting older and getting into relationships tended to reduce the racial divide in visible good consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al (2009) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing. People are interested in signaling their economic standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than Whites, then people might tend to assume that a given Black person has a lower economic standing. To overcome this assumption, then, Black individuals should be particularly motivated to signal that they do not, in fact, have the lower economic standing more typical of their group. In brief: as the average income of a group drops, those with money should be particularly inclined to signal that they are not as poor as the other people below them in their group.

In support of this idea, Charles et al (2009) further analyzed their data, finding that the average spending on visible luxury goods declined in states with higher average incomes, just as it also declined among racial groups with higher average incomes. In other words, raising the average income of a racial group within a state tended to strongly impact what percentage of consumption was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first of which being that it’s incomplete as it stands. From my reading of it, it’s a bit unclear to me how the explanation works for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one is a member of a lower-income group, one’s income is higher than that group’s average. However, that explanation would not explain the age/marital status information I mentioned before without adding on other assumptions, nor would it directly explain the benefits which arise from signaling one’s economic status in the first place. Moreover, if I’m understanding the results properly, it wouldn’t directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing the aggregate wealth shouldn’t have much of an impact, as “rich” and “poor” are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer to this question hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, being in command of resources also tends to make one appear to be a more desirable mate as well. A healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your sexual preferences.

However, recognition of that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To place it in an easy example, there’s a big difference between having access to no food and some food; there’s less of a difference between having access to some food and good food; there’s less of a difference still between good food and great food. The same holds for all manner of other resources. Because the marginal value of resources decreases as overall access to resources increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there’s less value in signaling that one is wealthier than another if that wealth difference isn’t going to amount to the same degree of marginal benefit.
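To illustrate that diminishing-returns logic, here is a minimal sketch in Python. The concave (logarithmic) utility function and the fixed $10,000 advantage are my own illustrative assumptions, not anything specified by Charles et al (2009); the only point is that a fixed advantage buys less as baseline wealth rises.

```python
import math

# A minimal sketch of the diminishing-marginal-value argument. The concave
# (logarithmic) utility function and the fixed $10,000 advantage are
# illustrative assumptions; the point is only that each extra dollar is
# worth less as overall wealth rises.

def utility(resources):
    """Concave utility: each additional unit of resources is worth less."""
    return math.log(resources)

def value_of_advantage(baseline, advantage=10_000):
    """Extra utility a fixed wealth advantage buys at a given baseline."""
    return utility(baseline + advantage) - utility(baseline)

for baseline in (20_000, 50_000, 100_000, 200_000):
    print(f"baseline ${baseline:>7,}: a $10,000 advantage is worth "
          f"{value_of_advantage(baseline):.3f} utils")

# As the group's baseline wealth rises, the same advantage buys less and less
# extra utility, so signaling that advantage should also be worth less.
```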

So, provided that wealth has a higher marginal value in poorer communities – like Black and Hispanic ones, relative to Whites – we should expect more signaling of it in those contexts. This logic could explain the racial gap on spending patterns. It’s not that people are trying to avoid a negative association with a poor reference group as much as they’re only engaging in signaling to the extent that signaling holds value to others. In other words, it’s not about my signaling to avoid being thought of as poor; it’s about my signaling to demonstrate that I hold a high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purposes of attracting a mate, as they already have one. They might engage in such purchases for the purposes of retaining that mate, though such purchases should involve spending money on visible items for other people, rather than for themselves. Further, as people age, their competition in the mating market tends to decline for a number of reasons, such as existing children, inability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off, again, because the marginal value of sending such signals has surely declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors which might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual’s life history. To the extent that one is following a faster life history strategy – reproducing earlier, taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in such visible consumption, as the marginal value of signaling you have resources now is higher when the stability of those resources (or your future) is called into question. The current data do not speak to this possibility, however. Additionally, one’s sexual strategy might also be a valuable piece of information, given the links we saw with age and marital status. As these ornaments are predominantly used to attract the attention of prospective mates in nonhuman species, it seems likely that individuals with a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly. More attention is important if you’re looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the “signaling to not seem as poor as others” hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2009). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.

 

A Curious Case Of Welfare Considerations In Morality

There was a stage in my life, several years back, where I was a bit of a chronic internet debater. As anyone who has engaged in such debates – online or off, for that matter – can attest, progress can be quite slow if any is observed at all. Owing to the snail’s pace of such disputes, I found myself investing more time in them than I probably should have. In order to free up my time while still allowing me to express my thoughts, I created my own site (this one) where I could write about topics that interested me, express my viewpoints, and then be done with them, freeing me from the quagmire of debate. Happily, this is a tactic that has not only proven to be effective, but I like to think that it has produced some positive externalities for my readers in the form of several years’ worth of posts that, I am told, some people enjoy. Occasionally, however, I do still wander back into a debate here and there, since I find them fun and engaging. Sharing ideas and trading intellectual blows is nice recreation.

 My other hobbies follow a similar theme

In the wake of the recent shooting in Charleston, the debate I found myself engaged in concerned the arguments for the moral and legal removal of guns from polite society, and I wanted to write a bit about it here, serving both the purposes of cleansing it from my mind and, hopefully, making an interesting point about our moral psychology in the process. The discussion itself centered around a clip from one of my favorite comedians, Jim Jefferies, who happens to not be a fan of guns himself. While I recommend watching the full clip and associated stand-up because Jim is a funny man, for those not interested in investing the time and itching to get to the moral controversy, here’s the gist of Jim’s views about guns:

“There’s one argument and one argument alone for having a gun, and this is the argument: Fuck off; I like guns”

While Jim notes that there’s nothing wrong with saying, “I like something; don’t take it away from me”, the rest of the routine goes through various discussions of how other arguments for the owning of guns are, in Jim’s words, bullshit (including owning guns for self-defense or for the overthrow of an oppressive government; for a different comedic perspective, see Bill Burr).

Laying my cards on the table, I happen to be one of those people who enjoys shooting recreationally (just target practice; I don’t get fancy with it and I have no interest in hunting). That said, I’m not writing today to argue with any of Jim’s points; in fact, I’m quite sympathetic to many of the concerns and comments he makes: on the whole, I feel the expected value of guns, in general, to be a net cost for society. I further feel that if guns were voluntarily abandoned by the population, there would probably be many aggregate welfare benefits, including reduced rates of suicide, homicide, and accidental injury (owing to the possibility that many such conflicts are heat of the moment issues, and lacking the momentary ability to employ deadly force might mean it’s never used at all later). I’m even going to grant his point I quoted above: the best justification for owning a gun is recreational in nature. I don’t ask that you agree or disagree with all this; just that you follow the logical form of what’s to come.

Taking all of that together, the argument for enacting some kind of legal ban of guns – or at the very least the moral condemnation of the ability to own them – goes something like this: because the only real benefit to having a gun is that you get to have some fun with it, and because the expected costs to all those guns being around tend to be quite high, we ought to do away with the guns. The welfare balance just shifts away from having lots of deadly weapons around. Jim even notes that while most gun owners will never use their weapons intentionally or accidentally to inflict costs on others or themselves, the law nevertheless needs to cater to the 1% or so of people who would do such things. So, this thing – X – generates welfare costs for others which far outstrip its welfare benefits, and therefore should be removed. The important point of this argument, then, would seem to focus on these welfare concerns.

Coincidentally, owning a gun may make people put a greater emphasis on your concerns

The interesting portion of this debate is that the logical form of the argument can be applied to many other topics, yet it will not carry the same moral weight; a point I tried to make over the course of the discussion with a very limited degree of success. Ideas die one person at a time, the saying goes, and this debate did not carry on to the point of anyone losing their life.

In that case, we can try and apply the above logic to the very legal, condoned, and often celebrated topic of alcohol. On the whole, I would expect that the availability of alcohol is a net cost for society: drunk driving deaths in the US yield about 10,000 bodies a year (a comparable number to homicides committed with a firearm), which directly inflict costs on non-drinkers. While it’s more difficult to put numbers on other costs, there are a few non-trivial matters to consider, such as the number of suicides, assaults, and non-traffic accidents encouraged by the use of alcohol, the number of unintended pregnancies and STIs spread through more casual and risky drunk sex, as well as the number of alcohol-related illnesses and liver damage. Broken homes, abused and neglected children, spirals of poverty, infidelity, and missed work could also factor into these calculations somewhere. Both of these products – guns and booze – tend to inflict costs on individuals other than the actor when they’re available, and these costs appear to be substantial.

So, in the face of all those costs, what’s the argument in favor of alcohol being approved of, legally or morally? Well, the best and most common argument seems to be, as Jim might say, “Fuck off; I like drinking”. Now, of course, there are some notable differences between drinking and owning guns, mainly being that people don’t often drink to inflict costs on others while many people do use guns to intentionally do harm. While the point is well taken, it’s worth bearing in mind that the arguments against guns are not the same arguments against murder. The argument as it pertains to guns seemed to be, as I noted above, that regular people should not be allowed to own guns because some small portion of the population that does have one around will do something reprehensible or stupid with it, and that these concerns trump the ability of the responsible owners to do what they enjoy. Well, presumably, we could say the same thing about booze: even if most people who drink don’t drive while drunk, and even if not all drunk drivers end up killing someone, our morals and laws need to cater to that percentage of people that do.

(As an aside, I spent the past few years at New Mexico State University. One day, while standing outside a classroom in the hall, I noticed a poster about drunk driving. The intended purpose of the flyer seemed to be to inform students that most people don’t drive drunk; in fact, about 75% of students reported not driving under the influence, if I recall correctly. That does mean, of course, that about 1 in 4 students did at some point, which is a worrying figure; perhaps enough to make a solid argument for welfare concerns.)

There is also the matter of enforcement: making alcohol illegal didn’t work out well in the past; making guns illegal could arguably be more successful on a logistical level. While such a point is worth thinking about, it is also a bit of a red herring from the heart of the issue: that is, most people are not opposed to the banning of alcohol because it’s difficult in practice, but otherwise supportive of the measure on principle; instead, people seem as if they would oppose the idea even if it could be implemented efficiently. People’s moral judgments can be quite independent of enforcement capacity. Computationally, it seems like judgments concerning whether something is worth condemning in the first place ought to precede judgments about whether condemnation could feasibly be enforced, simply because the latter estimation is useless without the former. Spending time thinking about what one could punish effectively without any interest in following through would be like thinking about all the things one could chew and swallow when hungry, even if one wouldn’t want to eat them.

Plenty of fiber…and there’s lots of it….

There are two points to bear in mind from this discussion to try and tie it back to understanding our own moral psychology and making a productive point. The first is that there is some degree of variance in moral judgments that is not being determined by welfare concerns. Just because something ends up resulting in harm to others, people are not necessarily going to be willing to condemn it. We might (not) accept a line of reasoning for condemning a particular act because we have some vested interest in (encouraging) preventing it, while categorically (accepting) rejecting that same line in other cases where our strategic interests run in the opposite direction; interests which we might not even be consciously aware of in many cases. This much, I suspect, will come as no surprise to anyone, especially because other people in debates are known for being so clearly biased to you, the dispassionate observer. Strategic interests lead us to privilege our own concerns.

The other point worth considering, though, is that people raise or deny these welfare concerns in the interests of being persuasive to others. The welfare of other people appears to have some impact on our moral judgments; if welfare concerns were not used as inputs, it would seem rather strange that so many arguments about morality lean so heavily and explicitly upon them. I don’t argue that you should accept my moral argument because it’s Sunday, as that fact seems to have little bearing on my moral mechanisms. While this too might seem obvious to people (“of course other people’s suffering matters to me!”), understanding why the welfare of others matters to our moral judgments is a much trickier explanatory issue than understanding why our own welfare matters to us. Both of these are matters that any complete theory of morality needs to deal with.

The Morality Of Guilt

Today, I wanted to discuss the topic of guilt; specifically, what the emotion is, whether we should consider it to be a moral emotion, and whether it generates moral behavioral outputs. The first part of that discussion will be somewhat easier to handle than the latter. In the most common sense, guilt appears to be an emotion aroused in an individual by the perception that they have done something wrong which has harmed someone else. The negative feelings that accompany guilt often lead the guilty party to desire to make amends to the injured one so as to compensate for the damage done and repair the relationship between the two (e.g., “I’m sorry that I totaled your car by driving it into your house; I feel like a total heel. Let me buy you dinner to make up for it”). Because the emotion appears to be aroused by the perception of a moral transgression – that is, someone feels they have done something wrong, or impermissible – it seems like guilt could rightly be considered a moral emotion; specifically, an emotion related to moral conscience (a self-regulating mechanism), rather than moral condemnation (an other-regulating mechanism).

Nothing beats packing for a nice, relaxing guilt trip

The understanding that guilt is a moral emotion, then, allows us to inform our opinion about what kind of thing morality is by examining how guilt works in greater, proximate detail. In other words, we can infer what adaptive value our moral sense might have had by studying the form of the emotional guilt mechanisms: what inputs they use and what outputs they produce. This brings us to some rather interesting work I recently dug out of my backlog of papers to read, by de Hooge et al (2011), that focused on figuring out what kinds of effects guilt tends to have on people’s behavior when you take guilt out of a dyadic (two-person) relationship and drop it into larger groups of people. The authors were interested, in part, in deciding whether or not guilt could be classified as a morally good emotion. While they acknowledge guilt is a moral emotion, they question whether it produces morally good outcomes in certain types of situations.

This leads naturally to the following question: what is a morally good outcome? The answer to that question is going to depend on what type of function one thinks morality has. In this case, de Hooge et al (2011) write as if our moral sense is an altruism device – one that functions to deliver benefits to others at a cost to one’s self. Accordingly, a morally good outcome is going to be one that results in benefits flowing to others at a cost to the actor. Framed in terms of guilt, we might expect that individuals experiencing guilt will behave more altruistically than individuals who are not; the guilty’s regard for the welfare of others will be regulated upwards, with a corresponding down-regulation placed on their own welfare. The authors note that much of the previous research on guilt has uncovered evidence consistent with that pattern: guilty parties tend to forgo benefits to themselves or suffer costs in order to deliver benefits to the party they have wronged. This makes guilt look rather altruistic.

Such research, however, was typically conducted in a two-party context: the guilty party and their victim. This presents something of an interpretative issue, inasmuch as the guilty party only has that one option available to them: if, say, I want to make you better off, I need to suffer a cost myself. While that might make the behavior look altruistic in nature, in the social world that we reside within, that is usually not the only option available; I could, for instance, also make you better off not at an expense to myself, but rather at the expense of someone else; an outcome most people wouldn’t exactly call altruism, and one de Hooge et al (2011) wouldn’t consider morally good either. To the extent a guilty party is simply interested in making their victim better off, both options would look the same; to the extent the guilty party is interested in behaving altruistically towards the victimized party, though, things would look different in a three-party context.

As they usually do…

de Hooge et al (2011) report on the results of three pilot studies and four experiments examining how guilt affects behavior in these three-party contexts in terms of welfare-relevant choices. While I don’t have time to discuss all of what they did, I wanted to highlight one of their experiments in more detail while noting that each of them generated data consistent with the same general pattern. The experiment I will discuss is their third one. In that experiment, 44 participants were assigned to either a guilt or a control condition. In both conditions, the participants were asked to complete a two-part joint effort task with another person to earn payment rewards. Colored letters (red or green) would pop up on each player’s screens and the participant and their partner had to click a button quickly in order to complete the task: the participant would push the button if the letter was green, whereas their partner would have to push if the letter was red. In the first part of the task, the performance of both the participant and their partner would be earning rewards for the participant; in the second part, the pair would be earning rewards for the partner instead. Each reward was worth 8 units of what I’ll call welfare points.

The participants were informed that while they would receive the bonus from the first round, their partner would not receive a bonus from the second. In the control condition, the partner did not earn the bonus because of their own poor performance; in the guilt condition, the partner did not earn the bonus because of the participant’s poor performance. In the next phase of this experiment, the participants were presented with three payoffs: their own, their partner’s, and that of an unrelated individual from the experiment who had also earned a bonus. The participants were told that one of the three would be randomly assigned the chance to redistribute the earnings, though, of course, the participants always received that assignment. This allowed participants to give a benefit to their partner, but to do so at either a cost to themselves or at a cost to someone else.

Out of the 8 welfare units the participants had earned, they opted to give an average of 2.2 of them to their partner in the guilt condition, but only 1 unit in the control condition, so guilt did seem to make the participants somewhat more altruistic. Interestingly, however, guilt made participants even more willing to take from the outside party: guilty parties took an average of 4.2 units from the third party for their partner, relative to the 2.5 units they took in the control condition. In short, the participants appeared to be interested in repairing the relationship between themselves and their partners, but were more interested in doing so via taking from someone else, rather than giving up their own resources. Participants also viewed the welfare of the third party as being relatively unimportant as compared to the welfare of the partner they had ostensibly failed.
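For a concrete sense of what those averages imply, here’s a minimal sketch tallying the resulting splits in Python; treating the reported averages as single representative transfers, and the 8-unit bonuses as the starting amounts, are my simplifications of the procedure described above.

```python
# A minimal sketch tallying the average splits implied by the figures above.
# Starting amounts follow the description in the text (the participant and the
# third party each earned an 8-unit bonus; the partner earned none); treating
# the reported averages as single representative transfers is a simplification.

def redistribute(give_own, take_from_third, bonus=8):
    return {
        "participant": round(bonus - give_own, 1),        # keeps what they don't give away
        "partner": round(give_own + take_from_third, 1),  # receives from both sources
        "third party": round(bonus - take_from_third, 1), # loses whatever is taken
    }

print("guilt:  ", redistribute(give_own=2.2, take_from_third=4.2))
print("control:", redistribute(give_own=1.0, take_from_third=2.5))
# Guilt roughly doubles what the partner ends up with (6.4 vs. 3.5 units), but
# more of that increase comes out of the third party's pocket (4.2 vs. 2.5)
# than out of the participant's own (2.2 vs. 1.0).
```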

“To make up for hurting Mike, I think it’s only fair that Karen here suffers”

This returns us to the matter of what kind of thing morality is. de Hooge et al (2011) appear to view morality as an altruism device and view guilt as a moral emotion, yet, strangely, guilt did not appear to make people substantially more altruistic; instead, it seems to make them partial. Given that guilt was not making people behave more altruistically, we might want to reconsider the adaptive function of morality. What if, rather than acting as an altruism device, morality functions as an association management mechanism? If our moral sense functions to build and manage partial relationships, benefiting someone you’ve harmed at the expense of other targets of investment might make more sense. This is because there are good reasons to suspect that friendships represent partial alliances maintained in the service of being able to win potential future disputes (DeScioli & Kurzban, 2009). These partial alliances are rank-ordered, however: I have a best friend, close friends, and more distant ones. In order to signal that I rank you highly as a friend, then, I need to demonstrate that I value you more than other people. Showing that I value you highly relative to myself – as would be the case with acts of altruism – would not necessarily tell you much about your value as my friend, relative to other friends. By contrast, behaving in ways that signal I value you more than others, at least temporarily – as appeared to be the case in the current experiments – could serve to repair a damaged alliance. Morality as an altruism device doesn’t fit the current pattern of data; an alliance management device does, though.

References: DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE 4(6): e5802. doi:10.1371/journal.pone.0005802

de Hooge, I., Nelissen, R., Breugelmans, S., & Zeelenberg, M. (2011). What is moral about guilt? Acting “prosocially” at the disadvantage of others. Journal of Personality & Social Psychology, 100, 462-473.

 

Do Moral Violations Require A Victim?

If you’ve ever been a student of psychology, chances are pretty good that you’ve heard about or read a great many studies concerning how people’s perceptions about the world are biased, incorrect, inaccurate, erroneous, and other such similar adjectives. A related sentiment exists in some parts of the morality literature as well. Perhaps the most notable instance is the unpublished paper on moral dumbfounding, by Haidt, Bjorklund, & Murphy (2000). In that paper, the authors claim to provide evidence that people first decide whether an act is immoral and then seek to find victims or harms for the act post hoc. Importantly, the point seems to be that people seek out victims and harm despite them not actually existing. In other words, people are mistaken in perceiving harm or victims. We could call such tendencies the “fundamental victim error” or the “harm bias”, perhaps. If that interpretation of the results is correct, it would carry a number of implications, chief among which (for my present purposes) is that harm is not a required input for moral systems. Whatever cognitive systems are in charge of processing morally-relevant information, they seem to be able to do so without knowledge of who – if anyone – is getting harmed.

Just a little consensual incest. It’s not like anyone is getting hurt.

Now I’ve long found that implication to be a rather interesting one. The reason it’s interesting is because, in general, we should expect that people’s perceptions about the world are relatively accurate. Not perfect, mind you, but we should be expected to be as accurate as available information allows us to be. If our perceptions weren’t generally accurate, this would likely yield all sorts of negative fitness consequences: for example, believing you can achieve a goal you actually cannot could lead to the investment of time and resources in a fruitless endeavor; resources which could be more profitably spent elsewhere. Sincerely believing you’re going to win the lottery does not mean the tickets are wise investments. Given these negative consequences for acting on inaccurate information, we should expect that our perceptual systems evolved to be as accurate as they can be, given certain real-world constraints.

The only context I’ve seen in which being wrong about something could consistently lead to adaptive outcomes is in the realm of persuasion. In this case, however, it’s not that being wrong about something per se helps you, as much as someone else being wrong helps you. If people happen to think my future prospects are bright – even if they’re not – it might encourage them to see me as an attractive social partner or mate; an arrangement from which I could reap benefits. So, if some part of me happens to be wrong, in some sense, about my future prospects, and being wrong doesn’t cause me to behave in too many maladaptive ways, and it also helps persuade you to treat me better than you would given accurate information, then being wrong (or biased) could be, at times, adaptive.

How does persuasion relate to morality and victimhood, you may well be wondering? Consider again the initial point about people, apparently, being wrong about the existence of harms and victims of acts they deem to be immoral. If one was to suggest that people are wrong in this realm – indeed, that our psychology appears to be designed in such a way to consistently be wrong – one would also need to couch that suggestion in the context of persuasion (or some entirely new hypothesis about why being wrong is a good thing). In other words, the argument would need to go something like this: by perceiving victims and harms where none actually exist, I could be better able to persuade other people to take my side in a moral dispute. The implications of that suggestion would seem to, in a rather straightforward way, rely on people taking sides on moral issues on the basis of harm in the first place; if they didn’t, claims of harm wouldn’t be very persuasive. This would leave the moral dumbfounding work in a bit of a bind, theoretically speaking, with respect to whether harms are required inputs for moral systems or not: that people perceive something as immoral and then later perceive harms would suggest harms are not required inputs; that arguments about harms are rather persuasive could suggest that harms are required inputs.

Enough about implications; let’s get to some research 

At the very least, the perceptions of victimhood and harm appear intimately tied to perceptions of immorality. The connection between the two was further examined recently by Gray, Schein, & Ward (2014) across five studies, though I’m only going to discuss one of them. In the study of interest, 82 participants each rated 12 actions on whether they were wrong (1-5 scale, from ‘not wrong at all’ to ‘extremely wrong’) and whether the act had a victim (1-5 scale, from ‘definitely not’ to ‘definitely yes’). These 12 actions were broken down into three groups of four acts each: the harmful group (including items like kicking a dog or hitting a spouse), the impure group (including masturbating to a picture of your dead sister or covering a bible with feces), and the neutral group (such as eating toast or riding a bus). The interesting twist in this study involved the time frame in which participants answered: one group was placed under a time constraint in which they had to read the question and provide their answers within seven seconds; the other group was not allowed to answer until at least a seven-second delay had passed, and was given an unlimited amount of time in which to answer. So one group was relying on, shall we say, their gut reaction, while the other was given ample time to reason about things consciously.

Unsurprisingly, there appeared to be a connection between harm and victimhood: the directly harmful scenarios generated more certainty about a victim (M = 4.8) than the impure ones (M = 2.5), and the neutral scenarios didn’t generate any victims (M = 1). More notably, the time constraint did have an effect, but only in the impure category: when answering under time constraints in the impure category, participants reported more certainty about the existence of a victim (M = 2.9) relative to when they had more time to think (M = 2.1). By contrast, the perceptions of victims in the harm (M = 4.8 and 4.9, respectively) and neutral categories (M = 1 and 1) did not differ across time constraints.

This finding puts a different interpretive spin on the moral dumbfounding literature: when people had more time to think about (and perhaps invent) victims for more ambiguous violations, they came up with fewer victims. Rather than people reaching a conclusion about immorality first and then consciously reasoning about who might have been harmed, it seems that people could have instead been reaching implicit conclusions about both harm and immorality quite early on, and only later consciously reasoning about why an act that seemed immoral doesn’t actually have any worthy victims. If representations about victims and harms are arising earlier in this process than would be anticipated by the moral dumbfounding research, this might speak to whether or not harms are required inputs for moral systems.

Turns out that piece might have been more important than we thought

It is possible, I suppose, that morality could simply use harm as an input sometimes without it being a required input. That possibility would allow harm to be both persuasive and not required, though it would require some explanation as to why harm is only expected to matter in moral judgments at times. At present, I know of no such argument having ever been made, so there’s not too much to engage with on that front.

It is true enough that, at times, when people perceive victims, they tend to perceive victims in a rather broad sense, naming entities like “society” to be harmed by certain acts. Needless to say, it seems rather difficult to assess such claims, which makes one wonder how people perceive such entities as being harmed in the first place. One possibility, obviously, is that such entities (to the extent they can be said to exist at all) aren’t really being harmed and people are using unverifiable targets to persuade others to join a moral cause without the risk of being proved wrong. Another possibility, of course, is that the part of the brain that is doing the reporting isn’t quite able to articulate the underlying reason for the judgment well to others. That is, one part of the brain is (accurately) finding harm, but the talking part isn’t able to report on it. Yet another possibility still is that harm befalling different groups is strategically discounted (Marczyk, 2015). For instance, members of a religious group might find disrespect towards a symbol of their faith (rubbing feces on the bible, in this case) to be indicative of someone liable to do harm to their members; those opposed to the religious group might count that harm differently – perhaps not as harm at all. Such an explanation could, in principle, explain the time-constraint effect I mentioned before: the part of the brain discounting harm towards certain groups might not have had enough time to act on the perceptions of harm yet. While these explanations are not necessarily mutually exclusive, they are all ideas worth thinking about.

References: Gray, K., Schein, C., & Ward, A. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology, 143, 1600-1615.

Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript. 

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.

Should Men Have A Voice In The Abortion Debate?

I recently found myself engaged in an interesting discussion on the matter of abortion (everyone’s favorite topic for making friends and civil conversation). The unique thing about this debate was that I found myself in agreement with the other party when it came to the heart of the matter: whether abortions should be legally available and morally condemned (our answers would be “yes” and “no”, respectively). With such convergent views, one might wonder what there is left to argue about. Well, the discussion centered on whether I, as a man, should be able to have any opinion about abortion (positive or negative), or whether such opinions – and corresponding legislation – should be restricted to women. In this case, my friend suggested that I was, in fact, not entitled to hold any views about abortion because of my gender, going on to state that she was not interested in hearing any men’s opinions on the issue. She even went as far as to suggest that the feelings of a woman who disagreed with her stance about abortion would be more valid than mine on the matter. This struck me as a frankly sexist and bigoted view (in case you don’t understand why it sounds that way, imagine I ended this post by saying “I’m not interested in hearing any women’s views on this subject” and you should get the picture), but one I think is worth examining a bit further, especially because my friend’s view was not some anomaly; it’s a perspective I’ve heard before.

So it’s worth having my thoughts ready for future reference when this comes up again

As for the disagreement itself, I was curious why my friend felt this way: specifically, why did she believe men are precluded from having opinions on abortion? Her argument was that men cannot understand the issue because they are not the ones carrying the babies, having periods, taking hormonal birth control, feeling the day-to-day effects of pregnancy on one’s body, and so on. The argument, then, seems to involve the idea that women have privileged access to some relevant information (based on firsthand experience, or at least the potential of it) which men do not, as well as the idea that women are the ones enduring the lion’s share of the consequences resulting from pregnancy. I wanted to examine each of these claims to show why they do not yield the conclusion she felt they did.

The first piece of information I wanted to discuss is one I mentioned some time ago: men and women do not appear to differ appreciably in their views regarding abortion. According to some Gallup data from 1975-2009 concerning the matter, between 22-35% of women believed abortion should be legal in all circumstances, 15-21% believed it should be illegal in all, and 48-55% of women believed it should be legal in some circumstances; the corresponding ranges for men were 21-29%, 13-19%, and 54-59%, respectively. From those numbers, we can see that men and women seem to hold largely similar views about abortion. My friend expressed a disinterest in hearing about this information, presumably because she did not feel it had any relevance to the argument at hand.

However, I feel there is a real relevance to those numbers that speaks to the first point my friend made: that women have privileged access to certain experiences and information men do not. It’s true enough that men and women have different experiences and perceptions in certain domains on average; I don’t know anyone who would deny that. However, those differences in experiences do not appear to yield substantial differences in opinion on the matter of abortion. This is a rather curious point. How are we to interpret this lack of a difference? Here are two ways that come to mind: first, we could continue to say that women have access to some privileged source of information bearing on the moral acceptability of abortion which men do not, but, despite this asymmetry in information, both sexes come to agreement about the topic in almost equal numbers anyway. In this case, then, we would be using a variable factor to explain a lack of differences between the sexes (i.e., “men and women come to agree on abortion almost perfectly owing to their vastly different experiences that the other sex cannot understand”).

There might also just be a very similar person behind the mirror

This first interpretation strikes me as particularly unlikely, though not impossible. The second (and more likely) interpretation that comes to mind is that, despite frequent contentions to the contrary, variables relating to one’s sex per se – such as having periods or being the ones to give birth – are not actually the factors primarily driving views on abortion. If abortion views are driven instead by, say, one’s sexual strategy (whether one tends to prefer more long-term, monogamous or short-term, promiscuous mating arrangements), then the idea that men cannot understand arguments for or against abortion because of some unique experiences they do not have falls apart. Men and women both possess cognitive adaptations for long- and short-term mating strategies, so if those mechanisms are among the primary drivers of abortion views, the issue seems perfectly understandable for both sexes. Indeed, I haven’t heard an argument for or against abortion that has just left me baffled, as if it were spoken in a foreign language, regardless of whether or not I agree with it. Maybe I’m just not hanging out at the right parties and not hearing the right arguments.

Even if women were privy to some experiences which men could not understand and those unique experiences shaped their views on abortion, that still strikes me as a strange reason to disallow men from having opinions about it. Being affected by an issue in some unique way – or even primarily – does not mean you’re the only one affected by it, nor that other people can’t hold opinions about how you behave. One example I would raise to help highlight that point would be a fictional man I’ll call Tom. Tom happens to be prone to random outbursts of anger during which he has a habit of yelling at and fighting other people. I would not relate to Tom well; he is uniquely affected by something I am not and he likely sees the world much differently than I would. However, social species that we happen to be, his behavior resulting from those unique experiences has impacts on other people, allowing the construction of moral arguments for why he should or should not be condemned for doing what he does.

To say that abortion is a woman’s issue, or that women are the only ones allowed to have opinions about it because they bear most of the consequences, is to overlook a lot of social impact. Men have mothers, sisters, friends, and sexual partners who would be affected by the legality of abortion; some men who do not wish to become fathers are certainly affected by abortion laws, just as men who wish to become fathers might be. To again turn to an analogy, one could try to make the argument that members of the military are the people most affected by the decision to go to war (they’re the ones who will be fighting and dying), so they should be the only ones allowed to vote on the matter of whether our country enters armed combat. Objections to this argument might include propositions such as, “but civilians will be impacted by the war too” which, well, is kind of the whole point.

For example, see this rather strange quote

While one is free to hold to a particular political position without any reason beyond “that’s how I feel”, a position that ends up focusing on the sex of a speaker instead of their ideas seems like the kind of argument that socially-progressive individuals would want to avoid and fight against. To be clear, I’m not saying that sex is never relevant when it comes to determining one’s political and moral views: in my last post, for instance, I discussed the wide gap that appears between men and women with respect to their views about legalized prostitution, with men largely favoring it and women more often opposing it; a gap which widens when they are presented with information about how legalized prostitution is safer. What’s important to note in that case is that when sex is a relevant factor in the decision-making process, we see differences of opinion between men and women, not similarities. Those differences don’t imply that one sex’s average opinion is correct, mind you, but they serve as a cue that factors related to sex – such as mating interests – might be pulling some strings. In such cases, men and women might literally have a hard time understanding the opinion of the opposite sex, just as some people have trouble seeing the infamous dress as either black and blue or gold and white. That just doesn’t seem to be the case for abortion.

Where’s The Market For Organs (And Sex)?

Imagine for a moment that you’re in the market for a new car. While your old car runs fine, you’ve decided you want an upgrade to a newer, fancier model. As new cars are expensive – and because you won’t have much need for the old one anymore – you decide that you want to sell your old car to someone else to raise some of the capital for the new one. This seems like a mutually-beneficial exchange for both you and the buyer. Now if I were to tell you that selling your car is morally repugnant and that you should be legally forbidden from making that sale, you might think me a little strange. You might think it even stranger if I said that I would not object (or at least not as strongly) to you giving your old car away for free. It would seem to make little sense, at least in the abstract, that you’re allowed to give something away that you’re not allowed to sell. A number of goods and services follow this logic for many people in reality, though: namely sex and bodily organs. There are those who feel that people should not be allowed to legally sell sex or kidneys, but who would not prohibit these things from being given away for free if a donor is feeling generous. How are we to understand these interesting – and seemingly contradictory – positions?

Only with lots of in-depth research, one can hope…

Let’s start by considering some cool new data from Elias et al. (2015). The researchers collected data from over 3,400 Americans on Mturk, split between control and treatment groups. In the control sample, about 1,600 of these participants were just asked about their attitudes concerning the acceptability of organ sales, with roughly 52% of them rating the idea of regulated monetary compensation for bodily organs as acceptable (as far as I can tell, this wasn’t about a free market for organs, but some kind of government-run program). In the treatment group, the participants were first provided with a short, 500-word essay outlining the current organ shortages faced by people in need of transplants, the consequences of such shortages, and a few proposals that had been put forth to try and alleviate some of those costs. In the face of this information, views about monetary compensation for organs rose dramatically, with 72% of participants in the treatment group rating the proposal as acceptable; a gain of roughly 20 percentage points. Moreover, these effects were relatively homogeneous with respect to various features of the respondents, such as, I think, their gender and religious affiliation.

Elias et al. (2015) also used this same design to examine attitudes about prostitution. In a second study with another 1,600 US Mturkers, a control group was asked about the acceptability of legalized prostitution while a treatment group received information about how legalized prostitution reduced costly outcomes like sexual violence and sexually transmitted diseases. In the control group, prostitution hung around a 67.3% acceptability rating; in the treatment group, this rating was 67.4%. While one might interpret those figures as representing pretty much zero change in acceptability given this information, one would be wrong. The reason this interpretation is incorrect is because, unlike the organ case, the effects of this information were not homogeneous with respect to some participant characteristics. For instance, among men, 78% of the control group supported prostitution while 96% in the treatment group did. How very progressive of them. So why was there no difference between the groups in average acceptability rating? Well, because the women had a much different view: 56% of the women in the control group supported prostitution whereas this figure dropped to 41% in the treatment group. A similar effect was seen for the religious and non-religious, with the welfare information making the non-religious more accepting (81% to 94%) and the religious less so (57% to 47%).

One point to take from these results is that welfare concerns do indeed seem to serve as inputs for moral mechanisms. While this point might seem trivial to some, it has been claimed that welfare concerns are used as post-hoc justifications for moral judgments, rather than driving factors. A second point is that the manner in which those welfare consequences matter depends on the individual receiving them: those who are in need of organs represent, for lack of a better word, a “useful” victim; someone who is otherwise a good target of social investment and happens to be facing a temporary state of need (provided they don’t die, that is). On the other hand, prostitutes are less universally “useful” as recipients of altruism: for men, prostitutes tend to reflect benefits, as they increase short-term mating opportunities; for women, prostitutes tend to reflect costs, decreasing the metaphorical market price of sex. A similar logic holds for religion, to the extent that religious membership tends to reflect members’ preferences for long-term mating strategies, which prostitution threatens.

 Safe prostitution is only making God and his sex-punishing STDs angry

In terms of ultimate moral functioning, then, these results appear consistent with an alliance-building function; the one I’ve been going on about for some time now. The short version of that hypothesis is that morality functions, in part, as a kind of ingratiation device, allowing us to identify social assets. It is worth contrasting that function with the hypothesis that our moral psychology is simply functioning to increase welfare more generally. Both scenarios presented legalized prostitution and organ sales as increasing welfare for certain parties. However, certain individuals did not seem to want to see those welfare gains achieved for certain groups, because the two parties have opposing interests. This is understandable in precisely the same way that I would not only want to avoid giving the guy threatening me with a knife access to benefits he otherwise wouldn’t be able to achieve, but I would also want to have costs inflicted upon him to stop him from making my life harder. While the costs inflicted by prostitutes on long-term maters might be substantially less intentional and more indirect, they are costs nonetheless.

Finally, it’s also worth noting that the alliance hypothesis is consistent with other, older findings about selling organs as well. Tetlock (2000) reports that, when faced with the matter of whether selling organs should be legal, many people opposed to the idea cite welfare concerns: specifically, they appear concerned that poor people would be forced into donating organs for financial reasons and, conversely, that the rich would be the primary beneficiaries of such a policy. Why might these concerns get raised? I imagine that answer has something to do with the idea that organ markets will, essentially, inflict costs on already-needy groups people are hoping to provide benefits to (the poor), whereas the group receiving the benefits might not appear terribly needy (the rich). As the rich are seen as less needy than the poor, the former are likely assessed as having worse alliance potential, all else being equal. In fact, I would wager that people’s opinions about selling organs in an open market are probably correlated with their belief in whether the poor are responsible for their station in life, or whether they’re viewed as otherwise hard-working but unlucky. If people view the poor as having relatively stable need states (responsible for their current situation), other people would likely be less concerned with expending effort to help them, as such an investment would be unlikely to be returned (since their need today signals their need tomorrow as well). By contrast, the unlucky poor represent good social investments, and so might warrant some additional moral protection.

Well, that’s what you get for being irresponsible with your money

Findings like these highlight the considerable subtlety that research into the moral domain requires. In short, if you want to understand how people’s moral positions will change on the basis of some welfare-relevant information, you’ll likely be well served by knowing where their stake in the matter at hand might reside: either directly or indirectly with regard to whether those involved in the dispute would make valuable social assets. Indeed, these findings are quite reminiscent of the Tucker Max case I wrote about some time ago, where a rather sizable donation ($500,000) was rejected by Planned Parenthood because some supporters of the organization perceived the source of the donation to be morally unacceptable (and, importantly, because the association was to be made publicly, rather than anonymously. If he didn’t want his name on the building, I suspect matters would have ended differently). In some cases, you can’t sell things that you can give away; in other cases, so long as the right conditions are met, people don’t even want you to be able to give those things away either.

References: Elias, J., Lacetera, N., & Macis, M. (2015). Sacred values? The effect of information on attitudes towards payment for human organs. American Economic Review Papers & Proceedings.

Tetlock, P. (2000). Coping with trade-offs: Psychological constraints and political implications. In Lupia, A., McCubbins, M., & Popkin, S. (Eds.), Elements of Reason: Cognition, Choice, & the Bounds of Rationality (pp. 239-322).

Socially-Strategic Welfare

Continuing with the trend from my last post, I wanted to talk a bit more about altruism today. Having already discussed that people do, in fact, appear to engage in altruistic behaviors and possess some cognitive mechanisms that have been selected for that end, I want to move into discussing the matter of the variance in altruistic inclinations. That is to say that people – both within and between populations – are differentially inclined towards altruistic behavior, with some people appearing rather uninterested in altruism, while others appear quite interested in it. The question of interest for many is how those differences are to be explained. One explanatory route would be to suggest that the people in question have, in some sense, fundamentally different psychologies. A possible hypothesis to accompany that explanation might go roughly as follows: if people have spent their entire lives being exposed to social messages about how helping others is their duty, their cognitive mechanisms related to altruism might have developed differently than those of someone who instead spent their life being exposed to the opposite message (or, at least, less of that previous one). On that note, let’s consider the topic of welfare.

In a more academic fashion, if you don’t mind…

The official website of Denmark suggests that such a message – that helping others is a duty – might be sent in that country, stating that:

The basic principle of the Danish welfare system, often referred to as the Scandinavian welfare model, is that all citizens have equal rights to social security. Within the Danish welfare system, a number of services are available to citizens, free of charge.

Provided that this statement accurately characterizes what we would consider the typical Danish stance on welfare, one might imagine that growing up in such a country could lead individuals to develop substantially different views about welfare than, say, someone who grew up in the US, where opinions are quite varied. In my non-scientific and anecdotal experience, while some in the US might consider the country a welfare state, those same people frequently seem to be the ones who think that is a bad thing; those who think it’s a good thing often seem to believe the US is not nearly enough of a welfare state. At the very least, the US doesn’t advertise a unified belief about welfare on its official site.

On the other hand, we might consider another hypothesis: that Danes and Americans don’t necessarily possess any different cognitive mechanisms in terms of their being designed for regulating altruistic behavior. Instead, members of both countries might possess very similar underlying cognitive mechanisms which are being fed different inputs, resulting in the different national beliefs about welfare. This is the hypothesis that was tested by Aaroe & Petersen (2014). The pair make the argument that part of our underlying altruistic psychology is a mechanism that functions to determine deservingness. This hypothetical mechanism is said to use cues of laziness as inputs: in the presence of a perceived needy but lazy target, altruistic inclinations towards that individual should be reduced; in the presence of a needy, hard-working, but unlucky individual, these inclinations should be augmented. Thus, cross-national differences, as well as within-group differences, concerning support for welfare programs should be explained, at least in part, by perceptions of deservingness (I will get to the why part of this explanation later).

Putting those ideas together, two countries that differ in their willingness to provide welfare should also differ in their perceptions of the recipients in general. However, there are exceptions to every rule: even if you believe (correctly or incorrectly) that group X happens to be lazy and undeserving of welfare, you might believe that a particular member of group X bucks that trend and does deserve assistance. This is the same thing as saying that while men are generally taller than women, you can find exceptions where a particular woman is quite tall or a man quite short. This leads to a corollary prediction that Aaroe & Petersen examine: despite decades of exposure to different social messages about welfare, participants from the US and Denmark should come to agree on whether or not a particular individual deserves welfare assistance.

     Never have I encountered a more deserving cause

The authors sampled approximately 1000 participants from both the US and Denmark; samples designed to be representative of each country’s demographics. These samples were then surveyed on their views about people who receive social welfare via a free-association task in which they were asked to write descriptors of those recipients. Words that referred to the recipients’ laziness or poor luck were coded to determine which belief was the more dominant one (as defined by lazy words minus unlucky ones). As predicted, the lazy stereotype was dominant in the US relative to Denmark, with Americans listing an average of 0.3 more words referring to laziness than luck; approximately four times the difference found in Denmark, where the two beliefs were more balanced.

In line with that previous finding was the fact that Americans were also more likely to support the tightening of welfare restrictions (M = 0.57) than the Danes (M = 0.49, scale 0-1). However, this difference between the two samples only existed under the condition of informational uncertainty (i.e., when participants were thinking about welfare recipients in general). When presented with a welfare recipient who was described as the victim of a work-related accident and motivated to return to work, the US and Danish citizens both agreed that welfare restrictions for people like that person should not be tightened (M = 0.36 and 0.35, respectively); when this recipient was instead described as able-bodied but unmotivated to work, the Americans and Danes once again agreed, suggesting that welfare restrictions should be tightened for people like him (M = 0.76 and 0.79). In the presence of more individualizing information, then, the national stereotypes built over a lifetime of socialization appear to get crowded out, as predicted. All it took was about two sentences worth of information to get the US and Danish citizens to agree. This pattern of data would seem to support the hypothesis that some universal psychological mechanisms reside in both populations, and that their differing views tend to be the result of their being fed different information.

This brings us to the matter of why people are using cues to laziness to determine who should receive assistance, which is not explicitly addressed in the body of the paper itself. If the psychological mechanisms in question function to reduce the need of others per se, laziness cues should not be relevant. Returning to the example from my last post, for instance, mothers do not tend to withhold breastfeeding from infants on the basis of whether those infants are lazy. Instead, breastfeeding seems better designed to reduce need per se in the infants. It’s more likely that the mechanisms responsible for determining these welfare attitudes are instead designed to build lasting friendships (Tooby & Cosmides, 1996): by assisting an individual today, you increase the odds they will be inclined to assist you in the future. This altruism might be especially relevant when the individual is in more severe need, as the marginal value of altruism in such situations is larger, relative to when they’re less needy (in the same way that a very hungry individual values the same amount of food more than a slightly hungry one; the same food is simply a better return on the same investment when given to the hungrier party). However, lazy individuals are unlikely to be able to provide such reciprocal assistance – even if they wanted to – as the factors determining their need are chronic, rather than temporary. Thus, while both the lazy and the motivated individual are needy, the lazy individual is the worse social investment; the unlucky one is much better.

Investing in toilet futures might not have been the wisest retirement move

In this case, then, perceptions of deservingness appear to be connected to adaptations that function to build alliances. Might perceptions of deservingness in other domains serve a similar function? I think it’s probable. One such domain is the realm of moral punishment, where transgressors are seen as being deserving of punishment. In this case, if victimized individuals make better targets of social investment than non-victimized ones (all else being equal), then we should expect people to direct altruism towards the former group; when it comes to moral condemnation, the altruism takes the form of assisting the victimized individual in punishing the transgressor. Despite that relatively minor difference, the logic here is precisely the same as my explanation for welfare attitudes. The moral explanation would require that moral punishment contains an alliance-building function. When most people think morality, they don’t tend to think about building friendships, largely owing to the impartial components of moral cognitions (since impartiality opposes partial friendships). I think that problem is easy enough to overcome; in fact, I deal with it in an upcoming paper (Marczyk, in press). Then again, it’s not as if welfare is an amoral topic, so there’s overlap to consider as well.

References: Aaroe, L., & Petersen, M. (2014). Crowding out culture: Scandinavians and Americans agree on social welfare in the face of deservingness cues. The Journal of Politics, 76, 684-697.

Marczyk, J. (in press). Moral alliance strategies theory. Evolutionary Psychological Science.

Tooby, J. & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

Some Thoughts On Side-Taking

Humans have a habit of inserting themselves in the disputes of other people. We often care deeply about matters concerning what other people do to each other and, occasionally, will even involve ourselves in disputes that previously had nothing to do with us; at least not directly. Though there are many examples of this kind of behavior, one of the most recent concerned the fatal shooting of a teen in Ferguson, Missouri, by a police officer. People from all over the country and, in some cases, other countries, were quick to weigh in on the issue, noting who they thought was wrong, what they think happened, and what punishment, if any, should be doled out. Phenomena like that one are so commonplace in human interactions it’s likely the case that the strangeness of the behavior often goes almost entirely unappreciated. What makes the behavior strange? Well, the fact that intervention in other people’s affairs and attempts to control their behavior or inflict costs on them for what they did tends to be costly. As it turns out, people aren’t exactly keen on having their behavior controlled by others and will, in many cases, aggressively resist those attempts.

Not unlike the free-spirited house cat

Let’s say, for instance, that you have a keen interest in killing someone. One day, you decide to translate that interest into action, attacking your target with a knife. If I were to attempt to intervene in that little dispute to try and help your target, there’s a very real possibility that some portion of your aggression might become directed at me instead. It seems as if I would be altogether safer if I minded my own business and let you get on with yours. In order for there to be selection for any psychological mechanisms that predispose me to become involved in other people’s disputes, then, there need to be some fitness benefits that outweigh the potential costs I might suffer. Alternatively, there might also be costs to me for not becoming involved. If the costs of non-involvement are greater than the costs of involvement, then there can also be selection for my side-taking mechanisms even if they are costly. So what might some of those benefits or costs be?

One obvious candidate is mutual self-interest. Though that term could cover a broad swath of meanings, I intend it in the proximate sense of the word at the moment. If you and I both desire that outcome X occurs, and someone else is going to prevent that outcome if either of us attempt to achieve it, then it would be in our interests to join forces – at least temporarily – to remove the obstacle in both of our paths. Translating this into a concrete example, you and I might be faced by an enemy who wishes to kill both of us, so by working together to kill him first, we can both achieve an end we desire. In another, less direct case, if my friend became involved in a bar fight, it would be in my best interests to avoid seeing my friend harmed, as an injured (or dead) friend is less effective at providing me benefits than a healthy one. In such cases, I might preferentially side with my friend so as to avoid seeing costs inflicted on him. In both cases, both the other party and I share a vested interest in the same outcome obtaining (in this case, the removal of a mutual threat).

Related to that last example is another candidate explanation: kin selection. As it is adaptive for copies of my genes to reproduce themselves regardless of which bodies they happen to be located in, assisting genetic relatives in disputes could similarly prove to be useful. A partially-overlapping set of genetic interests, then, could (and likely does) account for a certain degree of side-taking behavior, just as overlapping proximate interests might. By helping my kin, we are achieving a mutually-beneficial (ultimate-level) goal: the propagation of common genes.

A third possible explanation could also be grounded in reciprocal altruism, or long-term alliances. If I take your side today to help you achieve your goals, this might prove beneficial in the long term to the extent that it encourages you to take my side in the future. This explanation would work even in the absence of overlapping proximate or genetic interests: maybe I want to build my house where others would prefer I did not, and maybe you want to get warning labels attached to ketchup bottles. You don’t really care about my problem and I don’t really care about yours, but so long as you’re willing to help me scratch my back on my problem, I might also be willing to help you scratch yours.

Also not unlike the free-spirited house cat

There is, however, another prominent reason we might take the side of another individual in a dispute: moral concerns. That is, people could take sides on the basis of whether they perceive someone did something “wrong”. This strategy, then, relies on using people’s behavior to take sides. In that domain, locating the benefits to involvement or the costs to non-involvement becomes a little trickier. Using behavior to pick sides can carry some costs: you will occasionally side against your interests, friends, and family by doing so (to the extent that those groups behave in immoral ways towards others). Nevertheless, the relative upsides to involvement in disputes on the basis of morality need to exist in some form for the mechanisms generating that behavior to have been selected for. As moral psychology likely serves the function of picking sides in disputes, we could consider how well the previous explanations for side taking fare for explaining moral side taking.

We can rule out the kin selection hypothesis immediately as explaining the relative benefits to moral side taking, as taking someone’s side in a dispute will not increase your genetic relatedness to them. Further, a mechanism that took sides on the basis of kinship should be primarily using genetic relatedness as an input for side-taking behavior; a mechanism that uses moral perceptions should be relatively insensitive to kinship cues. Relatedness is out.

A mutualistic account of morality could certainly explain some of the variance we see in moral side-taking. If both you and I want to see a cost inflicted on an individual or group of people because their existence presents us with costs, then we might side against people who engage in behaviors that benefit them, representing such behavior as immoral. This type of argument has been leveraged to understand why people often oppose recreational drug use: the opposition might help people with long-term sexual strategies inflict costs on the more promiscuous members of a population. The complication that mutualism runs into, though, is that certain behaviors might be evaluated inconsistently in that respect. As an example, murder might be in my interests when in the service of removing my enemies or the enemies of my allies; however, murder is not in my interests when used against me or my allies. If you side against those who murder people, you might also end up siding against people who share your interests and murder people (who might, in fact, further your interests by murdering others who oppose them).

While one could make the argument that we also don’t want to be murdered ourselves – accounting for some or all of that moral representation of murder as wrong – something about that line doesn’t sit right with me: it seems to conceive of the mutual interest in an overly broad manner. Here’s an example of what I mean: let’s say that I don’t want to be murdered and you don’t want to be murdered. In some sense, we share an interest in common when it comes to preventing murder; it’s an outcome we both want to avoid. So let’s say one day I see you being attacked by someone who intends to murder you. If I were to come to your aid and prevent you from being killed, I have not necessarily achieved my goal (“I don’t want to be murdered”); I’ve just helped you achieve yours (“You don’t want to be murdered”). To use an even simpler example, if both you and I are hungry, we both share an interest in obtaining food; that doesn’t mean that my helping you get food is filling my interests or my stomach. Thus, the interest in the above example is not necessarily a mutual one. As I noted previously, in the case of friends or kin it can be a mutual interest; it just doesn’t seem to be the case when thinking about the behavior per se. My preventing your murder is only useful (in the fitness sense of the word) to the extent that doing so helps me in some way in the future.

Another account of morality which differs from the above positions posits that side-taking on the basis of behavior could help reduce the costs of becoming involved in the disputes of others. Specifically, if all (or at least a sizable majority of) third parties took the same side in a dispute, one side would back down without the need for fights to be escalated to determine the winner (as more evenly-matched fights might require increased fighting costs to determine a winner, whereas lopsided ones often do not). This is something of a cost-reduction model. While the idea that morality functions as a coordination device – the same way, say, a traffic light does – raises an interesting possibility, it too comes with a number of complications. Chief among those complications is that coordination need not require a focus on the behavior of the disputants. In much the same way that the color of a traffic light bears no intrinsic relationship to driving behavior but is publicly observable, so too might coordination in the moral domain not bear any resemblance to the behavior of the disputants. Third parties could, for instance, coordinate around the flip of a coin, rather than the behavior of the disputants. If anything, coin flips might be better tools than disputants’ behavior as, unlike behavior, the outcomes of coin flips are easily observable. Most immoral behavior is notably not publicly observable, making coordination around it something of a hassle.

 And also making trials a thing…

What about the alliance-building idea? At first blush, taking sides on the basis of behavior seems like a much different type of strategy than siding on the basis of existing friendships. With some deeper consideration, though, I think there’s a lot of merit to the idea. Might behavior work as a cue for who would make a good alliance partner for you? After all, friendships have to start somewhere, and someone who was just stolen from might have a sudden need for partial partners that you might fill by punishing the perpetrator. Need provides a catalyst for new relationships to form. On the reverse end, that friend of yours who happens to be killing other people is probably going to end up racking up more than a few enemies: both the ones he directly impacted and the new ones who are trying to help his victims. If these enemies take a keen interest in harming him, he’s a riskier investment as costs are likely coming his way. The friendship itself might even become a liability to the extent that the people he put off are interested in harming you because you’re helping him, even if your help is unrelated to his acts. At such a point, his behavior might be a good indication that his value as a friend has gone down and, accordingly, it might be time to dump your friend from your life to avoid those association costs; it might even pay to jump on the punishing bandwagon. Even though you’re seeking partial relationships, you need impartial moral mechanisms to manage that task effectively.

This could explain why strangers become involved in disputes (they’re trying to build friendships and taking advantage of a temporary state of need to do so) and why side-taking on the basis of behavior rather than identity is useful at times (your friends might generate more hassle than they’re worth due to their behavior, especially since all the people they’re harming look like good social investments to others). It’s certainly an idea that deserves more thought.