First dates and large social events, like family reunions or holiday gatherings, can leave people wondering which topics should be off-limits for conversation, or even dreading the topics that will inevitably come up. There’s nothing quite like the discomfort of a drunken uncle feeling the need to let you know precisely what he thinks about the proper way to craft immigration policy, or about gay marriage. Similarly, it’s probably not a good idea to open a first date with an in-depth discussion of your deeply held views on abortion and racism in the US today. People realize, quite rightly, that such morally-charged topics have the potential to be rather divisive, and can quickly alienate new romantic partners or cause conflict within otherwise cohesive groups. On the other hand, if you happen to be in good agreement with others on such topics, they can prove fertile ground for beginning new relationships or strengthening old ones; “the enemy of my enemy is my friend” and similar sayings attest to that. All of this means you need to be careful about where and how you spread your views on these topics. Moral stances are kind of like manure in that way.
Now these are pretty important things to consider if you’re a human, since a good portion of your success in life is going to be determined by who your allies are. One’s own physical prowess is no longer sufficient to win conflicts when you’re fighting against increasingly larger alliances, and allies also do wonders for your available options in other cooperative ventures. Friends are useful, and this shouldn’t be news to anyone. This would, of course, drive selection pressures for adaptations that help people build and maintain healthy alliances. However, not everyone ends up with a strong network of allies capable of helping them protect or achieve their interests. Friends and allies are a zero-sum resource, as the time they spend helping one person (or one group of people) is time not spent helping another. The best allies are a very limited and desirable resource, and only a select few will have access to them: those who have something of value to offer in return. So what are the people toward the bottom of the alliance hierarchy to do? Well, one potential answer is the obvious, and somewhat depressing, one: not much. They tend to get exploited by others, often ruthlessly so. They either need to increase their desirability as a partner in order to make friends who can protect them, or face those severe and persistent social costs.
Any available avenue that helps such exploited parties avoid those costs and protect their interests, then, ought to be extremely appealing. A new paper by Petersen (2013) proposes that one of these avenues might be for those lacking in the alliance department to be more inclined to use moralization to protect their interests. Specifically, the proposition on offer is that if one lacks the private ability to enforce one’s own interests, in the form of friends, one might be increasingly inclined to turn toward public means of enforcement: recruiting third-party moralistic punishers. If you can create a moral rule that protects your self-interest, third parties – even those who otherwise have no established alliance with you – ought to become your de facto guardians whenever those interests are threatened. Accordingly, the argument goes, those lacking in friends ought to be more likely to support existing rules that protect them against exploitation, whereas those with many friends, who are capable of exploiting others, ought to feel less interest in supporting moral rules that prevent said exploitation. In support of this model, Petersen (2013) notes that there is a negative correlation – albeit a rather small one – between proxies for moralization and friend-based social support (as opposed to familial or religious support, which also tended to correlate, but in the positive direction).
So let’s run through a hypothetical example to clarify this a bit: you find yourself back in high school and relatively alone in that world, socially. The school bully, with his pack of friends, has been hounding you and taking your lunch money; the classic bully move. You could try to stand up to the bullies to prevent the loss of money, but such attempts are likely to be met with physical aggression, and you’d only end up getting yourself hurt on top of losing your money anyway. Since you don’t have enough friends willing and able to help tip the odds in your favor, you could instead attempt to convince others that it ought to be immoral to steal lunch money. If you’re successful in your efforts, the next time the bullies attempt to inflict costs on you, they would find themselves opposed by the other students who would otherwise just stay out of it (provided, of course, that they’re around at the time). While these other students might not be your allies at other times, they are your allies, temporarily, when you’re being stolen from. Of course, moralizing stealing prevents you from stealing from others just as it protects you from being stolen from – but since you weren’t in a position to be stealing from anyone in the first place, that’s really not much of a loss relative to the gain.
While such a model posits a potentially interesting solution for those without allies, it leaves many important questions unaddressed. Chief among them: what’s in it for the third parties? Why should other people adopt your moral rules, as opposed to their own – and even if they do share your rule, why should they bother to intervene? While third-party support is certainly a net benefit for the moralizer who initially can’t defend their own interests, it’s a net cost to the people who actually have to enforce the moral rule. If those bullies are trying to steal from you, the costs of deterring them – and, if necessary, fighting them off – fall on the shoulders of others who would probably rather avoid such risks. These costs are magnified further because a moral rule against stealing lunch money ought to require people to punish any and all instances of the bullying, not just your specific one. As punishing people is generally not a great way to build or maintain relationships with them, supporting this moral rule could prevent the punishers from forming what might be otherwise-useful alliances with the bullying parties. Losing potential friendships to temporarily support someone you’re not actually friends with, and won’t become friends with, doesn’t sound like a very good investment.
The costs don’t even end there, though. Let’s say, hypothetically, that most people do agree that the stealing of lunch money ought to be stopped and are willing to accept the moral rule in the first place. There are costs involved in enforcing the rule, and it’s generally in everyone’s best interest not to suffer those costs personally. So, while people might be perfectly content with there being a rule against stealing, they don’t want to be the ones who have to enforce it; they would rather free-ride on other people’s punishment efforts. Unfortunately, the moral rule requires a large number of potential punishers for it to be effective. This means that those willing to punish would need to incentivize non-punishers to start punishing as well. These incentives, of course, aren’t free to deliver. This now leads to punishers needing to, in essence, not only punish those who commit the immoral act, but also punish those who fail to punish people who commit the immoral act (which in turn leads to punishing those who fail to punish the non-punishers, and so on; the recursion can be hard to keep track of). As the costs of enforcement continue to mount, in the absence of compensating benefits it’s not at all clear to me why third parties should become involved in the disputes of others, or try to convince other people to get involved. Punishing an act “because it’s immoral” is only a semantic step away from punishing something “just because”.
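To make that cost accounting concrete, here’s a minimal toy sketch of the recursion. This is purely my own illustration, not anything from Petersen’s paper: every parameter (the per-act cost of punishing, the number of punishers each violation requires, the compliance rate) is an assumption chosen just to show the shape of the problem.

```python
def total_enforcement_cost(base_cost, offenders, punishers_needed, compliance, depth):
    """Sum punishment costs across levels of 'punish the non-punishers'.

    base_cost        -- assumed cost of delivering one act of punishment
    offenders        -- number of first-order violations (e.g., lunch-money thefts)
    punishers_needed -- assumed number of punishers each violation requires
    compliance       -- fraction of required punishers who actually show up;
                        the shirkers become the next level's punishment targets
    depth            -- how many recursion levels to count
    """
    total = 0.0
    targets = offenders
    for _ in range(depth):
        total += targets * base_cost                       # cost of punishing this level
        targets = targets * punishers_needed * (1 - compliance)  # shirkers feed the next level
    return total
```

The point of the sketch is the per-level multiplier: each violation demands `punishers_needed` punishers, and the non-compliant fraction of them becomes the next round’s targets. When `punishers_needed * (1 - compliance)` exceeds 1, the enforcement bill grows without bound at each level of recursion, which is exactly why the model needs some compensating benefit for the punishers.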
A more plausible model, I feel, would be an alliance-based model for moralization: people might be more likely to adopt moral rules in the interest of increasing their association value to specific others. Let’s use one of the touchy initial subjects – abortion – as a test case here: if I adopt a moral stance opposing the practice, I make myself a less-appealing alliance partner for anyone who likes the idea of abortions being available, but I also make myself a more-appealing partner to anyone who dislikes the idea (all else being equal). Now that might seem like a wash in terms of costs and benefits on the whole – you open yourself up to some friends and foreclose on others – but there are two main reasons I would still favor the alliance account. The first is the most obvious: it locates some potential benefits for the rule-adopters. While it is true that there are costs to taking a moral stance, there aren’t only costs anymore. The second benefit of the alliance account is that the key issue here might not be whether you make or lose friends on the whole, but rather that it can ingratiate you to specific people. If you’re trying to impress a particular potential romantic partner or ally, rather than all romantic partners or allies more generally, it might make good sense to tailor your moral views to that specific audience. As was noted previously, friendship is a zero-sum game, and you don’t get to be friends with everyone.
It goes without saying that the alliance model is far from complete in terms of having all its specific details fleshed out, but it gives us some plausible places from which to start our analysis: considerations of what specific cues people might use to assess relative social value, or how those cues interact with current social conditions to determine the degree of current moral support. I feel the answers to such questions will help shed light on many additional ones, such as why almost all people will agree with the seemingly-universal rule stating “killing is morally wrong” and then go on to expand upon the many, many non-universal exceptions to that moral rule over which they don’t agree (such as killing in self-defense, or when you find your partner having sex with another person, or killing a member of certain non-human species, or killing unintentionally, or killing a terminally ill patient rather than letting them suffer, and so on…). The focus, I feel, should not be on how powerful a force third-party punishment can be, but rather on why third parties might care (or fail to care) about the moral violations of others in the first place. Just because I think murder is morally wrong, it doesn’t mean I’m going to react the same way to any and all cases of murder.
References: Petersen, M. (2013). Moralization as protection against exploitation: Do individuals without allies moralize more? Evolution and Human Behavior, 34, 78-85. DOI: 10.1016/j.evolhumbehav.2012.09.006