“Biologists call this behavior altruism, when we help someone else at some cost to ourselves. If you think about it, altruism is the basis of all morality. So the larger question is, Why are we moral?” – John Horgan [emphasis mine]
John Horgan, a man not exactly known as a beacon of understanding, recently penned the above thought, which expresses what I feel to be an incorrect sentiment. Before getting to the criticism of that point, however, I would like to first commend John for his tone in this piece: it doesn’t appear as outwardly hostile towards the field of evolutionary psychology as several of his past pieces have been. Sure, there might be the odd crack about “hand-wavy speculation” and the reminder about how he doesn’t like my field, but progress is progress; baby steps, and all that.
I would also like to add at the outset that Horgan states that:
“Deceit is obviously an adaptive trait, which can help us (and many other animals) advance our interest” [Emphasis mine]
I find myself interested as to why Horgan seems to feel that deceit is obviously adaptive, but violence (in at least some of its various forms) obviously isn’t. Both certainly can advance an organism’s interests, and both generally advance those interests at the expense of other organisms. Given that Horgan seems to offer nothing in the way of insight into how he arbitrates between adaptations and non-adaptations, I’ll have to chalk his process up to mere speculation. Why one seems obvious and the other does not might have something to do with Trivers accepting Horgan’s invitation to speak at his institution last December, but that might be venturing too far into that other kind of “hand-wavy speculation” Horgan says he dislikes so much. Anyway…
The claim that altruism is the basis of all morality might seem innocuous enough – the kind of thing ostensibly thoughtful people would nod their head at – but an actual examination of the two concepts will show that the sentiment only serves to muddy the waters of our understanding of morality. Perhaps that revelation could have been reached had John attempted to marshal more support for the claim beyond saying, “If you think about it” (which is totally not speculation…alright; I’ll stop), but I suppose we can’t hope for too much progress at once. So let’s begin, then, by considering the ever-quotable line from Adam Smith:
“It is not from the benevolence of the butcher, the brewer, or the baker, that we can expect our dinner, but from their regard to their own interest”
Smith is describing a scenario we’re all familiar with: when you want a good or service that someone else can provide, you generally have to make it worth their while to provide it to you. This trading of benefits-at-a-cost is known as reciprocal altruism. However, when I go to the mall and give Express money so they will give me a new shirt, this exchange is generally not perceived as two distinct, altruistic acts (I endure the cost of losing money to benefit Express, and Express endures the cost of losing a shirt to benefit me) that just happen to occur in close temporal proximity to one another, nor is it viewed as a particularly morally praiseworthy act. In fact, such exchanges are often viewed as two selfish acts, given that the ostensible altruism on the behavioral level is seen as a means for achieving benefits, not an end in and of itself. One could also consider the example with regards to fishing: if you sit around all day waiting for a fish to altruistically jump into your boat so you can cook it for dinner, you’ll likely be waiting a long time; better to try and trick the fish by offering it a tasty morsel on the end of a hook. You suffer a cost (the loss of the bait and your time spent sitting on a boat) and deliver a benefit to the fish (it gets a meal), whereas the fish suffers a cost (it gets eaten soon after) that benefits you, but neither you nor the fish were attempting to benefit the other.
There’s a deeper significance to that point, though: reciprocally altruistic relationships tend to break down in the event that one party fails to return benefits to the other (i.e. when the payments over time for one party actually resemble altruism). Let’s say my friend helps me move, giving up the one day off a month he gets in the process. This gives me a benefit and him a cost. Some time later, my friend is the one moving. In the event I fail to reciprocate his altruism, there are many who might well say that I behaved immorally, most notably my friend himself. This does, however, raise the inevitable question: if my friend was expecting his altruistic act to come back and benefit him in the future (as evidenced by his frustration that it did not do so), wasn’t his initial act a selfish one on precisely the same level as my shopping or fishing examples above?
What these examples serve to show is that, depending on how you’re conceptualizing altruism, the same act can be viewed as selfish or altruistic, which throws a wrench into the suggestion that all morality is based on altruism. One needs to define their terms rather carefully for that statement to even mean anything worthwhile. As the examples also show, precisely how people behave towards each other (whether selfishly or altruistically) is often a topic of moral consideration, but just because altruism can be a topic of moral consideration, that does not mean it’s the basis of moral judgments. To demonstrate that altruism is not the basis of our moral judgments, we can also consider a paper by DeScioli et al. (2011) examining the different responses people have to moral omissions and moral commissions.
In this study, subjects were paired into groups and played a reverse dictator game. In this game, person A starts with a dollar and person B has a choice between taking 10 cents of that dollar or 90 cents. However, if person B didn’t make a choice within 15 seconds, the entire dollar would automatically be transferred to them, with 15 cents subtracted for running out of time. So, person B could be altruistic and only take 10 cents (meaning the payoffs would be 90/10 for players A and B, respectively), be selfish and take 90 cents (a 10/90 payoff), or do nothing, making the payoffs 0/85 (something of a mix of selfishness and spite). Clearly, failing to act (an omission) left both parties worse off than the selfish commission did, and was the least “altruistic” of the options. If moral judgments use altruism as their basis, one should expect that, when given the option, third parties should punish the omissions more harshly than either of the other two conditions (or, at the very least, punish the latter two conditions equally as harshly). However, those who took the 90 cents were the ones who got punished the most: roughly 21 cents, compared to the 14 cents that those who failed to act were punished. An altruism-based account of morality would appear to have a very difficult time making sense of that finding.
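For readers who prefer the arithmetic laid out explicitly, the payoff structure described above can be sketched as follows. This is a minimal illustration of the numbers reported in the text, not the authors’ code; the function and option names are my own labels.

```python
# Payoffs (in cents) for the reverse dictator game as described above.
# Person A starts with a dollar; person B chooses, or fails to choose.

def payoffs(choice):
    """Return (person_A, person_B) payoffs in cents for person B's choice."""
    if choice == "altruistic":   # B takes only 10 cents of A's dollar
        return (90, 10)
    if choice == "selfish":      # B takes 90 cents of A's dollar
        return (10, 90)
    if choice == "omission":     # B runs out the clock: the whole dollar
        return (0, 100 - 15)     # transfers to B, minus a 15-cent penalty
    raise ValueError(f"unknown choice: {choice}")

# Mean third-party punishment reported in the text (cents):
punishment = {"selfish": 21, "omission": 14}

# Note that omission leaves BOTH parties worse off than the selfish
# commission (0 < 10 and 85 < 90), yet it was punished less.
```

Running through the cases makes the puzzle for an altruism-based account concrete: the least altruistic option (omission) drew the lighter punishment.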
Further still, an altruism-based account of morality would fail to provide a compelling explanation for the often strong moral judgments people have in reaction to acts that don’t distinctly harm or benefit anyone, such as others having a homosexual orientation or deciding not to have children. More damningly still, an altruism basis for moral judgments would have a hell of a time trying to account for why people morally support courses of action that are distinctly selfish: people don’t tend to routinely condemn others for failing to give up their organs, even non-vital ones, to save the lives of strangers, and most people would condemn a doctor for making the decision to harvest organs from one patient against that patient’s will in order to save the lives of many more people.
An altruism account would similarly fail to explain why people can’t seem to agree on many moral issues in the first place. Sure, it might be altruistic for a rich person to give up some money in order to feed a poor person, but it would also be altruistic for that poor person to forgo eating in order to not impose that cost on the rich one. Saying that morality is based in altruism doesn’t seem to provide much information about how precisely that altruism will be enacted or how moral interactions will play out, nor does it lend any useful or novel predictions more generally. Then again, maybe an altruism-based account of morality can obviously deal with these objections if I just think about it…
References: DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological Science, 22, 442-446. DOI: 10.1177/0956797611400616