Morality, Empathy, And The Value Of Theory

Let’s solve a problem together: I have some raw ingredients that I would like to transform into my dinner. I’ve already managed to prepare and combine the ingredients, so all I have left to do is cook them. How am I to solve this problem of cooking my food? Well, I need a good source of heat. Right now, my best plan is to get in my car and drive around for a bit, as I have noticed that, after I have been driving for some time, the engine in my car gets quite hot. I figure I can use the heat generated by driving to cook my food. It would come as no surprise if you had a few objections to my suggestion, mostly focused on the point that cars were never designed to solve the problems posed by cooking. Sure, they do generate heat, but that’s really more of a byproduct of their intended function. Further, the heat they do produce isn’t particularly well controlled or evenly distributed. Depending on how I position my ingredients or the temperature they require, I might end up with a partially burnt, partially raw dinner that is likely also full of oil, gravel, and other debris that has been kicked up into the engine. Not only is the car engine not very efficient at cooking, then, it’s also not very sanitary. You’d probably recommend that I try using a stove or oven instead.

“I’m not convinced. Get me another pound of bacon; I’m going to try again”

Admittedly, this example is egregious in its silliness, but it does make its point well: while I noted that my car produces heat, I misunderstood the function of the device more generally and, as a result, tried to use it to solve a problem inappropriately. The same logic also holds in cases where you’re dealing with evolved cognitive mechanisms. I examined such an issue recently, noting that punishment doesn’t seem to do a good job as a mechanism for inspiring trust, at least not relative to its alternatives. Today I wanted to take another run at the underlying issue of matching proximate problem to adaptive function, this time examining a different context: directing aid to the great number of people around the world who need altruism to stave off death and non-lethal, but still quite severe, suffering (alleviating issues like malnutrition and infectious disease). If you want to inspire people to increase the amount of altruism directed towards these needy populations, you will need to appeal to some component parts of our psychology, so what parts should those be?

The first step in solving this problem is to think about what cognitive systems might increase the amount of altruism directed towards others, and then examine the adaptive function of each to determine whether they will solve the problem particularly efficiently. Paul Bloom attempted a similar analysis (about three years ago, but I’m just reading it now), arguing that empathetic cognitive systems seem like a poor fit for the global altruism problem. Specifically, Bloom makes the case that empathy seems more suited to dealing with single-target instances of altruism, rather than large-scale projects. Empathy, he writes, requires an identifiable victim, as people are giving (at least proximately) because they identify with the particular target and feel their pain. This becomes a problem, however, when you are talking about a population of 100 or 1000 people, since we simply can’t identify with that many targets at the same time. Our empathetic systems weren’t designed to work that way and, as such, augmenting their outputs somehow is unlikely to lead to a productive solution to the resource problems plaguing certain populations. Rather than cause us to give more effectively to those in need, these systems might instead lead us to over-invest further in a single target. Though Bloom isn’t explicit on this point, I feel he would likely agree that this has something to do with empathetic systems not having evolved because they solved the problems of others per se, but rather because they did things like help the empathetic person build relationships with specific targets, or signal their qualities as an associate to those observing the altruistic behavior.

Nothing about that analysis strikes me as distinctly wrong. However, provided I have understood his meaning properly, Bloom goes on to suggest that the matter of helping others involves the engagement of our moral systems instead (as he explains in this video, he believes empathy “fundamentally…makes the world worse,” in the moral sense of the term, and he also writes that there’s more to morality – in this case, helping others – than empathy). The real problem with this idea is that our moral systems are not altruistic systems, even if they do contain altruistic components (in much the same way that my car is not a cooking mechanism even if it does generate heat). This can be summed up in a number of ways, but simplest is in a study by Kurzban, DeScioli, & Fein (2012) in which participants were presented with the footbridge dilemma (“Would you push one person in front of a train – killing them – to save five people from getting killed by it in turn?”). If one was interested in being an effective altruist in the sense of delivering the greatest number of benefits to others, pushing is definitely the way to go under the simple logic that five lives saved is better than one life spared (assuming all lives have equal value). Our moral systems typically oppose this conclusion, however, suggesting that saving the lives of the five is impermissible if it means we need to kill the one. What is noteworthy about the Kurzban et al (2012) paper is that you can increase people’s willingness to push the one if the people in the dilemma (both being pushed and saved) are kin.

Family always has your back in that way…

The reason for this increase in pushing when dealing with kin, rather than strangers, seems to have something to do with our altruistic systems that evolved for delivering benefits to close genetic relatives: what we call kin-selected mechanisms (mammary glands being a prime example). This pattern of results from the footbridge dilemma suggests there is a distinction between our altruistic systems (which benefit others) and our moral ones; they function to do different things and, it seems, our moral systems are not much better suited to dealing with the global altruism problem than our empathetic ones. Indeed, one of the main features of our moral systems is nonconsequentialism: the idea that the moral value of an act depends on more than just its net consequences to others. If one is seeking to be an effective altruist, then, using the moral system to guide behavior seems a poor way to solve that problem, because our moral system frequently focuses on behavior per se at the expense of its consequences.

That’s not the only reason to be wary of the power of morality to solve effective altruism problems, either. As I have argued elsewhere, our moral systems function to manage associations with others, most typically by strategically manipulating our side-taking behavior in conflicts (Marczyk, 2015). Provided this description of morality’s adaptive function is close to accurate, the metaphorical goal of the moral system is to generate and maintain partial social relationships. These partial relationships, by their very nature, oppose the goals of effective altruism, which are decidedly impartial in scope. The reasoning of effective altruism might, for instance, suggest that it would be better for parents to spend their money not on their child’s college tuition, but rather on relieving dehydration in a population across the world. Such a conclusion would conflict not only with the outputs of our kin-selected altruistic systems, but also with other aspects of our moral systems. As some of my own forthcoming research finds, people do not appear to perceive much of a moral obligation for strangers to direct altruism towards other strangers, but they do perceive something of an obligation for friends and family to help each other (specifically when threatened by outside harm). Our moral obligations towards existing associates make us worse effective altruists (and, in Bloom’s sense of the word, morally worse people in turn).

While Bloom does mention that no one wants to live in that kind of strictly utilitarian world – one in which the welfare of strangers is treated equally to the welfare of friends and kin – he does seem to be advocating we attempt something close to it when he writes:

Our best hope for the future is not to get people to think of all humanity as family—that’s impossible. It lies, instead, in an appreciation of the fact that, even if we don’t empathize with distant strangers, their lives have the same value as the lives of those we love.

Appreciation of the fact that the lives of others have value is decidedly not the same thing as behaving as if they have the same value as the ones we love. Like most everyone else in the world, I want my friends and family to value my welfare above the welfare of others; substantially so, in fact. There are obvious adaptive benefits to such relationships, such as knowing that I will be taken care of in times of need. By contrast, if others showed no particular care for my welfare, but rather just sought to relieve as much suffering as they could wherever it existed in the world, there would be no benefit to my retaining them as associates; they would provide me with assistance or they wouldn’t, regardless of the energy I spent (or didn’t) maintaining a social relationship with them. Asking the moral system to be a general-purpose altruism device is unlikely to be much more successful than asking my car to be an efficient oven, asking people to treat others the world over as if they were kin, or asking you to empathize with 1,000 people. It represents an incomplete view of the functions of our moral psychology. While morality might be impartial with respect to behavior, it is unlikely to be impartial with regard to the social value of others (which is why, also in my forthcoming research, I find that stealing to defend against an outside agent of harm is rated as more morally acceptable than stealing to buy recreational drugs).

“You have just as much value to me as anyone else; even people who aren’t alive yet”

To top this discussion off, it is also worth mentioning those pesky, unintended consequences that sometimes accompany even the best of intentions. By relieving deaths from dehydration, malaria, and starvation today, you might be ensuring greater harm for future generations in the form of increased rates of climate change, species extinction, and habitat destruction brought about by sustaining larger global human populations. Assuming for the moment that this is true, would it mean that feeding starving people and keeping them alive today is morally wrong? Both options – withholding altruism when it could be provided and ensuring harm for future generations – might get the moral stamp of disapproval, depending on the reference group (from the perspective of future generations dealing with global warming, it’s bad to feed; from the perspective of the starving people, it’s bad to not feed). This is why a slight majority of participants in Kurzban et al. (2012) reported that pushing and not pushing can both be morally unacceptable courses of action. If we rely on our moral sense to guide our behavior in this instance, then, we are unlikely to be very successful in our altruistic endeavors.

References: Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptation for moral judgment. Evolution & Human Behavior, 33, 323-333.

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.
