If you want to understand and explain morality, the first useful step is to be clear about what kind of thing morality is. Unfortunately, this first step has been a stumbling point for many researchers and philosophers. Many writers on the topic of morality, for example, have primarily discussed (and subsequently tried to explain) altruism: behaviors that involve an actor suffering costs to benefit someone else. While altruistic behavior can often be moralized, altruism and morality are not the same thing; a mother breastfeeding her child is engaged in altruistic behavior, but that behavior does not appear to be driven by moral mechanisms. Other writers (as well as many of the same ones) have also discussed morality in conscience-centric terms. Conscience refers to self-regulatory cognitive mechanisms that use moral inputs to influence one’s own behavior. As a result of that focus, many moral theories have been unable to adequately explain moral condemnation: the belief that others ought to be punished for behaving immorally (DeScioli & Kurzban, 2009). Valuable as it is to be clear about what one is actually discussing, treatises on morality sadly tend not to begin by stating what their authors take morality to be, nor do they tend to avoid conflating morality with other things, like altruism.
“It is our goal to explain the function of this device”
When one is not quite clear on what morality is, one can end up at a loss when trying to explain it. For instance, Graham et al (2012), in their discussion of how many moral foundations there are, write:
We don’t know how many moral foundations there really are. There may be 74, or perhaps 122, or 27, or maybe only five, but certainly more than one.
Sentiments like these suggest a lack of focus on precisely what the authors are trying to understand. If you are unsure whether the thing you are trying to explain is 2, 5, or over 100 things, it is likely time to take a step back and refine your thinking a bit. As Graham et al (2012) do not begin their paper with a mention of what kind of thing morality is, they leave me wondering what precisely it is they are trying to explain with 5 or 122 parts. What they do posit is that morality is innate (organized in advance of experience), modified by culture, the result of intuitions first and reasoning second, and that it has multiple foundations; none of that, however, answers the question of what precisely they mean when they write “morality”.
The five moral foundations discussed by Graham et al (2012) include kin-directed altruism (what they call the harm foundation), mechanisms for dealing with cheaters (fairness), mechanisms for forming coalitions (loyalty), mechanisms for managing coalitions (authority), and disgust (sanctity). While I would agree that navigating these different adaptive problems is important for meeting the challenges of survival and reproduction, there seems to be little indication that they represent different domains of moral functioning, rather than simply different domains upon which a single, underlying moral psychology might act (in much the same way, a kitchen knife is capable of cutting a variety of foods, so one need not carry a potato knife, a tomato knife, a celery knife, and so on). In the interests of being clear where others are not, by morality I am referring to the existence of the moral dimension itself: the ability to perceive “right” and “wrong” in the first place and to generate the associated judgments that people who engage in immoral behaviors ought to be condemned and/or punished (DeScioli & Kurzban, 2009). This distinction is important because it would appear that species are capable of navigating the above five problems without requiring the moral psychology humans possess. Indeed, as Graham et al (2012) mention, many non-human species face one or many of these problems, yet whether those species possess a moral psychology is debatable. Chimps, for instance, do not appear to punish others for engaging in harmful behavior if said behavior has no effect on them directly (though chimps do take revenge for personal slights). Why, then, might a human moral psychology lead us to condemn others when no comparable psychology seems to exist in chimps, despite our sharing most of those moral foundations? That answer is not provided, or even discussed, anywhere in the moral foundations paper.
To summarize up to this point: the moral foundations piece is not at all clear on what type of thing morality is, which makes its case that many – not one – distinct moral mechanisms exist difficult to evaluate. It does not definitively tackle how many of these distinct mechanisms might exist, and it does not address why human morality appears to differ from whatever nonhuman morality there might – or might not – be. Importantly, the matter of what adaptive function morality serves – what adaptive problems it solved and how it solved them – is left all but untouched. Graham et al (2012) seem to fall into the same pit trap as so many before them: believing they have explained the adaptive value of morality because they outline an adaptive value for things like kin-directed altruism, reciprocal altruism, and disgust, despite these concepts not being the same thing as morality per se.
Such pit traps often prove fatal for theories
Making explicit hypotheses of function is crucial for understanding morality – as it is for all of psychology. While Graham et al (2012) compare their hypothetical domains of morality to the different types of taste receptors on our tongues (one each for sweet, bitter, sour, salty, and umami), that analogy glosses over the fact that those taste receptors serve entirely separate functions by solving unique adaptive problems related to food consumption. Without an analysis of which unique adaptive problems are solved by morality in the domain of disgust, as opposed to, say, harm-based morality, as opposed to fairness-based morality – and so on – the analogy does not work. The question of importance here is what function(s) these moral perceptions serve and whether that function varies when our moral perceptions arise in the realm of harm versus disgust. If the function is consistent across domains, then it is likely handled by a single moral mechanism, not many of them.
However, one thing Graham et al (2012) appear sure about is that morality cannot be understood through a single dimension, meaning they are putting their eggs in the many-different-functions basket; a claim with which I take issue. One prediction that this multiple-morality hypothesis put forth by moral foundations theory might make, if I am understanding it correctly, is that you ought to be able to selectively impair people’s moral cognitions via brain damage. For example, were you to lesion some hypothetical area of the brain, you would be able to remove a person’s ability to process harm-based morality while leaving their disgust-based morality otherwise unaffected (likewise for fairness, sanctity, and loyalty). I know of no data bearing on this point, and none is mentioned in the paper, but it seems that, were such an effect possible, it likely would have been noticed by now.
Such a prediction also seems unlikely to hold true in light of a particular finding: one curious facet of moral judgments is that, given someone perceives an act to be immoral, they almost universally perceive (or rather, nominate) someone – or a group of someones – to have been harmed by it. That is to say, they perceive one or more victims when they perceive wrongness. If morality, at least in some domains, were not fundamentally concerned with harm, this would be a very strange finding indeed; people ought not need to perceive a victim at all for certain offenses. Nevertheless, people do not appear to perceive victimless moral wrongs (even if they cannot always consciously articulate their perceptions of victimhood), and they will occasionally update their moral stances when their perceptions of harm are successfully challenged by others. The idea of victimless moral wrongs, then, appears to originate much more from researchers declaring that an act is without a victim than from their subjects’ perceptions.
Pictured: a PhD, out for an evening of question begging
There’s a very real value to being precise about what one is discussing if you hope to make any forward progress in a conversation. It’s not good enough for a researcher to use the word morality when it’s not at all clear what that word refers to. When such specifications are not made, people end up doing all sorts of things – explaining altruism, or disgust, or social status – rather than achieving their intended goal. A similar problem arose when another recent paper on morality attempted to define “moral” as “fair” without really defining what it meant by “fair”: the predictable result was a discussion of why people are altruistic, rather than why they are moral. Moral foundations theory seems to offer only a collection of topics about which people hold moral opinions, not a deeper understanding of how our morality functions.
References: DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112, 281–299.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. (2012). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130.