A Dust-Up Over College Majors

One of the latest political soundbites circling my social media is a comment by Jeb Bush regarding psychology majors and the value their degrees afford them on the job market. This is a rather interesting topic for me for a few reasons, chief among them being that I’m a psychology major currently in the middle of application season. As one of the psych-majoring, job-seeking types, I’ve discussed job prospects with many friends and colleagues from time to time. The general impression I’d been given up until this week is that – at least for psychology majors with graduate degrees looking to get into academia – the job market is not as bright as one might hope. The typical job search can involve months or years of sending dozens or hundreds of applications to various schools, with many graduates reduced to taking underpaid positions as adjuncts, making barely enough to pay their bills. By contrast, I’ve had friends in other programs tell me how their undergraduate degree has people metaphorically lining up to offer them a job – a job that would likely pay a starting salary equal to or greater than what I could command with a PhD in psychology, even in private industry. Considered along those dimensions, degree envy can easily arise.

Slanderous rumor incoming in 3…2…

Job envy aside, let’s consider the quote from Jeb:

“Universities ought to have skin in the game. When a student shows up, they ought to say ‘Hey, that psych major deal, that philosophy major thing, that’s great, it’s important to have liberal arts … but realize, you’re going to be working a Chick-fil-A…The number one degree program for students in this country … is psychology. I don’t think we should dictate majors. But I just don’t think people are getting jobs as psych majors. We have huge shortages of electricians, welders, plumbers, information technologists, teachers.”

Others have already weighed in on this comment, noting that, well, it’s not exactly true: psychology is only the second most common major, and most of the people with psychology BAs don’t end up working in fast food. The median starting salary for someone with a BA in psychology is about $38,000 a year, which rises to about $62,000 after 10 years if this data set I’ve been looking at is accurate. So that’s hardly fast food wages, even if it might be underwhelming to some.

However, there are some slightly more charitable ways of interpreting Jeb’s comment (provided one feels charitable towards him, which many do not): perhaps one might consider instead how psychology majors do on the job market relative to other majors. After all, just having a college degree tends to help people find good jobs, relative to having no degree at all. So how do psychology majors do in that data set I just mentioned? Of the 319 majors listed and ranked by mid-career income, psychology can be found in three locations (starting and mid-career salaries, respectively):

  • (138) Industrial Psych – 45/74k
  • (231) Psychology – 38/62k
  • (292) Psychology & Sociology – 35/55k

So, despite its popularity as a major, the salary prospects of psychology majors tend to fall below the mid-point of the expected pay scale. Indeed, the median salary of psychology majors is quite similar to that of theater (38/59k), creative writing (38/63k), or art history majors (40/64k). Not to belittle any of those majors either, but it is possible that much of the salary benefit from holding these degrees comes from holding a college degree in general, rather than from those degrees in particular. Nevertheless, these salaries are also fairly comparable to those of electricians, plumbers, and welders, or at least the estimates Google returns to me; information technologists seem to have better prospects, though (55/84k).
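For the curious, the claim that psychology sits below the mid-point of the ranking can be sanity-checked with some quick arithmetic. The sketch below uses only the three ranks quoted above and the total of 319 majors from the text; everything else is illustrative, not part of the original data set.

```python
# Where the three psychology listings fall in the 319-major ranking
# (ranks taken from the text; a larger fraction means more majors
# sit above it on the mid-career pay scale).
TOTAL_MAJORS = 319
psych_ranks = {
    "Industrial Psych": 138,
    "Psychology": 231,
    "Psychology & Sociology": 292,
}

for major, rank in psych_ranks.items():
    share_ranked_above = (rank - 1) / TOTAL_MAJORS
    print(f"{major}: ~{share_ranked_above:.0%} of majors rank above it")
```

With these numbers, Psychology proper has roughly 72% of majors ranked above it (and Psychology & Sociology roughly 91%), consistent with the below-the-mid-point reading; only Industrial Psych lands slightly above the median.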

In general, then, the comment by Jeb seems off-base when taken literally: most of the fields he mentions tend to do about as well as psychology majors, and most psychology majors are not making minimum wage. With a charitable interpretation, though, there is something worth thinking about: psychology majors don’t seem to do particularly well when compared with other majors. Indeed, about 25 of the listed majors have median starting salaries of 60k or above – the median salary of psychology majors after 10 years of working. Now, yes; there is more to a career and an education than the financial payoff one eventually reaps from it (once one pays off the costs of a college degree), but there’s also room to think about how much value the degree you are receiving adds to your life and the lives of others, relative to your options. In fact, I think that concern has a lot to do with why Jeb mentioned the careers he did: it’s not hard to see how the skills a plumber or an electrician learns during training are applied in their eventual career; the path between a psychology education and a profitable career that provides others with a needed service is not quite as easy to imagine.

It’s down at the bottom; I promise

This brings us to a rather old piece of research regarding psychology majors. Lunneborg & Wilson (1985) sought to answer an interesting question: would psychology majors major in psychology again, if they had the option to do it all over? Their sample was made up of about 800 psychology majors who had graduated between 1978 and 1982. In addition to the “would you major in psychology again” question, participants were also asked to rate the importance of their college experience in general, of the psychology courses they took in particular, and their satisfaction with the skills they felt they gained through their psychology education, each on a scale from 0 to 4. Of the 750 participants who responded to the question of whether they would major in psychology again, 522 (69%) indicated that they would (conversely, 31% would not). While some other specific numbers are unfortunately not mentioned, the authors report that more recent graduates were more likely to indicate a willingness to major in psychology again, relative to those who had graduated longer ago, perhaps suggesting that some realities of the job market had yet to set in.

More precise numbers are provided, however, to the questions concerning the perceived value of the psychology major. Psychology degrees were rated as most relevant to personal growth (M = 2.38) and liberal arts education (M = 2.23), but less so for graduate preparation in psychology (M = 1.78), graduate preparation outside of psychology (M = 1.61), or career preparation in general (M = 1.49). While people seemed to find psychology interesting, they were more ambivalent about whether they were walking away from their education with career-relevant skills. Perhaps it is not surprising, then, that those who continued on with their education (at the MA level or above) were more satisfied with their education than those who stopped at the BA, as it is at these higher levels that skill sets begin to be explored and applied more fully. Indeed, the authors also report that those who said they were working in a field related to psychology or in their desired field of work were more likely to indicate they would major in psychology again. When things worked out, people seemed inclined to repeat their choices; when their results were less fortunate, people were more inclined to make different choices.

As I mentioned before, making money is not the only reason people seek out certain degrees. This is backed up by the fact that a full 43% of respondents who rated the career preparation from a psychology major as a 0 still said they would major in the field again (as compared with 100% of those who rated career preparation as a 3). People find learning about how people think interesting – I certainly know I do – and so they are often naturally drawn to such courses. It’s a bit more engaging for most to hear about the decisions people make than it is to learn about, say, abstract calculus. That psychology is interesting to people is a good consolation for its students, considering that psychology majors are among the most likely to be unemployed, relative to other college majors, and the earning premium of their degrees is also among the lowest.

“…But not at Chick-fil-A; I have standards, after all”

Returning to Jeb’s comment one last time, we can see some truth to what he said if we consider the heart of the matter rather than the specifics: psychology majors do not have outstanding job prospects relative to other college majors – whether in terms of expected income or employment more generally – and psychology majors also report career preparation as being one of the least relevant parts of their education, at least at the BA level. It would seem that many psychology majors are getting jobs not necessarily because of their psychology major, but because they had a major at all. Some degree is better than no degree on the job market; a fact of which many non-degree holders are painfully aware. Despite those career prospects, psychology remains the second most popular major in the US, attesting to people’s interest in the subject. On the other hand, if we consider the specifics, Jeb’s comment is wrong: psychology majors are not working fast food jobs, their income is about on par with the other careers he lists, and psychology is the second most popular major, not the first. Which parts of all this information sound most relevant likely depends on your position relative to them: most psychology majors do not enjoy having their field of study denigrated, as the reaction to Jeb’s comments showed, and his political opponents likely want this comment to do as much harm to Jeb as possible (a heavy degree of overlap likely exists between those two groups). Nevertheless, there are some realities of people’s degrees that ought not to get lost in the defense against comments like these.

References: Lunneborg, P. & Wilson, V. (1985). Would you major in psychology again? Teaching of Psychology, 1, 17-20.

Health Food Nazis

“Hitler was a vegetarian. Just goes to show, vegetarianism, not always a good thing. Can, in some extreme cases, lead to genocide.” – Bill Bailey

There’s a burgeoning new field of research in psychology known as health licensing*. Health licensing is the idea that once people do something health-promoting, they subsequently give themselves psychological license to do other, unhealthy things. A classic example of this kind of research might go something like this: an experimenter will give participants a chance to do something healthy, like go on a jog or eat a nutritious lunch. After participants engage in this healthy behavior, they are then given a chance to do something unhealthy, like break their own legs. Typical results show that once people have engaged in these otherwise healthy behaviors, they are significantly more likely to engage in self-destructive ones, like leg-breaking, in order to achieve a balance between their healthy and unhealthy behaviors. This is just one more cognitive quirk to add to the ever-lengthening list of human psychological foibles.

Now that you’ve engaged in hospital-visiting behavior, feel free to burn yourself to even it out.

Now many of you are probably thinking one or both of two things: “that sounds strange” and “that’s not true”. If you are thinking those things, I’m happy that we’re on the same page so far. The problems with the above hypothetical area of research are clear. First, it seems strange that people would go do something unhealthy and harmful because they had previously done something which was good for them; it’s not like healthy and unhealthy behaviors need to be intrinsically balanced out for any reason, at least not one that readily comes to mind. Second, it seems strange that people would want to engage in the harmful behaviors at all. Just because an option to do something unhealthy is presented, it doesn’t mean people are going to want to take it, as it might have little appeal to them. When people typically engage in behaviors which are deemed harmful in the long-term – such as smoking, overeating junk food, or other such acts which are said to be psychologically ‘licensed’ by healthy behaviors – they do so because of the perceived short-term benefits of such things. People certainly don’t drink for the hangover; they drink for the pleasant feelings induced by the booze.

So, with that in mind, what are we to make of a study suggesting that doing something healthy can give people a psychological license to adopt immoral political stances? In case that sounds too abstract, the research on the table today examines whether drinking sauerkraut juice makes people more likely to endorse Nazi-like politics, and no; I’m not kidding (as much as I wish I was). The paper (Messner & Brugger, 2015) itself leans heavily on moral licensing: the idea that engaging in moral behaviors activates compensating psychological mechanisms that encourage the actor to engage in immoral ones. So, if you told the truth today, you get to lie tomorrow to balance things out. Before moving further into the details of the paper, it’s worth mentioning that the authors have already bumped up against one of the problems from my initial example: I cannot think of a reason that ‘moral’ and ‘immoral’ behaviors need to be “balanced out” psychologically (whatever that even means), and none is provided. Indeed, as some people continuously refrain from immoral (or unhealthy) behaviors, whereas others continuously indulge in them, compensation or balance doesn’t seem to factor into the equation in the same way (or at all) for everyone.

Messner & Brugger (2015) try to draw on a banking analogy, whereby moral behavior deposits “credit” in one’s account that can be “spent” on immoral behavior. However, this analogy is largely unhelpful: you cannot spend money you do not have, yet you can engage in immoral behaviors even with no morally-good “credit” in the bank. It’s also unhelpful in that it presumes immoral behavior is something one wants to spend moral credit on; the type of immoral behavior seems to be beside the point, as we will soon see. Much like my leg-breaking example, this too seems to make little sense: people don’t seem to want to engage in immoral behavior because it is immoral. As the bank account analogy is not at all helpful for understanding the phenomenon in question, it seems better to drop it altogether, since it’s only likely to sow confusion in the minds of anyone trying to figure out what’s really going on here. Then again, perhaps the confusion is only present in the paper to compensate for all the useful understanding the researchers are going to provide us later.

“We broke half the lights to compensate for the fact that the other half work”

Moving forward, the authors argue that, because health-relevant behavior is moralized, engaging in some kind of health-promoting behavior – in this case, drinking sauerkraut juice (high in fiber and vitamin C, we are told) – ought to give people good moral “credit” which they will subsequently spend on immoral behavior (in much the same way buying eco-friendly products leads to people giving themselves a moral license to steal, we are also told). Accordingly, the authors first asked 128 Swiss students to indicate who was more moral: someone who drinks sauerkraut juice or someone who drinks Nestea. As predicted, 78% agreed that the sauerkraut-juice drinker was more moral, though whether a “neither, and this question is silly” option existed is not mentioned. The students also indicated how morally acceptable and right wing a number of attitudes were; statements which related to, according to the authors, a number of nasty topics like devaluing the culture of others (i.e., seeing a woman wearing a burka making someone uncomfortable), devaluing other nations (viewing foreign nationals as a burden on the state), affirming antisemitism (disliking some aspects of Israeli politics), devaluing the humanity of others (not agreeing that all public buildings ought to be modified for handicapped access), and a few others. Now all of these statements were rated as immoral by the students, but whether they represent what the authors think they do (Nazi-like politics) is up for interpretation.

In any case, another 111 participants were then collected and assigned to drink sauerkraut juice, Nestea, or nothing. Those who drank the sauerkraut juice rated it as healthier than those who drank the Nestea and, correspondingly, were also more likely to endorse the Nazi-like statements (M = 4.46 on a 10-point scale) than those who drank Nestea (M = 3.82) or nothing (M = 3.73). Neat. There are, however, a few other major issues to address. The first of these is that, depending on who you sample, you’re going to get different answers to the “are these attitudes morally acceptable?” questions. Since it’s Swiss students being assessed in both cases, I’ll let that issue slide for the more pressing, theoretical one: the authors’ interpretation of the results would imply that the students who indicated that such attitudes are immoral also wished to express them. That is to say, because they just did something healthy (drank sauerkraut juice), they now want to engage in immoral behavior. They don’t seem too picky about which immoral behavior they engage in either, as they’re apparently more willing to adopt political stances they would otherwise oppose, were it not for the disgusting, yet healthy, sauerkraut juice.

This strikes me very much as the kind of metaphorical leg-breaking I mentioned earlier. When people engage in immoral (or unhealthy) behaviors, they typically do so because of some associated benefit: stealing grants you access to resources you otherwise wouldn’t obtain; eating that Twinkie gives you a pleasant taste and a quick burst of calories, even if it makes you fat when you indulge too often. What benefits are being obtained by the Swiss students who are now (slightly) more likely to endorse right-wing, Nazi-like politics? None are made clear in the paper, and I’m having a hard time thinking up any myself. This seems to be a case of immoral behavior for its own sake, which could only arise from a rather strange psychology. Perhaps there is something worth noting going on here that isn’t being highlighted well; perhaps the authors just stumbled on a statistical fluke (which does happen regularly). In either case, the idea of moral licensing doesn’t seem to help us understand what’s happening at all, and the banking metaphors and references to “balancing” and “compensation” seem similarly impotent to move us forward.

“Just give him the money; he eats well, so it’s OK”

The moral licensing idea is even worse than all that, though, as it doesn’t engage with the main adaptive reason people avoid self-beneficial but immoral behaviors: other people will punish you for them. If I steal from someone else, they or their allies might well take revenge on me; assuring them of my healthy diet will create little to no effective deterrence against the punishment I would soon receive. If that is the case – and I suspect it is – then this self-granted “moral license” would be about as useful as my simply believing that stealing from others isn’t wrong and won’t be punished (which is to say, “not at all”). Any moral license needs to be granted by potential condemners in order to be of any practical use in that regard, and the current research does not assess whether that is the case. This limited focus on conscience – rather than condemnation – combined with the suggestion that people will adopt social politics they would otherwise oppose for the sake of achieving some kind of moral balance after drinking 100 ml of gross sauerkraut juice, makes for a very strange paper indeed.

References: Messner, C. & Brugger, A. (2015). Nazis by Kraut: A playful application of moral self-licensing. Psychology, 6, http://dx.doi.org/10.4236/psych.2015.69112

*This statement has not been evaluated by the FDA or any such governmental body; the field doesn’t actually exist to the best of my knowledge, but I’ll tell you it does anyway.


Some…Interesting…Research On STI Stigma

Today I have the distinct pleasure of examining some research from the distinguished Terri Conley once again. Habitual readers of my writing might know the name; in fact, they might even know that I have written about her work before. The first time I did, it was only to mention, briefly, that Terri had proposed that sexual reproduction was a byproduct of sexual pleasure. To put that claim into easily-understandable terms, it would go something like, “sexual reproduction does not itself contribute to reproduction, but is the result of sexual pleasure, which does contribute to reproduction”. I’m sure many of you might be thinking that doesn’t make any sense, and for a very good reason. The second time I wrote about her work, it involved a number of sex differences which had been labeled myths; in this case, they were myths in the sense of “they are all true”, which is a peculiar usage of the term. On the block for today is a claim about how people are irrational about the risks posed by STIs, complete with a paper that meets the high standards set by the previous two pieces.

I think it might be time to finally see a doctor about that problem

I will start my examination of the piece by Conley et al (2015) by noting that – like so much psychology work before it, and like so much more that is sure to come – the predictions made by the authors are offered in the absence of anything resembling a theoretical justification. In other words, sections which might include phrases like, “we predicted we would find this effect because…” are not present. With that in mind, the main hypothesis of the current paper is that people are irrationally biased against STIs and those infected by them, perceiving sexual behavior as exceedingly risky and the diseases as especially harmful. The idea was tested in an assortment of ways. In the first study, 680 participants were asked about the number of people (out of 1,000) who would be expected to die either (a) on a 300-mile drive or (b) as the result of contracting an HIV infection from a single instance of unprotected sex with a non-injection drug user. Conley et al (2015) note that people are about 20-times as likely to die on that car ride as they are to contract HIV and die from it as the result of a single sexual encounter.

Sure enough, Conley et al (2015) report that their participants were wildly off the mark: while they overestimated both rates of death, the number of people estimated to die from HIV was far, far higher (M = 72, SD = 161) than from a car accident (M = 4, SD = 15). While people were statistically 20-times more likely to die from a car accident, they believed they were 17-times more likely to die from HIV. What a bias! Something about those numbers does not sit right with me, though. For instance, it seems unlikely that people are that inaccurate: do people really believe that roughly 7% of casual sexual encounters result in death from HIV? The variance of those estimates also seems exceedingly large, at least for STI risk (the standard deviation of which is over 10 times as large as that for car accidents). So what’s going on? I think the answer has a lot to do with the particular question Conley et al (2015) asked:

“Assume that 1,000 people had unprotected intercourse (sex without a condom) yesterday. None of the 1,000 people who had sex were previous intra-venous drug users. How many of these 1,000 individuals who had unprotected sex would you expect to die from HIV contracted from the sexual encounter”

This phrasing is unfortunately – perhaps even purposefully – vague. One possible way of interpreting the question is that it is asking how many people will die given that they have become infected. Asking how many people will become infected and die is much different than asking how many infected people will die, and that vagueness could account for the widely-varying estimates being reported. As the wording is not at all clear, the estimates of mortality could well be inflated, at least relative to what the authors think they’re measuring. How this point was not addressed by any editors or reviewers is beyond me.
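For what it’s worth, the mismatch the first study hinges on is easy to check against the reported means. A quick sketch, using only the per-1,000 means and the 20x figure from the text; the rounding is mine, and the unrounded means presumably account for the reported ~17x:

```python
# Reported mean estimates (deaths per 1,000) from study 1.
mean_hiv_deaths = 72   # estimated deaths from HIV after one encounter
mean_car_deaths = 4    # estimated deaths on a 300-mile drive

# Participants' implied belief: HIV is this many times deadlier than
# the drive (the rounded means give 18; the paper reports ~17x).
implied_ratio = mean_hiv_deaths / mean_car_deaths
print(implied_ratio)

# Actual risk runs the other way: the drive is ~20x deadlier.
actual_car_vs_hiv = 20

# Taken literally, the mean HIV estimate implies ~7% of such
# encounters end in death, which is why it looks so implausible.
print(mean_hiv_deaths / 1000)
```

The second print is the number that suggests the question was being read as “how many infected people will die” rather than “how many people will become infected and die.”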

Their second study examined how people perceived those who (sort of) unknowingly transmitted either a sexual or a non-sexual infection to their sexual partner: chlamydia or H1N1. That is, they knew they had symptoms of something, but wrote it off as a UTI or as allergies, respectively. Again, we find Conley et al (2015) going to great pains to emphasize that H1N1 is the much more harmful bug of the pair, so as to suggest people should believe it worse to transmit the flu. In this study, 310 participants were asked to read brief stories about the infection being spread after some unprotected sex, and then assessed the person who spread it on some 6-point scales. The person who had spread the infection was rated as slightly more selfish (Ms = 3.9/3.6), risky (Ms = 4.8/4.4), and dumb (Ms = 4.3/3.9) when they had spread the sexually-transmitted one (sexual/non-sexual means, respectively). Of course, as the transmission of the STI could have been prevented through the use of, say, a condom when encountering a new sexual partner, whereas the same option is not available for the flu, it’s hard to conclude that the participants are irrational or wrong in their judgments. While Conley et al (2015) note this possibility, they do nothing to test it, asserting instead that their data nevertheless represent ample evidence in favor of their hypothesis.

Too bad these don’t protect against bad interpretations of data

The third study is perhaps the funniest of them all. It’s not an experiment, but rather a retrospective analysis of information provided on government websites concerning the prevention of driving accidents and contracting STIs (tying into their first study). Conley et al’s (2015) bold prediction was that:

“…government public information websites would promote abstinence as the best way to avoid acquiring an STI, but that these websites would not promote abstinence from driving, which is, statistically, riskier.”

You are reading that correctly: the prediction is that government websites will not advocate that people avoid driving entirely, as opposed to avoiding having sex (or, rather, postponing it until certain criteria have been met, such as marriage). This is not what I would consider a “prediction”, inasmuch as I’m sure they knew what they would find. In any case, 86% of state websites discussed STI prevention, with 72% mentioning that abstinence is the most effective way of avoiding one (a claim which is true beyond dispute); by contrast, 78% of state websites discussed driving accidents and, shockingly, none of them advocated that people avoid driving altogether. What an astounding bias!

Now, perhaps that is because, as the authors briefly mention, navigating one’s daily life without the use of a car (or some form of transportation) is all but impossible for many. However, the authors feel that – because sex, not driving, is biologically motivated – asking people to give up (or rather, postpone) sex is more unnatural and difficult. Setting aside the matter of what that is supposed to mean, I remain skeptical as to whether the failure to ask people to avoid driving entirely is evidence of “inappropriately negative” reactions to STIs in particular, despite Conley et al’s (2015) enthusiasm for that interpretation.

There was one detail of the paper which really stood out to me throughout all of this, however. It wasn’t their weak methods or poor interpretations of the data, either, but rather the following sentence:

“This component of the study provides strong evidence for the hypothesis that people who transmit STIs are unjustly stigmatised in society.”

The emphasis on “unjustly” in that passage was made by the authors, not me. While it’s possible I am reading too much into that emphasis, it strikes me as an (unintentional?) slip that puts the biases of the authors on display rather prominently. Taken together with the generally poor quality of the work, it appears that a particular social agenda is being pushed by this research. Perhaps that agenda is noble; perhaps it isn’t. Regardless of which it happens to be, once agendas begin to make their way into research, the soundness of the interpretation of data often suffers serious damage. In this case, Conley et al (2015) seem to be doing all they can to make people look irrational and, importantly, wrong, rather than earnestly assessing their work. They’re trying to game the system, and their research suffers because of it.

“People are unjustly biased against living in my house”

Now, to be clear, I do feel there exists a certain percentage of the population with a vested interest in pushing ideas that make other people more or less likely to engage in certain kinds of intercourse, be that intercourse promiscuous or monogamous in nature. That is, if I want there to be more sexually-available options in the population, I might try to convince others that casual sex is really quite good for them, regardless of the truth of my claim. The current research is not a solid demonstration of people doing this, however; it’s not even a decent one. Ironically, the current paper instead seems to serve as an example of the very bias it is hoping to find in others. After all, making it seem like STIs really aren’t that big of a deal would do wonders for making the costs associated with short-term encounters seem far less relevant. Also ironically, if such efforts were successful, the costs of casual encounters would likely rise over time, as promiscuous people who are less concerned with STIs would likely spread them more regularly, and the STIs could mutate into more harmful strains (as they would no longer need to keep their hosts alive as long to successfully reproduce themselves). All that aside, given the glaring problems in this paper, I find it remarkable it ever saw the light of publication.

References: Conley, T., Moors, A., Matsick, J., & Ziegler, A. (2015). Sexuality-related risks are judged more harshly than comparable health risks. International Journal of Sexual Health, DOI: 10.1080/19317611.2015.1063556.

How Many Foundations Of Morality Are There?

If you want to understand and explain morality, the first useful step is to be sure you’re clear about what kind of thing morality is. This first step has, unfortunately, been a stumbling point for many researchers and philosophers. Many writers on the topic of morality, for example, have primarily discussed (and subsequently tried to explain) altruism: behaviors which involve actors suffering costs to benefit someone else. While altruistic behavior can often be moralized, altruism and morality are not the same thing; a mother breastfeeding her child is engaged in altruistic behavior, but this behavior does not appear to be driven by moral mechanisms. Other writers (as well as many of the same ones) have also discussed morality in conscience-centric terms. Conscience refers to self-regulatory cognitive mechanisms that use moral inputs to influence one’s own behavior. As a result of that focus, many moral theories have not been able to adequately explain moral condemnation: the belief that others ought to be punished for behaving immorally (DeScioli & Kurzban, 2009). While the value of being clear about what one is actually discussing is considerable, it is sadly not the case that many treatises on morality begin by being clear about what they think morality is, nor do they tend to avoid conflating morality with other things, like altruism.

“It is our goal to explain the function of this device”

When one is not quite clear on what morality happens to be, one can end up at a loss when trying to explain it. For instance, Graham et al (2012), in their discussion of how many moral foundations there are, write:

We don’t know how many moral foundations there really are. There may be 74, or perhaps 122, or 27, or maybe only five, but certainly more than one.

Sentiments like these suggest a lack of focus on what it is precisely the authors are trying to understand. If you are unsure whether the thing you are trying to explain is 2, 5, or over 100 things, then it is likely time to take a step back and refine your thinking a bit. As Graham et al (2012) do not begin their paper with a mention of what kind of thing morality is, they leave me wondering what precisely it is they are trying to explain with 5 or 122 parts. What they do posit is that morality is innate (organized in advance of experience), modified by culture, the result of intuitions first and reasoning second, and that it has multiple foundations; none of that, however, answers my question of what precisely they mean when they write “morality”.

The five moral foundations discussed by Graham et al (2012) include kin-directed altruism (what they call the harm foundation), mechanisms for dealing with cheaters (fairness), mechanisms for forming coalitions (loyalty), mechanisms for managing coalitions (authority), and disgust (sanctity). While I would agree that navigating these different adaptive problems is important for meeting the challenges of survival and reproduction, there seems to be little indication that they represent different domains of moral functioning, rather than simply different domains upon which a single, underlying moral psychology might act (in much the same way, a kitchen knife is capable of cutting a variety of foods, so one need not carry a potato knife, a tomato knife, a celery knife, and so on). In the interests of being clear where others are not, by morality I am referring to the existence of the moral dimension itself: the ability to perceive “right” and “wrong” in the first place and to generate the associated judgments that people who engage in immoral behaviors ought to be condemned and/or punished (DeScioli & Kurzban, 2009). This distinction is important because it would appear that species are capable of navigating the above five problems without the moral psychology humans possess. Indeed, as Graham et al (2012) mention, many non-human species share one or many of these problems, yet whether those species possess a moral psychology is debatable. Chimps, for instance, do not appear to punish others for engaging in harmful behavior if said behavior has no effect on them directly (though chimps do take revenge for personal slights). Why, then, might a human moral psychology lead us to condemn others when no such psychology seems to exist in chimps, despite our sharing most of those moral foundations? That answer is not provided, or even discussed, anywhere in the moral foundations paper.

To summarize up to this point, the moral foundations piece is never clear about what type of thing morality is, which leaves it on shaky ground when attempting to make the case that many – not one – distinct moral mechanisms exist. It does not seriously tackle how many of these distinct mechanisms might exist, and it does not address the matter of why human morality appears to differ from whatever nonhuman morality there might – or might not – be. Importantly, the matter of what adaptive function morality serves – what adaptive problems it solved and how it solved them – is left all but untouched. Graham et al (2012) seem to fall into the same pit trap so many before them have: believing they have explained the adaptive value of morality because they outline an adaptive value for things like kin-directed altruism, reciprocal altruism, and disgust, despite these concepts not being the same thing as morality per se.

Such pit traps often prove fatal for theories

Making explicit hypotheses of function is crucial for understanding morality, as it is for all of psychology. While Graham et al (2012) try to compare these different hypothetical domains of morality to the different types of taste receptors on our tongues (one each for sweet, bitter, sour, salt, and umami), that analogy glosses over the fact that these different taste receptors serve entirely separate functions by solving unique adaptive problems related to food consumption. Without any analysis of which unique adaptive problems are solved by morality in the domain of disgust, as opposed to, say, harm-based morality, as opposed to fairness-based morality – and so on – the analogy does not work. The question of importance in this case is what function(s) these moral perceptions serve, and whether that (or those) function(s) vary when our moral perceptions are raised in the realm of harm or disgust. If that function is consistent across domains, then it is likely handled by a single moral mechanism; not many of them.

However, one thing Graham et al (2012) appear sure about is that morality cannot be understood through a single dimension, meaning they are putting their eggs in the many-different-functions basket; a claim with which I take issue. One prediction this multiple-morality hypothesis put forth by moral foundations theory might make, if I am understanding it correctly, is that you ought to be able to selectively impair people’s moral cognition via brain damage. For example, were you to lesion some hypothetical area of the brain, you would be able to remove a person’s ability to process harm-based morality while leaving their disgust-based morality otherwise unaffected (likewise for fairness, sanctity, and loyalty). Now I know of no data bearing on this point, and none is mentioned in the paper, but it seems that, were such an effect possible, it likely would have been noticed by now.

Such a prediction also seems unlikely to hold true in light of a particular finding: one curious facet of moral judgments is that, given someone perceives an act to be immoral, they almost universally perceive (or rather, nominate) someone – or a group of someones – to have been harmed by it. That is to say they perceive one or more victims when they perceive wrongness. If morality, at least in some domains, were not fundamentally concerned with harm, this would be a very strange finding indeed; people ought not need to perceive a victim at all for certain offenses. Nevertheless, people do not appear to perceive victimless moral wrongs (even if they cannot always consciously articulate their perceptions of victimhood), and will occasionally update their moral stances when their perceptions of harm are successfully challenged by others. The idea of victimless moral wrongs, then, appears to originate much more from researchers declaring that an act is without a victim than from their subjects’ perceptions.

Pictured: a PhD, out for an evening of question begging

There’s a very real value to being precise about what one is discussing if you hope to make any forward progress in a conversation. It’s not good enough for a researcher to use the word morality when it’s not at all clear what that word refers to. When such specifications are not made, people seem to end up doing all sorts of things – explaining altruism, or disgust, or social status – rather than achieving their intended goal. A similar problem arose when another recent paper on morality attempted to define “moral” as “fair” without really defining what it meant by “fair”: the predictable result was a discussion of why people are altruistic, rather than why they are moral. Moral foundations theory seems to offer only a collection of topics about which people hold moral opinions; not a deeper understanding of how our morality functions.

References: DeScioli, P. & Kurzban, R. (2009) Mysteries of morality. Cognition, 112, 281-299.

Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. (2012). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130.