Part of academic life in psychology – and a rather large part at that – centers around publishing research. Without a list of publications on your resume (or CV, if you want to feel different), your odds of being able to do all sorts of useful things, such as getting and holding onto a job, can be radically decreased. That said, people doing the hiring do not typically care to read through the published research of every candidate applying for the position. This means that career advancement involves not only publishing plenty of research, but publishing it in journals people care about. Though it doesn’t affect the quality of the research in any way, publishing in the right places can be suitably impressive to some. In some respects, then, your publications are a bit like recommendations, and some journals’ names carry more weight than others. On that subject, I’m somewhat disappointed to note that a manuscript of mine concerning moral judgments was recently rejected from one of these prestigious journals, adding to the ever-lengthening list of prestigious things I’ve been rejected from. Rejection, I might add, appears to be another rather large part of academic life in psychology.
The decision letter said, in essence, that while they were interesting, my results were not groundbreaking enough for publication in the journal. Fair enough; my results were a bit on the expected side of things, and journals do presumably have standards for such things. Being entirely not bitter about the whole experience of not having my paper placed in the esteemed outlet, I’ve decided to turn my attention to two recent articles published in a probably-unrelated journal within psychology, Psychological Science (proud home of the trailblazing paper entitled “Leaning to the Left Makes the Eiffel Tower Seem Smaller“). Both papers examined what could be considered to fall within the realm of moral psychology, and both present what one might consider to be novel – or at least cute – findings. Somewhat curiously, both papers also lean a bit heavily on the idea of metaphors being more than metaphors, perhaps owing to their propensity for using the phrase “embodied cognition”. The first paper deals with the association between light and dark and good and evil, while the second concerns the association between physical cleanliness and moral cleanliness.
The first paper, by Banerjee, Chatterjee, & Sinha (2012), sought to examine whether recalling abstract concepts of good and evil could make participants perceive the room they were in as brighter or darker, respectively. They predicted this, as far as I can tell, on the basis of embodied cognition suggesting that metaphorical representations are hooked up to perceptual systems and, though they aren’t explicit about this, they also seem to suggest that this connection is instantiated in such a way as to make people perceive the world incorrectly. That is to say that thinking about a time they behaved ethically or unethically ought to make people’s perceptions of the brightness of the world less accurate, which is a rather strange thing to predict if you ask me. In any case, 40 subjects were asked to think about a time they were ethical or unethical (so 20 per group), and then to estimate the brightness of the room they were in, from 1 to 7. The mean brightness rating of the ethical group was 5.3, and the rating in the unethical group was 4.7. Success; it seemed that metaphors really are embodied in people’s perceptual systems.
Not content to rest on that empirical success, Banerjee et al. (2012) pressed forward with a second study to examine whether subjects recalling ethical or unethical actions were more likely to prefer objects that produced light (like a candle or a flashlight), relative to objects which did not (such as an apple or a jug). Seventy-four students were again split into two groups, asked to recall an ethical or unethical action in their life, asked to indicate their preference for the objects, and asked to estimate the brightness of the room in watts. The subjects in the unethical condition again estimated the room as being dimmer (M = 74 watts) than the ethical group did (M = 87 watts). The unethical group also tended to show a greater preference for light-producing objects. The authors suggest that this might be the case either because (a) the subjects thought the room was too dim, or (b) participants were trying to reduce their negative feelings of guilt about acting unethically by making the room brighter. This again sounds like a rather peculiar type of connection to posit (the connection between guilt and wanting things to be brighter), and it manages to miss anything resembling a viable functional account for what I think the authors are actually looking at (but more on that in a minute).
The second paper comes to us from Schnall, Benton, & Harvey (2008), and it examines an aspect of the disgust/morality connection. The authors noted that previous research had found a connection between increasing feelings of disgust and more severe moral judgments, and they wanted to see if they could get that connection to run in reverse: specifically, they wanted to test whether priming people with cleanliness would cause them to deliver less-severe moral judgments about the immoral behaviors of others. The first experiment involved 40 subjects (20 per cell seemed to be a popular number) who were asked to complete a scrambled-sentence task, with half of the subjects given neutral sentences and the other half sentences related to cleanliness. Immediately afterwards, they were asked to rate the severity of six different actions typically judged to be immoral on a 10-point scale. On average, the participants primed with the cleanliness words rated the scenarios as being less wrong (M = 5) than those given neutral primes (M = 5.8). While the overall difference was significant, only one of the six actions was rated as being significantly different between conditions, despite all showing the same pattern between conditions. In any case, the authors suggested that this may be due to the disgust component of moral judgments being reduced by the primes.
To test this explanation, the second experiment involved 44 subjects watching a scene from Trainspotting to induce disgust, and then having half of them wash their hands immediately afterwards. Subjects were then asked to rate the same set of moral scenarios. The group that washed their hands again had a lower overall rating of immorality (M = 4.7), relative to the group that did not (M = 5.3), with the same pattern as experiment 1 emerging. To explain this finding, the authors say that moral cleanliness is more than a metaphor (restating their finding) and then reference the idea that humans are trying to avoid “animal reminder” disgust, which is a pretty silly idea for a number of reasons that I need not get into here (the short version is that it doesn’t sound like the type of thing that does anything useful in the first place).
Both studies, it seems, make some novel predictions and present a set of results that might not automatically occur to people. Novelty only takes us so far, though: neither study seems to move our understanding of moral judgments forward much, if at all, and neither one even manages to put forth a convincing explanation for its findings. Taking these results at face value (with such small sample sizes, it can be hard to say whether these are definitely ‘real’ effects, and some research on priming hasn’t been replicating so well these days), there might be some interesting things worth noting here, but the authors don’t manage to nail down what those things are. Without going into too much detail, the first study seems to be looking at what would be a byproduct of a system dedicated to assessing the risk of detection and condemnation for immoral actions. Simply put, the risks involved in immoral actions go down as the odds of being identified do, so when something lowers the odds of being detected – such as it being dark, or the anonymity that something like the internet or a mask can provide – one could expect people to behave in a more immoral fashion as well.
In terms of the second study, the authors would likely be looking at another byproduct, this time of a system designed to avoid the perception of associations with morally-blameworthy others. As cleaning oneself can do things like remove evidence of moral wrongdoing, and thus lower the odds of detection and condemnation, one might feel a slightly reduced pressure to morally condemn others (as there is the perception of less concrete evidence of an association). With respect to the idea of detection and condemnation, then, both studies might be considered to be looking at the same basic kind of byproduct. Of course, phrased in this light (“here’s a relatively small effect that is likely the byproduct of a system designed to do other things and probably has little to no lasting effect on real-world behavior”), neither study seems terribly “trailblazing”. For a journal that can boast about receiving roughly 3,000 submissions a year and accepting only 11% of them for publication, I would think they could pass on such submissions in favor of research to which the label “groundbreaking” or “innovative” could be more accurately applied (unless these actually were the most groundbreaking of the bunch, that is). It would be a shame for any journal if genuinely good work was passed over because it seemed “too obvious” in favor of research that is cute, but not terribly useful. It also seems silly that which journal one’s research is published in matters in the first place, career-wise, but so does washing your hands in a bright room so as to momentarily reduce the severity of moral judgments to some mild degree.
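As an aside on those sample sizes: with 20 subjects per cell, a study has only about a one-in-three chance of detecting even a medium-sized effect. The quick simulation below is an illustrative sketch, not an analysis of either paper – the group size (20) and effect size (Cohen’s d = 0.5) are assumptions of mine, since neither paper reports a standardized effect size in the text above.

```python
# A quick power simulation for a two-sample t-test.
# Assumptions (mine, not the papers'): 20 subjects per group and a
# medium effect size (Cohen's d = 0.5).
import numpy as np
from scipy import stats

def simulate_power(n_per_group=20, effect_size=0.5, n_sims=5000,
                   alpha=0.05, seed=0):
    """Estimate the power of a two-sample t-test by simulating many
    experiments and counting how often p falls below alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treatment)
        if p < alpha:
            hits += 1
    return hits / n_sims

power = simulate_power()
print(f"Estimated power with 20 per group, d = 0.5: {power:.2f}")
```

Under these assumptions the estimated power comes out to roughly a third, meaning most studies of this size would miss a real medium-sized effect entirely – and, by the same token, a significant result from such a study is weaker evidence than it might look.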
References: Banerjee, P., Chatterjee, P., & Sinha, J. (2012). Is it light or dark? Recalling moral behavior changes perception of brightness. Psychological Science, 23(4), 407-409. PMID: 22395128
Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219-1222. PMID: 19121126