“Couldn’t-Even-Possibly-Be-So Stories”: Just-World Theory

While I was reading over a recent paper by Callan et al (2012) about the effects that a victim’s age has on people’s moral judgments, I came across something that’s particularly – and disappointingly – rare in most of the psychological literature: the authors explicitly thinking about possible adaptive functions of our psychology. That is to say, the authors were considering what adaptive problem(s) some particular aspect of human psychology might be designed to solve. In that regard, I would praise the authors for giving these important matters some thought. Their major stumbling point, however, is that the theory the authors reference, just-world theory, suggests an implausible function; one that couldn’t even potentially be correct.

Just-world theory, as presented by Hafer (2000), is a very strange kind of theory. It begins with the premise that people have a need to believe in a just or fair world, so people think that others shouldn’t suffer or gain unless they did something to deserve it. More precisely, “good” people are supposed to be rewarded and “bad” people are supposed to be punished, or something like that, anyway. When innocent people suffer, then, this belief is supposedly “threatened”, so, in order to remove the threat and maintain their just-world belief, people derogate the victim. This makes the victim seem less innocent and more deserving of their suffering, so the world can again be viewed as just.

I’ll bet that guy made Santa’s “naughty” list.

Phrased in terms of adaptationist reasoning, just-world theory would go something like this: humans face the adaptive problem of maintaining a belief in a just world in the face of contradictory evidence. People solve this problem with cognitive mechanisms that function to alter that contradictory evidence into confirmatory evidence. The several problems with this suggestion ought to jump out clearly at this point, but let’s take them one at a time and examine them in some more depth. The first issue is that the adaptive problem being posited here isn’t one; indeed, it couldn’t be. Holding a belief, regardless of whether that belief is true or not, is a lot like “feeling good”, in that neither of them, on their own, actually does anything evolutionarily useful. Sure, beliefs (such as “Jon is going to attack me”) might motivate you to execute certain behaviors (running away from Jon), but it is those behaviors which are potentially useful; not the beliefs per se. Natural selection can only “see” what you do; not what you believe or how you feel. Accordingly, maintaining a belief could not even potentially be an adaptive problem.

But let’s assume for the moment that maintaining a belief could be a possible adaptive problem. Even granting this, just-world theory runs directly into a second issue: why would contradictory evidence “threaten” that belief in the first place? It seems perfectly plausible that an individual could simply believe whatever it was important to believe and be done with it, rather than trying to rationalize that belief to ensure it’s consistent with other beliefs or accurate. For instance, say, for whatever reason, it’s adaptively important for people to believe that anyone who leaves their house at night will die. Then someone who believes this observes their friend Max leaving the house at night and returning very much alive. The observer in this case could, it seems, go right along believing that anyone who leaves their house at night will die without also needing to believe either that (a) Max didn’t leave his house at night or (b) Max isn’t alive. While the observer might also believe one or both of those things, whether or not they did would seem to be irrelevant.

On a related note, it’s also worth noting that just-world theory seems to imply that the adaptive goal here is to hold an incorrect belief – that “the world” is just. Now there’s nothing implausible about the suggestion that an organism can be designed to be strategically wrong in certain contexts; when it comes to persuading others, for instance, being wrong can be an asset at times. When you aren’t trying to persuade others of something, however, being wrong will be, at best, neutral and, at worst, exceedingly maladaptive. So what does Hafer (2000) suggest the function of such incorrect beliefs might be?

By [dissociating from an innocent victim], observers can at least be comforted that although some people are unjustly victimized in life, all is right with their own world and their own investments in the future (emphasis mine)

As I mentioned before, this explanation couldn’t even possibly work, as “feeling good” isn’t one of those things that does anything useful by itself. As such, maintaining an incorrect belief for the purposes of feeling good fails profoundly as a proper explanation for any behavior.

Not only is the world not just, it isn’t tuned into your thought frequencies either, no matter how strongly you incorrectly believe it is.

On top of all the aforementioned problems, there’s also a major experimental problem: just-world theory only seems to have been tested in one direction. Without getting too much into the methodological details of her studies, Hafer (2000) found that when a victim was “innocent”, subjects who were primed for thinking about their long-term plans were slightly more likely to blame the victim for their negative life outcome, derogate them, and dissociate from them (i.e. “they should have been more cautious” and “what happened to them is not likely to happen to me”), relative to subjects who were not primed for the long term. Hafer’s interpretation of these results was that, at least in the long-term condition, the innocent victim threatened the just-world belief, so people in turn perceived the victim as less innocent.

While the innocent-victims-being-blamed angle was examined, Hafer (2000) did not examine the opposite context: that of the undeserving recipient. Let’s say there was someone you really didn’t like, and you found out that this someone recently came into a large sum of money through an inheritance. Presumably, this state of affairs would also “threaten” your just-world belief; after all, bad people are supposed to suffer, not benefit, so you’d be left with a belief-threatening inconsistency. If we presented subjects with a similar scenario, would we expect them to “protect” their just-world belief by reframing their disliked recipient as a likable and deserving one? While I admittedly have no data bearing on that point, my intuitive answer to the question would be a resounding “probably not”; they’d probably just view their rival as a richer pain in the ass after receiving the cash. It’s not as if intuitions about who’s innocent and guilty seem to shift simply on the basis of received benefits and harms; the picture is substantially more nuanced.

“If he was such a bad guy, why was he pepper-spraying people instead of getting sprayed?”

To reiterate, I’m happy to see psychologists thinking about functions when developing their research; while such a focus is by no means sufficient for generating good research or sensibly interpreting results (as we’ve just seen), I think it’s an important step in the right direction. The next major step would be for psychological researchers to learn how to better differentiate plausible from implausible functions, and for that they need evolutionary theory. Without evolutionary theory, ostensible explanations like “feeling good” and “protecting beliefs” can be viewed as acceptable and, in some cases, even as useful, despite their being anything but.

References: Callan, M., Dawtry, R., & Olson, J. (2012). Justice motive effects in ageism: The effects of a victim’s age on observer perceptions of injustice and punishment judgments. Journal of Experimental Social Psychology, 48(6), 1343-1349. DOI: 10.1016/j.jesp.2012.07.003

Hafer, C. (2000). Investment in long-term goals and commitment to just means drive the need to believe in a just world. Personality and Social Psychology Bulletin, 26, 1059-1073.

Now You’re Just A Moral Rule I Used To Know

At one point in my academic career I found myself facing something resembling an ethical dilemma: I had what I felt was a fantastic idea for a research project, hindered only by the fact that someone had conducted a very similar experiment a few years prior to my insight. To those of you unfamiliar with the way research is conducted, this might not seem like too big of a deal; after all, good science requires replications, so it seems like I should be able to go about my research anyway with no ill effects. Somewhat unfortunately for me – and the scientific method more generally – academic journals are not often very keen on publishing replications, nor are dissertation committees and other institutions that might eventually examine my resume impressed by them. There was, however, a possible “out” for me in this situation: I could try and claim ignorance. Had I not read the offending paper, or if others didn’t know about it (“it” being either the paper’s existence or the knowledge that I read it), I could have more convincingly presented my work as entirely novel. All I would have to do is not cite the paper and write as if no work had been conducted on the subject, much like Homer Simpson telling himself, “If I don’t see it, it’s not illegal” as he runs a red light.

Plan B for making the work novel again is slightly messier…

Since I can’t travel back in time and unread the paper, presenting the idea as completely new (which might mean more credit for me) would require that I convince others that I had not read it. Attempting that, however, comes with a certain degree of risk: if other people found out that I had read the paper and failed to give proper credit, my reputation as a researcher would likely suffer as a result. Further, since I know that I read the paper, that knowledge might unintentionally leak out, resulting in my making an altogether weaker claim of novelty. Thankfully (or not so thankfully, depending on your perspective), there’s another way around this problem that doesn’t involve time travel; my memory for the study could simply “fail”. If I suddenly were no longer aware of the fact that I had read the paper, if those memories no longer existed or existed but could not be accessed, I could honestly claim that my research was new and exciting, making me that much better off.

Some new research by Shu and Gino (2012) asked whether our memories might function in this fashion, much like the Joo Janta 200 Super-Chromatic Peril-Sensitive Sunglasses found in The Hitchhiker’s Guide to the Galaxy series: darkening at the first sign of danger, preventing the wearer from noticing and allowing them to remain blissfully unaware. In this case, however, the researchers asked whether engaging in an immoral action – cheating – might subsequently result in the actor’s inability to remember other moral rules. Across four experiments, when subjects were given an opportunity to act less than honestly, either through commission or omission, they reported remembering fewer previously read moral – but not neutral – rules.

In the first of these experiments, participants read both an honor code and a list of requirements for obtaining a driver’s license, and they were informed that they would be answering questions about the two later. The subjects were then given a series of problems to try and solve in a given period of time, with each correct answer netting a small profit. In one of the conditions, the experimenter tallied the number of correct answers for each participant and paid them accordingly; in the other condition, subjects noted how many answers they got right and paid themselves privately, allowing subjects to misrepresent their performance for financial gain. Following their payment, subjects were then given a memory task for the previously-read information. When given the option to cheat, about a third of the subjects took advantage of the opportunity, reporting that they had solved an additional five of the problems, on average. That some people cheated isn’t terribly noteworthy; what was noteworthy is that, when the subjects were tested on their recall of the information they had initially read, those who cheated tended to remember fewer items concerning the honor code than those who did not (2.33 vs 3.71, respectively), but remembered a similar number of items about the license rules (4 vs 3.79). The cheaters’ memories seemed to be, at least temporarily, selectively impaired for moral items.

There goes that semester of business ethics…

Of course, that pattern of results is open to a plausible alternative explanation: people who read the moral information less carefully were also more likely to cheat (or people who were more interested in cheating had less of an interest in moral information). The second experiment sought to rule that explanation out. In the follow-up study, subjects initially read two moral documents: the honor code and the Ten Commandments. The design was otherwise similar, minus one key detail: subjects took two memory tasks, one before they had the opportunity to cheat and another one after the fact. Before there was any option for dishonest behavior, subjects’ performance on their memory for moral items was similar regardless of whether they would later cheat or not (4.33 vs 4.44, respectively). After the problem-solving task, however, the subjects who cheated subsequently remembered fewer moral items about the second list they read (3.17), relative to those who did not end up cheating (4.21). The decreased performance on the memory task seemed to be specific to the subjects who cheated, but only after they had acted dishonestly; not before.

The third experiment shifted gears, looking instead at acts of omission rather than outright lying. First, subjects were asked to read the honor code as before, with one group of subjects being informed that the memory task they would later complete would yield an additional $1.50 of payment for each correct answer. This gave the subjects some incentive to remember and accurately report their knowledge of the honor code later (to try and rule out the possibility that, previously, subjects had remembered the same amount of moral information, but just neglected to report that they did). Next, subjects were asked to solve some SAT problems on a computer, and each correct answer would, as before, net the subject some additional payment. However, some subjects were informed that the program they were working with contained a glitch that would cause the correct answer to be displayed on the screen five seconds after the problem appeared unless they hit the space bar. The results showed that, of the subjects who knew the correct answer would pop up on the screen, almost all of them (minus one very moral subject) made use of that glitch at least once during the experiment and, as before, the cheaters recalled fewer moral items than the non-cheating groups (4.53 vs 6.41). Further, while the incentives for accurate recall were effective in the non-cheating group (they remembered more items when they were paid for each correct answer), this was not the case for the cheaters: whether they were being paid to remember or not, the cheaters still remembered about the same amount of information.

Forgetting about the fourth experiment for now, I’d like to consider why we might expect to see this pattern of results. Shu and Gino (2012) suggest that such motivated forgetting might help in “reducing dissonance and regret”, to maintain one’s “self-image”. Such explanations are not even theoretically plausible functions for this kind of behavior, as “feeling good”, in and of itself, doesn’t do anything useful. In fact, forgetting moral rules could be harmful, to the extent that it might make one more likely to commit acts that others would morally condemn, resulting in increased social sanctions or physical aggression. However, if such ignorance was used strategically, it might allow the immoral actor in question to mitigate the extent of that condemnation. That is to say, committing certain immoral acts out of ignorance is seen as being less deserving of punishment than committing them intentionally, so if you can persuade others that you just made a mistake, you’d be better off.

“Oops?”

While such an explanation might be at least plausible, there are some major issues with it, namely that cheating-contingent rule forgetting is, well, contingent on the fact that you cheated. Some cognitive system needs to know that you cheated in the first place in order to start suppressing your memory for moral rules, and if that system knows that a moral rule has been violated, it may leak that information into the world (in other words, it might cause the same problem that it was hypothesized to solve). Relatedly, suppressing memory accessibility for moral rules more generally – specifically, moral rules unrelated to the current situation – probably won’t do you much good when it comes to persuading others that you didn’t know the moral rule you actually broke, which is what they’ll likely be condemning you for. If you’re caught stealing, forgetting that adultery is immoral won’t help out (and claiming that you didn’t know stealing was immoral is itself not the most believable of excuses).

That said, the function behind the cognitive mechanisms generating this pattern of results likely does involve persuasion at its conceptual core. That people have difficulty accessing moral information after they’ve done something less than moral probably represents some cognitive systems for moral condemnation becoming less active (one side-effect of which is that your memory for moral rules isn’t accessed, as one isn’t trying to find a moral violation), while systems for defending against moral condemnation come online. Indeed, as the fourth (unreviewed) study found, even moral words, not just rules, appeared to be less accessible. However, this was only the case for cheaters who had been exposed to an honor code; when there was less of a need to defend against condemnation (when one didn’t cheat or hadn’t been exposed to an honor code), those systems stayed relatively dormant.

References: Shu, L., & Gino, F. (2012). Sweeping dishonesty under the rug: How unethical actions lead to forgetting of moral rules. Journal of Personality and Social Psychology, 102(6), 1164-1177. DOI: 10.1037/a0028381

The Fight Over Mankind’s Essence

All traits of biological organisms require some combination and interaction of genetic and non-genetic factors to develop. As Tooby and Cosmides put it in their primer:

Evolutionary psychology is not just another swing of the nature/nurture pendulum. A defining characteristic of the field is the explicit rejection of the usual nature/nurture dichotomies — instinct vs. reasoning, innate vs. learned, biological vs. cultural. What effect the environment will have on an organism depends critically on the details of its evolved cognitive architecture.

The details of that cognitive architecture are, to some extent, what people seem to be referring to when they use the word “innate”, and figuring out the details of that architecture is a monumental task indeed. For some reason, this task of figuring out what’s “innate” also draws some degree of what I feel is unwarranted hostility, and precisely why it does is a matter of great interest. One might posit that some of this hostility is due to the term itself. “Innate” seems to be a terribly problematic term for the same two reasons that most other contentious terms are: people can’t seem to agree on a clear definition for the word or a context to apply it in, but they still use it fairly often despite that. Because of this, interpersonal communication can get rather messy, much like two teams trying to play a sport in which each is playing the game under a different set of rules; a philosophical game of Calvinball. I’m most certainly not going to be able to step into this debate and provide the definition for “innate” that all parties will come to intuitively agree upon and use consistently in the future. Instead, my goal is to review two recent papers that examined the contexts in which people’s views of innateness vary.

“Just add environment!” (Warning: chicken outcome will vary with environment)

Anyone with a passing familiarity with the debates that tend to surround evolutionary psychology will likely have noticed that most of these debates tend to revolve around issues of sex differences. Further, this pattern tends to hold whether it’s a particular study being criticized or the field more generally; research on sex differences just seems to catch a disproportionate amount of the criticism, relative to most other topics, and that criticism can often get leveled at the entire field by association (even if the research is not published in an evolutionary psychology journal, and even if the research is not conducted by people using an evolutionary framework). While this particular observation of mine is only an anecdote, it seems that I’m not alone in noticing it. The first of the two studies on attitudes towards innateness was conducted by Geher & Gambacorta (2010) on just this topic. They sought to determine the extent to which attitudes about sex differences might be driving opposition to evolutionary psychology and, more specifically, the degree to which those attitudes might be correlated with being an academic, being a parent, or being politically liberal.

Towards examining this issue, Geher & Gambacorta (2010) created questions aimed at assessing people’s attitudes in five domains: (1) human sex differences in adulthood, (2) human sex differences in childhood, (3) behavioral sex differences in chickens, (4) non-sex related human universals, and (5) behavioral differences between dogs and cats. Specifically, the authors asked about the extent to which these differences were due to nature or nurture. As mentioned in the introduction, this nature/nurture dichotomy is explicitly rejected in the conceptual foundations of evolutionary psychology and is similarly rejected as useful by the authors. This dimension was merely used in order to capture the more common attitudes about the nature of biological and environmental causation, where the two are often seen as fighting for explanatory power in some zero-sum struggle.

Of the roughly 270 subjects who began the survey, not all of them completed every section. Nevertheless, the initial sample included 111 parents and 160 non-parents, 89 people in academic careers and 182 non-academics, and the entire sample was roughly 40 years old and mildly politically liberal, on average. The study found that political orientation was correlated with judgments of whether sex differences in humans (children and adults) were due to nature or environment, but not with the other three domains (cats/dogs, chickens/hens, or human universals): specifically, those with more politically liberal leanings were also more likely to endorse environmental explanations for human sex differences. Across other domains there were some relatively small and somewhat inconsistent effects, so I wouldn’t make much of them just yet (though I will mention that women’s studies and sociology fields seemed consistently more inclined to chalk each domain – excepting the differences between cats and dogs – up to nurture, relative to other fields; I’ll also mention their sample was small). There was, however, a clear effect that was not discussed in the paper: subjects were more likely to chalk non-human animal behavior up to nature, relative to human behavior, and this effect seemed more pronounced with regard to sex differences specifically. With these findings in mind, I would echo the conclusion of the paper that there appears to be some political, or, more specifically, moral dimension to these judgments of the relative roles of nature and nurture. As animal behavior tends to fall outside of the traditional human moral domain, chalking their behavior up to nature seemed more palatable to the subjects.

See? Men and women can both do the same thing on the skin of a lesser beast.

The next paper is a new release from Knobe & Samuels (2013). You might remember Knobe from his other work in asking people slightly different questions and getting vastly different responses, and it’s good to see he’s continuing on with that proud tradition. Knobe & Samuels begin by asking the reader to imagine how they’d react to the following hypothetical proposition:

Suppose that a scientist announced: ‘I have a new theory about the nature of intention. According to this theory, the only way to know whether someone intended to bring about a particular effect is to decide whether this effect truly is morally good or morally bad.’

The authors predict that most people would reject this piece of folk psychology made explicit; value judgments are supposed to be a different matter entirely from tasks like assessing intentionality or innateness, yet these judgments do not appear to be truly independent of each other in practice. Morally negative outcomes are rated as being more intentional than morally positive ones, even if both are brought about as a byproduct of another goal. Knobe & Samuels (2013) sought to extend this line of research into the realm of attitudes about innateness.

In their first experiment, Knobe & Samuels asked subjects to consider an infant born with a rare genetic condition. This condition ensures that if a baby breastfeeds in the first two weeks of life it will either have extraordinarily good math abilities (condition one) or exceedingly poor math skills (condition two). While the parents could opt to give the infant baby formula that would ensure the baby would just turn out normal with regard to its math abilities, in all cases the parents were said to have opted to breastfeed, and the child developed accordingly. When asked about how “innate” the child’s subsequent math ability was, subjects seemed to feel that the baby’s abilities were more innate (4.7 out of 7) when they were good, relative to when those abilities were poor (3.4). In both cases, the trait depended on the interaction of genes and environment and for the same reason, yet when the outcome was negative, this was seen as being less of an innate characteristic. This was followed up by a second experiment where a new group of subjects were presented with a vignette describing a fake finding about human genes: if people experienced decent treatment (condition one) or poor treatment (condition two) by parents at least sometimes, then a trait would reliably develop. Since almost all people do experience decent or poor treatment by their parents on at least some occasions, just about everyone in the population comes to develop this trait. When asked about how innate this trait was, again, the means through which it developed mattered: traits resulting from decent treatment were rated as more innate (4.6) than traits resulting from poor treatment (2.7).

Skipping two other experiments in the paper, the final study presented these cases either individually, with each participant seeing only one vignette as before, or jointly, with some subjects seeing both versions of the questions (good/poor math abilities, decent/poor treatment) one immediately after the other, with the relevant differences highlighted. When subjects saw the conditions independently, the previous effects were pretty much replicated, if a bit weakened. However, even seeing these cases side-by-side did not completely eliminate the effect of morality on innateness judgments: when the breastfeeding resulted in worse math abilities this was still seen as being less innate (4.3) than the better math abilities (4.6) and, similarly, when poor treatment led to a trait developing it was viewed as less innate (3.8) than when it resulted from better treatment (3.9). Now, these differences only reached significance because of the large sample size in the final study, as they were very, very small, so I again wouldn’t make much of them, but I do still find it somewhat surprising that there were any differences to talk about at all.

Remember: if you’re talking small effects, you’re talking psychology.

While these papers are by no means the last word on the subject, they represent an important first step in understanding the way that scientists and laypeople alike represent claims about human nature. Extrapolating these results a bit, it would seem that strong opinions about research in evolutionary psychology are held, at least to some extent, for reasons that have little to do with the field per se. This isn’t terribly surprising, as it’s been frequently noted that many critics of evolutionary psychology have a difficult time correctly articulating the theoretical commitments of the field. Both studies do seem to suggest that moral concerns play some role in the debate, but precisely why the moral dimension seems to find itself represented in the debate over innateness is certainly an interesting matter that neither paper really gets into. My guess is that it has something to do with the perception that innate behaviors are less morally condemnable than non-innate ones (hinting at an argumentative function), but that really just pushes the question back a step without answering it. I look forward to future research on this topic – and research on explanations, more generally – to help fill in the gaps of our understanding of this rather strange phenomenon.

References: Geher, G., & Gambacorta, D. (2010). Evolution is not relevant to sex differences in humans because I want it that way! Evidence for the politicization of human evolutionary psychology. EvoS: The Journal of the Evolutionary Studies Consortium, 2, 32-47.

Knobe, J., & Samuels, R. (2013). Thinking like a scientist: Innateness as a case study. Cognition, 126(1), 72-86. DOI: 10.1016/j.cognition.2012.09.003

(Not So) Simple Jury Persuasion: Beauty And Guilt

It should come as no shock to anyone, really, that people have all sorts of interesting cognitive biases. Finding and describing these biases would seem to make up a healthy portion of the research in psychology, and one can really make a name for themselves if the cognitive bias they find happens to be particularly cute. Despite this well-accepted description of the goings-on in the human mind (it’s frequently biased), most research in the field of psychology tends to overlook, explicitly or implicitly, those ever-important “why” questions concerning said biases; the paper by Herrera et al (2012) that I’ll be writing about today (and that The Jury Room covered recently) is no exception, but we’ll deal with that in a minute. Before I get to this paper, I would like to talk briefly about why we should expect cognitive biases in the most general terms.

Hypothesis 1: Haters gonna hate?

When it comes to the way our mind perceives and processes information, one might consider two possible goals for those perceptions: (1) being accurate – i.e. perceiving the world in an “objective” or “correct” way – or (2) doing (evolutionarily) useful things. A point worth bearing in mind is that the latter goal is the only possible route by which any cognitive adaptation could evolve; a cognitive mechanism that did not eventually result in a reproductive advantage would, unsurprisingly, not be likely to spread throughout the population. That’s most certainly not to say that accuracy doesn’t matter; it does, without question. However, accuracy is only important insomuch as it leads to doing useful things. Accuracy for accuracy’s sake is not even a potential selection pressure that could shape our psychology. While, generally speaking, having accurate perceptions can often lead towards adaptive ends, when those two goals are in conflict, we should expect doing useful things to win every time, and, when that happens, we should see a cognitive bias as the result.

A quick example can drive this point home: your very good friend finds himself in conflict with a complete stranger. You have arrived late to the scene, so you only have your friend’s word and the word of the stranger as to what’s going on. If you were an objectively accurate type, you might take the time to listen to both of their stories carefully, do your best to figure out how credible each party is, find out who was harmed and how much, and find the “real” victim in the altercation. Then, you might decide whether or not to get involved on the basis of that information. Now that may sound all well and good, but if you opt for this route you also run the risk of jeopardizing your friendship to help out a stranger, and losing the benefits of that friendship is a cost. Suffering that cost would be, all things considered, an evolutionarily “bad” thing, even if uninvolved parties might consider it to be the morally correct action (skirting for the moment the possibility of costs that other parties might impose, though avoiding those could easily be fit into the “doing useful things” side of the equation). This suggests that, all else being equal, there should be some bias that pushes people towards siding with their friends, as siding against them is a costlier alternative.

So where all this leads us is to the conclusion that when you see someone proposing that a cognitive bias exists, they are, implicitly or explicitly, suggesting that there is a conflict between accuracy and some cost of that accuracy, be that conflict over behaving in a way that generates an adaptive outcome, trade-offs between the cognitive costs of computation and accuracy, or anything else. With that out of the way, we can now consider the paper by Herrera et al (2012), which purports to find a strange cognitive bias in the interaction between (a) perceptions of a woman’s credibility, responsibility, and control in a case of domestic violence against her, (b) her physical attractiveness, and (c) her prototypicality as a victim. According to their results, attractiveness might not always be a good thing.

Though, let’s face it, attractiveness is, on the whole, typically a good thing.

In their study, Herrera et al (2012) recruited a sample of 169 police officers (153 of whom were men) from various regions of Spain. They were divided into four groups, each of which read a different vignette about a hypothetical woman who had filed a self-defense plea for killing her husband by stabbing him in the back several times, citing a history of domestic abuse and a fear that he would have killed her during an argument. The woman in these stories – Maria – was either described as attractive or unattractive (no pictures were actually included) along the following lines: thick versus thin lips, smooth features versus stern and jarring ones, straight blonde hair versus dark bundled hair, and slender versus non-slender appearance. In terms of whether Maria was a prototypical battered woman, she was either described as having two children, no job with an income, hiding her face during the trial, being poorly dressed, and timid in answering questions, or as having no children, a well-paying job, being well dressed, and resolute in her interactions.

Working under the assumption that these manipulations are valid (I feel they would have done better to have used actual pictures of women rather than brief written descriptions, but they didn’t), the authors found an interesting interaction: when Maria was attractive and prototypical, she was rated as being more credible than when she was unattractive and prototypical (4.18 vs 3.30 out of 7). The opposite pattern held for when Maria was not prototypical; here, attractive Maria was rated as being less credible than her unattractive counterpart (3.72 vs 3.85). So, whether attractiveness was a good or a bad thing for Maria’s credibility depended on how well she otherwise met some criteria for your typical victim of domestic abuse. On the other hand, more responsibility was attributed to Maria for the purported abuse when she was attractive overall (5.42 for attractive, 5.99 for unattractive).
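Since that crossover pattern is easy to lose in prose, here are the reported credibility means arranged as a simple 2×2. The numbers are the ones given above; the arrangement and labels are just my own illustration, not anything taken from the paper’s materials:

```python
# Reported credibility ratings (out of 7), arranged by prototypicality and attractiveness.
credibility = {
    ("prototypical", "attractive"): 4.18,
    ("prototypical", "unattractive"): 3.30,
    ("non-prototypical", "attractive"): 3.72,
    ("non-prototypical", "unattractive"): 3.85,
}

for (prototypicality, looks), rating in credibility.items():
    print(f"{prototypicality:17} {looks:12} {rating:.2f}")
```

Laid out this way, attractiveness helps Maria’s credibility when she otherwise fits the prototype and hurts it slightly when she does not, which is the interaction the authors are trying to explain.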

Herrera et al (2012) attempt to explain the attractiveness portion of their results by suggesting that attractiveness might not fit in with the prototypical picture of a female victim of domestic abuse, which results in less lenient judgments of her behavior. It seems to me this explanation could have been tested with the data they collected, but they either failed to do so or did and did not find significant results. More to the point, this explanation is admittedly strange, given that attractive women were also rated as more credible when they were otherwise prototypical, and the authors’ proximate explanation should, it seems, predict precisely the opposite pattern in that regard. Perhaps they might have ended up with a more convincing explanation for their results had their research been guided by some theory as to why we should see these biases with regard to attractiveness (i.e. what the conflict in perception is being driven by), but it was not.

I mean, it seems like a handicap to me, but maybe you’ll find something worthwhile…

There was one final comment in the paper I would like to briefly consider, regarding what the authors consider two fundamental due process requirements in cases of women’s domestic abuse: (1) the presumption of innocence on the part of the woman making the claim of abuse and (2) the woman’s right to a fair hearing without the risk of revictimization; revictimization, in this case, referring to instances where the woman’s claims are doubted and her motives are called into question. What is interesting about that claim is that it would seem to set up an apparently unnoticed or unmentioned double standard: it implies that women making claims of abuse are supposed to be believed by default, which would seem to do violence to the presumption of innocence that the alleged perpetrator is supposed to enjoy. Given that part of the focus of this research is on the matter of credibility, this unmentioned double standard seems out of place. This apparent oversight might have to do with the fact that this research was only examining moral claims made by a hypothetical woman, rather than also examining a competing claim made by a man, but it’s hard to say for sure.

References: Herrera, A., Valor-Segura, I., & Expósito, F. (2012). Is Miss Sympathy a credible defendant alleging intimate partner violence in a trial for murder? The European Journal of Psychology Applied to Legal Context, 4, 179-196.

No, Really; Group Selection Still Doesn’t Work

Back in May, I posed a question concerning why an organism would want to be a member of a group: on the one hand, an organism might want to join a group because, ultimately, that organism calculates that joining a group would likely lead to benefits for itself that the organism would not otherwise obtain; in other words, organisms would want to join a group for selfish reasons. On the other hand, an organism might want to join a group in order to deliver benefits to the entire group, not just to itself. In this latter case, the organism would be joining the group for, more or less, altruistic reasons. For reasons that escape my current understanding, there are people who continue to endorse the second reason for group-joining as plausible, despite it being anathema to everything we currently know about how evolution works.

The debate over whether adaptations for cooperation and punishment were primarily forged by selection pressures at the individual or group level has gone on for so long because, in part, much of the evidence that was brought to bear on the matter could have been viewed as being consistent with either theory – if one was creative enough in their interpretation of the results, anyway. The results of a new study by Krasnow et al (2012) should do one of two things to the group selectionists: either make them reconsider their position or make them get far more creative in their interpreting.

Though I think I have a good guess which route they’ll end up taking.

The study by Krasnow et al (2012) took the sensible route towards resolving the debate: they created contexts where the two theories make opposing predictions. If adaptations for social exchange (cooperation, defection, punishment, reputation, etc.) were driven primarily by self-regarding interests (as the social exchange model holds), information about how your partner behaved towards you should be more relevant than information about how your partner behaved towards others when you’re deciding how to behave towards them. In stark contrast, a group selection model would predict that those two types of information should be of similar value when deciding how to treat others, since the function of these adaptations should be to provide group-wide gains; not selfish ones.

These contexts were created across two experiments. The first experiment was designed in order to demonstrate that people do, in fact, make use of what the authors called “third-party reputation”, defined as a partner’s reputation for behaving a certain way towards others. Subjects were brought into the lab to play a trust game with a partner who, unbeknownst to the subjects, was a computer program and not a real person. In a trust game, a player can either not trust their partner, resulting in an identical mid-range payoff for both (in this case, $1.20 for both), or trust their partner. If the first player trusts, their partner can either cooperate – leading to an identical payoff for both players that’s higher than the mid-range payoff ($1.50 for both) – or defect – leading to an asymmetrical payoff favoring the defector ($1.80 and $0.90). In the event that the player trusted and their partner defected, the player was given an option to pay to punish their partner, resulting in both their payoffs sitting at a low level ($0.60 for both).
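For readers who find payoff structures easier to parse as code than as prose, here is a minimal sketch of the four outcomes just described. The dollar amounts are the ones reported above; the function and its names are purely my own illustration, not anything from the authors’ materials:

```python
# Minimal sketch of the trust game payoffs described above (illustrative only).
def trust_game_payoffs(subject_trusts, partner_defects, subject_punishes):
    """Return (subject_payoff, partner_payoff) in dollars."""
    if not subject_trusts:
        return (1.20, 1.20)   # identical mid-range payoff for both
    if not partner_defects:
        return (1.50, 1.50)   # trust plus cooperation: both do better
    if subject_punishes:
        return (0.60, 0.60)   # costly punishment drags both payoffs down
    return (0.90, 1.80)       # defection favors the defecting partner

# Example: the subject trusts, the partner defects, and the subject punishes.
print(trust_game_payoffs(True, True, True))  # (0.6, 0.6)
```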

Before the subjects played this trust game, they were presented with information about their partner’s third-party reputation. This information came in the form of questions that their partner had ostensibly filled out earlier, which assessed the willingness of that partner to cheat given freedom from detection. Perhaps unsurprisingly, subjects were less willing to trust a partner who indicated they would be more likely to cheat, given a good opportunity. What this result tells us, then, is that people are perfectly capable of making use of third-party reputation information when they know nothing else about their partner. These results do not help us distinguish between group and individual-level accounts, however, as both models predict that people should act this way; that’s where the second study came in.

“Methods: We took 20 steps, turned, and fired”

The second study added in the crucial variable: first-party reputation, or your partner’s past behavior towards you. This information was provided through the results of two prisoner’s dilemma games that were visible to the subject, one of which was played between the subject and their partner and the other between the partner and a third party. This led to subjects encountering four kinds of partners: one who defected both on the subject and a third party, one who cooperated with both, and one who defected on one (either the subject or the third party) but cooperated with the other. Following these initial games, subjects then played a two-round trust game with their partners. This allowed the following question to be answered: when subjects have first-party reputation available, do they still make use of third-party reputation?

The answer could not have been a more resounding, “no”. When deciding whether they were going to trust their partner or not, the third-party reputation did not predict the outcome at all, whereas first-party reputation did, and, unsurprisingly, subjects were less willing to trust a partner who had previously defected on them. Further, a third-party reputation for cheating did not make subjects any more likely to punish their partner, though first-party reputation didn’t have much value in those predictions either. That said, the social exchange model does not predict that punishment should be enacted strictly on the grounds of being wronged; since punishment is costly it should only be used when subjects hope to recoup the costs of that punishment in subsequent exchanges. If subjects do not wish to renegotiate the terms of cooperation via punishment, they should simply opt to refrain from interacting with their partner altogether.

That precise pattern of results was borne out: when a subject was defected on and then punished the defector, that same subject was also likely to cooperate with their partner in subsequent rounds. In fact, they were just as likely to cooperate with their partner as they were in cases where the partner did not initially defect. It’s worth repeating that subjects did this while, apparently, ignoring how their partner had behaved towards anyone else. Subjects only seemed to punish the partner in order to persuade their partner to treat them better; they did not punish because their partner had hurt anyone else. Finally, first-party reputation, unlike third-party reputation, had an effect on whether subjects were willing to cooperate with their partner on their first move in the trust game. People were more likely to cooperate with a partner who had cooperated with them, irrespective of how that partner behaved towards anyone else.

Let’s see you work that into your group selection theory.

To sum up, despite group selection models predicting that subjects should make use of first- and third-party information equally, or at least jointly, they did not. Subjects only appeared to be interested in information about how their partner behaved towards others to the extent that such information might predict how their partner would behave towards them. However, since information about how their partner had behaved towards them is a superior cue, subjects made use of that first-party information when it was available to the exclusion of third-party reputation.

Now, one could make the argument that you shouldn’t expect to see subjects making use of information about how their partners behaved towards other parties because there is no guarantee that those other parties were members of the subject’s group. After all, according to group selection theories, altruism should only be directed at members of one’s own group specifically, so maybe these results don’t do any damage to the group selectionist camp. I would be sympathetic to that argument, but there are two big problems to be dealt with before I extend that sympathy: first, it would require that group selectionists give up all the previously ambiguous evidence they have said is consistent with their theory, since almost all of that research does not explicitly deal with a subject’s in-group either; they don’t get to recognize evidence only in cases where it’s convenient for their theory and ignore it when it’s not. The second issue is the one I raised back in May: “the group” is a concept that tends to lack distinct boundaries. Without nailing down this concept more concretely, it would be difficult to build any kind of stable theory around it. Once that concept had been developed more completely, then it would need to be shown that subjects will act altruistically towards their group (and not others) irrespective of the personal payoff for doing so; demonstrating that people act altruistically with the hopes that they will be benefited down the road from doing so is not enough.

Will this study be the final word on group selection? Sadly, probably not. On the bright side, it’s at least a step in the right direction.

References: Krasnow, M.M., Cosmides, L., Pederson, E.J., & Tooby, J. (2012). What are punishment and reputation for? PLOS ONE, 7

Altruism Is Not The Basis Of Morality

“Biologists call this behavior altruism, when we help someone else at some cost to ourselves. If you think about it, altruism is the basis of all morality. So the larger question is, Why are we moral?” – John Horgan [emphasis mine]

John Horgan, a man not exactly known as a beacon of understanding, recently penned the above thought, which expresses what I feel to be an incorrect sentiment. Before getting to the criticism of that point, however, I would like to first commend John for his tone in this piece: it doesn’t appear as outwardly hostile towards the field of evolutionary psychology as several of his past pieces have been. Sure, there might be the odd crack about “hand-wavy speculation” and the reminder about how he doesn’t like my field, but progress is progress; baby steps, and all that.

Just try and keep those feet pointed in the right direction and you’ll do fine.

I would also like to add at the outset that Horgan states that:

“Deceit is obviously an adaptive trait, which can help us (and many other animals) advance our interest” [Emphasis mine]

I find myself interested as to why Horgan seems to feel that deceit is obviously adaptive, but violence (in at least some of its various forms) obviously isn’t. Both certainly can advance an organism’s interests, and both generally advance those interests at the expense of other organisms. Given that Horgan seems to offer nothing in the way of insight into how he arbitrates between adaptations and non-adaptations, I’ll have to chalk his process up to mere speculation. Why one seems obvious and the other does not might have something to do with Trivers accepting Horgan’s invitation to speak at his institution last December, but that might be venturing too far into that other kind of “hand-wavy speculation” Horgan says he dislikes so much. Anyway…

The claim that altruism is the basis of all morality might seem innocuous enough – the kind of thing ostensibly thoughtful people would nod their head at – but an actual examination of the two concepts will show that the sentiment only serves to muddy the waters of our understanding of morality. Perhaps that revelation could have been reached had John attempted to marshal more support for the claim beyond saying, “If you think about it” (which is totally not speculation…alright; I’ll stop), but I suppose we can’t hope for too much progress at once. So let’s begin, then, by considering the ever-quotable line from Adam Smith:

“It is not from the benevolence of the butcher, the brewer, or the baker, that we can expect our dinner, but from their regard to their own interest”

Smith is describing a scenario we’re all familiar with: when you want a good or service that someone else can provide, you generally have to make it worth their while to provide it to you. This trading of benefits-at-a-cost is known as reciprocal altruism. However, when I go to the mall and give Express money so they will give me a new shirt, this exchange is generally not perceived as two distinct, altruistic acts (I endure the cost of losing money to benefit Express and Express endures the cost of losing a shirt to benefit me) that just happen to occur in close temporal proximity to one another, nor is it viewed as a particularly morally praiseworthy act. In fact, such exchanges are often viewed as two selfish acts, given that the ostensible altruism on the behavioral level is seen as a means for achieving benefits, not an end in and of itself. One could also consider the example of fishing: if you sit around all day waiting for a fish to altruistically jump into your boat so you can cook it for dinner, you’ll likely be waiting a long time; better to try and trick the fish by offering it a tasty morsel on the end of a hook. You suffer a cost (the loss of the bait and your time spent sitting on a boat) and deliver a benefit to the fish (it gets a meal), whereas the fish suffers a cost (it gets eaten soon after) that benefits you, but neither you nor the fish was attempting to benefit the other.

There’s a deeper significance to that point, though: reciprocally altruistic relationships tend to break down in the event that one party fails to return benefits to the other (i.e. when the payments over time for one party actually resemble altruism). Let’s say my friend helps me move, giving up the one day off a month he has in the process. This gives me a benefit and him a cost. At some point in the future, my friend is the one moving. In the event I fail to reciprocate his altruism, there are many who might well say that I behaved immorally, most notably my friend himself. This does, however, raise the inevitable question: if my friend was expecting his altruistic act to come back and benefit him in the future (as evidenced by his frustration that it did not do so), wasn’t his initial act a selfish one on precisely the same level as my shopping or fishing example above?

Pictured above: not altruism

What these examples serve to show is that, depending on how you’re conceptualizing altruism, the same act can be viewed as selfish or altruistic, which throws a wrench into the suggestion that all morality is based on altruism. One needs to really define their terms well for that statement to even mean anything worthwhile. As the examples also show, precisely how people behave towards each other (whether selfishly or altruistically) is often a topic of moral consideration, but just because altruism can be a topic of moral consideration does not mean it’s the basis of moral judgments. To demonstrate that altruism is not the basis of our moral judgments, we can also consider a paper by DeScioli et al (2011) examining the different responses people have to moral omissions and moral commissions.

In this study, subjects were paired into groups and played a reverse dictator game. In this game, person A starts with a dollar and person B has a choice between taking 10 cents of that dollar, or 90 cents. However, if person B didn’t make a choice within 15 seconds, the entire dollar would automatically be transferred to them, with 15 cents subtracted for running out of time. So, person B could be altruistic and only take 10 cents (meaning the payoffs would be 90/10 for players A and B, respectively), be selfish and take 90 cents (10/90 payoff), or do nothing, making the payoffs 0/85 (something of a mix of selfishness and spite). Clearly, failing to act (an omission) left both players worse off than the selfish option and was the least “altruistic” of the three. If moral judgments use altruism as their basis, one should expect that, when given the option, third parties should punish the omissions more harshly than either of the other two conditions (or, at the very least, punish the latter two conditions equally harshly). However, those who took the 90 cents were the ones who got punished the most; roughly 21 cents, compared to roughly 14 cents for those who failed to act. An altruism-based account of morality would appear to have a very difficult time making sense of that finding.
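If it helps to see the three options laid out together, here is a short sketch of the payoff structure described above (amounts in cents; the labels are mine, not the authors’):

```python
# Payoffs (person A, person B) in cents for the reverse dictator game described above.
reverse_dictator_payoffs = {
    "take 10 cents (altruistic)": (90, 10),
    "take 90 cents (selfish commission)": (10, 90),
    "do nothing (omission, 15-cent penalty)": (0, 85),
}

for choice, (a_payoff, b_payoff) in reverse_dictator_payoffs.items():
    print(f"{choice:40} A gets {a_payoff}, B gets {b_payoff}")
```

Note that the omission leaves both players worse off than the selfish commission, yet it was the commission that third parties punished more heavily.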

Further still, an altruism-based account of morality would fail to provide a compelling explanation for the often strong moral judgments people have in reaction to acts that don’t distinctly harm or benefit anyone, such as others having a homosexual orientation or deciding not to have children. More damningly still, an altruism basis for moral judgments would have a hell of a time trying to account for why people morally support courses of action that are distinctly selfish: people don’t tend to routinely condemn others for failing to give up their organs, even non-vital ones, to save the lives of strangers, and most people would condemn a doctor for making the decision to harvest organs from one patient against that patient’s will in order to save the lives of many more people.

“The patient in 10B needs a kidney and we’re already here…”

An altruism account would similarly fail to explain why people can’t seem to agree on many moral issues in the first place. Sure, it might be altruistic for a rich person to give up some money in order to feed a poor person, but it would also be altruistic for that poor person to forgo eating in order to not impose that cost on the rich one. Saying that morality is based in altruism doesn’t seem to provide much information about how precisely that altruism will be enacted or how moral interactions will play out, nor does it seem to lend any useful or novel predictions more generally. Then again, maybe an altruism-based account of morality can obviously deal with these objections if I just think about it…

References: DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological Science, 22, 442-446. DOI: 10.1177/0956797611400616

Some Thoughts On Gender Bias In Academia

Gender bias can be something of a "sexy" topic for many; the kind of issue that can easily get large groups of people worked up and in the mood to express opinions full of anger, mockery, and the word "duh". On a related note, there's been an article going around by Moss-Racusin et al (2012) concerning whether some science faculty members tend to be slightly biased in favor of men, relative to women, and whether those subtle biases might be responsible for some portion of the gender gaps in those fields. This paper, and some of the associated commentary, has brought to mind a few thoughts that I happen to find quite interesting; thoughts about bias that I would like to at least give some consideration amidst all the rest of the coverage this article has been getting.

First one to encounter life-altering discrimination… wins, I guess…

First, about the study itself: Moss-Racusin et al (2012) sent out fake undergraduate application materials (which included a brief statement about future goals and a little bit of background information regarding things like letters of recommendation and GRE scores) to 127 faculty members in biology, chemistry, or physics departments. These materials differed only in terms of the name of the applicant (either John or Jennifer), and the faculty members were asked to evaluate the student, falsely believing that these evaluations would be used to help the student's career development. The results of this experiment showed that the faculty members tended to rate the student's competence and hireability lower when the applicant was Jennifer, relative to John. Further, these faculty members offered more mentoring advice to John and recommended an annual salary roughly $4,000 lower for Jennifer, on average (though that salary was still around $25,000, which isn't too bad…). Also, the faculty members tended to report that they liked Jennifer more.

What we have here looks like a straightforward case of sex-based discrimination. While people liked the woman more, they also saw her as less competent, at least in these particular fields, given identical credentials (even if those credentials were rather meager in scope). Off the top of my head, I see nothing glaringly wrong with this study, so I'm fine with accepting the results; there most certainly did seem to be a bias in this context, albeit not an overtly hostile one. There are, however, a few notes worthy of consideration: first, the authors don't really examine why this bias exists. The authors suggest (i.e. say it's reasonable…) that this bias is due to pervasive cultural stereotypes but, as far as I can see, that's just an assertion; they didn't really do anything to test whether that was the case here or not. Sure, they administered the "Modern Sexism Scale*", but I have my reservations about precisely what that scale is supposed to be measuring. Like many studies in psychology, this paper is big on presenting and restating findings (people discriminate by sex because they're sexist) but light on explanatory power.

Another interesting piece of information worthy of consideration comes from a previous paper, published in the same journal one year prior. Ceci & Williams (2011) documented an impressive amount of evidence that ran counter to claims of women being discriminated against in science fields in terms of having their manuscripts reviewed, being awarded grant funding, and being interviewed and subsequently hired (at least in regards to PhDs applying for tenure-track positions at R1 institutions in the natural sciences). When discrimination was found in their analysis, it was typically small in size, inconsistent in which gender it favored, and, further, it often wasn't found at all. So, potentially, the results of the current paper, which are themselves rather modest in size, could just be a fluke, resulting from how little information about these applicants was provided (in other words, faculty members might have been falling back on sex as a source of information, given that they lacked much else in the way of useful information). While Moss-Racusin et al (2012) suggest that the subtle biases they found might translate into later discrimination resulting in gender gaps, that would require a fairly odd pattern of discrimination: one where women are discriminated against in some contexts because they're viewed as less competent, but are then published, awarded grants, and hired at the same rate as men anyway, despite those perceptions (which could potentially be interpreted as suggesting that the standards are subsequently set lower for women).

“Our hiring committee has deemed you incompetent as a researcher; welcome aboard!”

Peculiar patterns of how and when discrimination would need to work aside, there's another point that I found to be the most interesting of all, and it's the one I was hoping to focus on. This point comes in the form of a comment made by Jerry Coyne over at Why Evolution Is True. Coyne apparently finds it very surprising that the bias against women in the Moss-Racusin et al (2012) paper was displayed with equal force by both male and female faculty members. Coyne later repeats his surprise in a subsequent post on the topic, so this doesn't appear to be just a slip of the keyboard; he really was surprised. What I find so interesting about this surprise is what it would seem to imply: that the default assumption is that when a woman is being discriminated against, a man ought to be the culprit.

Granted, that interpretation takes a little bit of reading between the lines, but there’s something to it, I feel. There must have been some expectation that was violated in order for there to be surprise, so if that wasn’t Coyne’s default assumption, I would be curious as to what his assumption was. I get the sense that this assumption would not be limited to Coyne, however; it seems to have come up in other areas as well, perhaps most notably in the case of the abortion issue. Abortion debates often get framed as part of “The War on Women”, with opposition to abortion being seen as the male side and support for abortion being seen as the female side. This is fairly interesting considering the fact that men and women tend to hold very similar views on abortion, with both groups opposing it roughly as often as they support it.

If I had to guess at the underlying psychology behind that read-in assumption (assuming my assessment is correct), it would go something like this: when people perceive a victim, they’re more or less required to perceive a perpetrator as well; it’s a requirement of the cognitive moral template. Whether that perpetrator actually exists or not can be beside the point, but some people are going to look like better perpetrators than others. In this specific instance, when women, as a group, are supposed to be the victims, that really only leaves non-women as potential perpetrators. This is due to two major reasons: first, men may make better perpetrators in general for a variety of reasons and, second, the parties represented in this moral template for perpetrator and victim can’t be the same party; if you want an effective moral claim, you can’t be a victim of yourself. A tendency to assume men are the culprits when women are supposed to be the victims could be further exacerbated in the event that women are also more likely to be seen as victims generally.

An observation made by Alice Cooper (1975) when he penned the line, “Only women bleed…”

The larger point is, assuming that all the effects reported in the Moss-Racusin et al (2012) study were accurately detected and consistently replicated, there are two gender biases reported here: Jennifer is rated as less competent and John is rated as less likable, both strictly on gendered grounds. However, I get the impression that only one of those biases will likely be paid much mind, as has been the case in pretty much all the reporting about the study. While people may talk about the need to remedy the bias against women, I doubt that those same people will be concerned about bridging the “likability gap” between men and women as well. It would seem that ostensible concerns for sexism can be, ironically, inadvertently sexist themselves.

*[EDIT] As an aside, it's rather odd that the Modern Sexism Scale only concerns itself with (what it assumes is) sexism against women specifically; nothing in that scale would in any way appear to assess sexism against men.

References: Ceci, S. J., & Williams, W. M. (2011). Understanding current causes of women's underrepresentation in science. Proceedings of the National Academy of Sciences of the United States of America, 108(8), 3157-3162. PMID: 21300892

Moss-Racusin, C. A., Dovidio, J. F., Brescoll, V. L., Graham, M. J., & Handelsman, J. (2012). Science faculty's subtle gender biases favor male students. Proceedings of the National Academy of Sciences of the United States of America. PMID: 22988126

Dinner, With A Side Of Moral Stances

One night, let’s say you’re out to dinner with your friends (assuming, of course, that you’re the type with friends). One of these friends decides to order a delightful medium-rare steak with a side of steamed carrots. By the time that the orders arrive, however, some mistake in the kitchen has led said friend to receive the salmon special instead. Now, in the event you’ve ever been out to dinner and this has happened, one of these two things probably followed: (1) your friend doesn’t react, eats the new dish as if they had ordered it, and then goes on about how they made such a good decision to order the salmon, or (2) they grab the waiter and yell a string of profanities at him until he breaks down in tears.

OK; maybe that second option is a bit of an exaggeration, but the behavior we tend to see in the event of a mixed-up order at a restaurant more closely resembles the latter pattern than the former. Given that most people can recognize that they didn't receive the order they actually made, what are we to make of the proposition that people seem to have trouble recognizing some moral principles they just endorsed?

“I’ll endorse what she’s endorsing…”

A new study by Hall et al (2012) examined what they're calling "choice blindness", which is apparently quite a lot like "change blindness", except with decisions instead of people. In this experiment, a researcher with a survey about general moral principles, or moral stances on certain specific issues, approached 160 strangers who happened to be walking through a park. Once the subjects had filled out the first page of the survey and flipped the piece of paper over the clipboard to move on to the second, an adhesive on the back of the clipboard held on to and removed the lightly-attached portion of the survey, revealing a new set of questions. The twist is that the new set of questions expressed the opposite moral stances, so if a subject had agreed that the government shouldn't be monitoring emails, the new question would imply that the subject felt the government should be monitoring emails.

Overall, only about a third to a half of the subjects appeared to catch that the questions had been altered, a number very similar to the results found in the change blindness research. Further, many of the subjects who missed the deception went on to give verbal justifications for their "decisions" that appeared to be in opposition to their initial choice on the survey. That said, only about a third of the subjects who expressed extremely polarized scores (a 1 or a 9) failed to catch the manipulation, and the authors also found that those who rated themselves as more politically involved were likewise more likely to detect the change.

So what are we to make of these findings? The authors suggest there is no straightforward interpretation, but they also suggest that choice blindness disqualifies vast swaths of research from being useful, as the results suggest that people don't have "real" opinions. Though they say they are hesitant to suggest such an interpretation, Hall et al (2012) feel those interpretations need to be taken seriously as well, so perhaps they aren't so hesitant after all. It might almost seem ironic that Hall et al (2012) appear "blind" to the opinion they had just expressed (they don't want to suggest such alternatives, but also do want to suggest such alternatives), despite that opinion being in print, and both opinions residing within the same sentence.

“Alright, alright; I’ll get the coin…”

It would seem plausible that the authors have no solid explanation for their results because they seem to have gone into the study without any clearly stated theory. Such is the unfortunate state of much of the research in psychology; a dead-horse issue I will continue to beat. Describing an effect as a psychological "blindness" alone does not tell us anything; it merely restates the finding, and restatements of findings without additional explanations are not terribly useful for understanding what we're seeing.

There are a number of points to consider regarding these results, so let's start with the obvious: these subjects were not seeking to express their opinions so much as they were approached by a stranger with a survey. It seems plausible that at least some of these subjects weren't paying much attention to what they were doing, or weren't really engaged in the task at hand. I can't say to what extent this would be a problem, but it's at least worth keeping in mind. One possible way of remedying this might be to have subjects not only mark their agreement with an issue on the scale, but also briefly justify that opinion. If you got subjects to then try and argue against their previously stated justifications moments later, that might be a touch more interesting.

Given that there's no strategic context under which these moral stances are being made in this experiment, some random fluctuation in answers might be expected. In fact, that lack of context might be the reason some subjects weren't particularly engaged in the task in the first place, as evidenced by people who had more extreme scores, or who were more involved in politics, being more attentive to these changes. Accordingly, another potential issue here concerns the mere expectation of consistency in responses: research has already shown that people don't hold universally to one set of moral principles or moral stances (e.g. the results from various versions of the trolley and footbridge dilemmas, among others). Indeed, we should expect moral judgments (and justifications for those judgments) to be made strategically, not universally, for the very simple reason that universal behaviors will not always lead to useful outcomes. For instance, eating when you're hungry is a good idea; continuing to eat at all points, even when you aren't hungry, generally is not. What that's all getting at is that the justification of a moral stance is a different task than the generation of a moral stance, and if memory fails to retain information about what you wrote on a survey some strange researcher just handed you while you were trying to get through the park, you're perfectly capable of reasoning about why some other moral stance is acceptable.

“I could have sworn I was against gay marriage. Ah well”

Phrased in those terms ("when people don't remember what stance they just endorsed – after being approached by a stranger asking them to endorse some stance they might not have given any thought to until moments prior – they're capable of articulating supportive arguments for an opposing stance"), the results of this study are not terribly strange. People often have to reason differently about whether a moral act is acceptable or not, contingent on where they currently stand in any moral interaction. For example, deciding whether an instance of murder was morally acceptable or not will probably depend, in large part, on which side of that murder you happen to stand on: did you just kill someone you don't like, or did someone else just kill someone you did like? An individual who stated that murder is always wrong in all contexts might be at something of a disadvantage, relative to one with a bit more flexibility in their moral justifications (to the extent that those justifications will persuade others about whether to punish the act or not, of course).

One could worry about what people's "real" opinions are, then, but doing so would seem to fundamentally misstate the question. Saying that some bad thing is wrong when it happens to you, and that the same bad thing is right when it happens to someone you dislike, both represent real opinions; they're just not universal opinions, they're context-specific ones. Asking about "real" universal moral opinions would be like asking about "real" universal emotions or states ("Ah, but how happy is he really? He might be happy now, but he won't be tomorrow, so he's not actually happy, is he?"). Now, of course, some opinions might be more stable than others, but that will likely be the case only insofar as the contexts surrounding those judgments don't tend to change.

References: Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLOS ONE.

Just A Little Off The Top

In my last post I suggested that humans likely possess a series of modules designed to assess victim characteristics when evaluating their associated victimhood claims. Simply put, some people make better social investments than others and, accordingly, will tend to have their victimhood claims seen as more legitimate than those of others. Specifically, I noted that men might be at something of a disadvantage when attempting to advance a victimhood claim, relative to women, as women might tend to be better targets of social investment (at least in certain contexts; I would be hesitant to assume that this is the case a priori across all contexts).

Coincidentally, I happened to come across this article (and associated Reddit post) today discussing whether or not male newborns should be circumcised. I feel the article and, more importantly, the comments discussing the article serve as an interesting (if non-scientific) case study in the weight of moral claims between genders. So let’s talk about the moral reactions of people to circumcision.

“There might have been a slight mix-up, but don’t worry; we’ll do your ears for free!”

In the debate over whether male offspring should be circumcised around the time they're born, those in favor of circumcision tend to phrase their stance in a consequentialist fashion, often claiming one or more of three* things: (1) circumcised penises are more appealing aesthetically, (2) circumcision brings with it health benefits in the form of a reduction in the risk of STI transmission, and (3) the removal of the foreskin doesn't really do any serious harm to the child. The function of these arguments would appear fairly self-evident: they are attempts to disengage the moral psychology of others by way of denying a victim, and without a victim the moral template cannot be completed. Those in favor of allowing circumcision, then, are generally claiming that being circumcised is a benefit, or at the very least not a cost, and you don't get to be a victim without being perceived to have suffered a cost.

Beginning with the second point – the reduction in the risk of contracting HIV – there is evidence to suggest that circumcision does nothing of the sort. Though I am unable to access the article, this paper in The Journal of Sexual Medicine reports that not only is circumcision not associated with a lower incidence of STIs in men, including HIV, but it might even be associated with a slightly higher incidence of infection (for whatever reason). The studies that claim to find a 40-60% reduction in the female-to-male transmission rate of HIV in circumcised men seem to have been conducted largely in African populations, where other issues, such as general hygiene, might be a factor. Specifically, one of the proposed reasons why uncircumcised males in these studies are more likely to become infected is that the foreskin traps fluids and pathogens, increasing the duration of bodily contact with them. In other words, a little soap and water after sex (along with a condom during sex, of course) could likely accomplish the same goal as circumcision in these cases, so the removal of the foreskin might be a bit of an extreme remedy.

I’m most certainly not an expert in this field so don’t just take my word for it, but my perspective on the matter is that the results about whether circumcision decreases the transmission of HIV are mixed at best. Further, at least some of the hypothesized means through which circumcision could potentially work in this regard appear perfectly achievable through other, non-surgical means. Now that I’ve covered at least some ground on that evidentiary front, we can turn towards the more interesting, moral side. On the basis of this mixed evidence and a general lack of understanding as to how circumcision might work, the report issued by the American Academy of Pediatrics suggested that:

“…the benefits of newborn male circumcision justify access to this procedure for families who choose it” [emphasis mine].

“If you can’t be trusted to wash it properly, I’m going to cut it off”

One interesting facet of our moral judgments is that, to some extent, they are nonconsequentialist. That is to say, even if an act leads to a positive consequence, it can still be considered immoral. The classic example of this concerns people's intuitions in the trolley and footbridge dilemmas: in the former dilemma, roughly 90% of subjects say that diverting an out-of-control trolley away from five hikers and towards one hiker is morally acceptable; in the latter dilemma, a roughly equivalent percentage of subjects say that pushing another person in front of the trolley to save five hikers is morally impermissible (DeScioli & Kurzban, 2009). Despite the consequences of each action being identical, the moral feel of each action is radically different. Thus, to say that an action is justified strictly by referencing a cost/benefit ratio (and a rather fuzzy and potentially flawed one at that) can be to miss the point morally to some degree. That said, to some other degree it does hit the mark because, as previously mentioned, moral claims need a victim, and without costs there can be no victim.

This conflict between the nonconsequentialist and consequentialist aspects of our moral psychology appears readily visible in people's reactions when elective surgery on the genitals of boys is compared to elective surgery performed on girls. A few years back, the American Academy of Pediatrics also recommended reconsidering US law regarding whether or not doctors should be allowed to perform a "ceremonial" female circumcision. Though not much is said about the details of the procedure explicitly, the sense the various articles discussing it give is that it is a "harmless" one, essentially amounting to a pinprick to the clitoris or clitoral hood capable of drawing a drop of blood. The AAP recommended this reconsideration in order to, hopefully, appease certain cultural groups that might otherwise take their daughters overseas to undergo a much more extreme version of the ritual, in which pieces of the external genitalia are cut or fully removed. This recommendation by the AAP was soon reversed, following a political outcry.

It's worth noting that, during discussions on the topic of circumcision, there are many people who get rather upset when a comparison is made between the female and male varieties, typically because the female version is more extreme. A complete removal of the clitoris is, no doubt, worse than the removal of the foreskin of the penis. When comparing a pinprick to the clitoris that does no permanent damage to a complete or partial removal of the male foreskin, though, that argument would seem to lose some of its weight. Even without that consequentialist weight, however, people were still very strongly opposed to the ceremonial pricking on (more or less) nonconsequentialist grounds:

“We retracted the policy because it is important that the world health community understands the AAP is totally opposed to all forms of female genital cutting, both here in the U.S. and anywhere else in the world,” said AAP President Judith S. Palfrey. [emphasis, mine]

The interesting question to me, then, is why male genital cutting isn't currently opposed as vehemently in all forms when performed on newborn infants who cannot consent to the procedure (Wikipedia puts the percentage of newborn boys in the US being circumcised before they leave the hospital at over 50%). One could try and point, again, to the iffy data on HIV reduction, but even in the event that such data were good and no alternatives were available to reduce the spread of the virus, it would leave one unable to explain why circumcision as a practice dates back thousands of years, well before HIV was ever a concern. It would also still leave the moral question of consent very much alive: specifically, what are the acceptable bounds for parents when making decisions for their dependent offspring? A pinprick to the genitals for cultural reasons apparently falls into the non-accepted category, whereas the removal of the foreskin for aesthetic or cultural reasons is accepted.

He’ll appreciate all that endorsement cash when he’s grown up.

Now maybe, as some suggest, the female genital pricking would to some extent morally license the more extreme version of the practice. That would certainly be a bad outcome of the procedure, and such an argument seeks to add the consequentialist weight back into the debate. Indeed, most of the articles on the topic, along with many of the commenters, likened the pricking to its far more extreme expression elsewhere. However, I get the sense that this line of reasoning might represent a post hoc justification for the aversion. I get that sense because I saw no evidence presented that this moral licensing outcome would actually obtain (just a concern that it would), and no indication that more people would really be OK with the procedure if the moral licensing risks could be reduced or removed. I don't think I recall anyone saying, "I'd be alright with the clitoral pinprick if…"

Returning to the gender issue, however, why do people not raise similar concerns about male circumcision licensing other forms of harm to men? My sense is that concerns like these are not raised with nearly as much force or frequency because, in general, people are less bothered by males being hurt, infants or otherwise, for reasons I mentioned in the last post. Even in the event that these people are just incredibly swayed by the HIV data (though why people accept or reject evidence in the first place is also an interesting topic), those potential benefits wouldn't be realized by the boys until at least the point at which they're sexually active, so circumcising boys when they're newborns, without any indication that they even sort of consent, seems premature.

So when people say male and female circumcision aren't comparable, I feel they have more in mind than just the consequentialist outcomes. I get the sense that the emotional feel of the issues can't be compared, in part, because one happens to men and one happens to women, and our psychology tends to treat such cases differently.

*Note: this list of three items is not intended to be comprehensive. There are also cultural or religious reasons cited for circumcision, but I won’t be considering them here, as they didn’t appear well represented in the current articles.

References: DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112, 281-299. DOI: 10.1016/j.cognition.2009.05.008

No, Really; What About The Men?

If you're the kind of person who has frequented internet discussion boards, you'll know that debates over sexism can get a bit heated. You might also have noticed that many problems men face are not infrequently dismissed on the grounds of being relatively unimportant when compared to issues women face. One common form this dismissal takes is the purposely misspelled "what about teh poor menz?", since everyone on the internet is automatically intellectually twelve. In fact, the whole sexism debate is often treated like a zero-sum game, where reducing sexism against one sex makes the other one worse off. We could debate whether that's the case or not, but that's not my goal today. Today, my goal is to ask, quite seriously: what about the men?

“Check out the male privilege on that guy”

There were two posts on Reddit that inspired this post: the first is this story of President Obama reacting to the Akin quote regarding rape. Here’s what Obama had to say:

“The views expressed were offensive,” said Obama. “Rape is rape. And the idea that we should be parsing and qualifying and slicing what types of rape we are talking about doesn’t make sense to the American people and certainly doesn’t make sense to me. So what I think these comments do underscore is why we shouldn’t have a bunch of politicians, a majority of whom are men, making health care decisions on behalf of women.” [emphasis mine]

Now, it seems to me that what we should want when it comes to our elected officials writing legislation regarding our health care has nothing to do with gender per se; it should have everything to do with the policies themselves, not their proposers. For instance, imagine that Akin was a woman who uttered the same nonsensical quote about rape pregnancies: would that be an opportune time to comment about how women in general can't be trusted to make their own health care decisions? I'd think not, yet it apparently seemed a fine time to draw attention to the male gender.

Here's the second post: the radically different prices that men and women can be charged for admission to certain venues. This is about as blatant a case of sexism as you could think of. It's also exceedingly common: bars and clubs hold "ladies nights" where women are charged less – if they're charged at all – for entry and drinks on no basis other than gender. What you rarely, if ever, find is the reverse, where men are given a pass and women are required to pay a higher premium. Now, we could argue about whether this is a good business move (whether it ends up profiting the clubs or not), but that's not the point here. I doubt many people would accept women being charged higher tuition premiums to attend college, for instance, even if it ended up causing the college to profit.

One could also argue about whether ladies nights can be said to do men real harm. Complaining about them might even be conceptualized as a first-world problem, or a whiny privileged male problem. Whether they do real harm or not is still beside the point, which is this: it's probable that even when a policy hurts or discriminates against men, that harm or discrimination will be taken less seriously than a comparable one directed against women. There is likely some psychological module devoted to taking victim characteristics into account when assessing victimhood claims, and gender would appear to be a very relevant variable. In fact, I would predict that gender will be an important variable above and beyond the extent to which the sexes historically faced different costs from different acts (rape, for instance, entails the additional risk of pregnancy in a male-on-female case, but not the reverse, so we might expect people to say a woman being raped is worse than a man being raped).

“It’s illegal to talk on your cell while driving without having my number”

Some intriguing data come to us from Mustard (2001), who examined roughly 80,000 federal sentences handed down across different racial groups, men, and women. Doing so requires controlling for a number of relevant variables, such as offense type and severity, past criminal record, and so on, since the variable of interest is "the extent to which an individual who is in the same district court, commits the same offense, and has the same criminal history and offense level as another person received a different sentence on the basis of race, ethnicity or gender". Mustard (2001) found that, after controlling for these variables, some of the usual racial biases came through: sentences handed out to blacks and Hispanics were, on average, 5.5 and 4.5 months longer, respectively, than comparable sentences handed out to whites. Seems like a run-of-the-mill case of discrimination so far, I'm sure. However, the male-female discrepancy didn't fare any better: men, on average, also received sentences 5.5 months longer than women did. Huh; it would seem that the male-female bias is about as bad as, if not worse than, the racial biases in this case.
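To make the idea of "controlling for these variables" a bit more concrete, here is a minimal sketch of how such an estimate is typically produced. Everything below is simulated purely for illustration; the variable names are mine and none of the numbers come from Mustard (2001), beyond building a 5.5-month gap into the fake data so the regression has something to recover:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Entirely hypothetical data, only to illustrate the structure of the estimate.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),               # 1 = male defendant, 0 = female
    "offense_level": rng.integers(4, 40, n),     # guideline-relevant controls
    "criminal_history": rng.integers(1, 7, n),
    "district": rng.integers(0, 10, n),
})
# Simulate sentence lengths with a built-in 5.5-month "male premium" on top of the controls.
df["sentence_months"] = (
    2.0 * df["offense_level"]
    + 3.0 * df["criminal_history"]
    + 5.5 * df["male"]
    + rng.normal(0, 10, n)
)

# Regress sentence length on gender while holding the guideline-relevant variables constant.
model = smf.ols(
    "sentence_months ~ male + offense_level + criminal_history + C(district)",
    data=df,
).fit()
print(model.params["male"])  # should land near the 5.5 months built into the simulation

The coefficient on the gender variable is the "same court, same offense, same history" difference the quote above describes; it is not the raw difference in average sentences.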

It's worth noting that these disparities could come about in two ways: sentencing within the guidelines – just differently between genders – or departing from the sentencing guidelines. For example, both men and women could be sentenced within the appropriate guidelines, but with women tending to be sentenced towards the low end of the range while men are sentenced towards the high end. Granted, bias is still bias, whether it's due to departures or to sticking to the accepted range of sentence lengths, but, as it turns out, roughly 70% of the gender difference could be accounted for by departures from the sentencing guidelines; specifically, women were more often given downward departures from the guidelines, relative to men. When blacks and Hispanics are granted a downward departure, it averages about 5.5 months less than the departures given to whites; for women, the average departure was almost 7 months greater than for men. Further, when the option of no jail time is available, females are also more likely than males to be granted no time (just as whites are relative to blacks).
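As a quick back-of-the-envelope illustration of that decomposition, using only the rounded figures quoted above (not Mustard's actual estimates):

total_gap_months = 5.5      # average male-female sentencing gap in comparable cases
departure_share = 0.70      # rough share of that gap attributed to guideline departures

from_departures = total_gap_months * departure_share            # about 3.9 months
from_within_guidelines = total_gap_months - from_departures     # about 1.7 months
print(from_departures, from_within_guidelines)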

It's also worth noting, as Mustard (2001) does, that these disparities were only examined at the level of sentencing. It would not be much of a leap to consider the possibility that similar disparities exist in other aspects of the moral domain, such as suspicion regarding a perpetrator, the devotion of resources to certain varieties of crime, or even assessments concerning whether a crime took place at all. Further still, this doesn't consider the everyday, non-criminal social contexts in which men may not be given certain considerations that women would be. If that is the case, the effects of the biases that we see in this paper are likely to be cumulative, and the extent of the differences we see at the time of sentencing might only reflect a part of the overall discriminatory pattern. Simply noticing the pattern, however, does not explain it, which means it's time to consider some potential reasons why women may be assessed more positively than men when it comes to handing out punishment.

Whether he wins or loses here, he still loses.

Perhaps the most obvious issue is one I’ve touched on previously: men could be treating women better (or avoiding punishing them harshly) in hopes of receiving sexual favors later. In this regard, women are often able to offer men something that other men simply can’t, which makes them a particularly appealing social asset. However, such an explanation is likely incomplete, as it would only be able to account for cases in which men treated women preferentially, not cases where women treated other women preferentially as well. While the current data doesn’t speak to that issue (the interaction between the sex of the judge, sex of the convict, and sentencing length was not examined), I wouldn’t doubt it plays a significant role in accounting for this bias.

Another potential explanation is that men may in fact be more likely to be dangerous, leading people, men and women alike, to assume that men are more likely to be guilty, to have acted more intentionally, and to deserve more severe punishment (among other things). If the former proposition is true, then such a bias would likely be useful on the whole. However, that does not imply it would be useful or accurate in any one given case, especially if other, potentially more useful, sources of information are available (such as criminal history). Gender would only be a proxy for the other variables people wish to assess, which means its use would likely lead to inaccurate assessments in many individual cases.

Finally, one other issue returns to the point I was making last post: if women are, all things considered, better targets of social investment for other people relative to men, punishing them harshly is unlikely to lead to a good strategic outcome. Simply put, punishing women may be socially costlier than punishing men, so people might shy away from it when possible. While this is unlikely to be an exhaustive list of potential explanations for this discriminatory bias, it seems a plausible starting position. Now the only thing left to do is to get people to care about solving (and seeing) the problem in the first place.

References: Mustard, D. B. (2001). Racial, ethnic, and gender disparities in sentencing: Evidence from the U.S. federal courts. Journal of Law and Economics, 44, 285-314. DOI: 10.1086/320276