Why Would You Ever Save A Stranger Over A Pet?

The relationship between myself and my cat has been described by many as a rather close one. After I leave my house for almost any amount of time, I’m greeted by what appears to be a rather excited animal that will meow and purr excessively, all while rubbing on and rolling around my feet upon my return. In turn, I feel a great deal of affection towards my cat, and derive feelings of comfort and happiness from taking care of and petting her. Like the majority of Americans, I happen to be a pet owner, and to fellow owners these experiences and ones like them will all sound perfectly normal and relatable. I would argue, however, that they are, in fact, very strange feelings, biologically speaking. Despite the occasional story of cross-species fostering, other animals do not seem to behave in ways that indicate they seek out anything resembling pet ownership. It’s often not until the idea of other species not making habits of keeping pets is raised that one realizes how strange a phenomenon pet ownership can be. Finding that bears, for instance, reliably took care of non-bears, providing them with food and protection, would be a biological mystery of the first degree.

And that I get most of my work done like this seems normal to me.

So why do people seem to be so fond of pets? My guess is that the psychological mechanisms that underlie pet ownership in humans are not designed for that function per se. I would say that for a few reasons, notable among them time and resources. First, psychological adaptations take a good deal of time to be shaped by selective forces, which means long periods of co-residence between animals and people would be required for any dedicated adaptations to have formed. Though it’s no more than a guess on my part, I would assume that conditions making extended periods of co-residence probable would likely not have arisen prior to the advent of agriculture and geographically-stable human populations. The second issue involves the cost/benefit ratios: pets require a good deal of investment, at least in terms of food. In order for there to have been any selective pressure to keep pets, the benefits provided by the pets would have needed to more than offset the costs of their care, and I don’t know of any evidence in that regard. Dogs might have been able to pull their weight in terms of assisting in hunting and protection, but that’s uncertain; other pets – such as cats, birds, lizards, or even the occasional insect – probably did not. While certain pets (like cats) might well have been largely self-sufficient, they don’t seem to offer much in the way of direct benefits to their owners either. No benefits means no distinct selection, which means no dedicated adaptations.

Given that there are unlikely to be dedicated pet modules in our brain, what other systems are good candidates for explaining the tendency towards seeking out pets? The most promising one that comes to mind is our already-existing set of systems designed for the care of our own, highly-dependent offspring. Positing that pet-care is a byproduct of our infant-care systems would manage to skirt both the issues of time and resources; our minds were designed to endure such costs to deliver benefits to our children. It would also allow us to better understand certain facets of the ways people behave towards their pets, such as the “aww” reaction people often have to pets (especially young ones, like kittens and puppies) and babies, as well as the frequent use of motherese (baby-talk) when talking to pets and children (to compare speech directed at pets and babies see here and here. Note as well that you don’t often hear adults talking to each other in this manner). Of course, were you to ask people whether their pets are their biological offspring, many would give the correct response of “no”. These verbal responses, however, do not indicate that other modules of the brain – ones that aren’t doing the talking – “know” that pets aren’t actually your offspring, in much the same way that parts of the brain dedicated to arousal don’t “know” that generating arousal to pornography isn’t going to end up being adaptive.

There is another interesting bit of information concerning pet ownership that I feel can be explained through the pets-as-infants model, but to get to it we need to first consider some research on moral dilemmas by Topolski et al. (2013). This dilemma is a favorite of mine, and of the psychological community more generally: a variant of the trolley dilemma. In this study, 573 participants were asked to respond to a series of 12 similar moral dilemmas, all of which had the same basic setup: a speeding bus is about to hit either a person or an animal, both of whom have wandered out into the street. The subject only has time to save one of them, and is asked which they would prefer to save. (Note: each subject responded to all 12 dilemmas, which might result in some carryover effects. A between-subjects design would have been stronger here. Anyway…) The identities of the animal and the person in the dilemma were varied across the conditions: the animal was either the subject’s pet (subjects were asked to imagine one if they didn’t currently have one) or someone else’s pet, and the person was either a foreign tourist, a hometown stranger, a distant cousin, a best friend, a sibling, or a grandparent.

The study also starred Keanu Reeves.

In terms of saving someone else’s pet, people generally didn’t seem terribly interested: willingness ranged from a high of about 12% of subjects choosing someone else’s pet over a foreign tourist to a low of approximately 2% of subjects picking the strange pet over their own sibling. The willingness to save the animal in question rose substantially when it was the subject’s own pet being considered, however: while subjects were still about as unlikely to save their own pet over a grandparent or sibling as they were a stranger’s pet, approximately 40% of subjects indicated they would save their pet over a foreign tourist or a hometown stranger (for the curious, about 23% would save their pet over a distant cousin and only about 5% would save their pet over a close friend. For the very curious, I could see myself saving my pet over the strangers or distant cousin). The relationship between pet owners and their animals appears to be strong enough to, quite literally, make almost half of them throw another human stranger under the bus to save their pet’s life.

This is a strange response to give, but not for the obvious reasons: given that our pets are being treated as our children by certain parts of our brain, this raises the question as to why anyone, let alone a majority of people, would be willing to sacrifice the lives of their pets to save a stranger. I don’t expect, for instance, that many people would be willing to let their baby get hit by the bus to save a tourist, so why the discrepancy? Three potential reasons come to mind. First, the pets are only “fooling” certain psychological systems: while some parts of our psychology might be treating pets as children, other parts may well not be (children do not typically look like cats or dogs, for instance). The second possible reason involves the clear threat of moral condemnation. As we saw, people are substantially more interested in saving their own pets, relative to a stranger’s pet. By extension, it’s probably safe to assume that other, uninvolved parties wouldn’t be terribly sympathetic to your decision to save an animal over a person, so the costs of saving the pet might well be perceived as higher. Similarly, the potential benefits of saving an animal may typically be lower than those of saving another person, as saved individuals and their allies are more likely to do things like reciprocate help, relative to a non-human. Sure, the pet’s owner might reciprocate, but the pet itself would not.

The final potential reason that comes to mind concerns that interesting bit of information I alluded to earlier: women were more likely to indicate they would save the animal in all conditions, and often substantially so. Why might this be the case? The most probable answer to that question again returns to the pets-as-children model: whereas women have not had to face the risk of genetic uncertainty in their children, men have. This risk makes males generally less interested in investing in children and could, by extension, make them less willing to invest in pets over people. The classic phrase, “Momma’s babies; Daddy’s maybes” could apply to this situation, albeit in an under-appreciated way (in other words, men might be harboring doubts about whether the pet is actually ‘theirs’, so to speak). Without reference to parental investment theory – which the study does not contain – explaining this sex difference in willingness to pick animals over people would be very tricky indeed. Perhaps it should come as no surprise, then, that the authors do not do a good job of explaining their findings, opting instead to redescribe them in terms of a crude and altogether useless distinction between “hot” and “cold” types of cognitive processing.

“…and the third type of cognitive processing was just right”

In a very real sense, some parts of our brain treat our pets as children: they love them, care for them, invest in them, and wish to save them from harm. Understanding how such tendencies develop, and what cues our minds use to distinguish between our own offspring, the offspring of others, our pets, and non-pet animals, are very interesting matters which are likely to be furthered by considering parental investment theory. Are people raised with pets from a young age more likely to view them as fictive offspring? How might hormonal changes during pregnancy affect women’s interest in pets? Might cues of a female mate’s infidelity make her male partner less interested in taking care of pets they jointly own? Under what conditions might pets be viewed as a deterrent or an asset to starting new romantic relationships, in the same way that children from a past relationship might? The answers to these questions require placing pet care in its proper context, and you’re going to have quite a hard time doing that without the right theory.

References: Topolski, R., Weaver, J.N., Martin, Z., & McCoy, J. (2013). Choosing between the Emotional Dog and the Rational Pal: A Moral Dilemma with a Tail. Anthrozoös, 26, 253-263. DOI: 10.2752/175303713X13636846944321

Washing Hands In A Bright Room

Part of academic life in psychology – and a rather large part at that – centers around publishing research. Without a list of publications on your resume (or CV, if you want to feel different), your odds of being able to do all sorts of useful things, such as getting and holding onto a job, can be radically decreased. That said, people doing the hiring do not typically care to read through the published research of every candidate applying for the position. This means that career advancement involves not only publishing plenty of research, but publishing it in journals people care about. Though it doesn’t affect the quality of the research in any way, publishing in the right places can be suitably impressive to some. In some respects, then, your publications are a bit like recommendations, and some journals’ names carry more weight than others. On that subject, I’m somewhat disappointed to note that a manuscript of mine concerning moral judgments was recently rejected from one of these prestigious journals, adding to the ever-lengthening list of prestigious things I’ve been rejected from. Rejection, I might add, appears to be another rather large part of academic life in psychology.

After the first dozen or so times, you really stop even noticing.

The decision letter said, in essence, that while they were interesting, my results were not groundbreaking enough for publication in the journal. Fair enough; my results were a bit on the expected side of things, and journals do presumably have standards for such things. Being entirely not bitter about the whole experience of not having my paper placed in the esteemed outlet, I’ve decided to turn my attention to two recent articles published in a probably-unrelated journal within psychology, Psychological Science (proud home of the trailblazing paper entitled “Leaning to the Left Makes the Eiffel Tower Seem Smaller“). Both papers examine what could be considered to fall within the realm of moral psychology, and both present what one might consider to be novel – or at least cute – findings. Somewhat curiously, both papers also lean a bit heavily on the idea of metaphors being more than metaphors, perhaps owing to their propensity for using the phrase “embodied cognition”. The first paper deals with the association between light and dark and good and evil, while the second concerns the association between physical cleanliness and moral cleanliness.

The first paper, by Banerjee, Chatterjee, & Sinha (2012), sought to examine whether recalling abstract concepts of good and evil could make participants perceive the room they’re in to be brighter or darker, respectively. They predicted this, as far as I can tell, on the basis of embodied cognition suggesting that metaphorical representations are hooked up to perceptual systems and, though they aren’t explicit about this, they also seem to suggest that this connection is instantiated in such a way as to make people perceive the world incorrectly. That is to say that thinking about a time they behaved ethically or unethically ought to make people’s perceptions of the brightness of the world less accurate, which is a rather strange thing to predict if you ask me. In any case, 40 subjects were asked to think about a time they were ethical or unethical (so 20 per group), and to then estimate the brightness of the room they were in, from 1 to 7. The mean brightness rating of the ethical group was 5.3, and the rating in the unethical group was 4.7. Success; it seemed that metaphors really are embodied in people’s perceptual systems.
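To get a feel for how much (or how little) separates a result like this from noise, here’s a minimal sketch of the relevant two-sample t-test using the reported means and group sizes. The standard deviations below are my own assumptions purely for illustration; the published paper reports its actual SDs, which are what would determine the real answer.

```python
from scipy import stats

# Reported values: n = 20 per group, mean brightness ratings of 5.3 (ethical) vs. 4.7 (unethical).
# The standard deviations are assumed for illustration only.
n, m_ethical, m_unethical = 20, 5.3, 4.7

for assumed_sd in (0.6, 0.9, 1.2):
    t, p = stats.ttest_ind_from_stats(m_ethical, assumed_sd, n,
                                       m_unethical, assumed_sd, n)
    print(f"assumed SD = {assumed_sd}: t = {t:.2f}, p = {p:.3f}")
```

Depending on how spread out ratings on a 7-point scale plausibly are, the same 0.6-point difference ranges from comfortably significant to indistinguishable from sampling noise – worth keeping in mind for the sample-size worry I’ll return to below.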

Not content to rest on that empirical success, Banerjee et al. (2012) pressed forward with a second study to examine whether subjects recalling ethical or unethical actions were more likely to prefer objects that produced light (like a candle or a flashlight), relative to objects which did not (such as an apple or a jug). Seventy-four students were again split into two groups and asked to recall an ethical or unethical action from their lives, to indicate their preference for the objects, and to estimate the brightness of the room in watts. The subjects in the unethical condition again estimated the room as being dimmer (M = 74 watts) than the ethical group (M = 87 watts). The unethical group also tended to show a greater preference for light-producing objects. The authors suggest that this might be the case either because (a) the subjects thought the room was too dim, or (b) participants were trying to reduce their negative feelings of guilt about acting unethically by making the room brighter. This again sounds like a rather peculiar type of connection to posit (the connection between guilt and wanting things to be brighter), and it manages to miss anything resembling a viable functional account for what I think the authors are actually looking at (but more on that in a minute).

Maybe the room was too dark, so they couldn’t “see” a better explanation.

The second paper comes to us from Schnall, Benton, & Harvey (2008), and it examines an aspect of the disgust/morality connection. The authors noted that previous research had found a connection between increasing feelings of disgust and more severe moral judgments, and they wanted to see if they could get that connection to run in reverse: specifically, they wanted to test whether priming people with cleanliness would cause them to deliver less-severe moral judgments about the immoral behaviors of others. The first experiment involved 40 subjects (20 per cell seemed to be a popular number) who were asked to complete a scrambled-sentence task, with half of the subjects given neutral sentences and the other half sentences related to cleanliness. Immediately afterwards, they were asked to rate the severity of six different actions typically judged to be immoral on a 10-point scale. On average, the participants primed with the cleanliness words rated the scenarios as being less wrong (M = 5) than those given neutral primes (M = 5.8). While the overall difference was significant, only one of the six actions was rated significantly differently between conditions, despite all six showing the same directional pattern. In any case, the authors suggested that this may be due to the disgust component of moral judgments being reduced by the primes.

To test this explanation, the second experiment involved 44 subjects watching a scene from Trainspotting to induce disgust, and then having half of them wash their hands immediately afterwards. Subjects were then asked to rate the same set of moral scenarios. The group that washed their hands again had a lower overall rating of immorality (M = 4.7), relative to the group that did not (M = 5.3), with the same pattern as experiment 1 emerging. To explain this finding, the authors say that moral cleanliness is more than a metaphor (restating their finding) and then reference the idea that humans are trying to avoid “animal reminder” disgust, which is a pretty silly idea for a number of reasons that I need not get into here (the short version is that it doesn’t sound like the type of thing that does anything useful in the first place).

Both studies, it seems, make some novel predictions and present a set of results that might not automatically occur to people. Novelty only takes us so far, though: neither study seems to move our understanding of moral judgments forward much, if at all, and neither one even manages to put forth a convincing explanation for its findings. Taking these results at face value (with such small sample sizes, it can be hard to say whether these are definitely ‘real’ effects, and some research on priming hasn’t been replicating so well these days), there might be some interesting things worth noting here, but the authors don’t manage to nail down what those things are. Without going into too much detail, the first study seems to be looking at what would be a byproduct of a system dedicated to assessing the risk of detection and condemnation for immoral actions. Simply put, the risks involved in immoral actions go down as the odds of being identified do, so when something lowers the odds of being detected – such as it being dark, or the anonymity that something like the internet or a mask can provide – one could expect people to behave in a more immoral fashion as well.

The internet can make monsters of us all.

In terms of the second study, the authors would likely be looking at another byproduct, this time of a system designed to avoid the perception of associations with morally-blameworthy others. As cleaning oneself can do things like remove evidence of moral wrongdoing, and thus lower the odds of detection and condemnation, one might feel a slightly reduced pressure to morally condemn others (as there is perceived to be less concrete evidence of an association). With respect to the idea of detection and condemnation, then, both studies might be considered to be looking at the same basic kind of byproduct. Of course, phrased in this light (“here’s a relatively small effect that is likely the byproduct of a system designed to do other things and probably has little to no lasting effect on real-world behavior”), neither study seems terribly “trailblazing”. For a journal that can boast about receiving roughly 3000 submissions a year and accepting only 11% of them for publication, I would think they could pass on such submissions in favor of research to which the label “groundbreaking” or “innovative” could be more accurately applied (unless these actually were the most groundbreaking of the bunch, that is). It would be a shame for any journal if genuinely good work were passed over because it seemed “too obvious” in favor of research that is cute, but not terribly useful. It also seems silly that it matters which journal one’s research is published in in the first place, career-wise, but so does washing your hands in a bright room so as to momentarily reduce the severity of moral judgments to some mild degree.

References: Banerjee, P., Chatterjee, P., & Sinha, J. (2012). Is it light or dark? Recalling moral behavior changes perception of brightness. Psychological Science, 23(4), 407-409. PMID: 22395128

Schnall, S., Benton, J., & Harvey, S. (2008). With a clean conscience: Cleanliness reduces the severity of moral judgments. Psychological Science, 19(12), 1219-1222. PMID: 19121126

This Is Water: Making The Familiar Strange

In the fairly-recent past, there was a viral video being shared across various social media sites called “This is Water”, by David Foster Wallace. The beginning of the speech tells a story of two fish who are oblivious to the water in which they exist, in much the same way that humans come to take the existence of the air they breathe for granted. The water is so ubiquitous that the fish fail to notice it; it’s just the way things are. The larger point of the video – for my present purposes – is that the inferences people make in their day-to-day lives are so automatic as to be taken for granted. Wallace correctly notes that there are many, many different inferences one could make about the people we see in our everyday lives: is the person in the SUV driving it because they fear for their safety, or are they selfish for driving that gas-guzzler? Is the person yelling at their kids not usually like that, or are they an abusive parent? There are two key points in all of this. The first is the aforementioned habit people have of taking for granted the very ability to draw these kinds of inferences in the first place; what Cosmides & Tooby (1994) call instinct blindness. Seeing, for instance, is an incredibly complex and difficult-to-solve task, but the only effort we perceive when it comes to vision involves opening our eyes: the seeing part just happens. The second, related point is the more interesting part to me: it involves the underdetermination of the inferences we draw from the information we’re provided. That is to say that no part of the observations we make (the woman yelling at her child) intrinsically provides us with good information to make inferences with (what is she like at other times?).

Was Leonidas really trying to give them something to drink?

There are many ways of demonstrating underdetermination, but visual illusions – like this one – prove to be remarkably effective in quickly highlighting cases where the automatic assumptions your visual system makes about the world cease to work. Underdetermination isn’t just a problem that needs to be solved with respect to vision, though: our minds make all sorts of assumptions about the world that we rarely find ourselves in a position to appreciate or even notice. In this instance, we’ll be considering some of the information our mind automatically fills in concerning the actions of other people. Specifically, we perceive our world along a dimension of intentionality. Not only do we perceive that individuals acted “accidentally” or “on purpose”, we also perceive that individuals acted to achieve certain goals; that is, we perceive “motives” in the behavior of others.

Knowing why others might act is incredibly useful for predicting and manipulating their future behavior. The problem that our minds need to solve, as you can no doubt guess by this point, is that intentions and motives are not readily observable from actions. This means that we need to do our best to approximate them from other cues, and that entails making certain assumptions about observable actions and the actors who bring them about. Without these assumptions, we would have no way to distinguish between someone killing in self-defense, killing accidentally, or killing just for the good old-fashioned fun of it. The questions for consideration, then, concern which kinds of assumptions tend to be triggered by which kinds of cues under what circumstances, as well as why they get triggered by that set of cues. Understanding what problems these inferences about intentions and motives were designed to solve can help us more accurately predict the form that these often-unnoticed assumptions will likely take.

While attempting to answer that question about what cues our minds use, one needs to be careful not to lapse into the automatically-generated inferences our minds typically make and remain instinct-blind. The reason that one ought to avoid doing this – in regard to inferences about intentions and motives – is made very well by Gawronski (2009):

“…how [do] people know that a given behavior is intentional or unintentional[?] The answer provided…is that a behavior will [be] judged as intentional if the agent (a) desired the outcome, (b) believed that the action would bring about the outcome, (c) planned the action, (d) had the skill to accomplish the action, and (e) was aware of accomplishing the outcome…[T]his conceptualization implies the risk of circularity, as inferences of intentionality provide a precondition for inferences about aims and motives, but at the same time inferences of intentionality depend on a perceiver’s inferences about aims and motives.”

In other words, people often attempt to explain whether or not someone acted intentionally by referencing motives (“he intended to harm X because he stood to benefit”), and they also often attempt to explain someone’s motives on the basis of whether or not they acted intentionally (“because he stood to benefit by harming X, he intended harm”). On top of that, you might also notice that inferences about motives and intentions are themselves derived, at least in part, from other, non-observable inferences about talents and planning. This circularity keeps us from reaching anything resembling a more complete explanation for what we perceive.

“It looks three-dimensional because it is, and it is 3-D because it looks like it”

Even if we ignore this circularity problem for the moment and just grant that inferences about motives and intentions can influence each other, there is also the issue of the multiple possible inferences which could be drawn about a behavior. For instance, if you observe a son push his father down the stairs and kill him, you could make several possible inferences about motives and intentions. Perhaps the son wanted money from an inheritance, resulting in his intending to push his father to cause death. However, pushing his father not only kills close kin, but also carries the risk of punishment. Since the son might have wanted to avoid punishment (and might well have loved his father), this would result in his not intending to push his father and cause death (i.e., maybe he tripped, which is what caused him to push). Then again, unlikely as it may sound, perhaps the son actively sought punishment, which is why he intended to push. This could go on for some time. The point is that, in order to reach any one of these conclusions, the mind needs to add information that is not present in the initial observation itself.

This leads us to ask what information is added, and on what basis? The answer to this question, I imagine, would depend on the specific inferential goals of the perceiver. One goal could be accuracy: people wish to try and infer the “actual” motivations and intentions of others, to the extent it makes sense to talk about such things. If it’s true, for instance, that people are more likely to act in ways that avoid something like their own bodily harm, our cognitive systems could be expected to pick up on that regularity and avoid drawing the inference that someone was intentionally seeking it. Accuracy only gets us so far, however, due to the aforementioned issue of multiple potential motives for acting: there are many different goals one might be intending to achieve and many different costs one might be intending to avoid, and these are not always readily distinguishable from one another. The other complication is that accuracy can sometimes get in the way of other useful goals. Our visual system, for instance, while not always accurate, might well be classified as honest. That is to say that though our visual system might occasionally get things wrong, it doesn’t tend to do so strategically; there would be no benefit to sometimes perceiving a shirt as blue and other times as red in the same lighting conditions.

That logic doesn’t always hold for perceptions of intentions and motives, though: intentionally committed moral infractions tend to receive greater degrees of moral condemnation than unintentional ones, and can make one seem like a better or worse social investment. Given that there are some people we might wish to see receive less punishment (ourselves, our kin, and our allies) and some we might wish to see receive more (those who inflict costs on us or our allies), we ought to expect our intention-perceiving systems to perceive identical sets of actions very differently, contingent on the nature of the actor in question. In other words, if we can persuade others about our intentions and motives, or the intentions and motives of others, and alter their behavior accordingly, we ought to expect perceptual biases that assist in those goals to start cropping up. This, of course, rests on the idea that other parties can be persuaded to share your sense of these things, which poses related problems, like: under what circumstances does it benefit other parties to develop one set of perceptions or another?

How fun this party is can be directly correlated to the odds of picking someone up.

I don’t pretend to have all the answers to questions like these, but they should serve as a reminder that our minds need to add a lot of structure to the information they perceive in order to do many of the things of which they are capable. Explanations for how and why we do things like perceive intentionality and motive need to be divorced from the feeling that such perceptions are just “natural” or “intuitive”; what we might consider the experience of the word “duh”. This is an especially large concern when you’re dealing with systems that are not guaranteed to be accurate or honest in their perceptions. The cues that our minds use to determine what motives people had when they acted and what they intended to do are by no means always straightforward, so saying that inferences are generated by “the situation” is unlikely to be of much help, on top of just being wrong.

References: Cosmides, L. & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41-77.

Gawronski, B. (2009). The Multiple Inference Model of Social Perception: Two Conceptual Problems and Some Thoughts on How to Resolve Them. Psychological Inquiry, 20, 24-29. DOI: 10.1080/10478400902744261

Sexed-Up Statistics – Female Genital Mutilation

“A lie can travel halfway around the world while the truth is putting on its shoes” – Mark Twain.

I had planned on finishing up another post today (which will likely be up tomorrow now) until a news story caught my eye this morning, changing my plans somewhat. The news story (found on Alternet) is titled, “Evidence shows that female genital cutting is a growing phenomenon in the US“. Yikes; that certainly sounds worrying. From that title and the subsequent article, the reader would seem likely to infer two things: (1) there is more female genital cutting in the US in recent years than there was in the past, and (2) some kind of evidence supports that claim. There were several facets of the article that struck me as suspect, however, most of which speak to the second point: I don’t think the author has the evidence required to substantiate their claims about FGC. Just to clear up a few initial points before moving forward with this analysis: no, I’m not trying to claim that FGC doesn’t occur at all in the US or on overseas trips from the US. Also, I personally oppose the practice in both the male and female varieties; cutting pieces off a non-consenting individual is, on my moral scale, a bad thing. My points here only concern accurate scholarship in reporting. They also raise the possibility that the problem may well be overstated – something which, I think, ought to be good news.

It means we can start with just the pitchforks; the torches aren’t required…yet.

So let’s look at the first major alarmist claim of the article: there was a report put out by the Sanctuary for Families that claimed approximately 200,000 women living in the US were at risk of genital cutting. That number sounds pretty troubling, but the latter part of the claim sounds a bit strange: what does “at risk” mean? I suppose, for instance, that I’m living “at risk” of being involved in a fatal car accident, just as everyone else who drives a car is. Saying that there are approximately 200,000,000 people in the US living at risk of a fatal car crash is useless on its own, though: it requires some qualification. So what’s the context behind the FGC number? The report itself references a 1997 paper by the CDC that estimated between 150,000 and 200,000 women in the US were at risk of being forced to undergo FGC (which we’ll return to later). Given that the reference for the claim is a paper by the CDC, it seems very peculiar that the Sanctuary for Families attaches a citation that instead directs you to another news site that just reiterates the claim.

This is peculiar for two reasons: first, it’s a useless reference. It would be a bit like my writing down on a sheet of paper, “I think FGC is on the rise” because I had read it somewhere, and then referencing the fact that I wrote that down when I say it again the next time. Without directing one to the initial source of the claim, it’s not a proper citation and doesn’t add any information. The second reason that the reference is peculiar is that the 1997 CDC paper (or at least what I assume is the paper) is actually freely available online. It took me all of 15 seconds to find it through a Google search. While I’m not prepared to infer any sinister motivation on the part of the Sanctuary for Families for not citing the actual paper, it does, I think, speak to the quality of scholarship that went into drafting the report, and in a negative way. It makes one wonder whether they actually read the key report in the first place.

Thankfully, the CDC paper does finally provide us with the context as to how the estimated number was arrived at. The first point worth noting is that the estimate the paper delivers (168,000) is a reflection of people living in the US who had either already undergone the procedure before they moved here or who might undergo it in the future (but not necessarily within the US). The estimate is silent on when or where the procedure might have taken place; if it happened in another country years or decades ago, it would be part of this estimate. In any case, the authors began with the 1990 census data of the US population. On the census, respondents were asked about their country of origin and how long they had lived in the US. From that data, the authors then cross-referenced the estimated rates of FGC in people’s home countries to estimate whether or not they were likely to have undergone the procedure. Further, the authors made the assumption in all of this that immigrants did not differ from the populations from which they were drawn with respect to their practicing of FGC: if 50% of the population in a family’s country of origin practiced it, then 50% of immigrants were expected to have practiced it or to do so in the future. In other words, the 168,000 number is an estimate, based on other estimates, based on an assumption.
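To make the structure of that estimate concrete, here is a minimal sketch of the calculation as I understand it. The country names, counts, and prevalence rates are made up for illustration; they are not the CDC’s actual figures.

```python
# Hypothetical numbers only; the CDC's actual estimate used 1990 census counts
# and country-specific FGC prevalence figures.
immigrant_women = {        # women living in the US, by (hypothetical) country of origin
    "Country A": 60_000,
    "Country B": 25_000,
    "Country C": 10_000,
}
fgc_prevalence = {         # estimated FGC prevalence in each country of origin
    "Country A": 0.90,
    "Country B": 0.50,
    "Country C": 0.05,
}

# Key assumption: immigrants practice FGC at the same rate as their home populations.
estimate = sum(count * fgc_prevalence[country]
               for country, count in immigrant_women.items())
print(f"Women who have undergone or might undergo FGC: {estimate:,.0f}")
```

Notice that the output lumps together women who underwent the procedure abroad, possibly decades ago, with women who might undergo it somewhere at some point in the future – which is why the number says very little about cutting actually performed in the US.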

It’s an impressive number, but I worry about its foundation.

I would call this figure, well, a very rough estimate, and not exactly solid evidence. Further, it’s an estimate of FGC in other countries; not in the US. The authors of the CDC paper were explicit about this point, writing, “No direct information is available on FGC in the United States”. It is curious, then, that the Sanctuary report and the Alternet article both invoke the threat of FGC that girls in the US face while citing the CDC estimate. For example, here’s how the Sanctuary report phrased the estimate:

In 1997, however, the Centers for Disease Control and Prevention (CDC) estimated that as many as 150,000 to 200,000 girls in the United States were at risk of being forced to undergo female genital mutilation.

See the important differences? The CDC estimate wasn’t one concerning people at risk of being forced to undergo the practice; it was an estimate of people who might undergo it and of people who might have already undergone it at some point in the past in some other country. Indeed, the CDC document could more accurately be considered an immigration report, rather than a paper on FGC itself. So, when the Sanctuary report and Alternet article suggest that the number of women at risk for FGC is rising, what they appear to mean is that immigration from certain countries where the practice is more common is rising, but that doesn’t seem to have quite the same emotional effect. Importantly, the level of risk isn’t ever qualified. Approximately 200,000,000 people are “at risk” of being involved in a fatal car crash; how many of them actually are involved in one? (About 40,000 a year, and that number is on the decline.) So how many of the 168,000 women “at risk” for FGC have already undergone it, how many might still be “at risk”, and how many of those “at risk” end up actually undergoing the procedure? Good evidence is missing on these points.

This kind of not-entirely-accurate reporting reminds me of a piece by Neuroskeptic on what he called “sexed-up statistics”. These are statistics presented or reported on in such a way as to make some problem seem as bad as possible, most likely with the goal of furthering some social, political, or funding agenda (big problems attract money for their solution). It has come up before in the debate over the wage gap between men and women, and when considering the extent of rape among college-aged (and non-college-aged) women, to name just two prominent cases. This ought not be terribly surprising in light of the fact that the pursuit of dispassionate accuracy is likely not the function of human reasoning. The speed with which people can either accept or reject previously-unknown information (such as the rate of FGC in the US and whether it’s a growing problem) tells us that concerns for accuracy per se are not driving these decisions. This is probably why the initial quote by Mark Twain carries the intuitive appeal that it does.

“Everyone but me and the people I agree with are so easily fooled!”

FGC ought to be opposed, but it’s important not to let one’s opposition to it (or, for that matter, one’s opposition or support for any other specific issue) get in the way of accurately considering and reporting on the evidence at hand (or at least doing the best one can in that regard). The evidence – and that term is used rather loosely here – certainly does not show that illegal FGC is a “growing phenomenon in the US”, as Jodie at Alternet suggests. How could the evidence already show it to be a growing problem if one grants that determining the initial and current scope of the problem hasn’t been done and couldn’t even feasibly be done? As far as the “evidence” suggests, the problem could be on the rise, on the decline, or have remained static. One of those options just happens to make for the “sexier” story; the story more capable of making its way halfway around the world in an instant.

Mathematical Modeling Of Menopause

Some states of affairs are so ubiquitous in the natural world that – much like the air we breathe – we stop noticing their existence or finding them particularly strange. The effects of aging are good examples of this. All else being equal, we ought to expect organisms that are alive longer to reproduce more. The longevity/reproduction link would seem to make the previously-unappreciated question of why organisms’ bodies tend to break down over time rather salient: why do organisms grow old and frail, with one or more homeostatic systems eventually failing, if being alive tends to aid in reproduction? One candidate explanation for understanding senescence involves considering the trade-off between the certainty of the present and the uncertainty of the future; what we might consider the discount rate of life. Each day, our bodies need to avoid death from a variety of sources, such as accidental injuries, intentional injuries from predators or conspecifics, the billions of hungry microorganisms we encounter, or lacking access to sufficient metabolic resources. Despite the whole world seemingly trying to kill us constantly, our bodies manage to cheat death pretty well, all things considered.

“What do we say to death? Not to..OH MY GOD, WHAT’S BEHIND YOU?”

Of course, we don’t always manage to avoid dying: we get sick, we get into fights, and sometimes we jump out of airplanes for fun. Each new day, then, brings new opportunities that might result in the less-than-desirable outcome, and the future is full of new days. This makes each day in the future that much less valuable than each day in the present, as future days come with the same potential benefits, but all of the added cumulative risk. Given the uncertainty of the future, it follows that some adaptations might be designed to increase our chances of being alive today, even if they decrease our odds of being alive tomorrow. These adaptations may well explain why we age the way we do. They would be expected to make us age in very specific ways, though: all our biological systems ought to be expected to break down at roughly the same time. This is because investing tons of energy into making a liver that never breaks doesn’t make much sense if the lungs give out too easily, as the body with the well-functioning liver would die all the same without the ability to breathe; better to divert some of that energy from liver maintenance to lung function.
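One way to put a minimal formalization on that “discount rate of life” intuition (my own gloss on the logic above, not a formula taken from any particular paper): if the probability of surviving any given day is $p < 1$, then a benefit $b$ arriving $t$ days from now is worth, in expectation, only

$$V(t) = p^{t}\,b,$$

which shrinks geometrically the further out the benefit sits. That is the sense in which a trait that pays off today, at the price of faster breakdown decades from now, can still come out ahead.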

As noted previously, however, being alive is only useful from an evolutionary perspective if being alive means better genetic representation in the future. The most straightforward way of achieving said genetic representation is through direct reproduction. This makes human menopause a very strange phenomenon indeed. Why do females’ reproductive capabilities shut off decades before the rest of their bodies tend to? That pattern of loss of function looks a lot like the mismatched liver/lungs scenario that selection ought to avoid. Further, as the use of the word ‘human’ suggests, this cessation of reproductive abilities is not well-documented among other species. It’s not that other species don’t ever lose the capacity for reproduction, mind you, just that they tend to lose it much closer to the point at which they would die anyway. This adds a second part to our initial question concerning the existence of menopause: why does it seem to only really happen in humans?

Currently, the most viable explanation is known as “The Grandmother Hypothesis“. The hypothesis suggests that, due to the highly-dependent nature of human offspring and the risks involved in pregnancy, it became adaptive for women to cease focusing on producing new offspring of their own and shift their efforts towards investing in their existing offspring and grandoffspring. At its core, the grandmother hypothesis is just an extension of kin selection: the benefits of helping relatives begin to exceed the benefits of direct reproduction. While this hypothesis may well prove not to be the full story, it does have two major considerations going for it: first, it explains the loss of reproductive capacity through a tradeoff – time spent investing in new offspring is time not spent investing in existing ones. It doesn’t commit what I would call the “dire straits fallacy” by trying to get something for free, as some standard psychology ideas (like depressive realism) seem to. The second distinct benefit of this hypothesis is perhaps more vital, however: it explains why menopause appears to be rather human-specific by referencing something unique to humans – extremely altricial infants that are risky to give birth to.

A fairly accurate way to conceptualize the costs of the pregnancy-through-college years.

A new (and brief) paper by Morton, Stone, & Singh (2013) sought to examine another possible explanation for menopause: mate choice on the part of males. The authors used mathematical models to attempt to demonstrate that, assuming men have a preference for young mates, mutations that had deleterious effects on women’s fertility later in life could drift to fixation. Though the authors aren’t explicit on this point, they seem to be assuming, de facto, that human female menopause is a byproduct of senescence plus a male sexual preference for younger women, as without this male sexual preference, their simulated models failed to produce female menopause. They feel their models demonstrate that you don’t necessarily need something like a grandmother hypothesis to explain menopause. My trust in results derived from mathematical models like these is shaky at the best of times, so it should come as no surprise that I found this explanation lacking on three rather major fronts.

My first complaint is that while their model might show that – provided certain states of affairs held – explanations like the grandmother hypothesis need not be necessary, they fail to rule out the grandmother hypothesis in any empirical or theoretical way; they don’t bother to demonstrate that their state of affairs actually held. Why that’s a problem is easy to recognize: it would be trivial to concoct a separate mathematical model that “demonstrated” the strength of the grandmother hypothesis by making a different set of assumptions (such as by assuming that, past a certain age, investments in existing offspring returned more than investments in new ones). Yes; to do so would be pure question-begging, and I fail to see how the initial model provided by Morton et al. (2013) isn’t doing just that.
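To illustrate how much of the work the assumptions do in models of this kind, here is a toy Wright–Fisher-style sketch of my own (not the authors’ actual model, and the parameter values are arbitrary). The only “biology” in it is the effective selection coefficient you assign to a late-acting fertility mutation – which is precisely the thing the male-preference assumption decides in advance.

```python
import random

def fixation_rate(pop_size=200, init_freq=0.02, s=0.0, trials=1000):
    """Toy Wright-Fisher simulation: fraction of runs in which an allele fixes.

    s is the effective selection coefficient on the allele. Setting s = 0 encodes
    the assumption that a male preference for young mates removes any selection
    against a late-acting fertility mutation; s < 0 encodes the opposite assumption.
    All numbers here are arbitrary illustrations, not values from the paper.
    """
    fixed = 0
    for _ in range(trials):
        freq = init_freq
        while 0.0 < freq < 1.0:
            # Selection: carriers reproduce with relative fitness (1 + s).
            weighted = freq * (1 + s)
            freq = weighted / (weighted + (1 - freq))
            # Drift: binomial resampling of the next generation.
            freq = sum(random.random() < freq for _ in range(pop_size)) / pop_size
        fixed += freq == 1.0
    return fixed / trials

print(fixation_rate(s=0.0))    # roughly init_freq (~0.02): drift alone can fix the allele
print(fixation_rate(s=-0.05))  # much lower: even mild selection usually purges it
```

The “conclusion” – whether menopause-causing mutations can drift to fixation – follows almost directly from the value of s you feed in, which is why demonstrating that the assumed state of affairs actually held matters far more than the math itself.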

My second complaint is that, like the grandmother hypothesis, Morton et al’s (2013) byproduct model does consider tradeoffs, avoiding the dire straits fallacy; unlike the grandmother hypothesis, however, the byproduct account fails to posit anything human-specific about menopause. It seems to me that the explanation on offer from the byproduct account could be applied to any sexually-reproducing species, and trying to explain a relatively human-specific trait with a non-human-specific selection pressure isn’t as theoretically sound as I would like. “But”, Morton et al might object, “we do posit a human-specific trait: a male preference for young female mates“. A fine rebuttal, complicated only by the fact that this is actually the weakest point of the paper. The authors appear to be trying to use an unexplained preference to explain the decline in fertility, when it seems the explanation ought to run in precisely the opposite direction. If, as the model initially assumes, ancestral females did not differ substantially in their fertility with respect to age, how would a male preference for younger females ever come to exist in the first place? What benefits would accrue to men who shunned older – but equally fertile – women in favor of younger ones? It’s hard to say. By contrast, if our starting point is that older females were less fertile, a preference for younger ones is easily explained.

No amount of math makes this an advisable idea.

Preferences are not explanations in themselves; they require explanations. Much like aging, however, preferences can be taken for granted because of how common they are (like the human male’s tendency to find females of certain ages maximally attractive), and that basic fact gets forgotten in the process. The demonstration that male mating preferences could have been the driving force behind the existence of menopause, then, seems empty. The model, like many others that I’ve encountered, seems to do little more than restate the authors’ initial assumptions as conclusions, just in the language of math rather than English. As far as I can see, the model makes no testable or novel predictions, and only manages to reach that point by assuming a maladaptive, stable preference on the part of men. I wouldn’t mark it down as a strong contender for helping us understand the mystery of menopause.

References: Morton, R., Stone, J., & Singh, R. (2013). Mate Choice and the Origin of Menopause. PLoS Computational Biology, 9(6). DOI: 10.1371/journal.pcbi.1003092

How Hard Is Psychology?

The scientific method is a pretty useful tool for assisting people in doing things like testing hypotheses and discerning truth – or coming as close to it as one can. Like the famous Churchill quote about democracy, the scientific method is the worst system we have for doing so, except for all the others. That said, the scientists who use the method are often not doing so in the single-minded pursuit of truth. Perhaps phrased more aptly, testing hypotheses is generally not done for its own sake: people testing hypotheses are typically doing so for other reasons, such as raising their status and furthering their careers in the process. So, while the scientific method could be used to test any number of hypotheses, scientists tend to try to use it for certain ends and to test certain types of ideas: those perceived to be interesting, novel, or useful. I imagine none of that is particularly groundbreaking information to most people: science in theory is different from science in practice. A curious question, then, is this: given that we ought to expect scientists from all fields to use the method for similar reasons, why are some topics to which the scientific method is applied viewed as “soft” and others as “hard” (like psychology and physics, respectively)?

Very clever, Chemistry, but you’ll never top Freud jokes.

One potential reason for this impression is that these non-truth-seeking (what some might consider questionable) uses to which people attempt to put the scientific method could simply be more prevalent in some fields, relative to others. The further one strays from science in theory to science in practice, the softer your field might be seen as being. If, for instance, psychology were particularly prone to biases that compromise the quality or validity of its data, relative to other fields, then people would be justified in taking a more critical stance towards findings from it. One of those possible biases involves tending to report only the data consistent with one hypothesis or another. As the scientific method requires reporting the data that is both consistent and inconsistent with one’s hypothesis, if only the former is being done, then the validity of the method can be compromised and you’re no longer doing “hard” science. A 2010 paper by Fanelli provides us with some reason to worry on that front. In that paper, Fanelli examined approximately 2500 papers randomly drawn from various disciplines to determine the extent to which positive results (those which statistically support one or more of the hypotheses being tested) dominate the published literature. The Psychology/Psychiatry category sat at the top of the list, with 91.5% of all sampled published papers reporting positive results.

While that number may seem high, it is important to put the figure into perspective: the field at the bottom of that list – the one which reported the fewest positive results overall – was the Space Sciences, with 70.2% of all the sampled published work reporting positive results. Other fields ran a relatively smooth line between those upper and lower limits, so the extent to which fields differ in how much positive results dominate is a matter of degree; not kind. Physics and Chemistry, for instance, both ran about 85% in terms of positive results, despite both being considered “harder” sciences than psychology. Now that the 91% figure might seem a little less worrying, let’s add some more context to reintroduce the concern: those percentages only consider whether any positive results were reported, so papers that tested multiple hypotheses tended to have a better chance of reporting something positive. It also happened that papers within psychology tended to test more hypotheses on average than papers in other fields. When correcting for that issue, positive results in psychology were approximately five times more likely than positive results in the space sciences. By comparison, positive results in physics and chemistry were only about two-and-a-half times more likely. How much cause for concern should this bring us?
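For a rough sense of where multipliers like “five times more likely” come from, here is a back-of-the-envelope odds comparison using only the raw percentages quoted above; Fanelli’s actual figures come from a regression that also adjusts for things like the number of hypotheses tested per paper, so treat this as illustrating the logic rather than reproducing the paper’s analysis.

```python
# Convert the quoted positive-result rates into odds and compare them.
def odds(p):
    return p / (1 - p)

psych, space, phys_chem = 0.915, 0.702, 0.85  # Psychology/Psychiatry, Space Science, Physics/Chemistry

print(round(odds(psych) / odds(space), 1))      # ~4.6: psychology vs. space science
print(round(odds(phys_chem) / odds(space), 1))  # ~2.4: physics/chemistry vs. space science
```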

There are two questions to consider before answering that last one: (1) what are the causes of these different rates of positive results, and (2) are these differences in positive results driving the perception among people that some sciences are “softer” than others? Taking these in order, there are still more reasons to worry about the prevalence of positive results in psychology: according to Fanelli, studies in psychology tend to have lower statistical power than studies in the physical sciences. Lower statistical power means that, all else being equal, psychological research should find a smaller – not greater – percentage of positive results overall. If psychological studies tend not to be as statistically powerful, where else might the causes of the high proportion of positive results reside? One possibility is that psychologists are particularly likely to be predicting things that happen to be true. In other words, “predicting” things in psychology tends to be easy because hypotheses tend to be made only after a good deal of anecdata has been “collected” through personal experience (incidentally, personal experience is a not-uncommonly cited source of research hypotheses within psychology). Essentially, then, predictions in psychology are being made once a good deal of data is already in, at least informally, making them less predictions and more restatements of already-known facts.

“I predict that you would like a psychic reading, on the basis of you asking for one, just now.”

A related possibility is that psychologists might be more likely to engage in outright dishonest tactics, such as actually collecting their data formally first (rather than just informally), and then making up “predictions” that restate their data after the fact. To the extent that publishers within different fields are more or less interested in positive results, we ought to expect researchers within those fields to attempt this kind of dishonesty on a correspondingly greater scale (it should be noted, however, that the data is still the data, regardless of whether it was predicted ahead of time, so the effects on truth-value ought to be minimal). Though greater amounts of outright dishonesty are a possibility, it is unclear why psychology would be particularly prone to it, relative to any other field, so it might not be worth worrying too much about. Another possibility is that psychologists are particularly prone to using questionable statistical practices that tend to boost their false-positive rates substantially, an issue which I’ve discussed before.

There are two issues, above all the others, that stand out to me, though, and they might help to answer the second question – why psychology is viewed as “soft” and physics as “hard”. The first issue has to do with what Fanelli refers to as the distinction between the “core” and the “frontier” of a discipline. The core of a field of study represents the agreed-upon theories and concepts on which the field rests; the frontier, by contrast, is where most of the new research is being conducted and new concepts are being minted. Psychology, as it currently stands, is largely frontier-based. This lack of a core can be exemplified by a recent post concerning “101 great insights from psychology 101“. In the list, you’ll find the word “theory” used a total of three times, and two of those mentions concern Freud. If you consider the plural – “theories” – instead, you’ll find five novel uses of the term, four of which mention no specific theory. The extent to which the remaining two uses represent actual theories, as opposed to redescriptions of findings, is another matter entirely. If one is left with only a core-less frontier of research, that could well send the message that the people within the field don’t have a good handle on what it is they’re studying, thus the “soft” reputation.

The second issue involves the subject matter itself. The “soft” sciences – psychology and its variants (like sociology and economics) – seem to dabble in human affairs. This can be troublesome for more than one reason. A first reason might involve the fact that the other humans reading about psychological research are all intuitive psychologists, so to speak. We all have an interest in understanding the psychological factors that motivate other people in order to predict what they’re going to do. This seems to give many people the impression that psychology, as a field, doesn’t have much new information to offer them. If they can already “do” psychology without needing explicit instructions, they might come to view psychology as “soft” precisely because it’s perceived as being easy. I would also note that this suggestion ties neatly into the point about psychologists possibly tending to make many predictions based on personal experience and intuitions. If the findings they are delivering tend to give people the impression that “Why did you need research? I could have told you that”, that ease of inference might cause people to give psychology less credit as a science.

“We go to the moon because it is hard, making physics a real science”

The other standout reason why psychology might strike people as soft is that, on top of trying to understand other people’s psychological goings-on, we also try to manipulate them. It’s not just that we want to understand why people support or oppose gay marriage, for instance; it’s that we might also want to change their points of view. Accordingly, findings from psychology tend to speak more directly to issues people care a good deal about (like sex, drugs, and moral goals; most people don’t seem to argue over the latest implications of chemistry research), which might make people either (a) relatively resistant to the findings or (b) relatively accepting of them, contingent more on their personal views and less on the scientific quality of the work itself. This means that, in addition to many people having a reaction of “that is obvious” with respect to a good deal of psychological work, they also have the reaction of “that is obviously wrong”, neither of which makes psychology look terribly important.

It seems likely to me that many of these issues could be mitigated by the addition of a core to psychology. If results needed to fit into theory, various statistical manipulations might become somewhat easier to spot. If students were learning how to think about psychology, rather than memorizing lists of findings they often feel are trivial or obviously wrong, they might come away with a better impression of the field. Now if only a core could be found…

References: Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4). PMID: 20383332

When (And Why) Is Discrimination Acceptable?

As a means of humble-bragging, I like to tell people that I have been rejected from many prestigious universities; the University of Pennsylvania, Harvard, and Yale are all on that list. Also on that list happens to be the University of New Mexico, home of one Geoffrey Miller. Very recently, Dr. Miller found himself in a bit of moral hot water over what seems to be an ill-conceived tweet. It reads as follows: “Dear obese PhD applicants: if you don’t have enough willpower to stop eating carbs, you won’t have the willpower to do a dissertation #truth”. Miller subsequently deleted the tweet and apologized for it in two follow-up tweets. Now, as I mentioned, I’ve previously been rejected from Miller’s lab – on more than one occasion, mind you (I forget whether it was three or four times now) – so clearly, I was discriminated against. Indeed, policies for discriminating between applicants are vital to anyone, university or otherwise, with open positions to fill. When you have 10 slots open and approximately 750 applications, you need some way of discriminating between them (and whatever method you use will disappoint approximately 740 of them). Evidently, being obese is one characteristic that people found morally unacceptable to even jokingly suggest you were discriminating on the basis of. This raises the question of why.

Oh no; someone’s going to get a nasty email…

Let’s start with a related situation: it’s well known that many universities make use of standardized test scores, such as the SAT or GRE, in order to screen out applicants. As a general rule, this doesn’t tend to cause much moral outrage, though it does cause plenty of frustration. One could – and many do – argue that using these scores is not only morally acceptable, but appropriate, given that they predict some facets of performance at school-related tasks. While there might be some disagreement over whether or not the tests are good enough predictors of performance (or whether they’re predicting something conceptually important), there doesn’t appear to be much disagreement about whether they may be used, from a moral standpoint. That’s a good principle to start the discussion over the obesity comment with, isn’t it? If you have a measure that’s predictive of some task-relevant skill, it’s OK to use it.

Well, not so fast. Let’s say, for the sake of this argument, that obesity was actually a predictor of graduate school performance. I don’t know if there’s actually any predictive value there, but let’s assume there is and, just for the sake of this example, let’s assume that being obese was indicative of doing slightly worse at school, as Geoffrey suggested; why it might have that effect is, for the moment, of no importance. So, given that obesity could, to some extent, predict graduate school performance, should schools be morally allowed to use it in order to discriminate between potential applicants?

I happen to think the matter is not nearly so simple as predictive value. For starters, there doesn’t seem to be any widely agreed-upon rule as to precisely how predictive some variable needs to be before its use is deemed morally acceptable. If obesity could, controlling for all other variables, predict an additional 1% of the variance in graduate performance, should applications start including boxes for height and weight? While 1% might not seem like a lot, if you could give yourself a 1% better chance at succeeding at some task for free (landing a promotion, getting hired, avoiding being struck by a car or, in this case, admitting a productive student), it seems like almost everyone would be interested in doing so; ignoring or avoiding useful information would be a very curious route to opt for, as it only ensures that, on the whole, you make worse decisions than if you had considered it. One could play around with the numbers to try and find some threshold of acceptability (e.g., what if it could predict 10%, or only 0.1%?) to help drive the point home. In any case, there are a number of different factors which could predict graduate school performance in different respects: previous GPAs, letters of recommendation, other reasoning tasks, previous work experience, and so on. However, to the best of my knowledge, no one is arguing that it would be immoral to use any of them other than the single best predictor (or the top X predictors, or the second best if you aren’t using the first, and so on). The core of the issue seems to center on obesity, rather than on discriminant validity per se.
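For what it’s worth, the “even 1% is worth having” intuition is easy to check with a toy simulation (mine, with made-up numbers; nothing here comes from any real admissions data): give applicants one strong predictor, one weak trait explaining roughly 1% of the variance in performance, and see how the admitted class fares when the weak cue is used versus ignored.

```python
# Illustrative sketch (numbers invented for the example): a predictor explaining
# only ~1% of the variance in performance still improves admissions, on average.
import numpy as np

rng = np.random.default_rng(1)
n_applicants, n_slots, n_cycles = 750, 10, 2000
gains = []

for _ in range(n_cycles):
    strong = rng.normal(size=n_applicants)   # e.g., a GRE-like composite
    weak = rng.normal(size=n_applicants)     # hypothetical trait explaining ~1% of variance
    noise = rng.normal(size=n_applicants)
    performance = np.sqrt(0.30) * strong + np.sqrt(0.01) * weak + np.sqrt(0.69) * noise

    top_without = np.argsort(strong)[-n_slots:]              # admit ignoring the weak cue
    top_with = np.argsort(strong + 0.18 * weak)[-n_slots:]   # admit weighting it roughly optimally
    gains.append(performance[top_with].mean() - performance[top_without].mean())

print(np.mean(gains))  # small but positive: admits are slightly better when the cue is used
```

The gain per admissions cycle is tiny, but it is reliably in the right direction, which is the whole point: throwing away valid information only makes the average decision worse.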

*May also apply to PhD applications.

Thankfully, there is some research we can bring to bear on the matter. The research comes from a paper by Tetlock et al (2000), who were examining what they called “forbidden base rates” – an issue I touched on once before. In one study, Tetlock et al presented subjects with an insurance-related case: an insurance executive had been tasked with assessing how to charge people for insurance. Three towns had been classified as high-risk (10% chance of experiencing fires or break-ins), while another three had been classified as low-risk (less than 1% chance). Naturally, you would expect that anyone trying to maximize their risk-to-profit ratio would charge different premiums, contingent on risk. If one is not allowed to do so, they’re left with the choice of offering coverage at a price that’s too low to be sustainable for them or too high to be viable for some of their customers. While you don’t want to charge low-risk people more than you need to, you also don’t want to under-charge the high-risk ones and risk losing money. Price discrimination in this example is a good thing.
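A bit of back-of-the-envelope arithmetic shows why a single flat premium is a losing proposition (the dollar figures below are mine, purely for illustration; only the 10% and roughly-1% risk rates come from the scenario):

```python
# Back-of-the-envelope sketch with an invented claim size: one flat premium either
# loses money on high-risk towns or badly overcharges low-risk ones.
avg_claim = 20_000          # hypothetical payout for a fire or break-in

expected_cost_high = 0.10 * avg_claim   # $2,000 in expected claims per high-risk policy
expected_cost_low  = 0.01 * avg_claim   # $200 in expected claims per low-risk policy

flat_premium = (expected_cost_high + expected_cost_low) / 2   # one price for everyone

print(flat_premium - expected_cost_high)  # -900: the insurer loses money on each high-risk policy
print(flat_premium - expected_cost_low)   # +900: each low-risk customer overpays by the same margin
```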

The twist was that these classifications of high- and low-risk either happened to correlate along racial lines or they did not, despite there being no a priori interest in discriminating against any one race. When faced with this situation, something interesting happens: compared to conservatives and moderates, when confronted with data suggesting black people tended to live in the high-risk areas, liberals tended to advocate for disallowing the use of the data to make profit-maximizing economic choices. This effect was not present, however, when the people being discriminated against in the high-risk areas happened to be white.

In other words, people don’t seem to have an issue with the idea of using useful data to discriminate amongst groups of people per se, but if that discrimination ends up affecting the “wrong” group, it can be deemed morally problematic. As Tetlock et al (2000) argued, people are viewing certain types of discrimination not as “tricky statistical issues” but rather as moral ones. The parallels to our initial example are apparent: even if discriminating on the basis of obesity could provide us with useful information, the act itself is not morally acceptable in some circles. Why people might view discrimination against obese people as morally offensive is itself a separate matter. After all, as previously mentioned, people tend to have no moral problem with tests like the GRE that discriminate not on weight, but on other characteristics, such as working memory, information processing speed, and a number of other difficult-to-change factors. Unfortunately, people tend to not have much in the way of conscious insight into how their moral judgments are arrived at and what variables they make use of (Hauser et al, 2007), so we can’t just ask people about their judgments and expect compelling answers.

Though I have no data bearing on the subject, I can make some educated guesses as to why obesity might have moral protection: first, and perhaps most obvious, is that people with moral qualms about discrimination along the weight dimension might themselves tend to be fat or obese and would prefer to not have that count against them. In much the same way, I’m fairly confident that we could expect people who scored low on tests like the GRE to downplay their validity as a measure and suggest that schools really ought to be looking at other factors to determine admission criteria. Relatedly, people might also have friends or family members who are obese, so they adopt moral stances against discrimination that would ultimately harm their social ingroup. If such groups become prominent enough, siding against them becomes progressively costlier. Adopting a moral rule disallowing discrimination on the basis of weight can spread in those cases, even if enforcing that rule is personally costly, because not adopting the rule can end up being an even greater cost (as evidenced by Geoffrey currently being hit with a wave of moral condemnation for his remarks).

Hopefully it won’t crush you and drag you to your death. Hang ten.

As to one final matter, one could be left wondering why the moralization of judgments concerning certain traits – like obesity – can be successful, whereas the moralization of judgments based on other traits – like whatever the GRE measures – doesn’t obtain. My guess in that regard is that some traits simply affect more people, or affect them in much larger ways, and that can have major consequences for the value of an individual adopting certain moral rules. For instance, being obese affects many areas of one’s life, such as mating prospects and mobility, and weight cannot easily be hidden. On the other hand, something like GRE scores affects very little (really, only graduate school admissions) and is not readily observable. Accordingly, the former manages to create a “better” victim of discrimination: one that is proportionately more in need of assistance and, because of that, more likely to reciprocate any given assistance in the future (all else being equal). Such a line of thought might well explain the aforementioned difference we see in judgments between racial discrimination being unacceptable when it predominately harms blacks, but fine when it predominately harms whites. So long as the harm isn’t perceived as great enough to generate an appropriate amount of need, we can expect people to be relatively indifferent to it. It just doesn’t create the same social-investment potential in all cases.

References: Hauser, M., Cushman, F., Young, L., Kang-Xing Jin, R., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22, 1-21.

Tetlock, P., Kristel, O., Elson, S., Green, M., & Lerner, J. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78 (5), 853-870 DOI: 10.1037//0022-3514.78.5.853

Why Are They Called “Spoilers”?

Imagine you are running experiments with mice. You deprive the mice of food until they get hungry and then you drop them into a maze. Now obviously the hungry mice are pretty invested in the idea of finding the food; you have been starving them and all. You’re not really that evil of a researcher, though: in one group, you color-code the maze so the mice always know where to go to find the reward. The mice, I expect, would not be terribly bothered by your providing them with information and, if they could talk, I doubt many of them would complain about your “spoiling” the adventure of finding the food themselves. In fact, I would also expect most people would respond the same way when they were hungry: they would rather you provide them with the information they sought directly instead of having to make their own way through the pain of a maze (or some equally-annoying psychological task) before they could eat. We ought to expect this because, in this instance as in many others, having access to greater quantities of accurate information allows you to do more useful things with your time. Knowing where food is cuts down on your required search time, which allows you to spend that time in other, more fruitful ways (like doing pretty much anything an undergraduate can do that doesn’t involve serving as a participant for psychologists). So what are we to make of cases where people seem to actively avoid such information and claim they find it aversive?

Spoiler warning: If you would rather formulate your own ideas first, stop reading now.

The topic arose for me lately in the context of the upcoming E3 event, where the next generation of video games will be previewed. There happens to be one game in particular I find myself heavily invested in and, for whatever reason, I find myself wary of tuning into E3 due to the risk of inadvertently exposing myself to any more content from that game. I don’t want to know what the story is; I don’t want to see any more gameplay; I want to remain as ignorant as possible until I can experience the game firsthand. I’m also far from alone in that experience: of the approximately 40,000 people who have voiced their opinions, a full half reported that they found spoilers unpleasant. Indeed, the word that refers to the leaking of crucial plot details itself implies that the experience of learning them can ruin the pleasure that finding them out for yourself would bring, in much the same way that microorganisms make food unpalatable or dangerous to ingest. Am I, along with those other 20,000, simply mistaken? That is, do spoilers actually make the experience of reading some book or playing some video game any less pleasant? At least two people think the answer is “yes”.

Leavitt & Christenfeld (2011) suggest that spoilers, in fact, do not make the experience of a story any less pleasant. After all, the authors mention, people are perfectly willing to experience stories again, such as by rereading a book, without any apparent loss of pleasure from the story (curiously, they cite no empirical evidence on this front, making it an untested assumption). Leavitt & Christenfeld also suggested that perceptual fluency (in the form of familiarity) with a story might make it more pleasant because the information subsequently becomes easier to process. Finally, the pair appear all but entirely uninterested in positing any reasons as to why so many people might find spoilers unpleasant. The most they offer up is the possibility that suspense might have something to do with it, but we’ll return to that point later. The authors, like your average person discussing spoilers, didn’t offer anything resembling a compelling reason as to why people might not like them. They simply note that many people think spoilers are unpleasant and move on.

In any case, to test whether spoilers really spoiled things, they recruited approximately 800 subjects to read a series of short stories, some of which came with a spoiler, some of which came without, and some in which the spoiler was presented as the opening paragraph of the short story itself. These stories were short indeed: between 1,400 and 4,200 words apiece, which amounts to somewhere between the approximate length of this post and about three of them. I think this happens to be another important detail to which I’ll return later (as I have no intention of spoiling my ideas fully yet). After the subjects had read each story, they rated how much they enjoyed it on a scale of 1 to 10. Across all three types of stories that were presented – mysteries, ironic twists, and literary ones – subjects actually reported liking the spoiled stories somewhat more than the non-spoiled ones. The difference was slight, but significant, and certainly not in the spoilers-are-ruining-things direction. From this, the authors suggest that people are, in fact, mistaken in their beliefs about whether spoilers have any adverse impact on the pleasure one gets from a story. They also suggest that people might like birthday presents more if they were wrapped in clear cellophane.

Then you can get the disappointment over with much quicker.

Is this widespread avoidance of spoilers just another example of quirky, “irrational” human behavior, then, born from the fact that people tend to not have side-by-side exposure to both spoiled and non-spoiled versions of a story? I think Leavitt & Christenfeld are being rather hasty in their conclusion, to put it mildly. Let’s start with the first issue: when it comes to my concern over watching the E3 coverage, I’m not worried about getting spoilers for any and all games. I’m worried about getting spoilers for one specific game, and it’s a game from a series I already have a deep emotional commitment to (Dark Souls, for the curious reader). When Harry Potter fans were eagerly awaiting the moment they got to crack open the next new book in the series, I doubt they would have cared much one way or the other if you told them about the plot of the latest Die Hard movie. Similarly, a hardcore Star Wars fan would probably not have enjoyed someone leaving the theater in 1980 blurting out that Darth Vader was Luke’s father; by comparison, someone who didn’t know anything about Star Wars probably wouldn’t have cared. In other words, the subjects likely had absolutely no emotional attachment to the stories they were reading and, as such, the information they were being given was not exactly a spoiler. If the authors weren’t studying what people would typically consider aversive spoilers in the first place, then their conclusions about spoilers more generally are misplaced.

One of the other issues, as I hinted at before, is that the stories themselves were all rather short. It would take no more than a few minutes to read even the longest of them. This lack of investment of time could cause a major issue for the study but, as the authors didn’t posit any good reasons for why people might not like spoilers in the first place, they didn’t appear to give the point much, if any, consideration. Those who care about spoilers, though, seem to be those who consider themselves part of some community surrounding the story; people who have made some lasting emotional connection with it, along with at least a moderately deep investment of time and energy. At the very least, people have generally selected the story to which they’re about to be exposed themselves (which is quite unlike being handed a preselected story by an experimenter).

If the phenomenon we’re considering appears to be a costly act with no apparent compensating benefits – like actively avoiding free information that would otherwise require a great deal of temporal investment to obtain – then it seems we’re venturing into the realm of costly signaling theory (Zahavi, 1975). Perhaps people are avoiding the information ahead of time so they can display their dedication to some person or group, or signal something about themselves, by obtaining the information personally. If the signal is too cheap, its informational value can be undermined, and that’s certainly something people might be bothered by.

So, given the length of these stories, there didn’t seem to be much that one could actually spoil. If one doesn’t need to invest any real time or energy in obtaining the relevant information, spoilers would not be likely to cause much distress, even in cases where someone was already deeply committed to the story. At worst, the spoilers have ruined what would have been 5 minutes of effort. Further, as I previously mentioned, people don’t seem to dislike receiving all kinds of information (“spoilers” about the location of food, or plot details from stories they don’t care about, for instance). In fact, we ought to expect people to crave these “spoilers” with some frequency, as information gained for cheap or free is, on the whole, generally a good thing. It is only when people are attempting to signal something with their conspicuous ignorance that we ought to expect “spoilers” to actually be spoilers, because it is only then that they have the potential to spoil anything. In this case, they would be ruining an attempt to signal some underlying quality of the person who wants to find out for themselves.

Similar reasoning helps explain why it’s not enough for them to just hate people privately.

In two short pages, then, the paper by Leavitt & Christenfeld (2011) demonstrates a host of problems that can be found in the field of psychological research. In fact, this might be the largest number of problems I’ve seen crammed into such a small space. First, the authors appear to fundamentally misunderstand the topic they’re ostensibly researching. It seems to me, anyway, as if they’re simply trying to find a new “irrational belief” that people hold, point it out, and say, “isn’t that odd?”. Of course, simply finding a bias or mistaken belief doesn’t explain anything about it, and there’s little to no apparent effort made to understand why people might hold said odd belief. The best the authors offer is that the tension in a story might be heightened by spoilers, but that only comes after they had previously suggested that such suspense might detract from enjoyment by diverting a reader’s attention. While these two claims aren’t necessarily opposed, they seem at least somewhat conflicting and, in any case, neither claim is ever tested.

There’s also a conclusion that vastly over-reaches the scope of the data and is phrased without the necessary cautions. They go from saying that their data “suggest that people are wasting their time avoiding spoilers” to intuitions about spoilers just being flat-out “wrong”. I will agree that people are most definitely wasting their time by avoiding spoilers. I would just also add that, well, that waste is probably the entire point.

References: Leavitt, J.D., & Christenfeld, N.J. (2011). Story spoilers don’t spoil stories. Psychological Science, 22(9), 1152-1154. PMID: 21841150

Zahavi, A. (1975). Mate selection – a selection for a handicap. Journal of Theoretical Biology, 53, 205-214.

Two Fallacies From Feminists

Being that it’s summer, I’ve decided to pretend I’m going to kick back once more from working for a bit and write about a more leisurely subject. The last time I took a break for some philosophical play, the topic was Tucker Max’s failed donation to Planned Parenthood. To recap that debacle, there were many people who were so put off by Tucker’s behavior and views that they suggested that Planned Parenthood accepting his money ($500,000) and putting his name on a clinic would be too terrible to contemplate. Today, I’ll be examining two fallacies that likely come from a largely-overlapping set of people: those who consider themselves feminists. While I have no idea how common these views are among the general population or even among feminists themselves, they’ve come across my field of vision enough times to warrant a discussion. It’s worth noting up front that these lines of reasoning are by no means limited strictly to feminists; they just come to us from feminists in these instances. Also, I like the alliteration that singling out that group brings in this case. So, without any further ado, let’s dive right in with our first fallacy.

Exhibit A: Colorful backgrounds do not a good argument make.

For those of you not in the know, the above meme is known as the “Critical Feminist Corgi”. The sentiment expressed by it – if you believe in equal rights, then you’re a feminist – has been routinely expressed by many others. Perhaps the most notable instance of the expression is the ever-quotable “feminism is the radical notion that women are people”, but it comes in more than one flavor. The first clear issue with the view expressed here is reality. One doesn’t have to look very far to find people who do not think men can be feminists. Feminist allies, maybe, but not true feminists; that label is reserved strictly for women, since it is a “woman’s movement”. If feminism were simply a synonym for a belief in equal rights or the notion that women are people, then the fact that this disagreement even exists seems rather strange. In fact, were feminism a synonym for a belief in equal rights, then one would need to come to the conclusion that anyone who doesn’t think men can be feminists cannot be a feminist themselves (in much the same way that someone who believes in a god cannot also be an atheist; it’s simply definitional). If those who feel men cannot be feminists can themselves still be considered feminists (perhaps some off-brand feminists, but feminists nonetheless), then it would seem clear that the equal-rights definition can’t be right.

A second issue with this line of reasoning is more philosophical in nature. Let’s use the context of the corgi quote, but replace the specifics: if you believe in personal freedom, then you are a Republican. Here, the problems become apparent more readily. First, a belief in freedom is neither necessary nor sufficient for calling oneself a Republican (unlike the previous atheist example, where a lack of belief is both necessary and sufficient). Second, the belief itself is massively underspecified. The boundary conditions on what “freedom” refers to are so vague that it makes the statement all but meaningless. The same notions can be said to apply equally well to the feminism meme: a belief in equal rights is apparently neither necessary nor sufficient, and what “equal rights” means depends on who you ask and what you ask about. Finally, and most importantly, the labels “Republican” and “Feminist” appear to represent approximate group-identifications; not a single belief or goal, let alone a number of them. The meme attempts to blur the line between a belief (like atheism) and a group-identification (some atheist movement; perhaps the Atheism+ people, who routinely try to blur such lines).

That does certainly raise the question as to why people would try to blur that line, as well as why people would resist the blurring. I feel the answer to the former can be explained in a manner similar to why a cat’s threat display involves puffed-up fur and an arched back: it’s an attempt to look larger and more intimidating than one actually is. All else being equal, aggressing against a larger or more powerful individual is costlier than the same aggression directed towards a less-intimidating one. Accordingly, it would seem to follow that aggressing against larger alliances is costlier than aggressing against smaller ones. So, being able to suggest that approximately 62% of people are feminists makes a big difference, relative to suggesting that only 19% of people independently adopt the label. Of course, the 43% of people who didn’t initially identify as feminists might take some issue with their social support being co-opted: it forces an association upon them that may be detrimental to their interests. Further still, some of those within the feminist camp might also wish that others would not adopt the label, for related reasons. The more feminists there are, the less social status can be derived from the label. If, for instance, feminism were defined as the belief that women are people, then pretty much every single person would be a feminist, and being a feminist wouldn’t tell you much about that person. The signal value of the label gets weakened, and the specific goals of certain feminists might become harder to achieve amongst the sea of new voices. This interaction between relative status within a group and signal value may well help us understand the contexts in which this blurring behavior should be expected to be deployed and resisted.

Exhibit B: Humor does not a good argument make either.

The second fallacy comes to us from Saturday Night Live, but they were hardly the innovators of this line of thought. The underlying idea here seems to be that men and women have different and relatively non-overlapping sets of best interests, and the men are only willing to support things that personally inconvenience them. Abortion falls on the female side of the best interests, naturally. Again, this argument falters on both the fronts of reality and philosophy, but I’ll take them in reverse order this time. The philosophical fallacy being committed here is known as the Ecological Fallacy. In this fallacy, essentially, each individual is viewed as being a small representative of the larger group to which they belong. An easy example is the classic one about height: just because men are taller than women on average, it does not mean that any given male you pull from the population will be taller than any given female. Another, more complicated example could involve IQ. Let’s say you tested a number of men and women on an IQ test and found that men, on average, performed better. However, that gap may be due to some particularly well-performing outlier males. If that’s the case, the “average” man may actually score worse than the “average” woman by and large, even though the skewed group distributions tell a different story.
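The IQ example is easy to demonstrate with simulated numbers (entirely fabricated, and deliberately exaggerated, just to show the pattern): a small set of extreme scorers can pull one group’s mean above another’s even while that group’s typical member scores lower.

```python
# Fabricated, exaggerated numbers purely to illustrate the ecological fallacy described above.
import numpy as np

rng = np.random.default_rng(2)

group_without_outliers = rng.normal(100, 15, 10_000)            # typical member scores ~100
group_with_outliers = np.concatenate([
    rng.normal(97, 15, 9_500),                                  # typical member scores lower (~97)...
    rng.normal(170, 10, 500),                                   # ...but a few extreme outliers lift the mean
])

print(group_without_outliers.mean(), np.median(group_without_outliers))  # mean ~100, median ~100
print(group_with_outliers.mean(), np.median(group_with_outliers))        # mean ~100.7 (higher), median ~98 (lower)
```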

Now, onto the reality issues. When it comes to the question of whether gender is the metaphorical horse pulling the cart of abortion views, the answer is “no”. In terms of explaining the variance in support for abortion, gender has very little to do with it, with approximately equal numbers of men and women supporting and opposing it. A variable that seems to do a much better job of explaining the variance in views towards abortion is actually sexual strategy: whether one is more interested in short-term or long-term sexual relationships. Those who take the more short-term strategy are less interested in investing in relationships and their associated costs – like the burdens of pregnancy – and accordingly tend to favor policies and practices that reduce said costs, like available contraceptives and abortions. However, those playing a more long-term strategy are faced with a problem: if the costs of sex are sufficiently low and people are more promiscuous because of that, the value of long-term relationships declines. This leads those attempting to invest in long-term strategies to support policies and practices that make promiscuity costlier, such as outlawing abortion and making contraceptives difficult to obtain. To the extent that gender can predict views on abortion at all (which is not very well to begin with), that connection is likely driven predominately by other variables not exclusive to gender.

We are again posed with the matter of why these fallacies are committed. My feeling is that the tactic being used here is, as before, the manipulation of association values. By attempting to turn abortion into a gendered issue – one which benefits women, no less – the message that’s being sent is that if you oppose abortion, you also oppose most women. In essence, it attempts to make opposition to abortion appear to be a more powerfully negative signal. It’s not just that you don’t favor abortion; it’s that you also hate women. The often-unappreciated irony of this tactic is that it serves, at least in part, to discredit the idea that we live in a deeply misogynistic society that is biased against women. If the message here is that being a misogynist is bad for your reputation, which it would seem to be, that state of affairs would only hold in a society where the majority of people are, in fact, opposed to misogyny. To use a sports analogy, being a Yankees fan is generally tolerated or celebrated in New York. If that same fan travels to Boston, however, their fandom might become a distinct cost, as not only are most people there not Yankees fans, but many actively despise their baseball rivals. The appropriateness and value of an attitude depends heavily on one’s social context. So, if the implication that one is a misogynist is negative, that tells you something important about the values of the wider culture in which the accusation is made.

Unlike that degree in women’s studies.

I suppose the positive message to get from all this is that attitudes towards women aren’t nearly as negative as some feminists make them out to be. People tend to believe in equality – in the vague sense, anyway – whether or not they consider themselves feminists, and misogyny – again, in the vague sense – is considered a bad thing. However, if the perceptions about those things are open to manipulation, and if those perceptions can be used to persuade people to help you achieve your personal goals, we ought to expect people – feminist and non-feminist alike – to try and take advantage of that state of affairs. The point in these arguments, so to speak, is to be persuasive; not to be accurate (Mercier & Sperber, 2011). Accuracy only helps insomuch as it’s easier to persuade people of true things, relative to false ones.

References: Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57-74. DOI: 10.1017/S0140525X10000968

It’s (Sometimes) Good To Be The King

Given my wealth of anecdata, I would feel confident saying that, on the whole, people high in status (whether because of their wealth, their social connections, or both) tend not to garner much in the way of sympathy from third parties. It’s why we end up with popular expressions like “First World Problems” – frustrations deemed to be minor, experienced by people who are relatively well off in life. The idea that people so well-off can be bothered by such trivial annoyances serves as good comedic fodder. There are a number of interesting topics surrounding the issue, though: first, there’s the matter of why First World Problems exist in the first place. That is, why do people not simply remain content with their life once they reach a certain level of comfort? Did people really need to bother developing high-speed wireless internet when we already had dial-up (which is pretty good, compared to having no internet at all)? Why would it feel so frustrating for those of us with high-speed wireless if we had to switch back? A second issue would be the hypocrisy that frequently surrounds people who use the First World Problems term in jest, rather than sympathy. There are those who will mock others for what they perceive to be First World Problems, then turn around and complain about something trivial themselves (like, say, the burden of having to deal with annoying people on their high-speed wireless internet). Today’s post concerns a third topic: contexts in which sympathy is strategically deployed (or not deployed) in moral judgments on the basis of the social status of a target individual.

But first a quick First World Problem: I really dislike earbud headphones.

A new paper by Polman, Pettit, & Wiesenfeld (2013) sought to examine the phenomenon of moral licensing with respect to the status of an actor. Roughly speaking, moral licensing represents the extent to which one is not morally condemned or punished for an immoral action, relative to someone without that licensing. Were persons A and B both to commit the same immoral act (like adultery), if one was punished less, or otherwise suffered fewer social costs associated with the act, all else being equal, we would say that person’s actions were morally licensed by the condemners to some degree. The authors, in this case, predicted that both high- and low-status individuals ought to be expected to receive some degree of moral licensing, but for different reasons: high-status individuals were posited to receive this license because of a “credential bias…[which leads] people to perceive dubious behavior as less dubious”, whereas low-status individuals were posited to receive moral licensing through moral “credits…[which offer] counterbalancing [moral] capital”, allowing low-status individuals to engage in immoral behavior to the extent that their transgressions do not outweigh their capital. If immoral behavior is viewed as creating a metaphorical debt, the generated debt would be lower for those high in status, but able to be paid off more readily by those low in status.

So the authors predicted that high-status individuals will have their behavior reinterpreted to be more morally positive when there is some ambiguity allowing for reinterpretation, whereas low-status individuals won’t be morally condemned as strongly because “they’ve already suffered enough”. Now I know I’ve said it before, but these types of things can’t be said enough as far as I’m concerned: these predictions seem to be drawn from intuitions – not from theory. The biases that the authors are positing are, essentially, circular restatements of a pattern of results they hoped to find (i.e. high-status people will be given moral license because of a bias that causes high-status people to be given moral license). Which is not to say they’re necessarily wrong, mind you (in fact, since this is psychology, I don’t think I’m spoiling anything by telling you they found the results they predicted); it’s just that this paper doesn’t further theory in moral licensing, as the authors suggest it does. Rather, what the paper does instead is present us with a new island of findings within the moral licensing research. In any case, let’s first take a look at what the paper reported.

In the first study, the authors presented subjects with a case of potential racial discrimination (where 5 white and 2 black candidates for a job were interviewed and only 2 white ones were hired). The name of the person doing the hiring was manipulated to make them sound high-status (Winston Rivington), low-status (Billy-Bob), or neutral (James). The subjects were subsequently asked whether the person doing the hiring should be condemned in various ways, whether socially or legally. The results showed that, as predicted, both high- and low-status individuals were condemned less (M = 3.22 and 3.14, respectively, on a 1-to-9 scale) than the control (M = 3.78). While there was an effect, it was a relatively weak one, perhaps as ought to be expected from such a weak manipulation. The manipulation was stronger in the next study. In study two, subjects were also asked about someone making a hiring decision, but this person was now either the executive of a Fortune 500 company or a janitor. Further, the racism in the hiring decision was either clear (the person doing the hiring admitted to it) or ambiguous (the person doing the hiring referenced performance for their decision). The results of the second study showed that, when the moral infraction was unambiguous, the high-status individual was condemned more (M = 7.81), relative to when the infraction was ambiguous (M = 5.42). By contrast, whether the infraction was committed ambiguously or unambiguously by the lower-status individual, the condemnation remained the same (6.42 and 6.48, respectively). Further, individuals with more dispositional sympathy tended to be the ones punishing the low-status individuals less. The effect of that sympathy, however, did not transfer to the high-status individuals. While high-status individuals’ condemnation varied with the ambiguity of the act, low-status people seemed to get the same level of sympathy regardless of whether they transgressed ambiguously or not.

If only he was poorer; then the racism would be more acceptable.

In the final study, subjects were again presented with a story that contained an ambiguous, potential moral wrong: someone taking money off a table at the end of a cafeteria-style lunch and putting it in their pocket. The person taking the money was either just “someone”, “the janitor”, or “the residence director” at the cafeteria. Again, the low- and high-status individuals received less condemnation on average (M = 3.11 and 3.17) than the control (M = 4.33). However, only high-status individuals had their behavior perceived as less wrong (M = 5.03); the behavior was rated as being equally wrong in both the control (M = 6.21) and low-status (M = 6.05) conditions. Conversely, it was only in the low-status condition that the person taking the money was given more sympathy (M = 5.75); both the high-status (M = 3.60) and control (M = 3.70) conditions received equal and lesser amounts of sympathy.

Now for the more interesting part. This study hints at the perhaps-unsurprising conclusion that people who differ in some regard – in this case, status – are treated differently. Sympathy is reserved for certain groups of people (in this case, typically not for people like Winston Rivington), whereas the benefit of the doubt can be reserved for others (typically not for the Billy-Bobs of the world). The matter which is not dealt with in this paper is the more interesting one, in my mind: why should we expect that to be the case? That is, what is the adaptive function of sympathy and, given that function, in what situations ought we expect it to be strategically deployed? For instance, the authors offer up the following suggestion:

Moreover, we contend that high or low status may sometimes deprive wrongdoers of a license. When wrongdoers’ high status is viewed as undeserved, illegitimate or exploitative, they may pay a particularly high cost for their transgressions.

It seems as if they’re heading in the right direction – thinking about different variables which might have some effect on how much moral condemnation an individual might suffer as the result of an immoral act – but they don’t quite know how to get there. Presumably, what the authors are suggesting in their above example has something to do with the social value of an actor to other potential third-party condemners. Someone who merely inherited their high status may be seen as a bad investment, as their ability to maintain that position and the benefits it may bring – and thus their future social value – is in question. If their high status is derived from exploitative means, their social value may be questioned on the grounds that the benefits they might provide come at too great a cost: the cost of drawing condemnation from the enemies the high-status individual has made while rising to power. Conversely, individuals who are low in status as a result of behaviors that make them bad investments – like, say, excessive drug use – may well not see the benefits of sympathy-based moral licensing. It might be less useful to feel sympathy for someone who repeatedly made poor choices and shows no signs of altering that pattern. The larger point is that, in order to generate good theory and good predictions, you’d be well-served by thinking about adaptive costs and benefits in cases like this. Intuitions will only get you so far.

In this case, intuitions only netted them a publication in a high impact factor journal. So good show on that front.

What I find particularly interesting about this study, though, is that the results run (sort of) counter to some data I recently collected, despite my predicting similar kinds of effects. With respect to at least one kind of ambiguously-immoral behavior and two personal characteristics (neither of which was status), moral judgments and condemnation appeared to be stubbornly impartial. While my results aren’t ready for prime time yet (and I do hope the lack of significant results doesn’t cause issues when it comes to publication; another one of my First World Problems), I merely want to note (as the authors also suggest) that such moral licensing effects do appear to come with boundary conditions, and teasing those out will certainly take time. Whatever the shape of those boundary conditions, redescribing them in terms of a “bias” doesn’t cut it as an explanation, nor does it assist in future theorizing about the subject. In order to move research along, it’s long past time our intuitions were granted a more solid foundation.

References: Polman, E., Pettit, N., & Wiesenfeld, B. (2013). Effects of wrongdoer status on moral licensing. Journal of Experimental Social Psychology, 49(4), 614-623. DOI: 10.1016/j.jesp.2013.03.012