Altruism Is Not The Basis Of Morality

“Biologists call this behavior altruism, when we help someone else at some cost to ourselves. If you think about it, altruism is the basis of all morality. So the larger question is, Why are we moral?” – John Horgan [emphasis mine]

John Horgan, a man not exactly renowned as a beacon of understanding, recently penned the above thought, which expresses what I feel is an incorrect sentiment. Before getting to the criticism of that point, however, I would like to first commend John for his tone in this piece: it doesn’t appear as outwardly hostile towards the field of evolutionary psychology as several of his past pieces have been. Sure, there might be the odd crack about “hand-wavy speculation” and the reminder about how he doesn’t like my field, but progress is progress; baby steps, and all that.

Just try and keep those feet pointed in the right direction and you’ll do fine.

I would also like to add at the outset that Horgan states that:

“Deceit is obviously an adaptive trait, which can help us (and many other animals) advance our interest” [Emphasis mine]

I find myself interested as to why Horgan seems to feel that deceit is obviously adaptive, but violence (in at least some of its various forms) obviously isn’t. Both certainly can advance an organism’s interests, and both generally advance those interests at the expense of other organisms. Given that Horgan seems to offer nothing in the way of insight into how he arbitrates between adaptations and non-adaptations, I’ll have to chalk his process up to mere speculation. Why one seems obvious and the other does not might have something to do with Trivers accepting Horgan’s invitation to speak at his institution last December, but that might be venturing too far into that other kind of “hand-wavy speculation” Horgan says he dislikes so much. Anyway…

The claim that altruism is the basis of all morality might seem innocuous enough – the kind of thing ostensibly thoughtful people would nod their head at – but an actual examination of the two concepts will show that the sentiment only serves to muddy the waters of our understanding of morality. Perhaps that revelation could have been reached had John attempted to marshal more support for the claim beyond saying, “If you think about it” (which is totally not speculation…alright; I’ll stop), but I suppose we can’t hope for too much progress at once. So let’s begin, then, by considering the ever-quotable line from Adam Smith:

“It is not from the benevolence of the butcher, the brewer, or the baker, that we can expect our dinner, but from their regard to their own interest”

Smith is describing a scenario we’re all familiar with: when you want a good or service that someone else can provide, you generally have to make it worth their while to provide it to you. This trading of benefits-at-a-cost is known as reciprocal altruism. However, when I go to the mall and give Express money so they will give me a new shirt, this exchange is generally not perceived as two distinct, altruistic acts (I endure the cost of losing money to benefit Express, and Express endures the cost of losing a shirt to benefit me) that just happen to occur in close temporal proximity to one another, nor is it viewed as a particularly morally praiseworthy act. In fact, such exchanges are often viewed as two selfish acts, given that the ostensible altruism on the behavioral level is seen as a means of achieving benefits, not an end in and of itself. One could also consider the example of fishing: if you sit around all day waiting for a fish to altruistically jump into your boat so you can cook it for dinner, you’ll likely be waiting a long time; better to try to trick the fish by offering it a tasty morsel on the end of a hook. You suffer a cost (the loss of the bait and your time spent sitting on a boat) and deliver a benefit to the fish (it gets a meal), whereas the fish suffers a cost (it gets eaten soon after) that benefits you, but neither you nor the fish was attempting to benefit the other.

There’s a deeper significance to that point, though: reciprocally altruistic relationships tend to break down in the event that one party fails to return benefits to the other (i.e. when the payments over time for one party actually resemble altruism). Let’s say my friend helps me move, giving up the one day off a month he has in the process. This gives me a benefit and him a cost. At some point in the future, my friend is moving. In the event I fail to reciprocate his altruism, there are many who might well say that I behaved immorally, most notably my friend himself. This does, however, raise the inevitable question: if my friend was expecting his altruistic act to come back and benefit him in the future (as evidenced by his frustration that it did not), wasn’t his initial act a selfish one on precisely the same level as my shopping or fishing examples above?

Pictured above: not altruism

What these examples serve to show is that, depending on how you’re conceptualizing altruism, the same act can be viewed as selfish or altruistic, which throws a wrench into the suggestion that all morality is based on altruism. One needs to define their terms really well for that statement to even mean anything worthwhile. As the examples also show, precisely how people behave towards each other (whether selfishly or altruistically) is often a topic of moral consideration, but just because altruism can be a topic of moral consideration, it does not follow that it’s the basis of moral judgments. To demonstrate that altruism is not the basis of our moral judgments, we can also consider a paper by DeScioli et al. (2011) examining the different responses people have to moral omissions and moral commissions.

In this study, subjects were paired into groups and played a reverse dictator game. In this game, person A starts with a dollar and person B has a choice between taking 10 cents of that dollar or 90 cents. However, if person B didn’t make a choice within 15 seconds, the entire dollar would automatically be transferred to them, with 15 cents subtracted for running out of time. So, person B could be altruistic and only take 10 cents (meaning the payoffs would be 90/10 for players A and B, respectively), be selfish and take 90 cents (10/90 payoff), or do nothing, making the payoffs 0/85 (something of a mix of selfishness and spite). Clearly, failing to act (an omission) produced the worst total payoff for all parties involved and was the least “altruistic” of the options. If moral judgments use altruism as their basis, one should expect that, when given the option, third parties would punish the omissions more harshly than either of the other two conditions (or, at the very least, punish the latter two conditions equally harshly). However, those who took the 90 cents were the ones who got punished the most: roughly 21 cents, compared to the 14 cents docked from those who failed to act. An altruism-based account of morality would appear to have a very difficult time making sense of that finding.
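To keep the three options straight, here is a minimal sketch of the payoff structure (amounts in cents; the option labels are mine, not the paper’s):

```python
def payoffs(choice):
    """(A, B) payoffs, in cents, for each of player B's options in the
    reverse dictator game described above."""
    if choice == "take_10":   # "altruistic": B takes only 10 cents
        return (90, 10)
    if choice == "take_90":   # selfish: B takes 90 cents
        return (10, 90)
    if choice == "omission":  # B lets the clock run out: the whole dollar
        return (0, 85)        # transfers, minus the 15-cent time penalty
    raise ValueError(choice)

for choice in ("take_10", "take_90", "omission"):
    a, b = payoffs(choice)
    print(f"{choice}: A gets {a}, B gets {b}, total {a + b}")
```

Note that the omission is the only option that destroys value outright (85 cents in total rather than 100), which is the sense in which it is the least altruistic of the three.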

Further still, an altruism-based account of morality would fail to provide a compelling explanation for the often strong moral judgments people have in reaction to acts that don’t distinctly harm or benefit anyone, such as others having a homosexual orientation or deciding not to have children. More damningly still, an altruism basis for moral judgments would have a hell of a time trying to account for why people morally support courses of action that are distinctly selfish: people don’t tend to routinely condemn others for failing to give up their organs, even non-vital ones, to save the lives of strangers, and most people would condemn a doctor for making the decision to harvest organs from one patient against that patient’s will in order to save the lives of many more people.

“The patient in 10B needs a kidney and we’re already here…”

An altruism account would similarly fail to explain why people can’t seem to agree on many moral issues in the first place. Sure, it might be altruistic for a rich person to give up some money in order to feed a poor person, but it would also be altruistic for that poor person to forgo eating in order to not impose that cost on the rich one. Saying that morality is based in altruism doesn’t seem to provide much information about how precisely that altruism will be enacted, how moral interactions will play out, or lend any useful or novel predictions more generally. Then again, maybe an altruism-based account of morality can obviously deal with these objections if I just think about it.

References: DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological Science, 22, 442-446. DOI: 10.1177/0956797611400616

The Salience Of Cute Experiments

In the course of proposing new research and getting input from others, I have had multiple researchers raise the same basic concern to me: the project I’m proposing might be unlikely to eventually get published because, assuming I find the results I predict, reviewers might feel those results are not interesting or attention-grabbing enough. While I don’t doubt that the concern is, to some degree, legitimate*, it has me wondering about whether there exists an effect that is essentially the reverse of that issue. That is, how often does bad research get published simply on the grounds that it appears to be interesting, and are reviewers willing to overlook some or all of the flaws of a research project because it is, in a word, cute?

Which is why I always make sure my kitten is an author on all my papers.

The cute experiment of the day is Simons & Levin (1998). If you would like to see a firsthand example of the phenomenon this experiment is looking at before I start discussing it, I’d recommend this video of the color changing card trick. For those of you who just want to skip right to the ending, or have already seen the video, the Simons & Levin (1998) paper sought to examine “change blindness”: the frequent inability of people to detect changes in their visual field from one moment to the next. While the color changing card trick only replaced the colors of people’s shirts, tablecloths, or backdrops, the experiment conducted by Simons & Levin (1998) replaced actual people in the middle of a conversation to see if anyone would notice. The premise of this study would appear to be interesting on the grounds that many people might assume they would notice something like the fact that they were suddenly talking to a different person than they were a moment prior, and the results of this study would seem to suggest otherwise. Sure sounds interesting when you phrase it like that.

So how did the researchers manage to pull off this stunt? The experiment began when a confederate holding a map approached a subject on campus. After approximately 10 or 15 seconds of talking, two men holding a door would pass in between the confederate and the subject. Behind this door was a second confederate who changed places with the first. The second confederate would, in turn, carry on the conversation as if nothing had happened. Of the 15 subjects approached in such a manner, only 7 reported noticing the change of confederate in the following interview. The authors mention that out of the 7 subjects that did notice the change, there seemed to be a bias in age: specifically, the subjects in the 20-30 age range (which was similar to that of the confederates) seemed to notice the change, whereas the older subjects (in the 35-65 range) did not. To explain this effect, Simons & Levin (1998) suggested that younger subjects might have been treating the confederates as their “in-group” because of their age (and accordingly paying more attention to their individual features) whereas the older subjects were treating the confederates as their “out-group”, also because of their age (and accordingly paying less attention to their features).

In order to ostensibly test their explanation, the authors ran a follow-up study. This time the same two confederates were dressed as construction workers (i.e. they wore slightly different construction hats, different outfits, and different tool belts) in order to make them appear as more of an “out-group” member to the younger subjects. The confederates then exclusively approached people in the younger age group. Lo and behold, when the door trick was pulled, this time only 4 of the 12 subjects caught on. So here we have a cute study with a counter-intuitive set of results and possible implications for all sorts of terms that end in -ism.

And the psychology community goes wild!

It seems to have gone unnoticed, however, that the interpretation of the study wasn’t particularly good. The first issue, though perhaps the smallest, is the sample size. Since these studies ran only 13.5 subjects each, on average, whether this difference in change blindness (approximately 15%) across groups is just due to chance is unknown. Let’s say, however, that we give the results the benefit of the doubt and assume that they would remain stable if the sample size were scaled up. Even granting that consideration, there are still some very serious problems remaining.
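As a quick back-of-the-envelope check (my own, not anything from the paper), a Fisher exact test on the reported counts – 7 of 15 noticing in the first study versus 4 of 12 in the second – suggests the difference could easily be chance:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables (with the same margins) that are
    no more likely than the observed one."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def prob(k):  # hypergeometric probability of k "successes" in row 1
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    lo, hi = max(0, row1 - (n - col1)), min(col1, row1)
    p_obs = prob(a)
    return sum(p for p in map(prob, range(lo, hi + 1)) if p <= p_obs + 1e-12)

# Study 1: 7 of 15 subjects noticed the swap; Study 2: 4 of 12 did.
p = fisher_exact_two_sided(7, 8, 4, 8)
print(f"p = {p:.2f}")  # nowhere near conventional significance
```

With a p-value around 0.7, samples this small simply cannot distinguish a ~47% noticing rate from a ~33% one – which is the point being made above.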

The larger problem is that the authors did not actually test their explanation. This issue comes in two parts. First, Simons and Levin (1998) proposed that subjects were using cues of group membership in determining whether or not to pay attention to an individual’s features. In their first study, this cue was assumed to be age; in the second study, this cue was assumed to now be construction worker. Of note, however, is that the same two confederates took part in both experiments, and I doubt their age changed much between the two trials. This means that if Simons and Levin (1998) were right, age only served as an indicator of group membership in the first context; in the second, that cue was overridden by another – construction worker. Why that might be the case is left completely untouched by the authors, and that seems like a major oversight. The second part is that the authors didn’t test whether the assumed “in-group” would be less change blind. In order to do that they would have had to, presumably, pull the same door trick using construction workers as their subjects. Since Simons and Levin (1998) only tested an assumed out-group, they are unable to make a solid case for differences in group membership being responsible for the effect they’re talking about.

Finally, the authors seem to just assume that the subjects were paying attention in the first place. Without that assumption these results are not as counter-intuitive as they might initially seem, just as people might not be terribly impressed by a magician who insisted everyone just turned around while he did his tricks. The subjects had only known the confederates for a matter of seconds before the change took place, and during those seconds they were also focused on another task: giving directions. Further, the confederate (who is still a complete stranger at this point) is swapped out for another very similar one (both are male, both are approximately the same age, race, and height, as well as being dressed very similarly). If the same door trick was pulled with a male and female confederate, or a friend and a stranger, or people of different races, or people of different ages, and so on, one would predict you’d see much less change blindness.

My only change blindness involves being so rich I can’t see bills smaller than $20s

The really interesting questions would then seem to be: what cues do people attend to, why do they attend to them, and in what order are they attended to? None of these questions are really dealt with by the paper. If the results they present are to be taken at face value, we can say the important variables are often not the color of one’s shirt, the sound of one’s voice (within reason), very slight differences in height, or modestly different hairstyles (when one isn’t wearing a hat) when dealing with complete strangers of similar gender and age, while also involved in another task.

So maybe that’s not a terribly surprising result, when phrased in such a manner. Perhaps the surprising part might even be that so many people noticed the apparently not so obvious change. Returning to the initial point, however, I don’t think many researchers would say that an experiment designed to demonstrate that people aren’t always paying attention to and remembering every single facet of their environment would be a publishable paper. Make it cute enough, however, and it can become a classic.

*Note: whether the concerns are legitimate or not, I’m going to do the project anyway.

References: Simons, D.J., & Levin, D.T. (1998). Failure to detect changes to people during a real-world interaction. Psychonomic Bulletin & Review, 5, 644-649. DOI: 10.3758/BF03208840

Can Situations Be Strong Or Weak?

“The correspondence bias is the tendency to draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situation in which they occur. Although this tendency is one of the most fundamental phenomena in social psychology, its causes and consequences remain poorly understood” – Gilbert and Malone, 1995

Social psychologists are not renowned for being particularly good at understanding things, even things which are (supposedly) fundamental to their field of study. Like the proverbial drunk looking for his missing keys at night under a streetlight rather than in the park where he lost them “because the light is better”, part of the reason social psychologists are not very good at providing genuine understanding is because they often begin with some false premise or assumption. In the case of the correspondence bias, as defined by Gilbert & Malone (1995), I feel one of these misunderstandings is the idea that behavior can be caused or explained by the situation at all (let alone ‘entirely’); that is, unless one defines “the situation” in a way that ceases to be of any real value.

Which is about as valuable as the average research paper in psychology.

According to Gilbert and Malone (1995), an “eminently reasonable” rule is that “…one should not explain with dispositions that which has already been explained by the situation”. They go on to suggest that people tend to “underestimate the power of situations”, frequently mistaking “…strong situation[s] for relatively weak one[s]”. To use some more concrete examples, people seemed to perform poorly at tasks like predicting how much shock subjects in the Milgram experiment on obedience would deliver when asked to by an experimenter, or tend to do things like judge basketball players as less competent when they were shooting free throws in a dimly-lit room, relative to a well-lit one. In these experiments, the command of an experimenter and the lighting of a room are supposed to be, I think, “strong” situations that “highly constrain” behavior. Not to put too fine a point on it, but that makes no sense.

A simple example should demonstrate why. Let’s say you wanted to see how “strong” of a situation a hamburger is, and you measure the strength of the situation by how much subjects are willing to pay for that burger. An initial experiment finds that subjects are, on average, willing to pay a maximum of about $5 for that average burger. Good to know. Now, a second experiment is run, but this time subjects are divided into three groups: group 1 has just finished eating a large meal, group 2 ate that same meal four hours prior and nothing else since, and group 3 ate that meal 8 hours prior and nothing else since. These three groups are now presented with the same average hamburger. The results you’d now find are that group 1 seems relatively uninterested in paying for that burger (say, $0.50, on average), group 2 is somewhat interested in paying for it ($5), and group 3 is very interested in paying for it ($10).
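The hypothetical can be written as a toy function – all the numbers here are the made-up ones from the example above, not real data – in which the “same” environmental input produces different valuations depending on the organism’s internal state:

```python
# Maximum willingness to pay (in dollars) for the same average burger,
# keyed by hours since the subject's last meal; hypothetical numbers.
WTP_BY_HOURS_SINCE_MEAL = {0: 0.50, 4: 5.00, 8: 10.00}

def willingness_to_pay(hours_since_meal):
    """Output of a (grossly simplified) caloric-state-monitoring module."""
    return WTP_BY_HOURS_SINCE_MEAL[hours_since_meal]

# The burger - the environmental input - is identical in every case;
# only the internal state varies, so the burger itself has no
# intrinsic "strength".
for hours in (0, 4, 8):
    print(f"{hours} hours since eating: ${willingness_to_pay(hours):.2f}")
```

The lookup table lives inside the organism, not the burger, which is the whole point of the example.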

From this hypothetical pattern of results, what are we to conclude about the “strength” of the situation of an opportunity to buy a burger? Obviously, the burger itself (the input provided by the environment) explains nothing about the behavior of the subjects and has no intrinsic “strength”. This shouldn’t be terribly surprising because abstract situations aren’t what generate behavior; psychological modules do. Whether that burger is currently valuable or not is going to depend crucially on the outputs of certain modules monitoring things like one’s current caloric state, and other modules recognizing the hamburger as a good source of potential calories. That’s not to say that the situations are irrelevant to the behavior that is eventually generated, of course; it just implies that which aspects of the environment matter, how much they matter, when they matter, and why they matter, are all determined by the current state of the existing psychological structures of the organism in question. A resource that is highly valuable in one situation is not necessarily valuable in another.

“If it makes my car go fast, just imagine how much time it’ll cut off my 100m”

Despite their use of inconsistent and sloppy language regarding the interaction between environments and dispositions in generating behavior, Gilbert and Malone (1995) seem to understand that point to some extent. Concerning an example where a debate coach instructs subjects to write a pro-Castro speech, the authors write:

“…[T]he debate coach’s instructions merely alter the payoffs associated with the two behavioral options…the essayist’s behavioral options are not altered by the debate coach’s instructions; rather, the essayist’s motivation to enact each of the behavioral options is altered.”

What people are often underestimating (or overestimating, depending on context), then, is not the strength of the situation, but the strength of other competing dispositions, given a certain set of environmental inputs. While this might seem like a minor semantic issue, I feel it might hold a deeper significance, insomuch as it leads researchers to ask the wrong kinds of questions. For instance, what’s noteworthy to me about Gilbert and Malone’s (1995) analysis of the ultimate causes of the correspondence bias are not the candidate explanations they put forward, but rather which questions they don’t ask and which explanations they don’t give.

The authors suggest that the correspondence bias might not have historically had many negative consequences for a number of reasons that I won’t get into here. The only possible positive consequence they discuss is that the bias might allow people to predict the behavior of others. This is a rather strange benefit to posit, I feel, given that almost the entirety of their paper up to that point had been focused on how this bias is likely to lead to incorrect predictions, all things considered. Even granting that the correspondence bias might only tend to be an actual problem in contexts artificially created in psychology experiments (such as by randomly assigning subjects to groups), in no case does it seem to lead to more accurate predictions of others’ behavior.

The ultimate explanations offered for the correspondence bias left me feeling like (and I could be wrong about this) the authors were still thinking about the bias as an error in the way we think; they don’t seem to give the impression that the bias had any real function. Now, that could be true; the bias might well be a neutral-to-maladaptive byproduct, though what the bias would be a byproduct of isn’t immediately clear. While, from a strictly accuracy-based point of view, the bias might often lead to inaccurate conclusions, as I’ve mentioned before, accuracy is only important to the extent that it helps organisms do useful things. The question that Gilbert and Malone (1995) fail to ask, given their focus on accuracy, is why would people bother attributing the behavior of others to situational or dispositional characteristics in the first place?

My road rage happens to be indifferent to whether you were lost or just a slow driver.

Being able to predict the behavior of other organisms is useful, no doubt; it lets you know who is likely to be a good social investment and who isn’t, which will in turn affect the way you behave towards others. Given the stakes at hand, and since you’re dealing with organisms that can be persuaded, accuracy in perceptions might not always be the best policy. Suppose you’re in competition with a rival over some resource; since the Olympics are currently going on, let’s say you’re a particularly good swimmer competing in some event. Let’s say you don’t come in first; you end up placing behind one of your country’s bitter rivals. How are you going to explain that loss to other people? You might concede that your rival was simply a better swimmer than you, but that’s not likely to garner you a whole lot of support. Alternatively, you might suggest that you were really the better swimmer, but some aspect of the situation ended up giving your rival a temporary upper-hand. What you’d be particularly unlikely to do is suggest both that your rival was actually the better swimmer and that they beat you despite some situational factor that put you at an advantage.

As Gilbert and Malone (1995) mention in their introduction, a niece whose aunt perceives her as having intentionally broken a vase would receive the thumbscrews, while a niece perceived as having broken the vase by accident would not. Depending on the nature of the situation – whether it’s one that will result in blame or praise – it might serve you well to minimize or maximize the perception of your involvement in bringing the events about. It would similarly serve you well to manipulate the perceptions of other people’s involvement in the act. One way of doing this would involve going after the perceptions of whether a behavior was caused by a situation or a disposition; whether the outcome was a fluke or likely to be consistent across situations. This would lead to the straightforward prediction that such attributional biases will tend to look remarkably self-serving, rather than just wrong in some general way. I’ll leave it up to you as to whether or not that seems to be the case.

References: Gilbert, D.T., & Malone, P.S. (1995). The correspondence bias. Psychological Bulletin, 117, 21-38. DOI: 10.1037/0033-2909.117.1.21

Inequality Aversion Aversion

While I’ve touched on the issues surrounding the concept of “fairness” before, there’s one particular term that tends to follow the concept around like a bad case of fleas: inequality aversion. Following the proud tradition of most psychological research, the term manages to describe certain states of affairs (kind of) without so much as an iota of explanatory power, while at the same time placing the emphasis, conceptually, on the wrong variable. In order to better understand why (some) people (some of the time) behave “fairly” towards others, we’re going to need to address both of the problems with the term. So, let’s tear the thing down to the foundation and see what we’re working with.

“Be careful; this whole thing could collapse for, like, no reason”

Let’s start off with the former issue: when people talk about inequality aversion, what are they referring to? Unsurprisingly, the term would appear to refer to the fact that people tend to show some degree of concern for how resources are divided among multiple parties. We can use the classic dictator game as a good example: when given full power over the ability to divide some amount of money, dictator players often split the money equally (or near-equally) between themselves and another player. Further, the receivers in dictator games tend to respond to equal offers with favorable remarks and to unequal offers with negative remarks (Ellingsen & Johannesson, 2008). The remaining issue, then, concerns how we are to interpret findings like this, and why we should interpret them in such a fashion.

Simply stating that people are averse to inequality is, at best, a restatement of those findings. At worst, it’s misleading, as people will readily tolerate inequality when it benefits them. Take the dictators in the example above: many of them (in fact, the majority of them) appear perfectly willing to make unequal offers so long as they’re on the side that’s benefiting from that inequality. This phenomenon is also illustrated by the fact that, when given access to asymmetrical knowledge, almost all people take advantage of that knowledge for their own benefit (Pillutla & Murnighan, 1995). As a final demonstration, take two groups of subjects, each subject given the task of assigning themselves and another subject to one of two tasks: the first task is described as allowing the subject a chance to win $30, while the other task has no reward and is described as being dull and boring.

In the first of these two groups, since subjects can assign themselves to whichever task they want, it’s perhaps unsurprising that 90% of the subjects assigned themselves to the more attractive task; that’s just simple, boring self-interest. Making money is certainly preferable to being bored out of your mind, but automatically assigning yourself to the positive task might not be considered the fairest option. The second group, however, flipped a coin in private first to determine how they would assign tasks, and following that flip made their assignment. In this group, since coins are impartial and all, it should not come as a surprise that…90% of the subjects again assigned themselves to the positive task when all was said and done (Batson, 2008). How very inequality averse and fair of them.

“Heads I win; Tails I also win.”

A recent (and brief) paper by Houser and Xiao (2010) examined the extent to which people are apparently just fine with inequality, but from the opposite direction: taking money away instead of offering it. In their experiment, subjects played a standard dictator game at first. The dictator had $10 to divide however they chose. Following this division, both the dictator and the receiver were given an additional $2. Finally, the receiver was given the opportunity to pay a fixed cost of $1 for the ability to reduce the dictator’s payoff by any amount. Another experimental group took part in the same task, except the dictator was passive: the division of the $10 was made at random by a computer program, representing simple chance factors.
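The payoff structure of that design can be sketched as follows (dollar amounts from the description above; the function and variable names are my own):

```python
def final_payoffs(dictator_keeps, deduction=0):
    """Final (dictator, receiver) payoffs in the Houser & Xiao design:
    the dictator splits $10, both players then receive a $2 bonus, and
    the receiver may pay a fixed $1 fee to deduct any amount from the
    dictator's payoff."""
    fee = 1 if deduction > 0 else 0
    dictator = dictator_keeps + 2 - deduction
    receiver = (10 - dictator_keeps) + 2 - fee
    return dictator, receiver

print(final_payoffs(9))      # selfish split, no punishment: (11, 3)
print(final_payoffs(9, 9))   # punishment that merely equalizes: (2, 2)
print(final_payoffs(9, 10))  # deduct enough to come out ahead: (1, 2)
```

The last case is the pattern the paper reports as modal: rather than restoring equality, most punishers deducted past the point of parity, ending up with advantageous inequality of their own.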

A general preference to avoid inequality would, one could predict, be relatively unconcerned with the nature of that inequality: whether it came about through chance factors or intentional behavior should be irrelevant. For instance, if I don’t like drinking coffee, I should be relatively averse to the idea whether I was randomly assigned to drink it or whether someone intentionally assigned me to drink it. However, when it came to the receivers deciding whether or not to “correct” the inequality, precisely how that inequality came about mattered: when the division was randomly determined, about 20% of subjects paid the $1 in order to reduce the other player’s payoff, as opposed to the 54% of subjects who paid the cost in the intentional condition (Note: both of these percentages refer to cases in which the receiver was given less than half of the dictator’s initial endowment). Further still, the subjects in the random treatment deducted less, on average, than the subjects in the intention treatment.

The other interesting part about this punishment, as it pertains to inequality aversion, is that most people who did punish did not just make the payoffs even; the receivers deducted money from the dictators to the point that the receivers ended up with more money overall in the end. Rather than seeking equality, the punishing receivers brought about inequality that favored themselves, to the tune of 73% of the punishers in the intentional treatment and 66% in the random treatment (which did not differ significantly). The authors conclude:

…[O]ur data suggest that people are more willing to tolerate inequality when it is caused by nature than when it is intentionally created by humans. Nevertheless, in both cases, a large majority of punishers attempt to achieve advantageous inequality. (p.22)
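To make that "advantageous inequality" concrete, here's a minimal sketch of the payoff arithmetic in the modified dictator game described above. The $2 offer and the deduction amounts are my own illustrative choices, not figures reported in the paper:

```python
ENDOWMENT = 10   # amount the dictator divides
BONUS = 2        # both players then receive this
PUNISH_FEE = 1   # fixed cost the receiver pays to deduct from the dictator

def payoffs(offer, deduction=0):
    """Final (dictator, receiver) payoffs given the dictator's offer and
    how much the receiver chooses to deduct (0 means no punishment)."""
    dictator = ENDOWMENT - offer + BONUS
    receiver = offer + BONUS
    if deduction > 0:
        receiver -= PUNISH_FEE   # punishing costs the receiver $1
        dictator -= deduction
    return dictator, receiver

print(payoffs(2))      # (10, 4): an unequal split, favoring the dictator
print(payoffs(2, 7))   # (3, 3): deducting $7 merely equalizes the payoffs
print(payoffs(2, 9))   # (1, 3): deducting more flips the inequality
```

Most punishers in both treatments behaved like the third case: they deducted past the point of equality, ending up ahead of the dictator.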

Now that the demolition is over, we can start rebuilding.

This punishment finding also sheds some conceptual light on why inequality aversion puts the emphasis on the wrong variable: people are not averse to inequality, per se, but rather seem to be averse to punishment and condemnation, and one way of avoiding punishment is to make equal offers (of the dictators that made an equal or better offer, only 4.5% were punished). This finding highlights the problem of assuming a preference based on an outcome: just because some subjects make equal offers in a dictator game, it does not follow that they have a genuine preference for making equal offers. Similarly, just because men and women (by mathematical definition) are going to have the same number of opposite-sex sexual partners, it does not follow that this outcome was obtained because they desired the same number.

That is all, of course, not to say that preferences for equality don’t exist at all; it’s just that, while people may have some motivations that incline them towards equality in some cases, those motivations come with some rather extreme caveats. People do not appear averse to inequality generally, but rather appear strategically interested in (at least) appearing fair. Then again, fairness really is a fuzzy concept, isn’t it?

References: Batson, C.D. (2008). Moral masquerades: Experimental exploration of the nature of moral motivation. Phenomenology and the Cognitive Sciences, 7, 51-66.

Ellingsen, T., & Johannesson, M. (2008). Anticipated verbal feedback induces altruistic behavior. Evolution and Human Behavior. DOI: 10.1016/j.evolhumbehav.2007.11.001

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008

Pillutla, M.M., & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.

50 Shades Of Grey (When It Comes To Defining Rape)

For those of you who haven’t been following such things lately, Daniel Tosh recently catalyzed an internet firestorm of offense. The story goes something like this: at one of his shows, he was making some jokes or comments about rape. One woman in the audience was upset by whatever Daniel said and yelled out that rape jokes are never funny. In response to the heckler, Tosh either (a) made a comment about how the heckler had probably been raped herself, or (b) suggested it would be funny were the heckler to get raped, depending upon which story you favor. The ensuing outrage seems to have culminated in a petition to have Daniel Tosh fired from Comedy Central, which many people ironically suggested has nothing at all to do with censorship.

This whole issue has proved quite interesting to me for several reasons. First, it highlights some of the problems I recently discussed concerning third-party coordination: namely, that publicly observable signals aren’t much use to people who aren’t at least eyewitnesses. We need to rely on what other people tell us, and that can be problematic in the face of conflicting stories. It also demonstrated the issues third parties face when it comes to inferring things like harm and intentions: the comments about the incident ranged from a heckler getting what she deserved through to the comment being construed as a rape threat. Words like “rape-apologist” then got thrown around a lot towards Tosh and his supporters.

Just like how whoever made this is probably an anti-Semite and a Nazi sympathizer

While reading perhaps the most widely-circulated article about the affair, I happened to come across another perceptual claim that I’d like to talk about today:

According to the CDC, one in four female college students report that they’ve been sexually assaulted (and when you consider how many rapes go unreported, because of the way we shame victims and trivialize rape, the actual number is almost certainly much higher).

Twenty-five percent would appear alarmingly high; perhaps too high, especially when placed in the context of verbal mud-slinging. A slightly involved example should demonstrate why this claim shouldn’t be taken at face value: in 2008, the United States population was roughly 300 million (rounding down). To make things simple, let’s assume (a) half the population is made up of women, (b) the average woman finishing college is around 22, and (c) any woman’s chances of being raped are equal, set at 25%. Now, in 2008, there were roughly 15 million women in the 18-24 age group; they are our first sample. If the 25% number were accurate, you’d expect that, among women ages 18-24, 3.75 million would have been raped at some point in their lives, or roughly 170,000 rape victims per year in that cohort (assuming rape rates are constant from birth to 24). In other words, each year, roughly 1.13% of the women who hadn’t previously been raped would be raped (and no one else would be).

Let’s compare that 1% number to the number of reported rapes in the entire US in 2008: thirty rapes per hundred-thousand people, or 0.03%. Even after doubling that number (assuming all reported rapes come from women, and women are half the population, so the reported number is out of fifty-thousand, not a hundred-thousand), we only make it to 0.06%. In order to make it to 1.13%, you would have to posit that for each reported rape there were about 19 unreported ones. For those who are following along with the math, that would mean that roughly 95% of rapes would never have been reported. While 95% unreported might seem like a plausible rate to some, it’s a bit difficult to verify.

Rape, of course, doesn’t have a cut-off point for age, so let’s expand our sample to include women ages 25-44. Using the same assumption that roughly 1% of women become new rape victims each year, by age 44 almost half of all women would have experienced an instance of rape. We’re venturing farther into the realm of claims losing face value. Combining those two figures would also imply that a woman between 18 and 44 is getting raped in the US roughly every 30 seconds. So what gives: are things really that bad, are the assumptions wrong, is my math off, or is something else amiss?
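The back-of-envelope arithmetic above can be written out explicitly. Every number here comes from assumptions already stated in the text (15 million women aged 18-24, a 25% rate accrued evenly over 22 years, and 30 reported rapes per 100,000 people):

```python
cohort = 15_000_000           # US women aged 18-24 in 2008 (rounded)
claimed_rate = 0.25           # "one in four" by roughly college age
years = 22                    # assume the risk accrues evenly from birth

victims = cohort * claimed_rate          # 3,750,000 women
victims_per_year = victims / years       # ~170,000 new victims per year
annual_rate = victims_per_year / cohort  # ~1.1% of the cohort per year

# Reported rapes: 30 per 100,000 people, doubled to put it out of women only.
reported_rate = 2 * 30 / 100_000         # 0.06% of women per year

# How many total rapes per reported one would the 25% figure imply?
implied_ratio = annual_rate / reported_rate      # ~19x
unreported_share = 1 - 1 / implied_ratio         # ~95% never reported

# Extending the same annual rate additively out to age 44:
rate_by_44 = annual_rate * 44                    # ~50% of all women

print(f"{victims_per_year:,.0f} victims/year ({annual_rate:.2%} of cohort)")
print(f"{implied_ratio:.0f} total rapes per reported rape; "
      f"{unreported_share:.0%} unreported")
print(f"Implied rate by age 44: {rate_by_44:.0%}")
```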

Since I can’t really make heads or tails of any of this, I’m going with my math.

Some of the assumptions are, in fact, not likely to be accurate (such as a consistent rate of victimization across age groups), but there’s more to it than that. Another part of the issue stems from defining the term “rape” in the first place. As Koss (1993) notes, the way she defined rape in her own research – the research that came upon that 25% figure – appeared to differ tremendously from the way her subjects did. The difference was so stark that roughly 75% of the participants that Koss had labeled as having experienced rape did not, themselves, consider the experience to be rape. This is somewhat concerning for two big reasons: either the perceived rate of rape is a low-ball estimate (we’ll call this the ignorance hypothesis), or the rate of rape is being inflated rather dramatically by definitional issues (we’ll call this the arrogance hypothesis).

Depending on your point of view – what you perceive to be, or label as, rape – either one of these hypotheses could be true. What is not true is the notion that one in four college-aged women report that they’ve been sexually assaulted; they might report they’ve had unwanted sex, or have been coerced into having sex, but not that they were assaulted. As it turns out, that’s quite a valuable distinction to make.

Hamby and Koss (2003) expanded on this issue, using focus groups to help understand this discrepancy. Whereas one in four women might describe their first act of intercourse as something they went along with but was unwanted, only one in twenty-five report that it was forced (in, ironically, a forced-choice survey). Similarly, while one in four women might report that they gave in to having sex due to verbal or psychological pressure, only one in ten report that they engaged in sexual intercourse because of the use or threat of physical force. It would seem that there is a great deal of ambiguity surrounding words like coercion, force, voluntary, or unwanted when it comes to asking about sexual matters: was the mere fear of force, absent any explicit use or threat, enough to count? If a woman didn’t want to have sex, but said yes to try and maintain a relationship, did that count as coercion? The focus groups had many questions, and I suspect that means many researchers might be measuring a number of factors they hadn’t intended to, lumping all of them together under the umbrella of sexual assault.

The focus groups, unsurprisingly, made distinctions between wanting sex and voluntarily having sex; they also noted that it might often be difficult for people to distinguish between internal and external pressures to have sex. These are, frankly, good distinctions to make. I might not want to go into work, but that I show up there anyway doesn’t mean I was being made to work involuntarily. I might also not have any internal motivation to work, per se, but rather be motivated to make money; that I can only make money if I work doesn’t mean most people would agree that the person I work for is effectively forcing me to work.

No one makes me wear it; I just do because I think it’s got swag

When we include sex that was acquiesced to, but unwanted, in these figures – rather than what the women themselves consider rape – you’ll no doubt find more rape. Which is fine, as far as definitional issues go; it just requires the people reporting these numbers to be specific about what they’re reporting. As concepts like wanting, forcing, and coercing are measured in degree rather than kind, one could, in principle, define rape in a seemingly endless number of ways. This puts the burden on researchers to be as specific as possible when formulating these questions and drawing their conclusions, as it can be difficult to accurately infer what subjects were thinking about when they answered.

References: Hamby, S.L., & Koss, M.P. (2003). Shades of gray: A qualitative study of terms used in the measurement of sexual victimization. Psychology of Women Quarterly. DOI: 10.1111/1471-6402.00104

Koss, M.P. (1993). Detecting the scope of rape: A review of prevalence research methods. Journal of Interpersonal Violence. DOI: 10.1177/088626093008002004

Kanizsa’s Morality

While we rely on our senses to navigate through life, there are certain quirks about the way our perception works that we often aren’t consciously aware of. It’s only when we encounter illusions, the most well-known of which tend to inhabit the visual domain, that certain inner workings of our perception modules become apparent. Take the following as a good example: the checkerboard illusion. Given the proper context, our visual system is capable of perceiving the two squares as different colors despite the fact that they are the same color. On top of bringing certain facets of our visual modules into stark relief, the illusion demonstrates one other very important fact about our cognition: accuracy need not always be the goal. Our visual systems were only selected to be as good as they needed to be for us to do useful things, given the environments we tended to find ourselves in; they were not selected to be perfectly accurate in each and every situation.

See, Criss Angel? It’s not that hard to do your job.

That endearing little figure is known as Kanizsa’s Triangle. While there is no actual triangle in the figure, some cognitive template is being automatically filled in given inputs from certain modules (probably ones designed for detecting edges and contrast), and the end result is that illusory perception; our mind automatically completes the picture, so to speak. This kind of automatic completion can have its uses, like allowing inferences to be drawn from a limited amount of information relatively quickly. Without such cognitive templates, tasks like learning language – or not walking into things – would be far more difficult, if not downright impossible. While picking up on recurrent and useful patterns of information in the world might lead to a perceptual quirk here and there, especially in highly abnormal and contrived scenarios like the previous two illusions, the occasional misfire is worth the associated gains.

Now let’s suppose that instead of detecting edges and contrasts we’re talking about detecting intentions and harm – the realm of morality. Might there be some input conditions that (to some extent) automatically result in a cognitive moral template being completed? Perhaps the most notable case came from Knobe (2003):

The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment, I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

When asked, most people in this case suggested that the negative outcome was brought about intentionally and that the chairman should be punished. When the word “harm” is replaced by “help,” people’s answers reverse: they now say the chairman wasn’t helping intentionally and deserves no praise.

Further research on the subject by Guglielmo & Malle (2010) found that the “I don’t care at all about…” in the preceding paragraph was indeed viewed differently by people depending on whether the person who said it was conforming to or violating a norm. When violating a norm, people tended to perceive some desire for that outcome in the violator, despite the violator stating they don’t care one way or the other; when conforming to a norm, people didn’t perceive that same desire given the same statement of indifference. The violation of a norm, then, might be used as part of an input for automatically filling in some moral template concerning perceptions of the violator’s desires, much like the Kanizsa Triangle. It can cause people to perceive a desire, even if there is none. This finding is very similar to another input condition I recently touched on: the effect of a person’s perceived desires on their blameworthiness, contingent on their ability to benefit from others being harmed (even if the person in question didn’t directly or indirectly cause the harm).

“I don’t care at all about the NFL’s dress code!”

A recent paper by Grey et al (PDF here) builds upon this analogy rather explicitly. In it, they point out two important things: first, any moral judgment requires a victim-perpetrator dyad; this represents the basic cognitive template of moral interactions through which all moral judgments can be understood. Second, the authors note that there need not actually be a real perpetrator or victim for a moral judgment to take place; all that’s required is the perception of this pair.

Let’s return briefly to vision: when it comes to seeing the physical world, it’s better to have an accurate picture of it. This is because you can’t do things like persuade a cliff it’s not actually a cliff, or tell gravity to not pull you down. Thankfully, since the physical environment isn’t trying to persuade us of anything in particular either, accurate pictures of the world are relatively easy to come by. The social world, however, is full of agents that might be (and thus, probably are) misrepresenting information for their own benefit.

Taken together with the work just reviewed, this suggests that the moral template can be automatically completed: people can be led to perceive victims or perpetrators where there are none (given they already perceive one or the other), or fail to perceive victims and perpetrators that actually exist (given that they fail to perceive one or the other). Since accuracy isn’t the goal of these perceptions per se, whether the inputs given to the moral template are erased or cause it to be filled in will likely depend on their context; that is to say, people should strategically “see” or “fail to see” victims or perpetrators, quite unlike the Kanizsa Triangle (which people almost universally do see). Some of the possible reasons why people might fall in one direction or the other will be the topic of the next post.

References: Guglielmo, S., & Malle, B.F. (2010). Can unintended side effects be intentional? Resolving a controversy over intentionality and morality. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167210386733

Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis. DOI: 10.1093/analys/63.3.190

Assumed Guilty Until Proven Innocent

Motley Crue is a band that’s famous for a lot of reasons, their music least of all. Given their reputation, it was a little strange to see them doing what celebrities do best: selling out by endorsing Kia. At least I assume they were selling out. When I first saw the commercial, I doubted that Motley Crue just happened to really love Kia cars and had offered to appear in one of their commercials, letting it feature one of their many songs about overdosing. No; instead, my immediate reaction to the commercial was that Motley Crue probably didn’t care one way or another when it came to Kia, but since the company likely ponied up a boat-load of cash, Motley Crue agreed to, essentially, make a fake and implicit recommendation on the car company’s behalf. (Like Wayne’s World, but without the irony)

What’s curious about that reaction is that I have no way of knowing whether or not it’s correct; I’ve never talked to any of the band members personally, and I have no idea what the terms of that commercial were. Despite this, I feel, quite strongly, that my instincts on the matter were accurate. More curious still, seeing the commercial actually lowered my opinion of the band. I’m going to say a little more about what I think this reaction reflects later, but first I’d like to review a study with some very interesting results (and the usual set of theoretical shortcomings).

I’m not being paid to say it’s interesting, but I’ll scratch that last bit if the price is right.

The paper, by Inbar et al (2012), examined the question of whether intentionality and causality are necessary components when it comes to attributions of blameworthiness. As it turns out, people appear quite willing to (partially) blame others for outcomes that they had no control over – in this case, natural disasters – so long as said others merely desired those outcomes to happen.

In the first of four experiments, the subjects in one condition read about how a man at a large financial firm was investing in “catastrophe bonds,” which would be worth a good deal of money if an earthquake struck a third-world country within a two-year period. Alternatively, they read about a man investing in the same product, except this time the investment would pay out if an earthquake didn’t hit the country. In both cases, the investment ends up paying off. When subjects were asked how morally wrong such actions are, and how morally blameworthy the investor was, the investor was rated as being more morally wrong and blameworthy in the condition where he benefited from harm, controlling for how much the subjects liked him personally.

The second experiment expanded on this idea. This time, the researchers varied the outcome of the investment: now, the investments didn’t always work out in the investor’s favor. Some of the people who were betting on the bad outcome actually didn’t profit because the good outcome obtained, and vice versa. The question being asked here was whether or not these judgments of moral wrongness and blameworthiness were contingent on profiting from a bad outcome or just being in the position to potentially benefit. As it turns out, actually benefiting wasn’t required: the results showed that the investor simply desiring the harmful outcome (that one didn’t cause, directly or indirectly) was enough to trigger these moral judgments. This pattern of results neatly mirrors judgments of harm – where attempted but failed harm is rated as being just about as bad as the completed and intended variety.

The third experiment sought to examine whether the benefits being contingent on harm – and harm specifically – mattered. In this case, an investor takes out that same catastrophe bond, but there are other investments in place, such that the firm will make the same amount of money whether or not there’s a natural disaster. In other words, now the investor has no specific reason to desire the natural disaster. In this case, subjects now felt the investor wasn’t morally in the wrong or blameworthy. So long as the investor wasn’t seen as wanting the negative outcome specifically, subjects didn’t seem to care about his doing the same thing. It just wasn’t wrong anymore.

“I’ve got some good news and some bad news…no, wait; that bad news is for you. I’m still rich.”

The final experiment in this study looked at whether selling those catastrophe bonds off would be morally exculpatory. As it turned out, it was: while the people who bought the bonds in the first place were not judged to be nice people, subsequently selling the bonds the next day to pay off an unexpected expense reduced their blameworthiness. It was only when someone was currently in a position to benefit from harm that they were seen as more morally blameworthy.

So how might we put this pattern of results into a functional context? Inbar et al (2012) note that moral judgments typically involve making a judgment about an actor’s character (or personality, if you prefer). While they don’t spell it out, what I think they’re referring to is the fact that people have to overcome an adaptive hurdle when engaging socially with others: they need to figure out which people in their social world to invest their scarce resources in. In order to successfully deal with this issue, one needs to make some (at least semi-accurate) predictions concerning the likely future behavior of others. If someone sends the message that their interests are not your interests – such as by their profiting when you lose – there’s probably a good chance that they aren’t going to benefit you in the long term, at least relative to someone who sends the opposite signal.

However, one other aspect that Inbar et al (2012) don’t deal with brings us back to my feelings about Motley Crue. When deciding whether or not to blame someone, the decision needs to be made, typically, in the absence of absolute certainty regarding guilt. In my case, I made a judgment based on zero information, other than my background assumptions about the likely motives of celebrities and advertisers: I judged the band’s message as disingenuous, suggesting they would happily alter their allegiances if the price was right; they were fair-weather friends, who aren’t the best investments. In another case, let’s say that a dead body turns up, and they’ve clearly been murdered. The only witness to this murder was the killer, and whoever it is doesn’t feel like admitting it. When it comes time for the friends of the deceased to start making accusations, who’s going to seem like a better candidate: a stranger, or the burly fellow the dead person was fighting with recently? Those who desired to harm others tended to, historically, have the ability to translate those desires into actions, and, as such, make good candidates for blame.

“I really just don’t see how he could have been responsible for the recent claw attacks”

Now in the current study there was no way the actor in question could have been the cause of the natural disaster, but our psychology is, most likely, not built for dealing with abstract cases like that. While subjects may report that, no, that man was not directly responsible, some modules that are looking for possible candidates to blame are still hard at work behind the scenes, checking for those malicious desires; considering who would benefit from the act, and so on (“It just so happened that I gained substantially from your loss, which I was hoping for,” doesn’t make the most convincing tale). In much the same way, pornography can still arouse people, even though the porn offers no reliable increase in fitness and “the person” “knows” that. What I feel this study is examining, albeit not explicitly, are the input conditions for certain modules that deal in the uncertain and fuzzy domain of morality.

(As an aside, I can’t help but wonder whether the parties in the stories – investment firms and third-world countries – helped the researchers find the results they were looking for. It seems likely that some modules dealing with determining plausible perpetrators might tap into some variable like relative power or status in their calculations, but that’s a post for another day.)

References: Inbar, Y., Pizarro, D., & Cushman, F. (2012). Benefiting from misfortune: When harmless actions are judged to be morally blameworthy. Personality and Social Psychology Bulletin, 38 (1), 52-62. DOI: 10.1177/0146167211430232

The Difference Between Adaptive And Adapted

This is going to be something of a back to basics post, but a necessary one. Necessary, that is, if the comments I’ve been seeing lately are indicative of the thought processes of the population at large. It would seem that many people make a fundamental error when thinking about evolutionary explanations for behavior. The error involves thinking about the ultimate function of an adaptation or the selection pressures responsible for its existence, rather than the adaptation’s input conditions, in considering whether said adaptation is responsible for generating some proximate behavior. (If that sounded confusing, don’t worry; it’ll get cleared up in a moment) While I have seen the error made frequently among various lay people, it appears to even be common among those with some exposure to evolutionary psychology; out of the ninety undergraduate exams I just finished grading, only five students got the correct answer to a question dealing with the subject. That is somewhat concerning.

I hope that hook up was worth the three points it cost you on the test because you weren’t paying attention.

Here’s the question that the students were posed with:

People traveling through towns that they will never visit again nonetheless give tips to waiters and taxi drivers. Some have claimed that the theory of reciprocal altruism seems unable to explain this phenomenon because people will never be able to recoup the cost of the tip in a subsequent transaction with the waiter or the driver. Briefly explain the theory of reciprocal altruism, and indicate whether you think that this theory can or cannot explain this behavior. If you say it can, say why. If you say it cannot, provide a different explanation for this behavior.

The answers I received suggested that the students really did understand the function of reciprocal altruism: they were able to explain the theory itself, as well as some of the adaptive problems that needed to be solved in order for the behavior to be selected for, such as the ability to remember individuals and detect cheaters. So far, so good. However, almost all the students then indicated that the theory could not explain tipping behavior, since there was no chance that the tip could ever be reciprocated in the future. In other words, tipping in that context was not adaptive, so adaptations designed for reciprocal altruism could not be responsible for the behavior. The logic here is, of course, incorrect.

To understand why that answer is incorrect, let’s rephrase the question, but this time, instead of tipping strangers, let’s consider two people having sex:

People who do not want to have children still wish to have sex, so they engage in intercourse while using contraceptives. Some have claimed that the theory of sexual reproduction seems unable to explain this phenomenon because people will never be able to reproduce by having sex under those conditions. Briefly explain the theory of sexual reproduction, and indicate whether you think that this theory can or cannot explain this behavior. If you say it can, say why. If you say it cannot, provide a different explanation for this behavior.

No doubt, there are still many people who would get this question wrong as well; they might even suggest that the ultimate function of sex is just to “feel pleasure,” not reproduction, because feeling pleasure – in and of itself – is somehow adaptive (Conley, 2011, demonstrates that this error also extends to the published literature). Hopefully, however, at least one error should now appear a little clearer: contraceptives are an environmental novelty, and our psychology did not evolve to deal with a world in which they exist. Without contraceptives, the desire to avoid having children is irrelevant to whether or not some sexual act will result in pregnancy.

That desire is also irrelevant if you’re in the placebo group

Contraceptives are a lot like taxi drivers, in that both are environmental novelties. Encountering strangers that you were not liable to interact with again was probably the exception, rather than the rule, for most of human evolution. That said, even if contraceptives were taken out of the picture and our environment was as “natural” as possible, our psychology would still not be perfectly designed for each and every context we find ourselves in. Another example about sex easily demonstrates this point: a man and a woman only need to have sex once, in principle, to achieve conception. Additional copulations before or beyond that point are, essentially, wasted energy that could have been spent doing other things. I would wager, however, that for each successful pregnancy, most couples probably have sex dozens or hundreds of times. Whether because the woman is not and will not be ovulating, because one partner is infertile, or because the woman is currently pregnant or breastfeeding, there are plenty of reasons why intercourse does not always lead to conception. In fact, intercourse itself would probably not be adaptive in the vast majority of occurrences, despite it being the sole path to human reproduction (before the advent of IVF, of course).
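As a rough illustration of that last point, suppose each act of intercourse carried some small, fixed chance of conception; the 3% figure below is an assumption of mine for illustration, not a number from any source cited here. The expected number of acts per conception is then the reciprocal of that chance:

```python
# Assumed per-act conception probability (illustrative, not sourced).
p = 0.03

expected_acts = 1 / p        # geometric expectation: ~33 acts per conception
miss_100 = (1 - p) ** 100    # chance that 100 acts yield no conception: ~5%

print(f"Expected acts per conception: {expected_acts:.0f}")
print(f"Chance that 100 acts produce none: {miss_100:.1%}")
```

Even under that generous constant-probability assumption, the overwhelming majority of individual copulations would not result in conception, which is the sense in which they are not individually "adaptive."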

Turning the focus back to reciprocal altruism: throughout their lives, people behave altruistically towards a great many others. In some cases, that altruism will be returned in such a way that the benefits received outweigh the initial costs of the altruistic act; in other cases, it will not be returned. What’s important to bear in mind is that the output of some module adapted for reciprocal altruism will not always be adaptive. The same holds for the output of any psychological module, since organisms aren’t fitness maximizers – they’re adaptation executors. Adaptations that tended to increase reproductive success in the aggregate were selected for, even if they weren’t always successful. These sound like basic points (because they are), but they’re also points that tend to frequently trip people up, even people at least somewhat familiar with the basic concepts themselves. I can’t help but wonder if that mistake is made somewhat selectively, contingent on topic, but that’s a project for another day.

References: Conley, T. (2011). Perceived proposer personality characteristics and gender differences in acceptance of casual sex offers. Journal of Personality and Social Psychology, 100 (2), 309-329 DOI: 10.1037/a0022152

Making Your Business My Business

“The government has no right to do what it’s doing, unless it’s doing what I want it to do” – Pretty much everyone everywhere.

As most people know by now, North Carolina recently voted on and approved an amendment to the state’s constitution that legally barred gay marriage. Many supporters of extending marriage rights to the homosexual community understandably found this news upsetting, which led to the predictable flood of opinions about how it’s none of the government’s business who wants to marry whom. I found the whole matter interesting on two major fronts: first, why would people support or oppose gay marriage in general, and, second, why on earth would people try to justify their stance using a line of reasoning that is (almost definitely) inconsistent with other views they hold?

Especially when they aren’t even running for political office.

Let’s deal with these issues in reverse order, tackling the matter of inconsistency first. We all (or at least almost all) want sexual behavior legislated, and feel the government has the right to do so, despite many recent protests to the contrary. As this helpful map shows, there are, apparently, more states that allow first cousin marriage than gay marriage (assuming the information there is accurate). That map has been posted several times, presumably in support of gay marriage. Unfortunately, the underlying message of the map would seem to be that, since some people find first cousin marriage gross, it should be shocking that it’s more widely legal than gay marriage. What I don’t think the map was suggesting is that first cousin marriage ought to be legal everywhere because the government has no right to legislate sexuality. As Haidt’s research on moral dumbfounding shows, many people are convinced that incest is wrong even when they can’t find a compelling reason why, and many likewise feel it should be illegal.

On top of incest, there’s also the matter of age. Most people will agree that children below a certain age should not be having sex, and, typically, that agreement is followed by some justification about how children aren’t mature enough to understand the consequences of their actions. What’s odd about that justification is that people don’t then go on to say that anyone should be allowed to have sex at any age, just so long as they can demonstrate through some test that they understand the consequences of their actions. Conversely, they also don’t say that people above the age of consent should be forbidden from having sex until they can pass such a test. There are two points to make about this. The first is that no such maturity test exists in the first place, so when people make judgments about maturity they’re simply assuming that some people aren’t mature enough to make those kinds of decisions; in other words, children shouldn’t be allowed to consent to sex because people don’t think children should be allowed to consent to sex. The second, more important point is that even if such a test existed, forbidding people from having sex without passing it would still be legislating sexuality. It would still be the government saying who can and can’t have sex, and under what circumstances.

Those are just two cases, and there are many more. It turns out people are pretty keen on legislating the sexual behavior of others after all. (We could have an argument about those not being cases of legislating sexuality per se, but rather harm; it turns out, though, that people are pretty inconsistent about defining and legislating harm as well.) The point here, to clarify, is not that legalizing gay marriage would start us down a slippery slope toward legalizing other, currently unacceptable, forms of sexuality; the point is that people try to justify their stances on matters of sexuality with inconsistently applied principles. Not only are these justifications inconsistent, but they may also have little or nothing to do with the actual reasons you or I come to whatever conclusions we do, despite what people may say. As it turns out, our powers of introspection aren’t all they’re cracked up to be.

Letting some light in might just help you introspect better; it is dark in there…

Nisbett and Wilson (1977) reviewed a number of examples concerning the doubtful validity of introspective accounts. One of these findings concerned a display of four identical pairs of nylon stockings. Subjects were asked which of the four pairs was the best quality and, after they had delivered their judgment, why they had picked the pair they did. The results showed that people, for whatever reason, tended to overwhelmingly prefer the garment on the right side of the display (they preferred it four times as much, relative to the garment on the left side). When queried about their selection, unsurprisingly, zero of the 52 subjects made any mention of the stockings’ position in the lineup. When subjects were asked directly whether the position of the pair of stockings had any effect on their judgment, again, almost all of them denied that it did.

While I will not re-catalog every example that Nisbett and Wilson (1977) present, the unmistakable conclusion is that people have, essentially, little to no conscious insight into the cognitive processes underlying their thoughts and behavior. Subjects were often unable to report that an experimental manipulation had any effect (when it did), or reported that irrelevant manipulations had (or would have had) some effect when they did not. In some cases, subjects were unable to report that there had been any effect at all, when there had in fact been one. As the authors put it:

… [O]thers have argued persuasively that “we can know more than we can tell,” by which it is meant that people can perform skilled activities without being able to describe what they are doing and can make fine discriminations without being able to articulate their basis. The research described above suggests that the converse is also true – that we sometimes tell more than we can know. More formally, people sometimes make assertions about mental events to which they may have no access, and these assertions may bear little resemblance to the actual events.

This – coupled with the inconsistent use of principled justifications – casts serious doubt on the explicit reasons people often give for either supporting or opposing gay marriage. For instance, many people might support gay marriage because they think it would make gay people happier, on the whole. For the sake of argument, suppose you discovered that gay marriage actually made gay people unhappier, on the whole: would you then be in favor of keeping it illegal? Presumably not (if you were in favor of legalization to begin with, that is). While making people happy might seem like a plausible and justifiable reason for supporting something, that does not mean it was the – or even a – cause of your judgment.

Marriage: a known source of lasting happiness

If the typical justifications people give for supporting or opposing gay marriage are unlikely to reflect the actual cognitive processes that led to their decisions, what cognitive mechanisms might actually underlie them? Perhaps the most obvious class of mechanisms are those involved in an individual’s mating strategy. Weeden et al. (2008) note that the decision to pursue a more short-term or long-term mating strategy is a complicated matter, full of tradeoffs concerning local environmental, individual, and cultural factors. They put forth what they call the Reproductive Religiosity Model, which posits that a current function of religious participation is to help ensure the success of a certain type of mating strategy: a more monogamous, long-term, high-fertility mating style. Men pursuing this strategy tend to forgo extra-pair matings in exchange for an increase in paternity certainty, whereas women tend to forgo extra-pair matings for better genes in exchange for increased levels of paternal investment.

As Chris Rock famously quipped, “A man is only as faithful as his options”, though the sentiment applies equally well to women. It does the long-term mating strategy no good to have plenty of freely sexually available conspecifics hanging around. Thus, according to this model, participation in religious groups helps to curb the risks involved in this type of mating style. This is why certain religious communities might want to decrease the opportunities for promiscuity and increase the social costs of engaging in it. In order to decrease sexual availability, then, you might find religious groups opposing and seeking to punish people for engaging in divorce, birth control use, abortion, promiscuity, and, relevant to the current topic, sexual openness or novelty (like pornography, sexual experimentation, or homosexuality). In support of this model, Weeden et al. (2008) found that sexual variables predicted religious attendance even after controlling for non-reproductive variables, but that the non-reproductive variables no longer predicted religious attendance once the sexual variables were controlled for.
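That asymmetry – one set of predictors surviving statistical control while the other does not – can be illustrated with a quick simulation. The numbers below are entirely hypothetical (not Weeden et al.’s actual data): attendance is driven only by a “sexual” variable, and a correlated “non-reproductive” variable merely piggybacks on it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical variables: attendance is driven ONLY by the sexual
# variable; the non-reproductive variable just correlates with it.
sexual = rng.normal(size=n)                  # e.g., restrictive sexual attitudes
nonrep = sexual + rng.normal(size=n)         # e.g., some non-reproductive trait
attendance = 2 * sexual + rng.normal(size=n)

def ols_slopes(y, *predictors):
    """Ordinary least squares slopes (intercept fitted, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Alone, the non-reproductive variable looks predictive...
b_alone = ols_slopes(attendance, nonrep)[0]

# ...but controlling for the sexual variable wipes that out, while the
# sexual variable's slope survives the reverse control.
b_sexual, b_nonrep = ols_slopes(attendance, sexual, nonrep)

print(f"nonrep alone: {b_alone:.2f}")        # substantially positive
print(f"controlled:   sexual={b_sexual:.2f}, nonrep={b_nonrep:.2f}")
```

The pattern of slopes – not the raw correlations – is what licenses the inference that one variable is doing the predictive work and the other is along for the ride.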

While the evidence is not definitively causal in nature, and there is likely more to this connection than a unidirectional arrow, it seems highly likely that the cognitive mechanisms responsible for determining one’s currently preferred mating strategy also play a role in determining one’s attitudes towards the acceptability of others’ behaviors. It is also highly likely that the reasons people give for their attitudes will be inconsistent, given that those reasons don’t often reflect the actual functioning of their minds. We all have an interest in making other people’s business our business, since other people’s behaviors tend to eventually have an effect on us – whether that effect is relatively distant or close in the causal chain, and whether it is relatively direct or indirect. We just tend not to consciously understand why.

References: Nisbett, R., & Wilson, T. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84 (3), 231-259 DOI: 10.1037//0033-295X.84.3.231

Weeden, J., Cohen, A., & Kenrick, D. (2008). Religious attendance as reproductive support. Evolution and Human Behavior, 29 (5), 327-334 DOI: 10.1016/j.evolhumbehav.2008.03.004

The Lacking Standards Of Philosophical Proof

Recently, I’ve reached that point in life that I know lots of us have struggled with: one day, you just wake up and say to yourself, “I know my multimillion-dollar bank account might seem impressive, but I think I want more out of life than just a few million dollars. What I’d like is more money. Much more”. Unfortunately, the only way to get more money is to do this thing called “work” at a place called a “job”, and these jobs aren’t always the easiest thing to find – especially the cushy ones – in what I’m told is a down economy. Currently, my backup plan is to become a professor in case that lucrative career as a rockstar doesn’t pan out the way I keep hoping it will.

Unfortunately, working outside of the home means I’ll have less time to spend entertaining my piles of cash.

I’ve been doing well and feeling at home in various schools for almost my entire life, so I’ve not seen much point in leaving the warmth of the academic womb. However, I’ve recently been assured that my odds of securing such a position as a long-term career are probably somewhere between “not going to happen” and “never going to happen”. So, glass-half-full kind of guy that I am, I’ve decided to busy myself with pointing out why many people who already have these positions don’t deserve them. With any luck, some universities may take notice and clear up some room in their budgets. Today, I’ll again be turning my eye on the philosophy department. Michael Austin recently wrote this horrible piece over at Psychology Today about why we should reject moral relativism in favor of moral realism – the idea that there are objective moral truths out there to be discovered, like physical constants. Before taking his arguments apart, I’d like to stress that this man actually has a paid position at a university, and I feel the odds are good he makes more money than you. Now that that’s out of the way, onto the fun part.

First, consider that one powerful argument in favor of moral realism involves pointing out certain objective moral truths. For example, “Cruelty for its own sake is wrong,” “Torturing people for fun is wrong (as is rape, genocide, and racism),” “Compassion is a virtue,” and “Parents ought to care for their children.” A bit of thought here, and one can produce quite a list. If you are really a moral relativist, then you have to reject all of the above claims. And this is an undesirable position to occupy, both philosophically and personally.

Translation: it’s socially unacceptable to disagree with my views. It’s a proof via threat of ostracism. What Austin attempts to slip by there is the premise that you cannot think something is morally unacceptable to you without also thinking it’s morally unacceptable objectively. Rephrasing the example in the context of language allows us to see the flaw quickly: “You cannot think that the word ‘sex’ refers to that thing you’re really bad at without also thinking that the pattern of sounds that make up the word has some objective meaning which could never mean anything else”. I’m perfectly capable of affirming the first proposition while denying the second. The word “sex” could easily have meant any number of things, or nothing at all; it just happens to refer to a certain thing for certain people. On the same note, I can say “I find torturing kittens unacceptable” while realizing my statement is perfectly subjective. His argument is not what I would call a “powerful” one, though Austin seems to think it is.

It wasn’t the first time that philosophy rolled off my unsatisfied body and promptly fell asleep, pleased with itself.

Moving on:

 Second, consider a flaw in one of the arguments given on behalf of moral relativism. Some argue that given the extent of disagreement about moral issues, it follows that there are no objective moral truths…But there is a fact of the matter, even if we don’t know what it is, or fail to agree about it. Similarly for morality, or any other subject. Mere disagreement, however widespread, does not entail that there is no truth about that subject.

It is a bad argument to say that just because there is disagreement there is no fact of the matter. However, that gives us no reason to either accept moral realism or reject moral relativism; it just gives us grounds to reject that particular argument. Similarly, Austin’s suggestion that there is definitely a fact of the matter in any subject – or morality specifically – isn’t a good argument. In fact, it’s not even an argument; it’s an assertion. Personal tastes – such as what music sounds good, what food is delicious, and what deviant sexual acts are fun – are often the subject of disagreement and need not have an objective fact of the matter.

If Austin thinks disagreement isn’t an argument against moral realism, he should probably not think that agreement is an argument for moral realism. Unfortunately for us, he does:

There are some moral values that societies share, because they are necessary for any society to continue to exist. We need to value human life and truth-telling, for example. Without these values, without prohibitions on murder and lying, a given society will ultimately crumble. I would add that there is another reason why we often get the impression that there is more moral disagreement than is in fact the case. The attention of the media is directed at the controversial moral issues, rather than those that are more settled. Debates about abortion, same-sex marriage, and the like get airtime, but there is no reason to have a debate about whether or not parents should care for the basic needs of their children, whether it is right for pharmacists to dilute medications in order to make more profit, or whether courage is a virtue.

If most people agreed that the Sun went around Earth, that would in no way imply it was true. It’s almost amazing how he can point out that an argument is bad, then turn around and use an identical argument in the next sentence thinking it’s a killer point. Granted, if people were constantly stealing from and killing each other – that is, more than they do now – society probably wouldn’t fare too well. What the existence of society has to do with whether or not morality is objective, I can’t tell you. From these three points, Austin gives himself a congratulatory pat on the back, feeling confident that we can reject moral relativism and accept moral realism. With standards of proof that loose, philosophy could probably give birth and not even notice.

“Congratulations! It’s a really bad idea”

I’d be curious to see how Austin would deal with the species question: are humans the only species with morality; do all animals have a sense of morality, social or otherwise; and if they don’t, and morality is objective, why not? Again, the question seems silly if you apply the underlying logic to certain other domains, like food preferences: is human waste a good source of nutrients? The answer to that question depends on which species you’re talking about. There’s no objective quality of our waste products that makes them inherently nutritious or non-nutritious.

Did I mention Austin is a professor? It’s worth bearing in mind that someone who makes arguments that bad is actually being paid to work in a department dedicated to making and assessing arguments – in a down economy, no less. Even Psychology Today is paying him for his blogging services, I’m assuming. Certainly makes you wonder about the quality of candidates who didn’t get hired.