No Pain, No Gain: Victimhood And Selfishness

If you’re the kind of person who has an active social life, including things like friends and intimate sexual relationships, then you’re probably the type of person who has something better to do than read this page. In the event you’re reading this and also manage to have those kinds of relationships, you might have noticed that people who have recently (or not so recently) been hurt will sometimes go a bit off the rails. Perhaps they opt to drink to the point of blacking out and burning down their neighbor’s pets, or they might take a more subtle approach and simply become a slightly different person for a time (a little more extroverted, a little less concerned about safe sex, or suddenly prone to balling up in their closet and eating ice cream for days on end). I’ve kind of touched on this issue before when considering the function of depression, so today I’m going to shift gears a bit.

Right after I treat myself to some retail therapy…

I’ve written about victimhood several times in the past, and today I’d like to tie two pieces of information together to help understand a third. The first piece to consider is the Banker’s Paradox: people need to judge whom to invest their social capital in so as to get the most from their investment over the long term. Part of this assessment involves considering (a) whom your social capital would be most valuable for and (b) how likely the recipient of that investment would be to return it. Certain classes of people make better investment targets than others, depending on the context you currently find yourself (and them) in. The second piece of information involves how people attribute blame to victims and heroes: victims (conceptualized as someone who has a bad thing happen to them) are blamed less than heroes (conceptualized as someone who does good deeds) for later identical misdeeds.

In light of the Banker’s Paradox, the second finding makes more sense: victims may often make better social investment targets than heroes, at least with regard to their need for the assistance. Accordingly, if you’re looking to invest in someone and foster a relationship with them, casting them in a negative moral light probably won’t go very far towards achieving that goal. At the very least, if victims make better investment targets for other people, even if you aren’t looking to invest in that victim yourself, you run the risk of drawing condemnation from those other parties by siding against the victim they’re trying to invest in. To clarify this point, consider the story of Robin Hood; even if you were not benefiting directly from Robin stealing from the rich and giving to the poor, condemning him for theft would be unlikely to win you much support from the lower class he was helping. Unless you were looking to experience a public embarrassment and/or beating, keeping quiet about the whole stealing thing might be in your best interests.

With that in mind, it’s time to turn this analysis towards the perspective of the victimized party, who, up until now, has been treated in a relatively passive way: as someone whom things happen to, or someone who receives investment. Third parties, in this scenario, would invest socially in victims because the victims hold real or potential social value. This value could in turn be strategically leveraged by victims in order to achieve other useful outcomes for themselves. For instance, if victims are less likely to be morally condemned for their actions by others, this would give victims a bit of moral wiggle room to behave more selfishly towards others while more effectively avoiding the consequences of their actions. As summed up by The Joker in The Dark Knight, “If you’re good at something [or valuable to someone], never do it [or be that] for free”.

“Also, don’t exceed the recommended daily dosage on your drugs”.

This hypothesis was inadvertently tested by Zitek et al (2010), who examined the link between perceiving oneself as a victim and subsequent feelings of entitlement. I say inadvertently because, as is usually the case in psychology research, their experiment was carried out with no theory worth mentioning. Essentially, this research was done on a hunch, and the results were in no way explained. While I’d prefer not to have to harp on this point so frequently, it’s just too frequent of an issue to ignore. Moving on…

In the first experiment, Zitek et al (2010) asked two groups of subjects to either (a) write about a time they were bored or (b) write about a time life treated them unfairly. Following this, subjects were asked three questions relating to their sense of entitlement. Lastly, subjects were given a chance to help the experimenter with an additional task, ostensibly unrelated to the current experiment. This final measure could be considered an indirect assessment of the subjects’ current altruistic tendencies. When it came to the measures of entitlement, subjects who wrote about an unfair instance in their life rated themselves as more entitled than the control group (4.34 vs 3.85 out of 7, respectively). Further, subjects were less likely to help the experimenter out with the additional task in the unfair condition, relative to the bored one (60% vs 81% of subjects helped, respectively).

A second experiment altered the entitlement questions somewhat and also asked about the subjects’ selfish behavioral intentions in lieu of asking subjects to help out on the additional task. Zitek et al (2010) also asked about other aspects of the subjects’ current negative emotions, such as anger. The results for this experiment showed that subjects in the unfair condition were slightly more likely to report selfish behavioral intentions (3.78) than subjects in the bored condition (3.42). Similarly, subjects in the unfair condition also reported a greater sense of entitlement (4.91) relative to the bored group (4.54). The subjects’ current feelings of anger and frustration did not mediate this effect significantly, whereas feelings of entitlement did, suggesting there is something special about entitlement in this regard.

One final experiment got a little more personal: instead of just asking subjects about a time life was unfair to them, an experiment was run where subjects would lose out on a prize either fairly or unfairly. In the unfair loss condition, what appeared to be a computer glitch prevented subjects from being able to win, whereas in the fair loss condition the task subjects were given was designed to appear as if they were simply unable to solve it successfully in the time allotted. After this loss, subjects were asked how they would allocate money for a hypothetical experiment between themselves and another player, contingent on them outperforming that other player 70% of the time. Again, the same effect popped up, where subjects in the unfair loss condition suggested they should get more money in the hypothetical experiment ($3.93 out of $6) relative to the fair loss condition ($3.64). While that might not seem like much of a difference, when considering only the most selfish allocations of money, those in the unfair loss condition were more than twice as likely (19%) to make such divisions as those in the fair loss condition (8%).

The fourth experiment might have got a little out of hand…

So, being a victim would seem to make people slightly more selfish in these cases. The size of this effect wasn’t particularly impressive in the current experiments, but the measures of victimization were rather tame; two involved experiences long past, and the third involved victimization at the hands of a relatively impersonal force – a computer glitch. More recent and more intense victimization might do something to change the extent of this effect, but that will have to be a matter for future research to sort out; research that might be a little difficult to conduct, as most review boards aren’t too keen on approving research that aims to cause significant discomfort for the subjects.

That said, many people might have easily made the opposite prediction: that being victimized would lead people to become more altruistic and less selfish, perhaps based in some proximate empathy model (i.e. “I don’t want to see people hurt the way I was”). While I certainly wouldn’t want to write off such a possible outcome, given the proper context, discussing that possibility will be a job for another day. What I will suggest is that we shouldn’t expect victimhood to make people do one and only one thing; we should expect their behavior will be highly dependent on their contexts. After all, the appropriateness of a behavior can only be determined contextually, and behaving selfishly with reckless abandon is still a risky proposition, victim or not.

References: Zitek, E.M., Jordan, A.H., Monin, B., & Leach, F.R. (2010). Victim entitlement to behave selfishly. Journal of Personality and Social Psychology, 98(2), 245-255. PMID: 20085398

Why All The Fuss About Equality?

In my last post, I discussed why the term “inequality aversion” is a rather inelegant way to express certain human motivations. A desire to not be personally disadvantaged is not the same thing as a desire for equity, more generally. Further, people readily tolerate and create inequality when it’s advantageous for them to do so, and such behavior is readily understandable from an evolutionary perspective. What isn’t as easily understandable – at least not as immediately – is why equality should matter at all. Most research on the subject of equality would appear to just take it for granted (more-or-less) that equality matters without making a concerted attempt to understand why that should be the case. This paper isn’t much different.

On the plus side, at least they’re consistent.

The paper by Raihani & McAuliffe (2012) sought to disentangle two possible competing motives when it comes to punishing behavior: inequality and reciprocity. The authors note that previous research examining punishment often confounds these two possible motives. For instance, let’s say you’re playing a standard prisoner’s dilemma game: you have a choice to either cooperate or defect, and let’s further say that you opt for cooperation. If your opponent defects, not only do you lose out on the money you would have made had he cooperated, but your opponent also ends up with more money than you do overall. If you decided to punish the defector, the question arises of whether that punishment is driven by the loss of money, the aversion to the disadvantageous inequality, or some combination of the two.

To separate the two motives out, Raihani & McAuliffe (2012) used a taking game. Subjects would play the role of either player 1 or player 2 in one of three conditions. In all conditions, player 1 was given an initial endowment of seventy cents; player 2, on the other hand, started out with either ten cents, thirty cents, or seventy cents. In all conditions, player 2 was then given the option of taking twenty cents from player 1, and following that decision player 1 was then given the option of paying ten cents to reduce player 2’s payoff by thirty cents. The significance of these conditions is that, in the first two, if player 2 takes the twenty cents, no disadvantageous inequality is created for player 1. However, in the last condition, by taking the twenty cents, player 2 creates that inequality. While the overall loss of money across conditions is identical, the context of that loss in terms of equality is not. The question, then, is how often player 1 would punish player 2.
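
To make the payoff logic easy to see, here’s a quick sketch of the three conditions as I’ve just described them (my own illustration in Python, not anything from the paper itself): only the 70/70 condition lets taking leave player 1 with less than player 2.

```python
# A toy enumeration of the taking game described above (amounts in cents).
# Player 1 always starts with 70; player 2 starts with 10, 30, or 70 and may
# take 20 from player 1.

P1_START = 70
TAKE = 20

for p2_start in (10, 30, 70):
    for p2_takes in (False, True):
        p1 = P1_START - TAKE if p2_takes else P1_START
        p2 = p2_start + TAKE if p2_takes else p2_start
        disadvantaged = p2 > p1  # disadvantageous inequality for player 1?
        print(f"P2 starts {p2_start:2d} | takes: {str(p2_takes):5s} | "
              f"P1: {p1}, P2: {p2} | P1 disadvantaged: {disadvantaged}")
```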

In the first two conditions, where no disadvantageous inequality was created for player 1, player 1 didn’t punish significantly more whether player 2 had taken money or not (approximately 13%). In the third treatment, where player 2’s taking did create that kind of inequality, player 1 was now far more likely to pay to punish (approximately 40%). So this is a pretty neat result, and it mirrors past work that came at the question from the opposite angle (Xiao & Houser, 2010; see here). The real question, though, concerns how we are to interpret this finding. These results, in and of themselves, don’t tell us a whole lot about why equality matters when it comes to punishment decisions.

They also don’t tell me much about this terrible itch I’ve been experiencing lately, but that’s a post for another day.

I think it’s worth noting that the study still does, despite its best efforts, confound losing money and generating inequality; in no condition can player 2 create disadvantageous inequality for player 1 without taking money away as well. Accordingly, I can’t bring myself to agree with the authors, who conclude:

  Together, these results suggest that disadvantageous inequality is the driving force motivating punishment, implying that the proximate motives underpinning human punishment might therefore stem from inequality aversion rather than the desire to reciprocate losses.

It could still well be the case that player 1 would rather not have twenty cents taken from them, thank you very much, but doesn’t reciprocate with punishment for other reasons. To use a more real-life context, let’s say you have a guest come to your house. At some point after that guest has left, you discover that he apparently also left with some of the cash you had been saving to buy whatever expensive thing you had your eye on at the time. When it came to deciding whether or not you desired to see that person punished for what they did, precisely how well off they were relative to you might not be your first concern. The theft would not, I imagine, automatically become OK in the event that the guy only took your money because you were richer than he was. A psychology that was designed to function in such a manner would leave one wide-open for exploitation by selfish others.

However, how well off you were, relative to how needy the person in question was, might have a larger effect in the minds of other third party condemners. The sentiment behind the tale of Robin Hood serves as an example towards that end: stealing from the rich is less likely to be condemned by others than stealing from one of lower standing. If other third parties are less likely, for whatever reason, to support your decision to punish another individual in contexts where you’re advantaged over the person being punished, punishment immediately risks becoming more costly. At that point, it might be less costly to tolerate the theft than to risk condemnation by others for taking action against it.

What might be better referred to as "The Metallica v. Napster Principle".

One final issue I have with the paper is a semantic one: the authors label the act of player 2 taking money as cheating, which doesn’t fit my preferred definition (or, frankly, any definition of cheating I’ve ever seen). I favor the Tooby and Cosmides definition where a cheater is defined as “…an individual who accepts a benefit without satisfying the requirements that provision of that benefit was made contingent upon.” As there was no condition required for player 2 to be allowed to take money from player 1, it could hardly be considered an act of cheating. This seemingly minor issue, however, might actually hold some real significance, in the Freudian sense of the word.

To me, that choice of phrasing implies that the authors realize that, as I previously suggested, player 1s would really prefer if player 2s didn’t take any money from them; after all, why would they? More money is better than less. This highlights, for me, the very real and very likely possibility that what player 1s were actually punishing was having money taken from them, rather than the inequality, but they were only willing to punish in force when that punishment could more successfully be justified to others.

References: Raihani, N.J., & McAuliffe, K. (2012). Human punishment is motivated by inequity aversion, not a desire for reciprocity. Biology Letters. PMID: 22809719

Xiao, E., & Houser, D. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470. DOI: 10.1016/j.joep.2010.02.001

Inequality Aversion Aversion

While I’ve touched on the issues surrounding the concept of “fairness” before, there’s one particular term that tends to follow the concept around like a bad case of fleas: inequality aversion. Following the proud tradition of most psychological research, the term manages to describe certain states of affairs (kind of) without so much as an iota of explanatory power, while at the same time placing the emphasis, conceptually, on the wrong variable. In order to better understand why (some) people (some of the time) behave “fairly” towards others, we’re going to need to address both of the problems with the term. So, let’s tear the thing down to the foundation and see what we’re working with.

“Be careful; this whole thing could collapse for, like, no reason”

Let’s start off with the former issue: when people talk about inequality aversion, what are they referring to? Unsurprisingly, the term would appear to refer to the fact that people tend to show some degree of concern for how resources are divided among multiple parties. We can use the classic dictator game as a good example: when given full power over the ability to divide some amount of money, dictator players often split the money equally (or near-equally) between themselves and another player. Further, the receivers in the dictator games also tend to both respond to equal offers with favorable remarks and respond to unequal offers with negative remarks (Ellingsen & Johannesson, 2008). The remaining issue, then, concerns how we are to interpret findings like this, and why we should interpret them in such a fashion.

Simply stating that people are averse to inequality is, at best, a restatement of those findings. At worst, it’s misleading, as people will readily tolerate inequality when it benefits them. Take the dictators in the example above: many of them (in fact, the majority of them) appear perfectly willing to make unequal offers so long as they’re on the side that’s benefiting from that inequality. This phenomenon is also illustrated by the fact that, when given access to asymmetrical knowledge, almost all people take advantage of that knowledge for their own benefit (Pillutla & Murnighan, 1995). As a final demonstration, take two groups of subjects, each subject given the task of assigning themselves and another subject to one of two tasks: the first task is described as allowing the subject a chance to win $30, while the other task has no reward and is described as being dull and boring.

In the first of these two groups, since subjects can assign themselves to whichever task they want, it’s perhaps unsurprising that 90% of the subjects assigned themselves to the more attractive task; that’s just simple, boring self-interest. Making money is certainly preferable to being bored out of your mind, but automatically assigning yourself to the positive task might not be considered the fairest option. The second group, however, flipped a coin in private first to determine how they would assign tasks, and following that flip made their assignment. In this group, since coins are impartial and all, it should not come as a surprise that…90% of the subjects again assigned themselves to the positive task when all was said and done (Batson, 2008). How very inequality averse and fair of them.

“Heads I win; Tails I also win.”

A recent (and brief) paper by Houser and Xiao (2010) examined the extent to which people are apparently just fine with inequality, but from the opposite direction: taking money away instead of offering it. In their experiment, subjects played a standard dictator game at first. The dictator had $10 to divide however they chose. Following this division, both the dictator and the receiver were given an additional $2. Finally, the receiver was given the opportunity to pay a fixed cost of $1 for the ability to reduce the dictator’s payoff by any amount. A second experimental group took part in the same task, except that the dictator was passive: the division of the $10 was made at random by a computer program, representing simple chance factors.
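
To keep the payoff structure straight, here’s a minimal sketch of the design as just described (again my own illustration in Python; the specific splits and deductions below are hypothetical examples, not figures from the paper):

```python
# A rough sketch of the payoffs described above, in dollars. The dictator
# keeps `kept` out of the $10 pot, both players then receive a $2 bonus, and
# the receiver may pay a fixed $1 to deduct any amount from the dictator.

def payoffs(kept, deduction=0.0):
    dictator = kept + 2.0
    receiver = (10.0 - kept) + 2.0
    if deduction > 0:
        receiver -= 1.0          # the fixed cost of punishing
        dictator -= deduction    # an amount chosen freely by the receiver
    return dictator, receiver

print(payoffs(kept=8.0))                 # (10.0, 4.0): an unequal split, unpunished
print(payoffs(kept=8.0, deduction=7.0))  # (3.0, 3.0): punished back to equality
print(payoffs(kept=8.0, deduction=8.0))  # (2.0, 3.0): punished past equality,
                                         # which is what most punishers did
```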

A general preference to avoid inequality would, one could predict, be relatively unconcerned with the nature of that inequality: whether it came about through chance factors or intentional behavior should be irrelevant. For instance, if I don’t like drinking coffee, I should be relatively averse to the idea whether I was randomly assigned to drink it or whether someone intentionally assigned me to drink it. However, when it came to the receivers deciding whether or not to “correct” the inequality, precisely how that inequality came about mattered: when the division was randomly determined, about 20% of subjects paid the $1 in order to reduce the other player’s payoff, as opposed to the 54% of subjects who paid the cost in the intentional condition (Note: both of these percentages refer to cases in which the receiver was given less than half of the dictator’s initial endowment). Further still, the subjects in the random treatment deducted less, on average, than the subjects in the intentional treatment.

The other interesting part about this punishment, as it pertains to inequality aversion, is that most people who did punish did not just make the payoffs even; the receivers deducted money from the dictators to the point that the receivers ended up with more money overall in the end. Rather than seeking equality, the punishing receivers brought about inequality that favored themselves, to the tune of 73% of the punishers in the intentional treatment and 66% in the random treatment (which did not differ significantly). The authors conclude:

…[O]ur data suggest that people are more willing to tolerate inequality when it is caused by nature than when it is intentionally created by humans. Nevertheless, in both cases, a large majority of punishers attempt to achieve advantageous inequality. (p.22)

Now that the demolition is over, we can start rebuilding.

This punishment finding also sheds some conceptual light on why inequality aversion puts the emphasis on the wrong variable: people are not averse to inequality, per se, but rather seem to be averse to punishment and condemnation, and one way of avoiding punishment is to make equal offers (of the dictators that made an equal or better offer, only 4.5% were punished). This finding highlights the problem of assuming a preference based on an outcome: just because some subjects make equal offers in a dictator game, it does not follow that they have a genuine preference for making equal offers. Similarly, just because men and women (by mathematical definition) are going to have the same number of opposite-sex sexual partners, it does not follow that this outcome was obtained because they desired the same number.

That is all, of course, not to say that preferences for equality don’t exist at all; it’s just that, while people may have some motivations that incline them towards equality in some cases, those motivations come with some rather extreme caveats. People do not appear averse to inequality generally, but rather appear strategically interested in (at least) appearing fair. Then again, fairness really is a fuzzy concept, isn’t it?

References: Batson, C.D. (2008). Moral masquerades: Experimental exploration of the nature of moral motivation. Phenomenology and the Cognitive Sciences, 7, 51-66

Ellingsen, T., & Johannesson, M. (2008). Anticipated verbal feedback induces altruistic behavior. Evolution and Human Behavior. DOI: 10.1016/j.evolhumbehav.2007.11.001

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.

50 Shades Of Grey (When It Comes To Defining Rape)

For those of you who haven’t been following such things lately, Daniel Tosh recently catalyzed an internet firestorm of offense. The story goes something like this: at one of his shows, he was making some jokes or comments about rape. One woman in the audience was upset by whatever Daniel said and yelled out that rape jokes are never funny. In response to the heckler, Tosh either (a) made a comment about how the heckler was probably raped herself, or (b) suggested it would be funny were the heckler to get raped, depending upon which story you favor. The ensuing outrage seems to have culminated in a petition to have Daniel Tosh fired from Comedy Central, which many people ironically suggested has nothing at all to do with censorship.

This whole issue has proved quite interesting to me for several reasons. First, it highlights some of the problems I recently discussed concerning third party coordination: namely that publicly observable signals aren’t much use to people who aren’t at least eye witnesses. We need to rely on what other people tell us, and that can be problematic in the face of conflicting stories. It also demonstrated the issues third parties face when it comes to inferring things like harm and intentions: the comments about the incident ranged from a heckler getting what they deserved through to the comment being construed as a rape threat. Words like “rape-apologist” then got thrown around a lot towards Tosh and his supporters.

Just like how whoever made this is probably an anti-Semite and a Nazi sympathizer

While reading perhaps the most widely-circulated article about the affair, I happened to come across another perceptual claim that I’d like to talk about today:

According to the CDC, one in four female college students report that they’ve been sexually assaulted (and when you consider how many rapes go unreported, because of the way we shame victims and trivialize rape, the actual number is almost certainly much higher).

Twenty-five percent would appear alarmingly high; perhaps too high, especially when placed in the context of verbal mud-slinging. A slightly involved example should demonstrate why this claim shouldn’t be taken at face value: in 2008, the United States population was roughly 300 million (rounding down). To make things simple, let’s assume (a) half the population is made up of women, (b) the average woman finishing college is around 22 and (c) any woman’s chances of being raped are equal, set at 25%. Now, in 2008, there were roughly 15 million women in the 18-24 age group; they are our first sample. If the 25% number was accurate, you’d expect that, among women ages 18-24, 3.75 million of them should have been raped at some point throughout their lives, or roughly 170,000 rape victims per year in that cohort (assuming rape rates are constant from birth through age 22). In other words, each year, roughly 1.13% of the women who hadn’t previously been raped would be raped (and no one else would be).

Let’s compare that 1% number to the number of reported rapes in the entire US in 2008: thirty rapes per hundred-thousand people, or 0.03%. Even after doubling that number (assuming all reported rapes come from women, and women are half the population, so the reported number is out of fifty-thousand, not a hundred-thousand), we only make it to 0.06%. In order to make it to 1.13%, you would have to posit that for each reported rape there were about 19 unreported ones. For those who are following along with the math, that would mean that roughly 95% of rapes would never have been reported. While 95% unreported might seem like a plausible rate to some, it’s a bit difficult to verify.

Rape, of course, doesn’t have a cut-off point for age, so let’s expand our sample to include women ages 25-44. Using the same assumption of a roughly 1% victimization rate per year, that would now mean that by age 44 almost half of all women would have experienced an instance of rape. We’re venturing farther into the realm of claims that can’t be taken at face value. Combining those two figures would also imply that a woman between 18 and 44 is getting raped in the US roughly every 30 seconds. So what gives: are things really that bad, are the assumptions wrong, is my math off, or is something else amiss?
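
For anyone who wants to check the arithmetic themselves, here’s the back-of-envelope calculation laid out as a short Python sketch. It uses the same rough assumptions as the text above (not actual census or crime data), so the outputs only illustrate the argument:

```python
# Back-of-envelope check of the figures above, using the text's assumptions.

lifetime_rate = 0.25           # the "one in four" figure by the end of college
years_at_risk = 22             # birth through roughly age 22
cohort_18_24 = 15_000_000      # rough number of US women aged 18-24 in 2008

victims_in_cohort = lifetime_rate * cohort_18_24       # ~3.75 million
victims_per_year = victims_in_cohort / years_at_risk   # ~170,000
annual_rate = lifetime_rate / years_at_risk            # ~1.1% per year

# Reported rapes: ~30 per 100,000 people, doubled to express it per woman.
reported_rate = 0.0003 * 2                             # 0.06% per year
actual_per_reported = annual_rate / reported_rate      # ~19 actual per reported
unreported_share = 1 - reported_rate / annual_rate     # ~95% never reported

by_age_44 = annual_rate * 44   # simple, non-compounded accumulation -> ~50%

print(f"Victims per year in the 18-24 cohort: {victims_per_year:,.0f}")
print(f"Implied annual victimization rate: {annual_rate:.1%}")
print(f"Actual incidents per reported one: {actual_per_reported:.0f}")
print(f"Implied share of incidents never reported: {unreported_share:.0%}")
print(f"Implied share of women victimized by age 44: {by_age_44:.0%}")
```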

Since I can’t really make heads or tails of any of this, I’m going with my math.

Some of the assumptions are, in fact, not likely to be accurate (such as a consistent rate of victimization across age groups), but there’s more to it than that. Another part of the issue stems from defining the term “rape” in the first place. As Koss (1993) notes, the way she defined rape in her own research – the research that came upon that 25% figure – appeared to differ tremendously from the way her subjects did. The difference was so stark that roughly 75% of the participants that Koss had labeled as having experienced rape did not, themselves, consider the experience to be rape. This is somewhat concerning for one of two big reasons: either the perceived rate of rape might be a low-ball estimate (we’ll call this the ignorance hypothesis), or the rate of rape might be inflated rather dramatically by definitional issues (we’ll call this the arrogance hypothesis).

Depending on your point of view – what you perceive to be, or label as, rape – either one of these hypotheses could be true. What is not true is the notion that one in four college-aged women report that they’ve been sexually assaulted; they might report they’ve had unwanted sex, or have been coerced into having sex, but not that they were assaulted. As it turns out, that’s quite a valuable distinction to make.

Hamby and Koss (2003) expanded on this issue, using focus groups to help understand this discrepancy. Whereas one in four women might describe their first act of intercourse as something they went along with but was unwanted, only one in twenty-five report that it was forced (in, ironically, a forced-choice survey). Similarly, while one in four women might report that they gave in to having sex due to verbal or psychological pressure, only one in ten report that they engaged in sexual intercourse because of the use or threat of physical force. It would seem that there is a great deal of ambiguity surrounding words like coercion, force, voluntary, or unwanted when it comes to asking about sexual matters: was the mere fear of force, absent any explicit uses or threats, enough to count? If the woman didn’t want to have sex, but said yes to try and maintain a relationship, did that count as coercion? The focus groups had many questions, and I feel that means many researchers might be measuring a number of factors they hadn’t intended to, lumping all of them together under the umbrella of sexual assault.

The focus groups, unsurprisingly, made distinctions between wanting sex and voluntarily having sex; they also noted that it might often be difficult for people to distinguish between internal and external pressures to have sex. These are, frankly, good distinctions to make. I might not want to go into work, but that I show up there anyway doesn’t mean I was being made to work involuntarily. I might also not have any internal motivation to work, per se, but rather be motivated to make money; that I can only make money if I work doesn’t mean most people would agree that the person I work for is effectively forcing me to work.

No one makes me wear it; I just do because I think it’s got swag

When we include sex that was acquiesced to, but unwanted, in these figures – rather than what the women themselves consider rape – then you’ll no doubt find more rape. Which is fine, as far as definitional issues go; it just requires the people reporting these numbers to be specific as to what they’re reporting about. As concepts like wanting, forcing, and coercing are measured in degree rather than kind, one could, in principle, define rape in a seemingly endless number of ways. This puts the burden on researchers to be as specific as possible when formulating these questions and drawing their conclusions, as it can be difficult to accurately infer what subjects were thinking about when they were answering the questions.

References: Hamby, S.L., & Koss, M.P. (2003). Shades of gray: A qualitative study of terms used in the measurement of sexual victimization. Psychology of Women Quarterly. DOI: 10.1111/1471-6402.00104

Koss, M.P. (1993). Detecting the scope of rape: A review of prevalence research methods. Journal of Interpersonal Violence. DOI: 10.1177/088626093008002004

Some Research I Was Going To Do

I have a number of research projects lined up for my upcoming dissertation, and, as anyone familiar with my ideas can tell you, they’re all brilliant. You can imagine my disappointment, then, to find out that not only had one of my experiments been scooped by another author three years prior, but they found the precise patterns of results I had predicted. Adding insult to injury, the theoretical underpinnings of the work are all but non-existent (as is the case in the vast majority of the psychology research literature), meaning I got scooped by someone who doesn’t seem to have a good idea why they found the results they did. My consolation prize is that I get to write about it earlier than expected, so there’s that, I suppose…

What? No, I’m not crying; I just got something in my eye. Or allergies. Whatever.

The experiment itself (Morewedge, 2009) resembles a Turing test. Subjects come into the lab to play a series of ultimatum games. One person has to divide a pot of $3 in one of three ways – $2.25/$0.75 (favoring either the divider or the receiver) or evenly ($1.50 for each) – and the receiver can either accept or reject these offers. The variable of interest, however, is not whether the receiver will accept the money; it’s whether the receiver perceives the offer to have been made by a real person or a computer program, as all the subjects were informed beforehand that the proposers they would encounter were drawn randomly from a pool of computer programs or real players. In essence, the experiment was examining whether or not participants perceived they were playing with an intentional agent (a human) or a non-intentional agent, representing chance (a computer), contingent on the outcome. A brilliant experiment that I thought of first, a mere three years after it had already been published.

Overall, the subjects were no more or less likely to suggest they were playing against a person or a computer, and were also no more likely to see a person or a computer as being responsible when they received an even split. However, this was not the case across the other two offers: when subjects received an unusually positive offer ($2.25 for themselves), they tended to attribute this offer to a computer; when the offer was negative ($0.75 for themselves) they overwhelmingly attributed it to a person. When causation was ambiguous, “harm” (used rather loosely here) seemed to trigger the perception of an intentional agent, whereas a benefit tended to have the opposite effect: a decrease in perceptions of intentionality.

This pattern of results could be explained by referencing some kind of self-interested motive: given what we know about people (they seem to prefer more money to less), an unusually altruistic offer is just that – unusual. It would seem more likely that selfish offers come from self-interested agents, rather than an altruistic offer coming from a selfish agent. In an attempt to remove such thinking from (conscious) consideration, a second experiment was run. This time, the person deciding how the pot would be split had no personal stake in the decision; that is to say, subjects were told that their partners would not receive any payment from the games. No matter whether the proposer offered $0.75 or $2.25, no matter whether the offer was accepted or rejected, proposers made no money. However, the results in this case still looked similar: selfish offers were seen as being made predominately by people, and altruistic offers were seen as being made predominately by computers.

But so help me if a computer has been screwing me over again…

In the next two experiments Morewedge (2009) got a bit more explicit about the role of chance. In the first case, subjects came into the experiment and met a confederate before both were taken into separate rooms and seated at computers. On the computer was a circle divided up into five sections with a spinning arrow in the middle. Each of the five sections of the circle was filled with one of two colors, with the numbers of each color varying from one to four across each spin (so the circle could have 4 blue and 1 yellow slots in one round, then 3 blue and 2 yellow in the next, and so on). Subjects chose one color for the experiment beforehand, and if the spinner landed on their color in any given round, they would win the amount of money listed on the screen; if it didn’t land on their color, they would lose the amount on the screen. These amounts varied between zero and one dollar in $0.25 increments.

The subjects were informed of two other things: first, that the confederate was in a similar position (the confederate had picked one of the two colors as well, so they might both have the same color or not), and second, that the confederate could control the outcome of five of the forty rounds that would be played. It was the subject’s job to report the extent to which they felt the confederate had used their influence on each round. When the subjects had lost money, they tended to attribute responsibility for the action more towards the confederate than chance, and this tendency was exacerbated as the monetary stakes got higher. This effect is all the more interesting when considering the fact that subjects couldn’t be sure whether the confederate had similar or divergent interests. If they had the same interests, the confederate would be hurting both of them if he used his influence to purposely lose. Also of interest is that, when asked to estimate how much money they had ended up with – $5 in all cases; the initial endowment they were given – subjects underestimated, reporting that they ended up with only $2.89 on average. They were either bad at math, or their memory was distorted towards recalling losses selectively.

Finally, the last experiment used a paradigm similar to that typically used for children: how long participants would spend looking at something. In this case, that something was information presented to the participant following a round with the spinner. The setup was similar, except the subjects were told a confederate could control half of the outcomes, and after each round the subject was told whether the confederate had controlled the round or not. Once presented with this information, subjects had to press a button to proceed to the next screen, and the amount of time they waited before pressing that button was used as a measure of the time participants were processing the information on the screen. Participants ended up spending more time looking at the screen when it was revealed that the confederate was responsible for their win, relative to being responsible for their loss, but looked equally long when chance was implicated as responsible. This result could tentatively suggest that participants found it surprising that the confederate was responsible for their wins, implying that the more automatic process might be one of blaming others for losses.

For example, I’d be surprised if he still has that suit and that prostitute after the dice land.

Now to the theory. I will give credit where credit is due: Morewedge (2009) does at least suggest there might have been evolutionary advantages to such a bias, but promptly fails to elaborate on them in any substantial way. The first possible explanation given is that this bias could be used to defer responsibility for negative outcomes from oneself to others, which is an odd explanation given that the subjects in this experiment had no responsibility to defer. The second possible explanation is that people might attribute negative outcomes to others in order to not feel sad, which is, frankly, silly. The fourth (going out of order) explanation is that such a bias might just represent a common, mild form of “disordered” thinking concerning a persecution complex, which is really no explanation at all. The third, least silly, explanation is that:

“By assuming the presence of an antagonist, one may better be able to avoid a quick repetition of the unpleasant event one has just experienced” (p. 543)

Here, though, I feel Morewedge is making the mistake of assuming past selection pressures resembled the conditions set up in the experiment. I’m not quite sure how else to read that section, nor do I feel that the experimental paradigm was particularly representative of past selection pressures or relevant environmental contexts.

Now, if Morewedge had placed his findings in some framework concerning how social actions are all partly a result of intentional and chance factors, how perpetrators tend to conceal or downplay their immoral actions or intentions, how victims need to convince third parties to punish others who haven’t wronged them directly, and how certain inputs (such as harm) might better allow victims to persuade others, he’d have a very nice paper indeed. Unfortunately, he misses the strategic, functional element to these biases. When taken out of context, such biases can look “disordered” indeed, in much the same way that, when put underwater, my car seems disordered in its use as a submarine.

References: Morewedge, C.K. (2009). Negativity bias in attribution of external agency. Journal of Experimental Psychology: General, 138(4), 535-545. PMID: 19883135

Social Banking

“Bankers have a limited amount of money, and must choose who to invest it in. Each choice is a gamble: taken together, they must ultimately yield a net profit, or the banker will go out of business. This set of incentives yield a common complaint about the banking system: that bankers will only lend money to individuals who don’t need it. The harsh irony of the banker’s paradox is this: just when individuals need the money most desperately, they are also the poorest credit risk and, therefore, the least likely to be selected to receive a loan” – Tooby & Cosmides (1996, p. 131)

While perhaps more of a set of unfortunate circumstances than an actual paradox (in true Alanis Morissette fashion), the banker’s paradox can be a useful metaphor for understanding social interactions. Specifically, it can help guide predictions as to how we would expect the victim/perpetrator/third party dynamic to play itself out, and, more importantly, help explain why we would have such expectations. The time and energy we can invest in others socially – in terms of building and maintaining friendships – is a lot like money; we cannot spend it in two places at once. Given that we have a limited budget with which to build and maintain relationships, it’s of vital importance for some cognitive system to assess the probability of social returns from its investment; likewise, individuals have a vested interest in manipulating that assessment in others in order to further their goals.

And, for the record, reading my site will yield a large social return on your investment. Promise.

The first matter to touch on is why a third party would feel compelled to get involved in other people’s disputes. One reason might be the potential for the third party to gain accurate information about the likely behavior of others. If person A claims that person B is a liar, and it’s true, person C could potentially benefit from knowing that. Of course, if it’s not true, then person C would likely have been better off ignoring that information. Further, if the behavior of person B towards person A lacks predictive value of how person B will behave towards person C, then the usefulness of such information is again compromised. For instance, while an older sibling might physically dominate a younger sibling, it does not mean that older sibling will in turn dominate his other classmates or his friends. Given the twin possibilities of either receiving inaccurate information or accurate but useless information, it remains questionable as to how much third party involvement this hypothesis could explain.

Beyond information value, however, third parties may also get involved in others’ conflicts in the service of forming and maintaining valuable social alliances. Here, the accuracy of the information is less of an issue. Even if it’s true that person B is an unsavory character, he may also be a useful person to have as an ally (or at least, not have as an enemy; as the now famous quote goes, more or less: “He might be a son of a bitch, but he’s our son of a bitch”). As I touched on previously, the accuracy of our perceptions is only relevant to the extent that accuracy leads to useful outcomes; accuracy for its own sake is not something that could be selected for. This suggests that we shouldn’t expect our evaluations of victimhood claims to be objective or consistent; we should expect them to be useful and strategic. Our moral templates shouldn’t be automatically completed in all cases, as our visual templates are for the Kanizsa Triangle; in fact, we should expect inputs to often be erased from our moral templates – something of an automatic removal.

Let’s now return to the banker’s paradox. In the moral realm, our investments come with higher stakes than they do in the friendship realm. To side with one party in a moral judgment is not to simply invest your time in one person over another; it involves actively harming other potential investment partners, potentially alienating them directly and their allies indirectly (and harming them can bring with it associated retribution). That said, aligning yourself with someone making a moral claim can bring huge benefits, in the form of reciprocal social support and building alliances. As Tooby and Cosmides put it:

…[I]f you are unusually or uniquely valuable to someone else – for whatever reason – then that person has an uncommonly strong interest in your survival during times of difficulty. The interest they have in your survival makes them, therefore, highly valuable to you. (p.140)

So the question remains: In the context of claims to victimhood, how does someone make themselves appear valuable to others, in order to recruit their support?

Please say the answer involves trips to the red light district…

There are two distinct ways of doing this which come to mind: making yourself look like a better investment and/or making others appear to be a worse one. Victims face a tricky dilemma in regard to the first item: they need to make themselves appear to genuinely have been a victim while not making themselves look too easily victimizable. To make oneself look too victimizable is to make oneself look like a bad investment: one that will frequently need support and be relatively inept at returning the assistance. Going too far in the other direction, though, by making oneself out to be relatively unharmable, could have a negative effect on your ability to recruit support as well. This is because, as in the banker’s paradox, rich people don’t really need money, and, accordingly, people in strong positions socially are not generally viewed as needing help either; you rarely find people concerned with the plight of the rich and powerful. Those who don’t need help may not be the most grateful for it, nor the most likely to reciprocate it. Tooby and Cosmides (1996) recognized this issue, writing:

“…[I]f a person’s trouble is temporary or they can easily be returned to a position of full benefit-dispersing competence by feasible amounts of assistance…then personal troubles should not make someone a less attractive object of assistance. Indeed, a person who is in this kind of trouble might be a more attractive object of investment than one who is currently safe, because the same delivered investment will be valued more by the person in dire need.” (p.132)

Such a line of reasoning would imply we should expect to find victims trying to manipulate (a) the perceptions others have of their need, (b) their eventual usefulness, and (c) the perceptions others have concerning the needs and usefulness of the perpetrator. Likewise, perpetrators should be engaging in counter-manipulation along precisely the same dimensions. We would also expect that victims and perpetrators might try and sway the cost/benefit analysis in third parties via the use of warnings and threats – implicit or explicit – about the consequences of siding with one party or another. Remember, third parties are not making these judgments in a vacuum; if the majority of third parties side with person A, third parties that sided with person B might now find themselves on the receiving end of social sanctions or ostracism.

DeScioli & Kurzban (2012), realizing this issue, posit that the human mind contains adaptations for coordinating which side to take in a dispute with other third parties, so as to avoid the costs of potential despotism on the one hand, and the costs of inter-alliance fighting on the other. If a publicly observable signal not tied to one’s individual identity is used for coordinating third party involvement – i.e. all third parties will align together against an actor for doing X (killing, lying, saying the wrong thing, etc.), no matter who does it – third parties can solve the problem of discoordination with one another. However, one notable problem with this approach is the informational hurdle I mentioned previously: most people are not witnesses to the vast majority of acts people engage in. Now, if person A suggests that person B has done something morally wrong, and person B denies it, provided the two are the only witnesses to the act, there’s not a whole lot to go on in terms of publicly observable signals. Without such signals (and even with them), the mind needs to use whatever available information it has to make such a judgment, and that information largely revolves around the identity of the actors in question.

And some people just aren’t very good actors.

I’d like to return briefly to a finding I’ve discussed before: men and women agree that women tend to be more discriminated against than men, even in the face of contradictory evidence. This finding might arise because people are perceiving – accurately – that women tend to be objectively more victimized. It might also arise because certain classes of people – in this case, women, relative to men – are viewed as being better investments of limited social capital. For instance, in terms of future rewards, it might be a good idea for a man to align himself with a woman – or, at the very least, not align himself against her – even in the event she’s guilty; moral condemnation does not tend to get the romance flowing, from my limited understanding of human interaction.

It would follow, then, that the automatic completion vs automatic deletion threshold for our moral templates should vary, contingent on the actor in question: friends and family have a different threshold than strangers; possible romantic interests have a different threshold than those we find romantically repulsive. Alliances might even serve as potential tipping points for third parties. Let’s say persons A and B get involved in a dispute; even if person A is clearly in the wrong, if person A already has a large number of partial backers, the playing field is no longer level for third party involvement. Third party involvement can be driven by a large number of factors, and we shouldn’t expect all moral claims to be viewed equally, even in cases where the underlying logic is the same. The goal is usefulness, not consistency.

References: DeScioli, P., & Kurzban, R. (2012). A solution to the mysteries of morality. Psychological Bulletin. DOI: 10.1037/a0029065

Tooby, J., & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

Kanizsa’s Morality

While we rely on our senses to navigate through life, there are certain quirks about the way our perception works that we often aren’t consciously aware of. It’s only when we encounter illusions, the most well-known of which tend to inhabit the visual domain, that certain inner workings of our perception modules become apparent. Take the checkerboard illusion as a good example: given the proper context, our visual system is capable of perceiving the two squares to be different colors despite the fact that they are the same color. On top of bringing certain facets of our visual modules into stark relief, the illusion demonstrates one other very important fact about our cognition: accuracy need not always be the goal. Our visual systems were only selected to be as good as they needed to be in order for us to do useful things, given the environment we tended to find ourselves in; they were not selected to be perfectly accurate in each and every situation they might find themselves in.

See, Criss Angel? It’s not that hard to do your job.

That endearing little figure is known as Kanizsa’s Triangle. While there is no actual triangle in the figure, some cognitive template is being automatically filled in given inputs from certain modules (probably ones designed for detecting edges and contrast), and the end result is that illusory perception; our mind automatically completes the picture, so to speak. This kind of automatic completion can have its uses, like allowing inferences to be drawn from a limited amount of information relatively quickly. Without such cognitive templates, tasks like learning language – or not walking into things – would be far more difficult, if not downright impossible. While picking up on recurrent and useful patterns of information in the world might lead to a perceptual quirk here and there, especially in highly abnormal and contrived scenarios like the previous two illusions, the occasional misfire is worth the associated gains.

Now let’s suppose that instead of detecting edges and contrasts we’re talking about detecting intentions and harm – the realm of morality. Might there be some input conditions that (to some extent) automatically result in a cognitive moral template being completed? Perhaps the most notable case came from Knobe (2003):

The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment, I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

When asked, most people in this case suggested that the negative outcome was brought about intentionally and that the chairman should be punished. When the word “harm” is replaced by “help,” people’s answers reverse, and they now say the chairman wasn’t helping intentionally and deserves no praise.

Further research on the subject by Guglielmo & Malle (2010) found that the “I don’t care at all about…” in the preceding paragraph was indeed viewed differently by people depending on whether or not the person who said it was conforming to or violating a norm. When violating a norm, people tended to perceive some desire for that outcome in the violator, despite the violator stating they don’t care one way or the other; when conforming to a norm, people didn’t perceive that same desire given the same statement of indifference. The violation of a norm, then, might be used as part of an input for automatically filling in some moral template concerning perceptions of the violator’s desires, much like the Kanizsa Triangle. It can cause people to perceive a desire, even if there is none. This finding is very similar to another input condition I recently touched on: how a person’s ability to benefit from others being harmed shapes perceptions of their desires and their blameworthiness (even if the person in question didn’t directly or indirectly cause the harm).

“I don’t care at all about the NFL’s dress code!”

A recent paper by Gray et al (PDF here) builds upon this analogy rather explicitly. In it, they point out two important things: first, any moral judgment requires there be a victim and a perpetrator dyad; this represents the basic cognitive template of moral interactions through which all moral judgments can be understood. Second, the authors note that there need not actually be a real perpetrator or a victim for a moral judgment to take place; all that’s required is the perception of this pair.

Let’s return briefly to vision: when it comes to seeing the physical world, it’s better to have an accurate picture of it. This is because you can’t do things like persuade a cliff it’s not actually a cliff, or tell gravity to not pull you down. Thankfully, since the physical environment isn’t trying to persuade us of anything in particular either, accurate pictures of the world are relatively easy to come by. The social world, however, is full of agents that might be (and thus, probably are) misrepresenting information for their own benefit.

Taken together with the work just reviewed, this suggests that the moral template can be automatically completed: people can be led either to perceive victims or perpetrators where there are none (given they already perceive one or the other), or to fail to perceive victims and perpetrators that actually exist (given that they fail to perceive one or the other). Since accuracy isn’t the goal of these perceptions per se, whether the inputs given to the moral template are ignored or cause it to be filled in will likely depend on their context; that is to say, people should strategically “see” or “fail to see” victims or perpetrators, quite unlike the case of the Kanizsa Triangle (which people almost universally do see). Some of the possible reasons why people might fall in one direction or the other will be the topic of the next post.

References: Guglielmo, S., & Malle, B.F. (2010). Can Unintended Side Effects Be Intentional? Resolving a Controversy Over Intentionality and Morality. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167210386733

Knobe, J. (2003). Intentional Action and Side Effects in Ordinary Language. Analysis. DOI: 10.1093/analys/63.3.190

 

Assumed Guilty Until Proven Innocent

Motley Crue is a band that’s famous for a lot of reasons, their music least of all. Given their reputation, it was a little strange to see them doing what celebrities do best: selling out by endorsing Kia. At least I assume they were selling out. When I first saw the commercial, I doubted that Motley Crue just happened to really love Kia cars and had offered to appear in one of their commercials, letting it feature one of their many songs about overdosing. No; instead, my immediate reaction to the commercial was that Motley Crue probably didn’t care one way or another when it came to Kia, but since the company likely ponied up a boat-load of cash, Motley Crue agreed to, essentially, make a fake and implicit recommendation on the car company’s behalf. (Like Wayne’s World, but without the irony)

What’s curious about that reaction is that I have no way of knowing whether or not it’s correct; I’ve never talked to any of the band members personally, and I have no idea what the terms of that commercial were. Despite this, I feel, quite strongly, that my instincts on the matter were accurate. More curious still, seeing the commercial actually lowered my opinion of the band. I’m going to say a little more about what I think this reaction reflects later, but first I’d like to review a study with some very interesting results (and the usual set of theoretical shortcomings).

I’m not being paid to say it’s interesting, but I’ll scratch that last bit if the price is right.

The paper, by Inbar et al (2012), examined the question of whether intentionality and causality are necessary components of attributions of blameworthiness. As it turns out, people appear quite willing to (partially) blame others for outcomes they had no control over – in this case, natural disasters – so long as said others merely desired those outcomes to happen.

In the first of four experiments, the subjects in one condition read about how a man at a large financial firm was investing in “catastrophe bonds”, which would be worth a good deal of money if an earthquake struck a third-world country within a two-year period. Alternatively, they read about a man investing in the same product, except this time the investment would pay out if an earthquake didn’t hit the country. In both cases, the investment ends up paying off. When subjects were asked how morally wrong such actions are, and how morally blameworthy the investor was, the investor was rated as being more morally wrong and blameworthy in the condition where he benefited from harm, controlling for how much the subjects liked him personally.

The second experiment expanded on this idea. This time, the researchers varied the outcome of the investment: now, the investments didn’t always work out in the investor’s favor. Some of the people who were betting on the bad outcome actually didn’t profit because the good outcome obtained, and vice versa. The question being asked here was whether these judgments of moral wrongness and blameworthiness were contingent on profiting from a bad outcome, or just on being in a position to potentially benefit. As it turns out, actually benefiting wasn’t required: the results showed that the investor simply desiring the harmful outcome (one that he didn’t cause, directly or indirectly) was enough to trigger these moral judgments. This pattern of results neatly mirrors judgments of harm, where attempted but failed harm is rated as being just about as bad as the completed and intended variety.

The third experiment sought to examine whether the benefits being contingent on harm – and harm specifically – mattered. In this case, an investor takes out that same catastrophe bond, but there are other investments in place, such that the firm will make the same amount of money whether or not there’s a natural disaster. In other words, now the investor has no specific reason to desire the natural disaster. In this case, subjects now felt the investor wasn’t morally in the wrong or blameworthy. So long as the investor wasn’t seen as wanting the negative outcome specifically, subjects didn’t seem to care about his doing the same thing. It just wasn’t wrong anymore.

“I’ve got some good news and some bad news…no, wait; that bad news is for you. I’m still rich.”

The final experiment in this study looked at whether or not selling those catastrophe bonds off would be morally exculpatory. As it turned out, it was: while the people who bought the bonds in the first place were not judged as nice people, subsequently selling the bonds the next day to pay off an unexpected expense reduced their blameworthiness. It was only when someone was currently in a position to benefit from harm that they were seen as more morally blameworthy.

So how might we put this pattern of results into a functional context? Inbar et al (2012) note that moral judgments typically involve making a judgment about an actor’s character (or personality, if you prefer). While they don’t spell it out, what I think they’re referring to is the fact that people have to overcome an adaptive hurdle when engaging socially with others: they need to figure out which people in their social world to invest their scarce resources in. In order to successfully deal with this issue, one needs to make some (at least semi-accurate) predictions concerning the likely future behavior of others. If someone sends the message that their interests are not your interests – such as by their profiting when you lose – there’s probably a good chance that they aren’t going to benefit you in the long term, at least relative to someone who sends the opposite signal.

However, one other aspect that Inbar et al (2012) don’t deal with brings us back to my feelings about Motley Crue. When deciding whether or not to blame someone, the decision needs to be made, typically, in the absence of absolute certainty regarding guilt. In my case, I made a judgment based on zero information, other than my background assumptions about the likely motives of celebrities and advertisers: I judged the band’s message as disingenuous, suggesting they would happily alter their allegiances if the price was right; they were fair-weather friends, who aren’t the best investments. In another case, let’s say that a dead body turns up, and they’ve clearly been murdered. The only witness to this murder was the killer, and whoever it is doesn’t feel like admitting it. When it comes time for the friends of the deceased to start making accusations, who’s going to seem like a better candidate: a stranger, or the burly fellow the dead person was fighting with recently? Those who desired to harm others tended to, historically, have the ability to translate those desires into actions, and, as such, make good candidates for blame.

“I really just don’t see how he could have been responsible for the recent claw attacks”

Now, in the current study there was no way the actor in question could have been the cause of the natural disaster, but our psychology is, most likely, not built for dealing with abstract cases like that. While subjects may report that, no, that man was not directly responsible, some modules that are looking for possible candidates to blame are still hard at work behind the scenes, checking for those malicious desires, considering who would benefit from the act, and so on (“It just so happened that I gained substantially from your loss, which I was hoping for” doesn’t make the most convincing tale). In much the same way, pornography can still arouse people, even though the porn offers no reliable increase in fitness and “the person” “knows” that. What I feel this study is examining, albeit not explicitly, are the input conditions for certain modules that deal in the uncertain and fuzzy domain of morality.

(As an aside, I can’t help but wonder whether the people in the stories – investment firms and third-world countries – helped the researchers find the results they were looking for. It seems likely that some modules dealing with determining plausible perpetrators might tap into some variable like relative power or status in their calculations, but that’s a post for another day.)

References: Inbar, Y., Pizarro, D., & Cushman, F. (2012). Benefiting From Misfortune: When Harmless Actions Are Judged to Be Morally Blameworthy. Personality and Social Psychology Bulletin, 38 (1), 52-62. DOI: 10.1177/0146167211430232

Intentional Or Not, Incest Is Still Gross (And Wrong)

For a moment, let’s try to imagine a world that isn’t our own. In this world, the intentions behind an act are completely disregarded when it comes to judging that act morally; the only thing that matters is the outcome. In this world, a man who trips and falls down the stairs, accidentally hitting another man on the way down, is just as wrong as the man who walks up to another and angrily punches him right in the face. In another case, a sniper tries to assassinate the president of the country, but since he misses by an inch no one seems to care.

Such a world would be a strange place to us, yet our sense of disgust seems to resemble the psychology of that world to some degree. While intent doesn’t stop mattering altogether when it comes to disgust, it would seem to matter in a different way than is typically envisioned when it comes to the domain of physical harm.

Sure, it may look disgusting – morally or otherwise – but who doesn’t love Red Velvet?

A recent paper by Young & Saxe (2011) set out to examine the role that intentions play in the context of a more physical harm – poisoning – relative to their role in a context that elicits disgust – the ever-popular case of sibling incest. Subjects read stories in which either incest was committed or a friend served another friend peanuts despite knowing about that friend’s peanut allergy; for these stories there was a bad intent and a bad outcome. When both acts were committed intentionally, harm tended to be rated as slightly more morally wrong than incest (6.68 vs 6.03, out of 7). However, the story changed when both acts were committed by accident – when there was still a bad outcome, but only neutral intentions. While the harm condition was now rated as not very wrong, the incest condition was still rated as fairly wrong (2.05 vs 4.24, out of 7).

Another study basically replicated the results of the first, but with one addition: there was now an attempt condition in which an actor intends to commit an act (either harm someone or commit incest), but fails to do so. While the intentional condition (bad intent and bad outcome) was rated as the worst for both incest and harm, and the accidental condition (neutral intent and bad outcome) saw incest rated as worse than harm, the attempt condition showed a different pattern of results: while attempted harm was rated to be just as bad as intentional harm (6.0 and 6.5, respectively), attempted incest was rated more leniently than intentional incest (4.2 and 6.4). In other words, moral judgments of incest were more outcome dependent, relative to moral judgments of harm.

One final study on the topic looked at two different kinds of failed attempts concerning incest and harm: the ‘true belief but failed act’ and the ‘false belief but completed act’. The former involved (in the case of incest) two siblings who correctly believe they’re siblings and attempt to engage in intercourse, but are interrupted before they complete the act. The latter involved two people who incorrectly believe they’re siblings and actually engage in intercourse. The harm contexts were again outcome independent: whether the harm was completed or not didn’t matter. The incest contexts, however, told a different story: the ‘true belief but failed act’ condition was rated as being more immoral than the ‘false belief but completed act’ condition (5.65 vs 4.2). This means subjects were likely rating the act relative to how closely it approximated actual incest, and the subjects apparently felt an unconsummated attempt at real incest looked more like incest than a consummated act where the two were merely mistaken about being siblings.

And I think we can all relate to that kind of disappointment…

A further two studies in the paper sought to examine two potential ways to account for this effect. In the first, subjects rated the two stories with respect to how emotionally upsetting they were, how much control over and knowledge of the situation the actors had, and the extent to which the agents were acting intentionally. In no case were there any significant differences, whether concerning disgust or harm, or whether the act was intentional or accidental; the subjects seemed to be assessing the two stories in the same fashion. The second study sought to examine whether subjects were using moral judgments to express the disgust they felt about the story, rather than actually judging the act to be immoral. However, while subjects rated intentional incest as worse than accidental incest, they rated both to be equally disgusting. Accordingly, it seems unlikely that people were simply using the morality scale as a proxy for their disgust.

It is my great displeasure to have to make this criticism of a paper again, but here goes: while the results are interesting, Young & Saxe (2011) really could have used some theory in this paper. Here’s their stated rationale for the current research:

Our hypothesis was initially motivated by an observation: in at least some cultures, people who commit purity violations accidentally or unknowingly are nevertheless considered impure and immoral.

Observing something is all well and good, but to research it, one should – in my opinion – have a better reason for doing so than just a hunch you’ll see an effect. The closest the authors come to a reasonable explanation of their findings – rather than just a restatement of them – is found in the discussion section, and it takes the form of a single sentence, again feeling like an afterthought, rather than a guiding principle:

…[R]ules against incest and taboo foods may have developed as a means for individuals to protect themselves, for their own good, from possible contamination.

Unfortunately, none of their research really speaks to that possibility. I’d like to quickly expand on that hypothesis, and then talk about a possible study that could have been done to examine it.

Finding an act disgusting is a legitimate reason not to engage in it yourself. While that would explain why someone might not want to have sex with their parents or siblings, it would not explain why one would judge others as morally wrong for doing so. For instance, I might not feel inclined to eat insects, but I wouldn’t want someone else punished because they enjoyed doing so. However, within the realm of disgust, the threat of contamination looms large, and pathogens aren’t terribly picky about whom they infect. If someone else does something that leads to their becoming infected, they are now a potential infection risk to anyone they interact with (depending on how the pathogen spreads). Accordingly, it’s often not enough to simply avoid engaging in a behavior yourself; one needs to avoid interacting with other infected agents as well. One way to successfully deter people from interacting with you just happens to be aggressive behavior. This might, to some extent, explain the link between disgust and moral judgments. It would also help explain the result that disgust judgments are outcome dependent: even if you didn’t intend to become infected with a pathogen, once you are infected you pose the same risk as someone who was infected more intentionally. So how might we go about testing such an idea?

One quick trip to the bookstore later…

While you can’t exactly assign people to a ‘commit incest’ condition, you could have confederates that do other potentially disgusting things, either intentionally or accidentally, or attempt to do them, but fail (in both cases of the false or true beliefs). Once the confederate does something ostensibly disgusting, you assign them a partner in one of two conditions: interacting at a distance, or interacting in close proximity. After all, if avoiding contamination is the goal, physical distance should have a similar effect, regardless of how it’s achieved. From there, you could compare the willingness of subjects to cooperate or punish the confederate, and check the effect of proximity on behavior. Presumably, if this account is correct, you’d expect people to behave less cooperatively and more selfishly when the confederate had successfully done something disgusting, but this effect would be somewhat moderated by physical distance: the closer the target of disgust is, the more aggressive we’d expect subjects to be.
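For concreteness, here’s a minimal sketch (in Python) of the pattern such a design would predict. Everything in it (the condition labels, the 1-to-7 cooperation scale, and the effect sizes) is an assumption I’ve made up purely for illustration, not data from any actual study; it just encodes the prediction that cooperation drops once the disgusting act has been completed, and drops further when the “contaminated” confederate is physically close.

import random
import statistics

random.seed(42)

def simulate_cooperation(act_completed, partner_near, n=50):
    # Simulate cooperation ratings (1-7) under the contamination-avoidance account.
    # All baseline values and penalties below are hypothetical, chosen only to
    # illustrate the predicted interaction.
    scores = []
    for _ in range(n):
        mean = 5.5                      # assumed baseline willingness to cooperate
        if act_completed:
            mean -= 1.0                 # assumed penalty for a completed disgusting act
            if partner_near:
                mean -= 1.0             # assumed extra penalty when the risk is close by
        scores.append(max(1.0, min(7.0, random.gauss(mean, 1.0))))
    return scores

for act_completed in (True, False):
    for partner_near in (True, False):
        ratings = simulate_cooperation(act_completed, partner_near)
        label = f"act {'completed' if act_completed else 'failed'}, partner {'near' if partner_near else 'far'}"
        print(f"{label}: mean cooperation = {statistics.mean(ratings):.2f}")

If the contamination account is correct, real data should look something like this output: an interaction between whether the act was completed and how close the confederate sits, rather than a simple main effect of the act alone.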

One final point: the typical reaction to incest – that it’s morally wrong – is likely a byproduct of the disgust system, in this account. Incestuous acts are, to the best of my knowledge, no more likely to spread disease than non-incestuous intercourse. That people tend to find them personally rather disgusting might result in their being hooked onto the moral modules by proxy. So long as morally condemning those who engaged in acts like incest didn’t carry any reliable fitness costs, such a connection would not have been selected against.

References: Young, L., & Saxe, R. (2011). When ignorance is no excuse: Different roles for intent across moral domains. Cognition, 120 (2), 202-214. DOI: 10.1016/j.cognition.2011.04.005

Group Selectionists Make Basic Errors (Again)

In my last post, I wrote about a basic error most people seem to make when thinking about evolutionary psychology: they confuse the ultimate adaptive function of a psychological module with the proximate functioning of said module. Put briefly, the outputs of an adapted module will not always be adaptive. Organisms are not designed to respond perfectly to each and every context they find themselves in, and this is especially the case regarding novel environmental contexts. These are things most everyone should agree on, at least in the abstract. Behind those various nods of agreement, however, we find that applying this principle and recognizing maladaptive or nonfunctional outputs is often difficult in practice, for laymen and professionals alike. Some of these professionals, like Gintis et al (2003), even see fit to publish their basic errors.

Thankfully for the authors, the paper was peer reviewed by people who didn’t know what they were talking about either

There are two main points to discuss about this paper. The first point is to consider why the authors feel current theories are unable to account for certain behaviors, and the second is to consider the strength of the alternative explanations put forth. I don’t think I’m spoiling anything by saying the authors profoundly err on both accounts.

On the first point, the behavior in question – as it was in the initial post – is altruism. Gintis et al (2003) discuss the results of various economic games showing that people sometimes act nicely (or punitively) when niceness (or punishment) doesn’t end up ultimately benefiting them. From these maladaptive (or what economists might call “irrational”) outcomes, the authors conclude that cognitive adaptations designed for reciprocal altruism or kin selection can’t account for the results. So right out of the gate they’re making the very error the undergraduates were making. While such findings would certainly be a problem for any theory that purports that humans will always be nice when it pays more, will never be nice when it pays less, and are always able to correctly calculate which situation is which, neither theory presumes any of those things. Unfortunately for Gintis et al, their paper does make some extremely problematic assumptions, but I’ll return to that point later.

The entirety of the argument that Gintis et al (2003) put forth rests on the maladaptive outcomes obtained in these games cutting against the adaptive hypothesis. As I covered previously, this is bad reasoning; brakes on cars sometimes fail to stop the car because of contextual variables – like ice – but that doesn’t mean that brakes aren’t designed to stop cars. One big issue with the maladaptive outcomes Gintis et al (2003) consider is that they are largely the product of novel environmental contexts. Now, unlike the undergraduate tests I just graded, Gintis et al (2003) have the distinct benefit of being handed the answer by their critics, which is laid out, in text, as follows:

Since the anonymous, nonrepeated interactions characteristic of experimental games were not a significant part of our evolutionary history, we could not expect subjects in experimental games to behave in a fitness-maximizing manner. Rather, we would expect subjects to confuse the experimental environment in more evolutionarily familiar terms as a nonanonymous, repeated interaction, and to maximize fitness with respect to this reinterpreted environment.

My only critique of that section is the “fitness maximizing” terminology. We’re adaptation executioners, not fitness maximizers. The extent to which adaptations maximize fitness in the current environment is an entirely separate question from how we’re designed to process information. That said, the authors reply to the critique thusly:

But we do not believe that this critique is correct. In fact, humans are well capable of distinguishing individuals with whom they are likely to have many future interactions, from others, with whom future interactions are less likely

Like the last post, I’m going to rephrase the response in terms of arousal to pornography instead of altruism to make the failings of that argument clearer: “In fact, humans are well capable of distinguishing [real] individuals with whom they are likely to have [sex with], from [pornography], with [which] future [intercourse is] less likely.”

I suppose I should add a caveat about the probability of conception from intercourse…

Humans are well capable of distinguishing porn from reality. “A person” “knows” the difference between the two, so arousal to pornography should make as little sense as sexual arousal to any other inanimate object, like a chair or a wall. Yet people are routinely aroused by pornography. Are we to conclude from this, as Gintis et al might, that sexual arousal to pornography is therefore itself functional? The proposition seems doubtful. Likewise, when people take birth control, if “they” “know” that they can’t get pregnant, why do they persist in having sex?

A better explanation is that “a person” is really not a solitary unit at all, but a conglomeration of different modules, and not every module is going to “know” the same thing. A module generating arousal to visual depictions of intercourse might not “know” the visual depiction is just a simulation, as it was never designed to tell the difference, since historically there was no difference to tell. The same goes for sex and birth control. While the module that happens to be talking to other people can clearly articulate that it “knows” the sex on the screen isn’t real, or that it “knows” it can’t increase its fitness by having sex while birth control is involved, other modules, could they speak, would give a very different answer. It seems Gintis et al (2003) fail to properly understand, or at least account for, modularity.

Maybe people can reliably tell the difference between those with whom they’ll have future contact and those with whom they likely won’t. Of course, there is always the risk that such a module will miscalculate given the uncertainty of the future, but that is a task a module could plausibly have been designed to do. What modules were unlikely to have been designed to do, however, is interact with people anonymously, much less interact anonymously under the specific set of rules put forth in these experimental conditions. Gintis et al (2003) completely avoid this point in their response. They are talking about novel environmental contexts, and are somehow surprised when the mind doesn’t function perfectly in them. Not only do they fail to make proper use of modularity, they fail to account for novel environments as well.

So the problem that Gintis et al see is not actually a problem: people don’t universally behave as Gintis et al (2003) think other models predict they should, but the other models don’t make those predictions. There is, however, an even larger issue looming: the solution to this non-problem that Gintis et al favor introduces an actual one. This is the big issue I alluded to earlier: the “strong reciprocity” trait that Gintis et al (2003) put forth does make some very problematic assumptions. A little juxtaposition will let one of them stand out, like something a good peer reviewer should have noted:

One such trait, which we call strong reciprocity (Gintis, 2000b; Henrich et al., 2001), is a predisposition to cooperate with others and to punish those who violate the norms of cooperation, at personal cost, even when it is implausible to expect that these costs will be repaid either by others or at a later date…This is not because there are a few ‘‘bad apples’’ among the set of employees, but because only 26% of employees delivered the level of effort they promised! We conclude that strong reciprocators are inclined to compromise their morality to some extent, just as we might expect from daily experience. [emphasis mine]

So the trait being posited by the authors allows for cooperation even when cooperating doesn’t pay off. Leaving aside whether such a trait is plausibly something that could have evolved, indifference to cost is supposed to be part of the design. It is thus rather strange that the authors themselves note people tend to modify their behavior in ways that are sensitive to those costs. Indeed, only 1 in 4 of the people in the experiment they mention could even potentially fit the definition of a strong altruist, even (and only) if the byproducts of reciprocal altruism modules counted for absolutely nothing.

25% of the time, it works 100% of the time

It’s worth noticing the trick that Gintis et al (2003) are trying to use here as well: they’re counting the hits and not the misses. Even though only a quarter of the people could even potentially (and I do stress potentially) be considered strong reciprocators who are indifferent to the costs and benefits, they go ahead and label the employees strong reciprocators anyway (just strong reciprocators who do things strong reciprocators aren’t supposed to do, like be sensitive to costs and benefits). Of course, they could more parsimoniously be labeled reciprocal altruists who happen to be behaving maladaptively in a novel circumstance, but that’s apparently beyond consideration.

References: Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24 (3), 153-172. DOI: 10.1016/S1090-5138(02)00157-5