When Is It Useful To Be A Victim?

At one point or another, we have all had to deal with at least one person who behaved like a professional victim. Nothing is ever even partially their fault or responsibility, and they never seem to get what they think they deserve. Quite the opposite actually; they like to go on long and frustrated rants about how other people in the world are out to actively snub them. They’re unable to be reasoned with, as to even suggest that things aren’t really that bad is to question their victim status. It will be taken as a slight against them, one that will cause a deep mental wound and be used as just another example to demonstrate how hard their life is.

And the world already has one Trent Reznor for that job; it doesn’t need more.

To some extent, all that has probably described you well at some points throughout your life; perhaps more than you would care to admit and definitely more than you realize. It, of course, has never described me, due to my virtue of consistently being spot-on-correct when it comes to everything. That little facet of my mind coincidentally leaves me in a good position to examine this rather annoying human behavior: specifically, what might the benefits be to being mantled in the label “victim”, and why might some people compete for that label? We’ll start this examination by asking a question that may well offend you, but keep that “I’m offended” card tucked away in your deck for now: why do we have a gay pride parade, but no straight pride one? Bear in mind, this question could apply equally well across a number of different domains (such as a degree in men’s studies or white pride month), but we’ll stick to the sexual orientation one. People have responded to that question in multiple ways, but the answers seem to center around one common theme: you can be proud of your sexual orientation when it negatively impacts you.

What both those linked responses have in common is that they explicitly stress overcoming bigotry and hatred while fostering acceptance, which are certainly worthy goals and accomplishments of which to be proud. That raises the natural question of why we don’t then just call it the acceptance parade, or the overcoming-bigotry parade. What’s not clear is where the link to being proud of your sexual orientation specifically – which most people would not class as an accomplishment – enters into the equation. What both answers also imply is that were homosexuals not discriminated against for their orientation, there would be no need for gay pride anymore. This further emphasizes the point that one’s sexual orientation is not the deciding factor in pride, despite the parade’s name focusing on it. What calling it gay pride seems to do is, perhaps unsurprisingly, suggest that issues faced by certain groups of people, in this case homosexuals, are more hurtful and more legitimate, and that overcoming them is a special accomplishment. Getting people to stop bullying you at school is an accomplishment, but if you get people to stop bullying you for being gay, then you get extra points. After that, they’ll be left with just teasing you for one of the no doubt awkward features of your adolescent body or personality.

One could be left wondering what a straight pride parade would even look like anyway, and admittedly, I have no idea. Of course, if I didn’t already know what gay pride parades do look like, I don’t know why I would assume they would be populated with mostly naked men and rainbows, especially if the goal is fostering acceptance and rejection of bigotry. The two don’t seem to have any real connection, as evidenced by black civil rights activists not marching mostly naked for the rights afforded to whites, and suffragettes not holding any marches while clad in assless leather chaps. Neither group even had the rainbow behind them; they marched completely in black and white.

There’s a movement that I could really “get behind”. See what I did there? It’s because I’m clever.

These groups, because of their socially disadvantaged status, were able to successfully demand and obtain social change from the advantaged groups. As I’ve mentioned before, people working to change society in some way never use the motto, “Things are pretty good right now, but maybe they could be better,” for this very reason. What’s even more impressive is that these groups were able to achieve change without initiating violence. Being seen as a victim legitimized their demands, eliminating the need for force. However, being seen as a victim can also help legitimize behaviors that aren’t quite on the level of demanding equal rights – otherwise non-legitimate behaviors included – so now it’s time to abandon our discussion of pride for one’s sexual orientation and turn our focus towards quasi-thievery.

A recent paper by Gray and Wegner (2011) examined how much blame someone receives for a misdeed as a result of their being painted as either more of a hero or more of a victim. Heroes are those who otherwise do good deeds, whereas victims are people who have had bad things happen to them. In the first experiment, subjects read a story about a fictitious character, George. In one case, George has $100 from his paycheck stolen each week by his boss (victim); in a second case, George gives $100 to charity each week (hero); in a final case, he spends that $100 on normal purchases (neutral). In all stories, George sees a woman drop $10 and picks it up. Rather than returning it to the owner, George opts to keep the money. Subjects were asked to assess how much blame George deserves for keeping the money. The results indicated that hero George was blamed the most, followed by neutral George, while victim George got the least blame.

Maybe those results aren’t too surprising. Victim George got lots of money stolen from him, so maybe he deserved that money the woman dropped more than the other two. The second experiment in the paper looked at the question from a different angle. In this experiment, subjects read another story about two hypothetical people working as cooks. Among other things found in the story, one cook was described as either having started a charity in college (hero), having been hit by a drunk driver in college but long since recovered (victim), or having worked in a hardware store (neutral). Later in the story, the cooks ignore a request for a peanut-free salad, almost killing a woman with an allergy to peanuts. The results indicated that people tended to blame heroes more than neutral parties, and blame neutral parties more than victims. It’s important to note here that the status of hero, victim, or neutral was derived from a completely unrelated incident that happened years prior, yet it still had an effect in determining who was to blame.

In the final experiment, subjects read yet another story about a fictitious person, Graham. In the story, he is described in a number of ways, but at one point, he is described as either a hero (having worked at a charity), a victim (again, hit by a drunk driver), or neutrally. Graham is then described as going through his morning routine, doing many normal things, and also picking up and keeping $10 he watched a woman drop at some point. Following the story, subjects did an unrelated task for a few minutes to distract them, and were then asked to recall five things about Graham. When it came to hero Graham, 68% of the participants listed that he had kept the $10 among the five items. However, people only listed the keeping of the money 42% of the time in the victim condition. The quasi-theft was also listed sooner in the recalled list in the hero and neutral conditions, relative to the victim one, suggesting that the misdeed stuck out less when the victim did it.

Man; victim Graham has really been milking that car accident for all it’s worth…

As part of the answer to the question posed by the title, it is useful to be seen as a victim to excuse a misdeed, even if you’re considered a victim for reasons completely unrelated to the current situation. Of course, it would probably only magnify the effect were those reasons related to the situation at hand. I would predict further that the degree to which one is seen as a victim would also be an important variable: the more “victim-y” someone is, the more justified their behavior and demands become. What this further suggests to me is that people are going to be biased in their perceptions of victimhood; they’re going to tend to see themselves as being greater victims than others would, and may go to great lengths to convince others of their victim status.

One point worth keeping in mind is that for a victim to exist, there needs to be a perpetrator. If the results of this research are any indication, that perpetrator will tend to be thought of – by the victim – as being relatively benefited in their life (being more hero-like). Indeed, we would probably expect to find a correlation between how privileged a perpetrator is seen as being and how victim-y their victim is. After all, if the perpetrator is also a victim, that would, to some extent, help excuse and justify their behavior; if the perpetrator’s behavior is somewhat excused, the victim of their actions becomes less of a victim. The results surrounding the people classed as “heroes” also shed some light on why the “I have friends who are [X], so I can’t be [X]ist” arguments don’t work. Making yourself sound benevolent towards a group may actively make you look worse; at the very least, it probably won’t help make your case. It also helps explain oddities of other arguments, like why some people are quick to suggest that men are privileged in certain areas of society but will generally refuse to admit that anything resembling female privilege exists. Some even go as far as to suggest that women can’t be sexist, but men can. To admit that being a woman brings benefits and being a man brings costs, depending on the situation, would be to weaken one’s victim status and – by extension – one’s social sway. The same goes for admitting that one’s social group is quite capable of being nasty themselves. (Quick note: some people will point out that men are relatively disadvantaged sometimes, but they are only allowed to do so in an accepted fashion by keeping the perpetrator constant. “The Patriarchy hurts men too” is the compromise.)

While it may be annoying when people seem to actively compete for the “biggest victim” award, or when you’re asked about the obstacles you’ve overcome in life on college applications, understanding the relationship between victimhood status and the legitimization of behavior helps to clarify why such things happen. Being a victim can be a powerful tool in getting what you want when used successfully.

References: Gray, K., & Wegner, D.M. (2011). To escape blame, don’t be a hero – be a victim. Journal of Experimental Social Psychology, 47, 516-519.

The Science Of White Knighting

As a male, it’s my birthright to be a chauvinistic sexist, sort of like original sin. I still remember the day, though I was still very young, that some representatives from the Patriarchy approached me with my membership card. They extended their invitation to join the struggle to keep gender roles distinct, help maintain male privilege, and make sure the women got the short end of the social stick despite both genders being identical in every way. I’m proud to report that towards this end I’ve watched many movies and played many games where the male protagonist saves an attractive woman from the clutches of some evil force (typically another male character), and almost none where the roles have been reversed. Take that, 19th amendment!

While we haven’t yet figured out how to legally bar women from playing video games, we can at least patronize them while they do.

I have some lingering doubts as to whether I, as a man, am doing enough to maintain my privileged position in the world. Are sexism and recurrent cultural accidents the only reasons that the theme of man-saves-woman is so popular in the media, while the woman-saves-man theme is far less common? While I certainly hope it is to maximize the oppression factor, two recent papers suggest that the theme of a damsel in distress being rescued by her white knight (or Mario, if you’re into short Italian plumbers) has more to do with getting the girl than oppressing her by reinforcing the idea that women need men to save them. A worrying thought for you other sexist pigs, I know.

While there’s always an interest in studying heroic behavior, researchers can’t put people into life-and-death scenarios for experimental purposes without first filling out the proper paperwork, and that can be quite tedious. The next best thing we’re able to do is get subjects to volunteer for self-inflicted discomfort. Towards this end, McAndrew & Perilloux (2012) brought some undergraduates into the lab under the pretense of a “group problem-solving study,” when the actual objective was to see who would volunteer for discomfort. The undergrads were tested in groups of three and given three minutes to assign each group member one of three jobs: astronaut, diver, and pitcher. The astronaut’s job was to write down arguments in favor of taking three items from a hypothetical crashed spaceship. The diver was tasked with, first, submerging their arm in icy water for forty seconds, and then sitting under a large water balloon that the pitcher would attempt to break by throwing balls at a target (in keeping with the “three” theme, the pitcher had three minutes to accomplish this task). Needless to say, this would soak the diver, which was pretty clearly the worst job to have. Afterwards, the subjects privately decided how to split up the $45 payment. People who volunteered for the diver role were, on average, paid more and liked more by their groups.

If that were where the experiment ended, there wouldn’t be much worth caring about. The twist is that the groups were made up of either two men and one woman, or two women and one man. In the latter groups, men and women ended up in each role at chance levels. However, when the group was made up of two men and a woman, the men ended up in the diver role 100% of the time, and in the pitcher role almost as often (the one exception being a woman who was actually a pitcher for a softball team). It seems that the presence of another man led the men to compete for the position of altruist, as if to show off for the woman and show up the other man. If we were to translate this result into the world of popular movie themes, a close fit would probably be “male friend tries to convince a girl that he really cares about her and that he, not that jerk of a boyfriend she’s had, has been the one for her all along.”

“Tell me more about all the men you date who aren’t me. I’m selflessly concerned with all your problems and can take the pain; not like those jerks.”

Now, it’s worth pointing out that the mating motive I’m suggesting as an explanation is an assumed one, as nothing in the study directly tested whether the behavior in the two-male groups was intended to get the girl. Were that the case, one could be left wondering why the man in the two-female groups did not universally volunteer for the diver position as well. It could be that a man only feels the need to compete (i.e., display) when there’s an alternative to him available; when there’s little to no choice for the women (one man, take him or leave him), the motivation to endure these costs and show off might not be aroused. While that answer may be incomplete, it’s at least a plausible starting point.

A second paper paints a broader picture of this phenomenon and helps us infer sexual motives more clearly. In this study, Van Vugt & Iredale (2012) looked at contributions to public goods – sacrificing for the good of the group – rather than the willingness to get a little wet. In the first experiment, subjects played an anonymous public goods game with either no observer, an attractive observer of the same sex sitting close by, or one of the opposite sex. Across the three conditions, women were equally likely to donate money to a group account. Men, however, donated significantly more to the account, but only when being observed by a member of the opposite sex. Further, the amount men donated correlated with how attractive they thought that observer was; the more attractive the men felt the woman was, the nicer the men were willing to behave. I’m sure this facet of male psychology has not escaped the notice of almost any woman on Earth. To the infuriation of many girlfriends, their significant others will seem to take on a new persona around other women that’s just so friendly and accommodating, leading to all manner of unpleasant outcomes for everyone.

The next experiment in the paper also looked at male-male competition in behaving altruistically in a public goods game. Male subjects were brought into the lab one at a time and photographed. Their photos were then added alongside two others so subjects could see who they were playing with. Feedback on how much money participants gave was made available after each of the five rounds. Additionally, some participants were led to believe there would be an attractive observer – either of the same or the opposite sex – watching the game, and photos of the fake observers were included as well. Finally, at the conclusion of the experiment, participants were asked to make a commitment to a charitable organization. The results showed that men tended to increase their contribution between the beginning and the end of the game, but only when they thought they were being observed by an attractive woman; when they weren’t, contributions steadily declined. Similarly, men also volunteered for more charity time following the experiment if they had been observed by an attractive woman.

All of a sudden, the plight of abused children just became a lot more real.

While male behavior was only studied in these two situations, I don’t see any reason to suspect that the underlying psychological mechanisms don’t function similarly in others. Men are willing to compete for women, and that competition can take many forms, altruism being one of them. Everyone is familiar with the stereotypical guy who befriends a woman, is always there to help her, and is constantly looking out for her, with the end goal of course being sex (however vehemently it may be denied). Given that women tend to value kindness and generosity in a partner, being kind and generous as a way into someone’s pants isn’t the worst idea in the world. Demonstrating your ability and willingness to invest is a powerful attractant. That comes with a caveat: it’s important for some frustrated men out there to bear in mind that those two factors are not the only criteria women use to decide who to hook up with.

I say that because there are many men who bemoan how women always seem to go for “jerks,” though most women – and even some men – will tell you that most guys are pretty nice overall, and being nice does not make one exceptionally attractive. They’ll also tell you that women, despite the stereotype and for the most part, don’t like being with assholes. Real jerks fail to provide many of the benefits that nicer men would, and even inflict some heavy costs. To the extent that women go for guys who don’t really treat them well or care about them, it’s probably due in large part to those men being exceptionally good-looking, rich, or high-status (or all three, if you’re lucky like I am). Those men are generally desirable enough, in one way or another, that they are able to effectively play the short-term mating strategy, but it’s worth bearing in mind that their jerkiness is not what makes them more attractive generally; it makes them less attractive, and they can just make up for it in other ways. Then again, denigrating your competition has a long and proud history in the world of mating, so calling other guys jerks or uncaring probably isn’t a terrible tactic either.

References: McAndrew, F.T., & Perilloux, C. (2012). Is self-sacrificial competitive altruism primarily a male activity? Evolutionary Psychology, 10, 50-65.

 

Van Vugt, M., & Iredale, W. (2012). Men behaving nicely: Public goods as peacock tails. British Journal of Psychology, article first published online: 1 Feb 2012.

The Lacking Standards Of Philosophical Proof

Recently, I’ve reached that point in life that I know lots of us have struggled with: one day, you just wake up and say to yourself, “I know my multimillion-dollar bank account might seem impressive, but I think I want more out of life than just a few million dollars. What I’d like is more money. Much more.” Unfortunately, the only way to get more money is to do this thing called “work” at a place called a “job,” and these jobs aren’t always the easiest things to find – especially the cushy ones – in what I’m told is a down economy. Currently, my backup plan is to become a professor in case that lucrative career as a rockstar doesn’t pan out the way I keep hoping it will.

Unfortunately, working outside of the home means I’ll have less time to spend entertaining my piles of cash.

I’ve been doing well and feeling at home in various schools for almost my entire life, so I’ve not seen much point in leaving the warmth of the academic womb. However, I’ve recently been assured that my odds of securing a long-term career in one are probably somewhere between “not going to happen” and “never going to happen.” So, half-full kind of guy that I am, I’ve decided to busy myself with pointing out why many people who already have these positions don’t deserve them. With any luck, some universities may take notice and clear up some room in their budgets. Today, I’ll again be turning my eye on the philosophy department. Michael Austin recently wrote this horrible piece over at Psychology Today about why we should reject moral relativism in favor of moral realism – the idea that there are objective moral truths out there to be discovered, like physical constants. Before taking his arguments apart, I’d like to stress that this man actually has a paid position at a university, and I feel the odds are good he makes more money than you. Now that that’s out of the way, onto the fun part.

First, consider that one powerful argument in favor of moral realism involves pointing out certain objective moral truths. For example, “Cruelty for its own sake is wrong,” “Torturing people for fun is wrong (as is rape, genocide, and racism),” “Compassion is a virtue,” and “Parents ought to care for their children.” A bit of thought here, and one can produce quite a list. If you are really a moral relativist, then you have to reject all of the above claims. And this is an undesirable position to occupy, both philosophically and personally.

Translation: it’s socially unacceptable to not agree with my views. It’s proof via threat of ostracism. What Austin attempts to slip by there is the premise that you cannot think something is morally unacceptable to you without also thinking it’s morally unacceptable objectively. Rephrasing the example in the context of language allows us to see the flaw quickly: “You cannot think that the word ‘sex’ refers to that thing you’re really bad at without also thinking that the pattern of sounds that makes up the word has some objective meaning which could never mean anything else.” I’m perfectly capable of affirming the first proposition while denying the second. The word “sex” could easily have meant any number of things, or nothing at all; it just happens to refer to a certain thing for certain people. On the same note, I can say “I find torturing kittens unacceptable” while realizing my statement is perfectly subjective. His argument is not what I would call a “powerful” one, though Austin seems to think it is.

It wasn’t the first time that philosophy rolled off my unsatisfied body and promptly fell asleep, pleased with itself.

Moving on:

 Second, consider a flaw in one of the arguments given on behalf of moral relativism. Some argue that given the extent of disagreement about moral issues, it follows that there are no objective moral truths…But there is a fact of the matter, even if we don’t know what it is, or fail to agree about it. Similarly for morality, or any other subject. Mere disagreement, however widespread, does not entail that there is no truth about that subject.

It is a bad argument to say that just because there is disagreement there is no fact of the matter. However, that gives us no reason to either accept moral realism or reject moral relativism; it just gives us grounds to reject that particular argument. Similarly, Austin’s suggestion that there is definitely a fact of the matter in any subject – or morality specifically – isn’t a good argument. In fact, it’s not even an argument; it’s an assertion. Personal tastes – such as what music sounds good, what food is delicious, and what deviant sexual acts are fun – are often the subject of disagreement and need not have an objective fact of the matter.

If Austin thinks disagreement isn’t an argument against moral realism, he should probably not think that agreement is an argument for moral realism. Unfortunately for us, he does:

There are some moral values that societies share, because they are necessary for any society to continue to exist. We need to value human life and truth-telling, for example. Without these values, without prohibitions on murder and lying, a given society will ultimately crumble. I would add that there is another reason why we often get the impression that there is more moral disagreement than is in fact the case. The attention of the media is directed at the controversial moral issues, rather than those that are more settled. Debates about abortion, same-sex marriage, and the like get airtime, but there is no reason to have a debate about whether or not parents should care for the basic needs of their children, whether it is right for pharmacists to dilute medications in order to make more profit, or whether courage is a virtue.

If most people agreed that the Sun went around Earth, that would in no way imply it was true. It’s almost amazing how he can point out that an argument is bad, then turn around and use an identical argument in the next sentence thinking it’s a killer point. Granted, if people were constantly stealing from and killing each other – that is, more than they do now – society probably wouldn’t fare too well. What the existence of society has to do with whether or not morality is objective, I can’t tell you. From these three points, Austin gives himself a congratulatory pat on the back, feeling confident that we can reject moral relativism and accept moral realism. With standards of proof that loose, philosophy could probably give birth and not even notice.

“Congratulations! It’s a really bad idea”

I’d be curious to see how Austin would deal with the species question: are humans the only species with morality? Do all animals have a sense of morality, social or otherwise? If they don’t, and morality is objective, why not? Again, the question seems silly if you apply the underlying logic to certain other domains, like food preferences: is human waste a good source of nutrients? The answer to that question depends on which species you’re talking about. There’s no objective quality of our waste products that has the inherent property of nutrition or non-nutrition.

Did I mention Austin is a professor? It’s worth bearing in mind that someone who makes arguments that bad is actually being paid to work in a department dedicated to making and assessing arguments – in a down economy, no less. Even Psychology Today is paying him for his blogging services, I’m assuming. Certainly makes you wonder about the quality of candidates who didn’t get hired.

Free Will Doesn’t Matter Morally, But We Think It Does

The study of the world around us is dubbed science, and in order to pursue it, you first need to purchase several large, expensive doohickeys in order to conduct experiments, hire scientists, and the like. The study of the theoretical world is dubbed mathematics, and you need only paper, a pencil, and a trashcan within reasonable distance. In comparison, the study of nothing in particular may be dubbed “philosophy”, and all you have to do is keep talking. It may be noted that working on philosophical matters is generally very cheap to do. That may very well be because it isn’t worth a penny to anyone anyway - Uncyclopedia

Academic xenophobe that I am, I don’t care much for philosophers. One reason I don’t much care for them is that they, as a group, have a habit of getting stuck in arguments that remain unresolved – or are even unresolvable – for centuries about topics of dubious importance. One of those topics that tends to evade clear thinking and relevance is that of free will. The definition of the term itself often avoids even being nailed down, meaning most of these arguments are probably not even being had about the same topic.

There’s been a lot of hand-wringing over whether determinism precludes moral responsibility. Today, I’m going to briefly step foot into the world of philosophy to demonstrate why this debate has a simple answer, and hopefully, when we reach that point, we can start making some actual progress in understanding human moral psychology.

An artist’s depiction of a philosopher; notice how it does nothing important and goes nowhere.

Let’s take a completely deterministic universe in which the movement and action of every single bit of matter and energy is perfectly predictable. Living organisms would be no exception here; you could predict every single behavior of an organism from before the moment of its conception until the moment of its death – every thought, every feeling, every movement of every part of its cellular machinery. People seem to worry that in this universe we would be unable to justifiably condemn people for their actions, as they are not seen as having a “choice” in the matter (choice is another one of those very blurry concepts, but we’ll forget about what it’s supposed to mean here; just use your best guess). What most people fail to realize about this example is that it in no way precludes making moral judgments (“he ought not to have done that”) or holding people responsible for their actions. “But how can we justify holding someone responsible for a predetermined action?” I already hear someone missing the point objecting. The answer here is simple: you wouldn’t need to justify those moral judgments or punishments in some objective sense any more than a killer would need to justify why they killed.

If the killer was predetermined to kill, others were also predetermined to feel moral outrage at that killing; nothing about determinism precludes feelings, no matter their content. Additionally, those who feel that moral outrage are determined to attempt to convince others of the content of that outrage, which they may be successful at doing. From there, people are likewise determined to attempt to punish the person who committed the crime, and so on. Suffice it to say, a deterministic world would look no different from the world we currently inhabit, and determinism and moral responsibility get to live hand-in-hand. However, I already feel dirty enough playing philosopher that I don’t feel a need to continue on with this example.

I feel even dirtier than I did last Christmas. The reaction of those children – and that jury – was priceless though…

After successfully resolving centuries of philosophical debate in the matter of a few minutes (you’re welcome), it’s time to think about what this example can teach us about our moral psychology. Refreshingly, we will be stepping out of the realm of philosophy into that of science for this part. What I think is the most important lesson to take away from this example is the idea that if we can fully explain a behavior, we must also condone it (or, at the very least, not condemn others for it). Evolutionary psychology tends to get a fair share of scorn directed its way for even proposing that certain traits – typically politically unpalatable ones, such as sex differences or violence – are adaptations, and that ire typically comes in the form of, “well you’re just trying to justify [spousal abuse/rape/sexism/etc] by explaining it”. It’s also worth noting that those claims will be tossed at evolutionary psychologists even if those same psychologists say, “We aren’t trying to justify anything”.

I cited a figure a while back about how 86% of people viewed determinism as incompatible with moral responsibility, so this sentiment appears to be a rather popular one. Two papers that have recently come across my desk expand on this point a little further. The first comes from Miller, Gordon, and Buddie (1999), who basically demonstrated the effect I mentioned above. Subjects were presented with a vignette involving a perpetrator causing some harm and asked either to explain that behavior first and then react to it, or to react to it first and then explain it. The results showed that those who explained the behavior first took a significantly more forgiving and condoning stance towards the perpetrator. Additionally, when other observers read these explanations, they rated the explainers’ attitudes as even more condoning of the harm than the explainers themselves had reported. So while the explainers were slightly more condoning of the perpetrator’s behavior, observers who read those explanations thought they were more condoning still. Sounds like the perfect mix for moral outrage.

“We’d like to respectfully disagree with your well-articulated position, and, if that fails, burn you and your books.”

Miller et al. (1999) went on to examine how different types of explanations might affect the explaining-condoning link. The authors suggest that explanations which portray the perpetrator as low in personal responsibility (it was the situation that made him do it) would be viewed as more condoning than those referencing the perpetrator’s disposition (he acted that way because he’s a cruel son-of-a-bitch). Towards this end, they presented subjects with the results of two hypothetical experiments: in one, the presence of a mirror dramatically affected the rate of cheating (5% cheating in the mirror condition, 90% in the no-mirror condition); in the other, it had no effect (50% cheating in both conditions). The first experiment served to emphasize the situation as the important explanatory factor, the second to de-emphasize it.

The results here indicated that those who read the results stating that the situation held a lot of influence were more condoning of the cheating behavior when compared to those who read the dispositional explanations. What was more interesting, however, is that these same participants also rated their judgments of the cheater’s behavior as being significantly more negative than what they thought the hypothetical researcher’s judgments were. The subjects seemed to think the researchers were giving the perpetrators a pass.

The second experiment was conducted by Greene and Cahill (2011). Here, the researchers tested, basically, the suggestion that neuroscience imaging might overwhelm participants’ judgment with flashy pictures and leave them unable to consider the evidence of the case. In this experiment, participants were given the facts of a criminal case (either low-severity or high-severity) under one of three conditions: (1) the defendant was labeled as psychotic by an expert; (2) in addition, results of neurological tests were presented, documenting deficiencies consistent with damage to the frontal area of the defendant’s brain; or (3) in addition to that, colorful brain scans documenting that damage were presented.

The results of this study demonstrated that participants were about as likely to sentence the defendant to death across all three conditions when the defendant was deemed low in future dangerousness. However, when the defendant was deemed high in future dangerousness, he was overwhelmingly more likely to be sentenced to death – but only in condition (1). In conditions (2) and (3), he was far, far less likely to be sentenced to death (a drop from about 65% down to a low of near 15%, no different from the low-dangerousness group). Further, in conditions (2) and (3), the mock jurors rated the defendant as more remorseful and less in control of his behavior.

Unable to control his behavior and highly likely to be violent again? Sounds like the kind of guy we’d want to keep hanging around.

These two papers provide a complementary set of results, demonstrating some of the effects that explanations can have on both our sense of moral responsibility and our perception of the explainer. What those two papers don’t do, however, is explain those effects in any satisfying manner. I feel there are several interesting predictions to be made here, but placing these results into their proper theoretical context will be a job for another day. In the meantime, I’m going to go shower until that sullied feeling philosophy brings on goes away.

(One thought to consider is that perhaps terms like “free will” and “choice” are (sort of) intentionally nebulous, for to define them concretely and explain how they work would – like Kryptonite to Superman – sap them of their ability to imbue moral responsibility.)

References: Greene, E. & Cahill, B.S. (2011). Effects of neuroimaging evidence on mock juror decision making. Behavioral Sciences & the Law, DOI: 10.1002/bsl.1993

Miller, A.G., Gordon, A.K., & Buddie, A.M. (1999). Accounting for evil and cruelty: Is to explain to condone? Personality and Social Psychology Review, 3, 254-268.

So Drunk You’re Seeing Double (Standards)

Jack and Jill (both played by Adam Sandler, for our purposes here) have been hanging out all night. While both have been drinking, Jill is substantially drunker than Jack. While Jill is in the bushes puking, she remembers that Jack had mentioned earlier he was out of cigarettes and wanted to get more. Once she emerges from the now-soiled side of the lawn she was on, Jill offers to drive Jack to the store so he can buy the cigarettes he wants. Along the way, they are pulled over for drunk driving. Jill wakes up the next day and doesn’t even recall getting into the car, but she regrets doing it.

Was Jill responsible for making the decision to get behind the wheel?

More importantly, should the cop have let them off in the hopes that the drunk driving would have stopped this movie from ever being made?

Jack and Jill (again, both played by Adam Sandler, for our purposes here) have been hanging out all night. While both have been drinking, Jill is substantially drunker than Jack. While Jill is in the bushes puking, she remembers that Jack had mentioned earlier he thought she was attractive and wanted to have sex. Once she emerges from the now-soiled side of the lawn she was on, Jill offers to have sex with Jack, and they go inside and get into bed together. The next morning, Jill wakes up, not remembering getting into bed with Jack, but she regrets doing it.

Was Jill responsible for making the decision to have sex?

More importantly, was what happened sex, incest, or masturbation? Either way, if Adam Sandler was doing it, it’s definitely gross.

According to my completely unscientific digging around discussions regarding the issue online, I can conclusively state that opinions are definitely mixed on the second question, though not so much on the first. In both cases, the underlying logic is the same: person X makes decision Y willingly while under the influence of alcohol, and later does not remember and regrets Y. As seen previously, slight changes in phrasing can make all the difference when it comes to people’s moral judgments, even if the underlying proposition is, essentially, the same.

To explore these intuitions in one other context, let’s turn down the dimmer, light some candles, pour some expensive wine (just not too much, to avoid impairing your judgment), and get a little more personal with them: You have been dating your partner – let’s just say they’re Adam Sandler, gendered to your preferences – who decided one night to go hang out with some friends. You keep in contact with your partner throughout the night, but as it gets later, the responses stop coming. The next day, you get a phone call; it’s your partner. Their tone of voice is noticeably shaken. They tell you that after they had been drinking for a while, someone else at the bar had started buying them drinks. Their memory is very scattered, but they recall enough to let you know that they had cheated on you, and, that at the time, they had offered to have sex with the person they met at the bar. They go on to tell you they regret doing it.

Would you blame your partner for what they did, or would you see them as faultless? How would you feel about them going out drinking alone the next weekend?

If you assumed the Asian man was the Asian woman’s husband, you’re a racist asshole.

Our perceptions of the situation and the responsibilities of the involved parties are going to be colored by self-interested factors (Kearns & Fincham, 2005). If you engage in a behavior that can do you or your reputation harm – like infidelity – you’re more likely to try to justify that behavior in ways that remove as much personal responsibility as possible (such as: “I was drunk” or “They were really hot”). On the other hand, if you’ve been wronged, you’re more likely to lump as much blame as possible on the party that wronged you, discounting environmental factors. Both perpetrators and victims bias their views of the situation; they just tend to do so in opposite directions.

What you can bet on, despite my not having available data on the matter, is that people won’t take kindly to having either their status as “innocent from (most) wrong-doing” or a “victim” be questioned. There is often too much at stake, in one form or another, to let consistency get in the way. After all, being a justified victim can easily put one into a strong social position, just as being known as one who slanders others in an unjustified fashion can drop you down the social ladder like a stone.

References: Kearns, J.N. & Fincham, F.D. (2005). Victim and perpetrator accounts of interpersonal transgressions: Self-serving or relationship-serving biases? Personality and Social Psychology Bulletin, 31, 321-333.

Performance Enhancing Surgery

In the sporting world – which I occasionally visit via a muted TV on at a bar – I’m told that steroid use is something of a hot topic. Many people don’t seem to take too kindly to athletes who use these performance-enhancing drugs, as they are seen as being dangerous and giving athletes an unfair advantage. As I’ve written previously, when concerns for “fairness” start getting raised, you can bet there’s more than just a hint of inconsistency lurking right around the corner.

It starts with banning steroids, then, before you know it, I won’t be able to use my car in the Tour de France.

On the one hand, steroids certainly allow people to surpass the level of physical prowess they could achieve without them; I get that. How that makes them unfair isn’t exactly obvious, though. Surely, other athletes are just as capable of using steroids, which would level the playing field. “But what about those athletes who don’t want to use steroids?” I already hear you objecting. Well, what about those athletes who don’t want to exercise? Exercise and exercise equipment also allow people to surpass the level of physical prowess they could achieve without them, but I don’t see anyone lining up to ban gym use.

Maybe the gym and steroids differ in some important, unspecified way. Sure, people who work out more may have an advantage over those who eschew the gym, but perhaps those advantages don’t stem from the same underlying cause as the ones that come with steroid use. How about glasses or contacts, then? To the best of my provincial knowledge of the sporting world, no one has proposed we ban athletes from correcting their vision. As contact lenses allow one to artificially improve their natural vision, that could be a huge leg up, especially in any sport that involves visual acuity (almost all of them). A similar tool that allowed an athlete to run a little faster, throw a little faster, or hit a little harder, to make up for some pre-existing biological deficit in strength, would probably be ruled out of consideration from the outset.

“Just try and tackle me now, you juiced up clowns!”

I don’t think this intuition is limited to sports; we may also see it in the animosity directed towards plastic surgery. Given that most people in the world haven’t been born with my exceptional level of charm and attractiveness, it’s understandable that many turn to plastic surgery. A few hundred examples of people’s thoughts surrounding plastic surgery can be found here. If you’re not bored enough to scroll through them, here’s a quick rundown of the opinions you’ll find: I would definitely get it; I would never get it; I would only get it if I was disfigured by some accident – doing it for mere vanity is wrong.
Given that the surgery generally makes people more attractive (Dayan, Clark, & Ho, 2004), the most interesting question is why wouldn’t people want it, barring a fear of looking better? The opposition towards plastic surgery – and those who get it – probably has a lot to do with the sending and receiving of honest signals. In order for a signal to be honest, it needs to be correlated to some underlying biological trait. Artificially improving facial attractiveness by normalizing traits somewhat, or improving symmetry, may make the bearer more physically attractive, but those attractive traits would not be passed on to their future offspring. It’s the biological equivalent of paying for a purchase using counterfeit bills.

“I couldn’t afford plastic surgery, so these discount face tattoos will have to do”

Similar opposition can sometimes be seen even towards people who choose to wear makeup. Any attempt to artificially increase one’s attractiveness has a habit of drawing its fair share of detractors. As for why there seems to be a difference between compensating for a natural disadvantage (in the case of contacts) in some cases, but not surpassing natural limits (in the case of steroids or plastic surgery) in others, I can’t definitively say. Improving vision is somehow more legitimate than improving one’s appearance, strength, or speed (in ways that don’t involve lifting weights and training, anyway).

Perhaps it has something to do with people viewing attractiveness, strength, and speed as traits capable of being improved through “natural” methods – there’s no machine at the gym for improving your vision, no matter how many New Year’s resolutions you’ve made to start seeing better. Of course, there’s also no machine at the gym for improving your facial symmetry, but facial symmetry plays a much greater role in determining your physical attractiveness relative to visual acuity, so surgery could be viewed as a form of cheating, in the biological sense, to a far greater extent than contacts.

References: Dayan, S., Clark, K., & Ho, A.A. (2004). Altering first impressions after plastic surgery. Aesthetic Plastic Surgery, 28, 301-306.

Is Working Together Cooperation?

“[P]rogress is often hindered by poor communication between scientists, with different people using the same term to mean different things, or different terms to mean the same thing…In the extreme, this can lead to debates or disputes when in fact there is no disagreement, or the illusion of agreement when there is disagreement” – West et al. (2007)

I assume most of you are a little confused by the question, “Is working together cooperation?” Working together is indeed the very first definition of cooperation, so it would seem the answer should be a transparent “yes”. However, according to a paper by West et al. (2007), there’s some confusion that needs to be cleared up here. So buckle up for a little safari into the untamed jungles of academic semantic disagreement.

An apt metaphor for what clearing up confusion looks like.

West et al. (2007) seek to define cooperation as such:

Cooperation: a behavior which provides a benefit to another individual (recipient), and which is selected for because of its beneficial effect on the recipient. [emphasis, mine]

In this definition, benefits are defined in terms of ultimate (reproductive) fitness benefits. There is a certain usefulness to this definition, I admit: it can help differentiate between behaviors that are selected to deliver benefits and behaviors that deliver benefits as a byproduct. The example West et al. use is an elephant producing dung. The dung an elephant produces can be useful to other organisms, such as a dung beetle, but the function of dung production in the elephant is not to provide a benefit to the beetle; it just happens to do so as a byproduct. On the other hand, if a plant produces nectar to attract pollinators, this is cooperation: the nectar benefits the pollinators in the form of a meal, and the function of the nectar is to do so, in order to assist in reproduction by attracting pollinators.

However, this definition has some major drawbacks. First, it defines cooperative behavior in terms of actual function, not in terms of proper function. An example will make this distinction a touch clearer: let’s say two teams are competing for a prize in a winner-take-all game. All the members of each team work together in an attempt to achieve the prize, but only one team gets it. By the definition West et al. use, only the winning team’s behavior can be labeled “cooperation”. Since the losers failed to deliver any benefit, their behavior would not be cooperation, even if their behavior was, more or less, identical. While most people would call teamwork cooperation – as the intended goal of the teamwork was to achieve a mutual goal – the West et al. definition leaves no room for this consideration.

I’ll let you know which team was actually cooperating once the game is over.

West et al. (2007) also seem to have a problem with the term “reciprocal altruism”, which is basically summed up by the phrase, “you scratch my back (now) and I’ll scratch yours (at some point in the future)”. The authors object to the term because this mutual delivery of benefits is not altruistic, which they define as such:

Altruism: a behavior which is costly to the actor and beneficial to the recipient; in this case and below, costs and benefits are defined on the basis of the lifetime direct fitness consequences of a behavior.

Since reciprocal altruism is eventually beneficial to the individual paying the initial cost, West et al. (2007) feel it should be classed as “reciprocal cooperation”. Except there’s an issue here. Consider the following case: organism X pays a cost (c) to deliver a benefit (b) to another organism, Y, at some time (T1). At some later time (T2), organism Y pays a cost (c) to deliver a benefit (b) back to organism X. So long as (c) < (b), they feel we should call the interaction between X and Y cooperation, not reciprocal altruism.

Here’s the problem: the future is always uncertain. Let’s say there’s a parallel case to the one above, except at some point after (T1) and before (T2), organism X dies. Now, organism X would be defined as acting altruistically (paid a cost to deliver a benefit), and organism Y would be defined as acting selfishly (took a benefit without repaying). What this example tells us is that a behavior can be classed as being altruistic, mutually beneficial, cooperative, or selfish, depending on a temporal factor. In terms of “clearing up confusion” about how to properly use a term or classify a behavior, the definitions provided by West et al. (2007) are not terribly helpful. They note as much, when they write, “we end with the caveat that: (viii) classifying behaviors will not always be the easiest or most useful thing to do” (p.416), which, to me, seems to defeat the entire purpose of this paper.
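The temporal dependence can be made concrete with a toy sketch (the function, names, and numbers here are mine for illustration, not anything from West et al.):

```python
# A toy sketch of the classification problem. In the text's setup, both
# parties pay cost c to deliver benefit b back, with c < b.

def classify_initial_act(c, b, x_survives_to_t2):
    """Label organism X's act at T1 under lifetime-fitness accounting.

    If X lives to be repaid at T2, its lifetime net is (b - c) > 0,
    so the act counts as (mutually beneficial) cooperation. If X dies
    before T2, it paid a cost purely for another's benefit: altruism.
    """
    if x_survives_to_t2 and b > c:
        return "cooperation"
    return "altruism"

# The very same act at T1 gets a different label depending on a later,
# chance event (whether X happens to survive until repayment):
print(classify_initial_act(c=2, b=5, x_survives_to_t2=True))   # cooperation
print(classify_initial_act(c=2, b=5, x_survives_to_t2=False))  # altruism
```

Identical behavior at T1, two different labels: the classification hinges on something that hasn’t happened yet.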

“We’ve successfully cleared up the commuting issue, though using our roads might not be the easiest or most useful thing to do…”

One final point of contention is that West et al. (2007) feel “…behaviors should be classified according to their impact on total lifetime reproductive success” (emphasis, mine). I understand what they hope to achieve with that, but they make no case whatsoever for why we should stop considering the ultimate effects of a behavior at the end of an organism’s individual lifetime. If an individual behaves in a way that ensures he leaves behind ten additional offspring by the time he dies, but, after he is dead, the fallout from those behaviors further ensures that none of those offspring reproduce, how is that behavior to be labeled?

It seems to me there are many different ways to think about an organism’s behavior, and no one perspective needs to be monolithic across all disciplines. While such a unified approach no doubt has its uses, it’s not always going to clear up confusion.

References: West, S.A., Griffin, A.S., & Gardner, A. (2007). Social semantics: Altruism, cooperation, mutualism, strong reciprocity and group selection. Journal of Evolutionary Biology, 20, 415-432

A Sense Of Entitlement (Part 2)

One of the major issues that has divided people throughout recorded human history is precisely the matter of division – more specifically, how scarce resources ought to be divided. A number of different principles have been proposed, including: everyone should get precisely the same share, people should receive a share according to their needs, and people should receive a share according to how much effort they put in. None of those principles tend to be universally satisfying. The first two open the door wide for free-riders who are happy to take the benefits of others’ work while contributing none of their own; the third option helps to curb the cheaters, but also leaves those who simply encounter bad luck on their own. Which principles people will tend to use to justify their stance on a matter will no doubt vary across contexts.

If you really wanted toys – or a bed – perhaps you should have done a little more to earn them. Damn entitled kids…

The latter two options also open the door for active deception. If I can convince you that I worked particularly hard – perhaps a bit harder than I actually did – then the amount I deserve goes up. This tendency to make oneself appear more valuable than one actually is happens to be widespread; one good example is that about 90% of college professors rate themselves as above average in teaching ability. When I was collecting data on women’s perceptions of how attractive they thought they were, on a scale from 0 to 9, I don’t think I got a single rating below a 6 from over 40 people, though I did get many 8s and 9s. It’s flattering to think my research attracts so many beauties, and it certainly bodes well for my future hook-up prospects (once all these women realize how good-looking and talented I keep telling myself I am, at any rate).

An alternative, though not mutually exclusive, path to getting more would be to convince others that your need is particularly great. If the marginal benefits of resources flowing to me are greater than the benefits of those same resources going to you, I have a better case for deserving them. Giving a millionaire another hundred dollars probably won’t have much of an effect on their bottom line, but that hundred dollars could mean the difference between eating or not for some people. Understanding this strategy allows one to also understand why people working to change society in some way never use the motto, “Things are pretty good right now, but maybe they could be better”.

Their views on societal issues are basically the opposite of their views on pizza.

This brings us to the science (Xiao & Bicchieri, 2010). Today, we’ll be playing a trust game. In this game, player A is given an option: he can end the game, and both players will walk away with 40 points, or he can trust player B and give him 10 points, which then get multiplied by three, meaning player B now has 70 points. At this time, player B is given the option of transferring some of his points back to player A. In order for player A to break even, B needs to send back 10 points; any more than 10 and player A profits. This is a classic dilemma faced by anyone extending a favor to a friend: you suffer an initial cost by helping someone out, and you need to trust your friend to pay you back by being kind to you later, hopefully with some interest.

Slightly more than half of the time (55%), player B gave 10 points or more back to player A, which also means that about half the time player B took the reward and ran, leaving player A slightly poorer and a little more bitter. Now here comes the manipulation: a second group played the same game, but this time the payoffs were different. In this group, if player A didn’t trust player B and ended the game, A walked away with 80 points and B with 40. If player A did trust, both players ended up with 70 points; it also meant that if player B transferred any points back to player A, he would be putting himself at a relative disadvantage in terms of points.
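The arithmetic of the two conditions can be sketched in a few lines (point values come from the text; the function and variable names are my own):

```python
# Final payoffs in the trust game once A has already trusted.
def final_payoffs(a_after_trust, b_after_trust, transfer_back):
    """Payoffs after B returns `transfer_back` points to A."""
    return a_after_trust + transfer_back, b_after_trust - transfer_back

# Condition 1: ending the game gives (40, 40); trusting leaves A with 30
# and B with 70 (10 points sent, tripled). B repaying 10 makes A whole,
# and B still profits relative to not being trusted at all:
print(final_payoffs(30, 70, transfer_back=10))  # (40, 60)

# Condition 2: ending the game gives (80, 40); trusting leaves both at 70.
# Now any repayment pushes B back *below* A in relative terms:
print(final_payoffs(70, 70, transfer_back=10))  # (80, 60)
```

Same act of repayment in both conditions; the only thing that changes is whether repaying restores the other player’s advantage over you.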

In this second group, player B ended up giving back 10 or more points only 26% of the time. Apparently, repaying a favor isn’t that important when the person you’re repaying it to would be richer than you because of it. It would seem that fat cats don’t get too much credit tossed their way, even if they behave in an identical fashion towards someone else. Interestingly, however, many player As understood this would happen; in fact, 61% of them expected to get nothing back in this condition (compared to 23% expecting nothing back in the first condition).

“Thanks for the all money. I would pay you back, it’s just that I kind of deserved it in the first place”

That inequality seemed to do two things. First, it appeared to create a sense of entitlement on the part of the receivers that negated most of the desire for reciprocity. Second, the mindset of the people handing over the money changed: they fully expected to get nothing back, meaning many of these donations appeared to look more like charity than a favor.

Varying different aspects of these games allows researchers to tap different areas of human psychology, and it’s important to keep that in mind when interpreting the results of these studies. In your classic dictator game, when receivers are allowed to write messages to the dictators to express their feelings about the split, grossly uneven splits are met with negative messages about 44% of the time (Xiao & Houser, 2009). However, in these conditions, receivers are passive, so what they get looks more like charity. When receivers have some negotiating power, like in an ultimatum game, they respond to unfair offers quite differently, with uneven splits being met by negative messages 79% of the time (Xiao & Houser, 2005). It would seem that giving someone some power also spikes their sense of entitlement; they’re bargaining now, not getting a handout, and when they’re bargaining they’re likely to over-emphasize their need and their value to get more.

References: Xiao, E. & Houser, D. (2005). Emotion expression in human punishment behavior. Proceedings of the National Academy of Sciences of the United States of America, 102, 7398-7401.

Xiao, E. & Houser, D. (2009). Avoiding the sharp tongue: Anticipated written messages promote fair economic exchange. Journal of Economic Psychology, 30, 393-404.

Xiao, E. & Bicchieri, C. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470.

A Sense Of Entitlement (Part 1)

There’s a lot to be said for studying behavior in the laboratory or some other artificially-generated context: it helps a researcher control much of the environment, making the data less noisy. Most researchers abhor noise almost as much as they do having to agree with someone (when agreement isn’t in the service of disagreeing with some mutual source of contention, anyway), so we generally like to keep things nice and neat. One downside of keeping things so tidy is that the world often isn’t, and it can be difficult to distinguish between “noise” and “variable of interest” at times. The flip-side of this issue is that the laboratory environment can also create certain conditions – some not seen in the real world – without the researcher realizing it. Needless to say, this can have an effect on the interpretation of results.

“Few of our female subjects achieved orgasm while under observation by five strangers. Therefore, reports of female orgasm must be a myth”.

Here’s a for-instance: say you’re fortunate enough to find some money just lying around – perhaps on the street, inexplicably in a jar in front of a disheveled-looking man. Naturally, you would collect the discarded cash, put it into your wallet, and be on your way. What you probably wouldn’t do is anonymously give some of that cash to a stranger, much less half of it, yet this is precisely the behavior we see from many people in a dictator game. So why do they do it in the lab, but not in real life?

Part of the reason seems to lie in the dictators’ perceptions of the expectations of the receivers. The rules of the game set up a sense of entitlement on the part of the receivers – complete with a sense of obligation on the part of the dictators – with the implicit suggestion being that dictators are supposed to share the pot, perhaps even fairly. But suppose there was a way to get around some of those expectations – how might that affect the behavior of dictators?

“Oh, you can split the pot however you want. You can even keep it all if you don’t care about anyone but yourself. Just saying…”

This possibility was examined by Dana, Cain, and Dawes (2006), who first ran the standard dictator game, telling dictators to decide how to divide $10 between themselves and another participant. After the subjects entered their initial offer, they were given a new option: the receivers didn’t yet know the game was being played, and, if the dictators wanted, they could take $9 and leave, meaning the receivers would get nothing, but would also never know they could have gotten something. Bear in mind, the dictators could have kept $9 and given a free dollar to someone else, or kept an additional dollar with the receiver still getting nothing, so this exit option destroys overall welfare purely to preserve the receiver’s ignorance. When given this option, about a third of the dictators opted out, taking the $9 welfare-destroying option and leaving to make sure the receiver never knew.
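To make the welfare comparison concrete, here’s a minimal sketch of the payoff options described above (dollar amounts come from the study; the dictionary and labels are mine):

```python
# Payoff options facing a dictator in Dana et al. (2006), as
# (dictator_gets, receiver_gets) in dollars:
options = {
    "keep all":         (10, 0),
    "split evenly":     (5, 5),
    "keep $9, give $1": (9, 1),
    "quiet exit":       (9, 0),  # receiver never learns the game existed
}

for name, (dictator, receiver) in options.items():
    print(f"{name}: dictator ${dictator}, receiver ${receiver}, "
          f"total ${dictator + receiver}")

# Every in-game division sums to $10; the quiet exit sums to $9 --
# a dollar of total welfare destroyed purely to keep the receiver ignorant.
```

The point the numbers make: exiting is strictly worse in total-welfare terms than either keeping everything or giving a free dollar away, which is why the exit choice is informative about what dictators actually value.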

These dictators exited the game when there was no threat of being punished for dividing unfairly or having their identity disclosed to the receiver, implying that this behavior was largely self-imposed. This tells us many dictators aren’t making positive offers – fair or otherwise – because they find the idea of giving away money particularly thrilling. They seem to want to appear fair and generous, but would rather avoid the costs of doing so – or, rather, the costs of not doing so and appearing selfish. There were, however, still a substantial number of dictators who offered more than nothing. There are a number of reasons this may be the case, most notably that the manipulation only allowed dictators to effectively bypass the receiver’s sense of entitlement, not their own sense of obligation that the game itself helped to create.

Another thing this tells us is that people will not respond with universal gratitude to free money – a response many dictators apparently anticipated. Finding a free dollar on the street is a much different experience from being given a dollar by someone who is deciding how to divide ten. One triggers a sense of entitlement – clearly, you deserve more than a free dollar, right? – and the other does not, meaning one will tend to be viewed as a loss, despite the fact that it isn’t.

How these perceptions of entitlement change under certain conditions will be the subject of the next post.

References: Dana, J., Cain, D.M., & Dawes, R.M. (2006). What you don’t know won’t hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes, 100, 193-201.

Somebody Else’s Problem

Let’s say you’re a recent high-school graduate at the bank, trying to take out a loan to attend a fancy college so you can enjoy the privileges of complaining about how reading is, like, work, and living the rest of life in debt from student loans. A lone gunman rushes into the bank, intent on robbing the place. You notice the gunman is holding a revolver, meaning he only has six bullets. This is good news, as there happen to be 20 people in the bank; if you all rush him at the same time, he wouldn’t be able to kill more than six people, max; realistically, he’d only be able to get off three or four shots before he was taken down, and there’s no guarantee those shots will even kill the people they hit. The only practical solution here should be to work together to stop the robbery, right?

Look on the bright side: if you pull through, this will look great on your admissions essay.

The idea that evolutionary pressures would have selected for such self-sacrificing tendencies is known as “group selection”, and is rightly considered nonsense by most people who understand evolutionary theory. Why doesn’t it work? Here’s one reason: let’s go back to the bank. The benefits of stopping the robbery will be shared by everyone at the abstract level of the society, but the costs of stopping the robbery will be disproportionately shouldered by those who intervene. While everyone else is charging the robber, if you decide that you’re quite comfortable hiding in the back, thank you very much, your chances of getting shot decline dramatically and you still get the benefit; just let it be somebody else’s problem. Of course, most other people should realize this as well, leaving everyone pretty disinclined to try to stop the robbery. Indeed, there are good reasons to suspect that free-riding is the best strategy (Dreber et al., 2008).
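The free-rider logic can be sketched with back-of-the-envelope numbers; nothing here comes from a study, and the shot count and random-targeting assumptions are purely illustrative:

```python
# Illustrative free-rider arithmetic from the bank example.
# Assume the robber gets off 4 shots before being subdued, each aimed
# at a random person among those charging him (one hit per shot, max).
shots = 4
people = 20

# If all 20 charge, each charger's chance of being hit:
risk_all_charge = shots / people  # 4 in 20

# If you hide while the other 19 charge, your personal risk drops to
# roughly zero -- and the robbery still gets stopped, so you share
# the benefit anyway.
risk_hide = 0.0

print(f"charge: {risk_all_charge:.0%} risk; hide: {risk_hide:.0%} risk")
```

Since hiding strictly dominates charging for any single individual, a trait for charging that benefits “the group” gets outcompeted by the hiders who enjoy the same benefit at none of the cost.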

There are, unfortunately, some people who think group selection works and actually selects for tendencies to incur costs at no benefit. Fehr, Fischbacher, and Gachter (2002) called their bad idea “strong reciprocity”:

“A person is a strong reciprocator if she is willing to sacrifice resources (a) to be kind to those who are being kind… and (b) to punish those who are being unkind…even if this is costly and provides neither present nor future material rewards for the reciprocator” (p.3, emphasis theirs)

So the gist of the idea would seem to be (to use an economic example) that if you give away your money to people you think are nice – and burn your money to ensure that mean people’s money also gets burned – with complete disregard for your own interests, you’re going to somehow end up with more money. Got it? Me neither…

“I’m telling you, this giving away cash thing is going to catch on big time”

So what would drive Fehr, Fischbacher, and Gachter (2002) to put forth such a silly idea? They don’t seem to think existing theories – like reciprocal altruism, kin selection, costly signaling theory, etc. – can account for the way people behave in laboratory settings. That, and existing theories are based around selfishness, which isn’t nice, and the world should be a nicer place. The authors seem to believe that those previous theories lead to predictions like: people should “…always defect in a sequential, one-shot [prisoner's dilemma]” when playing anonymously. That one sentence contains two major mistakes: the first is that those theories most definitely do not say that. The second is part of the first: they assume that people’s proximate psychological functioning will automatically fall in line with the conditions they attempt to create in the lab, which it does not (as I’ve mentioned recently). While it might be adaptive, in those conditions, to always defect at the ultimate level, that does not mean the proximate level will behave that way. For instance, it’s a popular theory that sex evolved for the purpose of reproduction. That people have sex while using birth control does not mean the reproduction theory is unable to account for sexual behavior.

As it turns out, people’s psychology did not evolve for life in a laboratory setting, nor is the functioning of our psychology going to be adaptive in each and every context we find ourselves in. Were this the case, returning to our birth control example, simply telling someone that the pill removes the possibility of pregnancy would lead people to immediately lose all interest in the act (either in having sex or in using the pill). Likewise, oral sex, anal sex, hand-jobs, gay sex, condom use, and masturbation should all disappear too, as none are particularly helpful in terms of reproduction.

Little known fact: this parade is actually a celebration of a firm understanding of the proximate/ultimate distinction. A very firm understanding.

Nevertheless, people do cooperate in experimental settings, even when cooperating is costly, the game is one-shot, there’s no possibility of being punished, and everyone’s ostensibly anonymous. This poses another problem for Fehr and his colleagues: their own theory predicts this shouldn’t happen either. Let’s consider an anonymous one-shot prisoner’s dilemma with a strong reciprocator as one of the players. If they’re playing against another strong reciprocator, they’ll want to cooperate; if they’re playing against a selfish individual, they’ll want to defect. However, they don’t know ahead of time who they’re playing against, and once they make their decision it can’t be adjusted. In this case, they run the risk of defecting on a strong reciprocator or benefiting a selfish individual while hurting themselves. The same goes for a dictator game; if they don’t know the character of the person they’re giving money to, how much should they give?

The implications of this extend even further: in a dictator game where the dictator decides to keep the entire pot, third-party strong reciprocators should not really be inclined to punish. Why? Because they don’t know a thing about who the receiver is. Both the receiver and dictator could be selfish, so punishing wouldn’t make much sense. The dictator could be a strong reciprocator and the receiver could be selfish, in which case punishment would make even less sense. Both could be strong reciprocators, unsure of the other’s intentions. Punishment would only make sense if the dictator was selfish and the receiver was a strong reciprocator, but a third party has no way of knowing whether or not that’s the case. (It also means that if strong reciprocators and selfish individuals are about equal in the population, punishment in these cases would be a waste three-fourths of the time – maybe half at best, if they punish selfish people no matter who those people are playing against – meaning strong reciprocator third parties should never punish.)
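Assuming the two types are equally common in the population (an illustrative assumption, not a figure from the paper), the four possible pairings can be enumerated directly:

```python
from itertools import product

# Enumerate the four equally likely (dictator, receiver) type pairings a
# third-party observer can't distinguish between. "SR" = strong
# reciprocator, "S" = selfish; equal base rates are assumed for illustration.
types = ["SR", "S"]
sensible = 0
for dictator, receiver in product(types, types):
    # Punishment only "makes sense" when a selfish dictator has
    # exploited a strong reciprocator.
    if dictator == "S" and receiver == "SR":
        sensible += 1

total = len(types) ** 2
print(f"punishment sensible in {sensible}/{total} cases")
```

Under these assumptions punishment pays off in only one of the four cases, which is where the three-fourths-wasted figure comes from; punishing every selfish dictator regardless of receiver type still wastes half.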

There was some chance he might have been being a dick…I think.

The main question for Fehr and his colleagues would then not be, “why do people reciprocate cooperation in the lab” – as reciprocal altruism and the proximate/ultimate distinction can already explain that without resorting to group selection – but rather, “why is there any cooperation in the first place?” The simplest answer to the question might seem to be that some people are prone to give the opposing player the benefit of the doubt and cooperate on the first move, and then adjust their behavior accordingly (even if they are not going to be playing sequential rounds). The problem here is that this is what a tit-for-tat player already does, and it doesn’t require group selection.

It also doesn’t look good for the theory of social preferences invoked by Fehr et al. (2002) when the vast majority of people don’t seem to have preferences for fairness and honesty when they don’t have to, as evidenced by 31 of 33 people strategically using an unequal distribution of information to their advantage in ultimatum games (Pillutla and Murnighan, 1995). In every case Fehr et al. (2002) look at, outcomes have concrete values that everyone knows about and can observe. What happens when intentions can be obscured, or values misrepresented, as they often can be in real life? Behavior changes, and being a strong reciprocator becomes even harder. What happens when the cost/punishment ratio varies rather than being a universal static value, as it often does in real life (not everyone can punish others at the same rate)? Behavior will probably change again.

Simply assuming these behaviors are the result of group selection isn’t enough. The odds are better that the results only seem confusing because their interpreter has an incorrect sense of how things should have turned out.

References: Dreber, A., Rand, D.G., Fudenberg, D., & Nowak, M.A. (2008). Winners don’t punish. Nature, 452, 348-351.

Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1-25.

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.