Can Situations Be Strong Or Weak?

“The correspondence bias is the tendency to draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situation in which they occur. Although this tendency is one of the most fundamental phenomena in social psychology, its causes and consequences remain poorly understood” – Gilbert and Malone, 1995

Social psychologists are not renowned for being particularly good at understanding things, even things which are (supposedly) fundamental to their field of study. Like the proverbial drunk looking for his missing keys at night under a streetlight rather than in the park where he lost them “because the light is better”, part of the reason social psychologists are not very good at providing genuine understanding is that they often begin with some false premise or assumption. In the case of the correspondence bias, as defined by Gilbert & Malone (1995), I feel one of these false premises is the idea that behavior can be caused or explained by the situation at all (let alone ‘entirely’); that is, unless one defines “the situation” in a way that ceases to be of any real value.

Which is about as valuable as the average research paper in psychology.

According to Gilbert and Malone (1995), an “eminently reasonable” rule is that “…one should not explain with dispositions that which has already been explained by the situation”. They go on to suggest that people tend to “underestimate the power of situations”, frequently mistaking “…strong situation[s] for relatively weak one[s]”. To use some more concrete examples, people seemed to perform poorly at tasks like predicting how much shock subjects in the Milgram obedience experiment would deliver when instructed to by an experimenter, or tended to judge basketball players as less competent when they were shooting free throws in a dimly-lit room, relative to a well-lit one. In these experiments, the command of an experimenter and the lighting of a room are supposed to be, I think, “strong” situations that “highly constrain” behavior. Not to put too fine a point on it, but that makes no sense.

A simple example should demonstrate why. Let’s say you wanted to see how “strong” of a situation a hamburger is, and you measure the strength of the situation by how much subjects are willing to pay for that burger. An initial experiment finds that subjects are, on average, willing to pay a maximum of about $5 for an average burger. Good to know. Now, a second experiment is run, but this time subjects are divided into three groups: group 1 has just finished eating a large meal, group 2 ate that same meal four hours prior and nothing else since, and group 3 ate that meal eight hours prior and nothing else since. These three groups are now presented with the same average hamburger. The results you’d now find are that group 1 seems relatively uninterested in paying for that burger (say, $0.50, on average), group 2 is somewhat interested in paying for it ($5), and group 3 is very interested in paying for it ($10).

From this hypothetical pattern of results, what are we to conclude about the “strength” of the situation of an opportunity to buy a burger? Obviously, the burger itself (the input provided by the environment) explains nothing about the behavior of the subjects and has no intrinsic “strength”. This shouldn’t be terribly surprising because abstract situations aren’t what generate behavior; psychological modules do. Whether that burger is currently valuable or not is going to depend crucially on the outputs of certain modules monitoring things like one’s current caloric state, and other modules recognizing the hamburger as a good source of potential calories. That’s not to say that the situations are irrelevant to the behavior that is eventually generated, of course; it just implies that which aspects of the environment matter, how much they matter, when they matter, and why they matter, are all determined by the current state of the existing psychological structures of the organism in question. A resource that is highly valuable in one situation is not necessarily valuable in another.
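To make that point concrete, here’s a minimal sketch in Python, with entirely made-up numbers matching the hypothetical above: the “strength” of the burger situation isn’t a property of the burger, but the output of a valuation mechanism operating over internal state.

```python
# A minimal sketch of the point above (all numbers are made up to match
# the hypothetical): the same environmental input produces wildly
# different behavior depending on the organism's internal state.

def willingness_to_pay(hours_since_meal: float) -> float:
    """Hypothetical valuation module: maps current caloric state
    (proxied by hours since the last meal) onto a dollar value for
    the same, unchanging hamburger."""
    if hours_since_meal < 1:
        return 0.50   # just ate: the burger is worth little
    elif hours_since_meal < 6:
        return 5.00   # moderately hungry: the baseline $5 value
    else:
        return 10.00  # very hungry: the burger's value doubles

# Same burger, three different internal states:
for hours in (0, 4, 8):
    print(f"{hours}h since last meal -> max payment: ${willingness_to_pay(hours):.2f}")
```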

“If it makes my car go fast, just imagine how much time it’ll cut off my 100m”

Despite their use of inconsistent and sloppy language regarding the interaction between environments and dispositions in generating behavior, Gilbert and Malone (1995) seem to understand that point to some extent. Concerning an example where a debate coach instructs subjects to write a pro-Castro essay, the authors write:

“…[T]he debate coach’s instructions merely alter the payoffs associated with the two behavioral options…the essayist’s behavioral options are not altered by the debate coach’s instructions; rather, the essayist’s motivation to enact each of the behavioral options is altered.”

What people are often underestimating (or overestimating, depending on context), then, is not the strength of the situation, but the strength of other competing dispositions, given a certain set of environmental inputs. While this might seem like a minor semantic issue, I feel it might hold a deeper significance, insomuch as it leads researchers to ask the wrong kinds of questions. For instance, what’s noteworthy to me about Gilbert and Malone’s (1995) analysis of the ultimate causes of the correspondence bias is not the candidate explanations they put forward, but rather the questions they don’t ask and the explanations they don’t give.

The authors suggest that the correspondence bias might not have historically had many negative consequences for a number of reasons that I won’t get into here. The only possible positive consequence they discuss is that the bias might allow people to predict the behavior of others. This is a rather strange benefit to posit, I feel, given that almost the entirety of their paper up to that point had been focused on how this bias is likely to lead to incorrect predictions, all things considered. Even granting that the correspondence bias might only tend to be an actual problem in contexts artificially created in psychology experiments (such as by randomly assigning subjects to groups), in no case does it seem to lead to more accurate predictions of others’ behavior.

The ultimate explanations offered for the correspondence bias left me feeling like (and I could be wrong about this) the authors were still thinking about the bias as an error in the way we think; they don’t seem to give the impression that the bias has any real function. Now, that could be true; the bias might well be a neutral-to-maladaptive byproduct, though what the bias would be a byproduct of isn’t immediately clear. While, from a strictly accuracy-based point of view, the bias might often lead to inaccurate conclusions, as I’ve mentioned before, accuracy is only important to the extent that it helps organisms do useful things. The question that Gilbert and Malone (1995) fail to ask, given their focus on accuracy, is why people would bother attributing the behavior of others to situational or dispositional characteristics in the first place.

My road rage happens to be indifferent to whether you were lost or just a slow driver.

Being able to predict the behavior of other organisms is useful, no doubt; it lets you know who is likely to be a good social investment and who isn’t, which will in turn affect the way you behave towards others. Given the stakes at hand, and since you’re dealing with organisms that can be persuaded, accuracy in perceptions might not always be the best policy. Suppose you’re in competition with a rival over some resource; since the Olympics are currently going on, let’s say you’re a particularly good swimmer competing in your respective event. Let’s say you don’t come in first; you end up placing behind one of your country’s bitter rivals. How are you going to explain that loss to other people? You might concede that your rival was simply a better swimmer than you, but that’s not likely to garner you a whole lot of support. Alternatively, you might suggest that you were really the better swimmer, but some aspect of the situation ended up giving your rival a temporary upper hand. What you’d be particularly unlikely to do is suggest that your rival was actually the better swimmer and beat you despite some situational factor that had put you at an advantage.

As Gilbert and Malone (1995) mention in their introduction, a niece who is perceived by her aunt as intentionally breaking a vase would receive the thumbscrews, while the niece who is perceived as breaking a vase by accident would not. Depending on the nature of the situation – whether it’s one that will result in blame or praise – it might serve you well to minimize or maximize the perception of your involvement in bringing the events about. It would similarly serve you well to manipulate the perceptions of other people’s involvement in the act. One way of doing this would involve going after the perceptions of whether a behavior was caused by a situation or a disposition; whether the outcome was a fluke or likely to be consistent across situations. This leads to the straightforward prediction that such attributional biases will tend to look remarkably self-serving, rather than just wrong in some general way. I’ll leave it up to you as to whether or not that seems to be the case.

References: Gilbert, D.T., & Malone, P.S. (1995). The correspondence bias. Psychological Bulletin, 117, 21-38. DOI: 10.1037/0033-2909.117.1.21

Why All The Fuss About Equality?

In my last post, I discussed why the term “inequality aversion” is a rather inelegant way to express certain human motivations. A desire to not be personally disadvantaged is not the same thing as a desire for equity more generally. Further, people readily tolerate and create inequality when it’s advantageous for them to do so, and such behavior is readily understandable from an evolutionary perspective. What isn’t as easily understandable – at least not as immediately – is why equality should matter at all. Most research on the subject of equality would appear to just take it for granted (more-or-less) that equality matters, without making a concerted attempt to understand why that should be the case. The paper I’ll be discussing today isn’t much different.

On the plus side, at least they’re consistent.

The paper by Raihani & McAuliffe (2012) sought to disentangle two possible competing motives when it comes to punishing behavior: inequality and reciprocity. The authors note that previous research examining punishment often confounds these two possible motives. For instance, let’s say you’re playing a standard prisoner’s dilemma game: you have a choice to either cooperate or defect, and let’s further say that you opt for cooperation. If your opponent defects, not only do you lose out on the money you would have made had he cooperated, but your opponent also ends up with more money than you do overall. If you decided to punish the defector, the question arises of whether that punishment is driven by the loss of money, the aversion to the disadvantageous inequality, or some combination of the two.
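To see the confound laid out, consider a toy payoff matrix (the numbers here are my own illustrative choices, not from the paper). In the cell where you cooperate and your opponent defects, you simultaneously earn less than you would have under mutual cooperation and end up behind your opponent, so punishing from that position can’t distinguish the two motives.

```python
# Toy prisoner's dilemma payoffs (illustrative numbers, not from the
# paper): (your move, their move) -> (your payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

mine, theirs = PAYOFFS[("C", "D")]        # you cooperated; they defected
mutual, _ = PAYOFFS[("C", "C")]

print("Loss relative to mutual cooperation:", mutual - mine)  # 3
print("Disadvantageous inequality created:", theirs - mine)   # 5
# Both quantities are positive in the same condition, so punishment of a
# defector could be driven by either motive (or both).
```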

To separate the two motives out, Raihani & McAuliffe (2012) used a taking game. Subjects played the role of either player 1 or player 2 in one of three conditions. In all conditions, player 1 was given an initial endowment of seventy cents; player 2, on the other hand, started out with either ten cents, thirty cents, or seventy cents. In all conditions, player 2 was then given the option of taking twenty cents from player 1, and following that decision, player 1 was given the option of paying ten cents to reduce player 2’s payoff by thirty cents. The significance of these conditions is that, in the first two, if player 2 takes the twenty cents, no disadvantageous inequality is created for player 1. In the last condition, however, by taking the twenty cents, player 2 creates that inequality. While the overall loss of money across conditions is identical, the context of that loss in terms of equality is not. The question, then, is how often player 1 would punish player 2.
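For clarity, here’s the payoff structure just described as a short script (a simplified reconstruction of my own, not the authors’ materials; amounts in cents):

```python
# Payoff structure of the taking game (amounts in cents).
P1_START, TAKE, PUNISH_COST, PUNISH_HIT = 70, 20, 10, 30
P2_STARTS = {"condition 1": 10, "condition 2": 30, "condition 3": 70}

for condition, p2_start in P2_STARTS.items():
    # Player 2 takes twenty cents from player 1:
    p1, p2 = P1_START - TAKE, p2_start + TAKE
    # Player 1 may then pay ten cents to knock thirty cents off player 2:
    p1_pun, p2_pun = p1 - PUNISH_COST, p2 - PUNISH_HIT
    print(f"{condition}: after taking, P1={p1}, P2={p2} "
          f"(disadvantageous inequality for P1: {p2 > p1}); "
          f"after punishment, P1={p1_pun}, P2={p2_pun}")

# Player 1's loss from the taking is identical (twenty cents) in every
# condition; only in condition 3 does the taking leave player 1 behind.
```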

In the first two conditions, where no disadvantageous inequality was created for player 1, player 1 didn’t punish significantly more often whether player 2 had taken money or not (approximately 13% of the time). In the third condition, where player 2’s taking did create that kind of inequality, player 1 was now far more likely to pay to punish (approximately 40% of the time). So this is a pretty neat result, and it mirrors past work that came at the question from the opposite angle (Xiao & Houser, 2010; see here). The real question, though, concerns how we are to interpret this finding. These results, in and of themselves, don’t tell us a whole lot about why equality matters when it comes to punishment decisions.

They also doesn’t tell me much about this terrible itch I’ve been experiencing lately, but that’s a post for another day.

I think it’s worth noting that the study still does, despite its best efforts, confound losing money and generating inequality; in no condition can player 2 create disadvantageous inequality for player 1 without also taking money away. Accordingly, I can’t bring myself to agree with the authors, who conclude:

  Together, these results suggest that disadvantageous inequality is the driving force motivating punishment, implying that the proximate motives underpinning human punishment might therefore stem from inequality aversion rather than the desire to reciprocate losses.

It could still well be the case that player 1s would rather not have twenty cents taken from them, thank you very much, but don’t reciprocate with punishment for other reasons. To use a more real-life context, let’s say you have a guest come to your house. At some point after that guest has left, you discover that he apparently also left with some of the cash you had been saving to buy whatever expensive thing you had your eye on at the time. When it came to deciding whether or not you desired to see that person punished for what they did, precisely how well off they were relative to you might not be your first concern. The theft would not, I imagine, automatically become OK in the event that the guy only took your money because you were richer than he was. A psychology designed to function in such a manner would leave one wide open for exploitation by selfish others.

However, how well off you were, relative to how needy the person in question was, might have a larger effect in the minds of third-party condemners. The sentiment behind the tale of Robin Hood serves as an example towards that end: stealing from the rich is less likely to be condemned by others than stealing from one of lower standing. If other third parties are less likely, for whatever reason, to support your decision to punish another individual in contexts where you’re advantaged over the person being punished, punishment immediately risks becoming more costly. At that point, it might be less costly to tolerate the theft than to risk condemnation by others for taking action against it.

What might be better referred to as, “The Metallica V. Napster Principle”.

One final issue I have with the paper is a semantic one: the authors label the act of player 2 taking money as cheating, which doesn’t fit my preferred definition (or, frankly, any definition of cheating I’ve ever seen). I favor the Tooby and Cosmides definition, where a cheater is defined as “…an individual who accepts a benefit without satisfying the requirements that provision of that benefit was made contingent upon.” As player 2 was allowed to take money from player 1 without having to satisfy any requirement, the taking could hardly be considered an act of cheating. This seemingly minor issue, however, might actually hold some real significance, in the Freudian sense of the word.

To me, that choice of phrasing implies that the authors realize that, as I previously suggested, player 1s would really prefer if player 2s didn’t take any money from them; after all, why would they? More money is better than less. This highlights, for me, the very real and very likely possibility that what player 1s were actually punishing was having money taken from them, rather than the inequality, but they were only willing to punish in force when that punishment could more successfully be justified to others.

References: Raihani, N.J., & McAuliffe, K. (2012). Human punishment is motivated by inequity aversion, not a desire for reciprocity. Biology Letters. PMID: 22809719

Xiao, E., & Houser, D. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470. DOI: 10.1016/j.joep.2010.02.001

50 Shades Of Grey (When It Comes To Defining Rape)

For those of you who haven’t been following such things lately, Daniel Tosh recently catalyzed an internet firestorm of offense. The story goes something like this: at one of his shows, he was making some jokes or comments about rape. One woman in the audience was upset by whatever Daniel said and yelled out that rape jokes are never funny. In response to the heckler, Tosh either (a) made a comment about how the heckler was probably raped herself, or (b) suggested it would be funny were the heckler to get raped, depending upon which story you favor. The ensuing outrage seems to have culminated in a petition to have Daniel Tosh fired from Comedy Central, which many people ironically suggested has nothing at all to do with censorship.

This whole issue has proved quite interesting to me for several reasons. First, it highlights some of the problems I recently discussed concerning third-party coordination: namely, that publicly observable signals aren’t much use to people who aren’t at least eyewitnesses. We need to rely on what other people tell us, and that can be problematic in the face of conflicting stories. It also demonstrates the issues third parties face when it comes to inferring things like harm and intentions: the comments about the incident ranged from a heckler getting what they deserved through to the comment being construed as a rape threat. Words like “rape-apologist” then got thrown around a lot towards Tosh and his supporters.

Just like how whoever made this is probably an anti-Semite and a Nazi sympathizer

While reading perhaps the most widely-circulated article about the affair, I happened to come across another perceptual claim that I’d like to talk about today:

According to the CDC, one in four female college students report that they’ve been sexually assaulted (and when you consider how many rapes go unreported, because of the way we shame victims and trivialize rape, the actual number is almost certainly much higher).

Twenty-five percent would appear alarmingly high; perhaps too high, especially when placed in the context of verbal mud-slinging. A slightly involved example should demonstrate why this claim shouldn’t be taken at face value: in 2008, the United States population was roughly 300 million (rounding down). To make things simple, let’s assume (a) half the population is made up of women, (b) the average woman finishing college is around 22, and (c) any woman’s chances of being raped are equal, set at 25%. Now, in 2008, there were roughly 15 million women in the 18-24 age group; they are our first sample. If the 25% number were accurate, you’d expect that 3.75 million women ages 18-24 should have been raped at some point in their lives, or roughly 170,000 rape victims per year in that cohort (assuming rape rates are constant from birth to 24). In other words, each year, roughly 1.13% of the women who hadn’t previously been raped would be raped (and no one else would be).

Let’s compare that 1% number to the number of reported rapes in the entire US in 2008: thirty rapes per hundred-thousand people, or 0.03%. Even after doubling that number (assuming all reported rapes come from women, and women are half the population, so the reported number is out of fifty-thousand, not a hundred-thousand), we only make it to 0.06%. In order to make it to 1.13%, you would have to posit that for each reported rape there were about 19 unreported ones. For those who are following along with the math, that would mean that roughly 95% of rapes would never have been reported. While 95% unreported might seem like a plausible rate to some, it’s a bit difficult to verify.

Rape, of course, doesn’t have a cut-off point for age, so let’s expand our sample to include women ages 25-44. Using the same assumption of roughly 1% of women becoming rape victims each year, that would now mean that by age 44, almost half of all women would have experienced an instance of rape. We’re venturing farther into the realm of claims losing face value. Combining those two figures would also imply that a woman between 18 and 44 is getting raped in the US roughly every 30 seconds. So what gives: are things really that bad, are the assumptions wrong, is my math off, or is something else amiss?
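For those following along, here’s that back-of-the-envelope arithmetic as a short script, using the same simplifying assumptions laid out above (these are illustrative figures for checking the reasoning, not official statistics):

```python
# Back-of-the-envelope arithmetic under the stated assumptions.
women_18_24 = 15_000_000   # approximate 2008 cohort size
lifetime_rate = 0.25       # the disputed 1-in-4 figure
years_at_risk = 22         # birth through the average college-finishing age

victims = women_18_24 * lifetime_rate        # implied victims in the cohort
per_year = lifetime_rate / years_at_risk     # the ~1.13% annual rate
print(f"Implied victims in cohort: {victims:,.0f}")
print(f"Implied annual rate: {per_year:.2%}")

# Reported rapes in 2008: ~30 per 100,000 people, doubled to express the
# rate over women only:
reported_rate = 2 * 30 / 100_000             # 0.06%
ratio = per_year / reported_rate             # total rapes per reported rape
print(f"Implied total-to-reported ratio: ~{ratio:.0f}")
print(f"Implied share unreported: ~{1 - 1/ratio:.0%}")

# Extending the same annual rate out to age 44:
print(f"Implied lifetime rate by 44: ~{per_year * 44:.0%}")
```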

Since I can’t really make heads or tails of any of this, I’m going with my math.

Some of the assumptions are, in fact, not likely to be accurate (such as a consistent rate of victimization across age groups), but there’s more to it than that. Another part of the issue stems from defining the term “rape” in the first place. As Koss (1993) notes, the way she defined rape in her own research – the research that produced that 25% figure – appeared to differ tremendously from the way her subjects did. The difference was so stark that roughly 75% of the participants that Koss had labeled as having experienced rape did not, themselves, consider the experience to be rape. This is somewhat concerning for two big reasons: either the perceived rate of rape might be a low-ball estimate (we’ll call this the ignorance hypothesis), or the rate of rape might be being inflated rather dramatically by definitional issues (we’ll call this the arrogance hypothesis).

Depending on your point of view – what you perceive to be, or label as, rape – either one of these hypotheses could be true. What is not true is the notion that one in four college-aged women report that they’ve been sexually assaulted; they might report they’ve had unwanted sex, or have been coerced into having sex, but not that they were assaulted. As it turns out, that’s quite a valuable distinction to make.

Hamby and Koss (2003) expanded on this issue, using focus groups to help understand this discrepancy. Whereas one in four women might describe their first act of intercourse as something they went along with but was unwanted, only one in twenty-five report that it was forced (in, ironically, a forced-choice survey). Similarly, while one in four women might report that they gave in to having sex due to verbal or psychological pressure, only one in ten report that they engaged in sexual intercourse because of the use or threat of physical force. It would seem that there is a great deal of ambiguity surrounding words like coercion, force, voluntary, or unwanted when it comes to asking about sexual matters: was the mere fear of force, absent any explicit uses or threats, enough to count? If the woman didn’t want to have sex, but said yes to try and maintain a relationship, did that count as coercion? The focus groups raised many such questions, and I feel that means many researchers might be measuring a number of factors they hadn’t intended to, lumping all of them together under the umbrella of sexual assault.

The focus groups, unsurprisingly, made distinctions between wanting sex and voluntarily having sex; they also noted that it might often be difficult for people to distinguish between internal and external pressures to have sex. These are, frankly, good distinctions to make. I might not want to go into work, but that I show up there anyway doesn’t mean I was being made to work involuntarily. I might also not have any internal motivation to work, per se, but rather be motivated to make money; that I can only make money if I work doesn’t mean most people would agree that the person I work for is effectively forcing me to work.

No one makes me wear it; I just do because I think it’s got swag

When we include sex that was acquiesced to, but unwanted, in these figures – rather than what the women themselves consider rape – then you’ll no doubt find more rape. Which is fine, as far as definitional issues go; it just requires the people reporting these numbers to be specific about what they’re reporting. As concepts like wanting, forcing, and coercing are measured in degree rather than kind, one could, in principle, define rape in a seemingly endless number of ways. This puts the burden on researchers to be as specific as possible when formulating these questions and drawing their conclusions, as it can be difficult to accurately infer what subjects were thinking about when they were answering the questions.

References: Hamby, S.L., & Koss, M.P. (2003). Shades of gray: A qualitative study of terms used in the measurement of sexual victimization. Psychology of Women Quarterly. DOI: 10.1111/1471-6402.00104

Koss, M.P. (1993). Detecting the scope of rape: A review of prevalence research methods. Journal of Interpersonal Violence. DOI: 10.1177/088626093008002004

Why Domain General Does Not Equal Plasticity

“It is because of, and not despite, this specificity of inherent structure that the output of computational systems is so sensitively contingent on environmental inputs. It is just this sensitive contingency to subtleties of environmental variation that makes a narrow intractability of outcomes unlikely” – Tooby and Cosmides

In my last post, I mentioned that Stanton Peele directed at evolutionary psychology the criticism of genetic determinism. For those of you who didn’t read the last entry, the reason he did this is that he’s stupid and seems to have issues engaging with source material. This mistake – of confusing genetic determinism with evolutionary psychology – is unnervingly common among critics who also seem to have issues engaging with source material. The mistake itself tends to take the form of pointing out that some behavior is variable, either across time, context, or people, and then saying, “therefore, genes (or biology) can’t be a causal factor in determining it”. For example, if people are nice sometimes and mean at others, it can’t be the genes; genes can only make people nice or mean at all times, not contingently. This means there must be something in the environment – like the culture – that makes people differ in their behavior the way they do, and the cognitive mechanisms that generate this behavior must be general-purpose. In other words, rather than resembling a Swiss Army knife – a series of tools with specified functions – the mind more closely resembles an unformed lump of clay, ready to adapt to whatever it encounters.

Unformed clay is known for being excellent at “solving problems” by “doing useful things”.

There are two claims found in this misguided criticism of evolutionary psychology. The first is that environments matter when it comes to development, behavior, or anything really, which they clearly do. This is something that’s been noted clearly and repeatedly by every professional evolutionary psychologist I’ve come across. The second claim is that to call a trait “genetic”, or to note that our genes play a role in determining behavior, implies inflexibility across environmental contexts. This second claim is, of course, nonsense. The opposite of “genetic” is not “environmental” or “flexible” for a simple reason: organisms need to be adapted to do anything, flexibly or otherwise. (Note: that does not mean an organism was adapted to do everything it does; the two propositions are quite different.)

A quick example should make this point clear: consider my experiments with cats. Not many people know this about me, but I’m a big fan of the field of aviation. While up in the air, I’ve been known to throw cats out of the airplane. You know, for things like science and grant money. My tests have shown the following pattern of results: cats suck at flying. No matter how many times I’ve run the experiment – and believe me, I’ve run it many, many times, just to be sure – the results are always the same. How should I interpret the fact that I’m quickly running out of cats?

Discussion: The previous results were replicated purrrrfectly.

One way would be to suggest that cats would be able to fly were they not constrained against flight by their genes; in other words, the cat’s behavior would be more “domain general” – even capable of flight – if genetics played less of a role in determining how they acted and developed. Another, more sane, route would be to suggest that cats were never adapted for flight in the first place. They can’t fly because their genes contain no programs that allow for it. Maybe that example sounds silly, but it does well to demonstrate a valuable point: adaptations do not make an organism’s behavior less flexible; they make it more flexible. In fact, adaptations are what allow an organism to behave at all in the first place; organisms that are not adapted to behave in certain ways won’t behave at all.

So what about domain general abilities, like learning? For the same reasons, simply chalking some behavior up to “learning” or “culture” is often an inadequate explanation by itself. Learning is not something that just happens in the same way that flight doesn’t just happen; the ability to learn itself is an adaptation. It should come as no surprise then that some organisms are relatively prone to learning some things and relatively resistant to learning others. As Dawkins once noted, there are many more ways of being dead than being alive. On a similar note, there are many more ways of learning being useless or harmful than there are of learning being helpful. If an organism learns about the wrong subjects, it wastes time and energy; if an organism learns the wrong thing about the right subject, or if the organism fails to learn the right thing quickly enough, the results would often be deadly.

Cook and Mineka (1989) ran a series of experiments looking at how rhesus monkeys acquire their fear response. Lab-raised monkeys with no prior exposure to snakes or crocodiles do not show a fear response to toy models of the two potential threats. The researchers then attempted to condition fear into these animals vicariously by showing them a video of another monkey reacting fearfully to either a snake or crocodile model. As expected, after watching the fearful reaction of another monkey, the lab-raised monkeys themselves developed a fear response to the toys. They quickly learned to be afraid when observing that fear reaction in another individual. What was particularly interesting about these studies is that the researchers tried the same thing, but substituted either a brightly-colored flower or a rabbit in place of the snake or crocodile. In these trials, the monkeys did not acquire a fear response to flowers or rabbits. In other words, the monkeys were biologically prepared to quickly learn fear towards some objects (historically deadly ones), but not others.

Just remember, they’re more afraid of you than you are of them. Also, remember fear can make one irritable and defensive.

The results of this study make two very important points. The first is that, as I just mentioned, learning is not a completely open-ended process. We’re prepared to learn some things (like certain fears, taste aversions, or language) relatively automatically, given the proper environmental stimulation. I can’t stress the word “proper” there enough. For instance, there are also some learning associations that organisms are unable to make: rats will only learn taste aversion in the presence of nausea, not light or sound, though they will readily associate shocks with light and sound.

The second point is that these results (should) put to bed the mistaken notion that biology and environment are two competing sources of explanation; they are not. Genetics do not make an organism less flexible and environments do not make them more flexible. Learning is not something to be contrasted with biology, but rather learning is biology. This is a point that is repeatedly stressed in introduction level classes on evolutionary psychology, along with every major work within the field. Anyone who is still making this error in their criticisms is demonstrating a profound lack of expertise, and should be avoided.

References: Cook, M., & Mineka, S. (1989). Observational conditioning of fear to fear-relevant versus fear-irrelevant stimuli in rhesus monkeys. Journal of Abnormal Psychology, 98, 448-459.

Is Working Together Cooperation?

“[P]rogress is often hindered by poor communication between scientists, with different people using the same term to mean different things, or different terms to mean the same thing…In the extreme, this can lead to debates or disputes when in fact there is no disagreement, or the illusion of agreement when there is disagreement” – West et al. (2007)

I assume most of you are a little confused by the question, “Is working together cooperation?” Working together is indeed the very first dictionary definition of cooperation, so it would seem the answer should be a transparent “yes”. However, according to a paper by West et al. (2007), there’s some confusion that needs to be cleared up here. So buckle up for a little safari into the untamed jungles of academic semantic disagreements.

An apt metaphor for what clearing up confusion looks like.

West et al. (2007) seek to define cooperation as follows:

Cooperation: a behavior which provides a benefit to another individual (recipient), and which is selected for because of its beneficial effect on the recipient. [emphasis, mine]

In this definition, benefits are defined in terms of ultimate fitness (reproductive) benefits. There is a certain usefulness to this definition, I admit. It can help differentiate between behaviors that are selected for delivering benefits and behaviors that deliver benefits as a byproduct. The example West et al. use is an elephant producing dung. The dung an elephant produces can be useful to other organisms, such as a dung beetle, but the function of dung production in the elephant is not to provide a benefit to the beetle; it just happens to do so as a byproduct. On the other hand, if a plant produces nectar to attract pollinators, this is cooperation: the nectar benefits the pollinators in the form of a meal, and the function of the nectar is to deliver that benefit, assisting the plant’s reproduction by attracting pollinators.

However, this definition has some major drawbacks. First, it defines cooperative behavior in terms of actual function, not in terms of proper function. An example will make this distinction a touch clearer: let’s say two teams are competing for a prize in a winner-take-all game. All the members of each team work together in an attempt to win the prize, but only one team gets it. By the definition West et al. use, only the winning team’s behavior can be labeled “cooperation”. Since the losers failed to deliver any benefit, their behavior would not be cooperation, even if their behavior was, more or less, identical to the winners’. While most people would call teamwork cooperation – as the teamwork was aimed at achieving a mutual goal – the West et al. definition leaves no room for this consideration.

I’ll let you know which team was actually cooperating once the game is over.

West et al. (2007) also seem to have a problem with the term “reciprocal altruism”, which is basically summed up by the phrase, “you scratch my back (now) and I’ll scratch yours (at some point in the future)”. The authors object to the term because this mutual delivery of benefits is not altruism, which they define as follows:

Altruism: a behavior which is costly to the actor and beneficial to the recipient; in this case and below, costs and benefits are defined on the basis of the lifetime direct fitness consequences of a behavior.

Since reciprocal altruism is eventually beneficial to the individual paying the initial cost, West et al. (2007) feel it should be classed as “reciprocal cooperation”. Except there’s an issue here. Consider the following case: organism X pays a cost (c) to deliver a benefit (b) to another organism, Y, at some time (T1). At some later time (T2), organism Y pays a cost (c) to deliver a benefit (b) back to organism X. So long as (c) < (b), West et al. feel we should call the interaction between X and Y cooperation, not reciprocal altruism.

Here’s the problem: the future is always uncertain. Let’s say there’s a parallel case to the one above, except at some point after (T1) and before (T2), organism X dies. Now, organism X would be defined as acting altruistically (it paid a cost to deliver a benefit), and organism Y would be defined as acting selfishly (it took a benefit without repaying). What this example tells us is that a behavior can be classed as altruistic, mutually beneficial, cooperative, or selfish depending on a temporal factor. In terms of “clearing up confusion” about how to properly use a term or classify a behavior, the definitions provided by West et al. (2007) are not terribly helpful. They note as much when they write, “we end with the caveat that: (viii) classifying behaviors will not always be the easiest or most useful thing to do” (p. 416), which, to me, seems to defeat the entire purpose of the paper.
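To make the temporal problem explicit, here’s a small sketch classifying the same exchange by lifetime direct fitness consequences, following the four-way convention West et al. (2007) use; the labels flip depending on nothing more than when organism X dies (the cost and benefit values are my own toy numbers):

```python
# Classify a behavior by its net lifetime effect on actor and recipient,
# in the style of West et al.'s (2007) social semantics.

def classify(actor_payoff: float, recipient_payoff: float) -> str:
    if actor_payoff > 0:
        return "mutually beneficial" if recipient_payoff > 0 else "selfish"
    return "altruistic" if recipient_payoff > 0 else "spiteful"

c, b = 1.0, 3.0  # cost X pays at T1; benefit delivered to Y (b > c)

# Case 1: X survives to T2 and is repaid; both net b - c over their lifetimes.
print("X repaid:", classify(b - c, b - c))    # mutually beneficial

# Case 2: X dies before T2; X nets -c, while Y keeps b without repaying.
print("X's act:", classify(-c, b))            # altruistic
print("Y's behavior:", classify(b, -c))       # selfish
```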

“We’ve successfully cleared up the commuting issue, though using our roads might not be the easiest or most useful thing to do…”

One final point of contention is that West et al. (2007) feel “…behaviors should be classified according to their impact on total lifetime reproductive success” (emphasis, mine). I understand what they hope to achieve with that, but they make no case whatsoever for why we should stop considering the ultimate effects of a behavior at the end of an organism’s individual lifetime. If an individual behaves in a way that ensures he leaves behind ten additional offspring by the time he dies, but, after he is dead, the fallout from those behaviors further ensures that none of those offspring reproduce, how is that behavior to be labeled?

It seems to me there are many different ways to think about an organism’s behavior, and no one perspective needs to be monolithic across all disciplines. While such a unified approach no doubt has its uses, it’s not always going to clear up confusion.

References: West, S.A., Griffin, A.S., & Gardner, A. (2007). Social semantics: Altruism, cooperation, mutualism, strong reciprocity and group selection. Journal of Evolutionary Biology, 20, 415-432.