Group Selectionists Make Basic Errors (Again)

In my last post, I wrote about a basic error most people seem to make when thinking about evolutionary psychology: they confuse the ultimate adaptive function of a psychological module with the proximate functioning of that module. Put briefly, the outputs of an adapted module will not always be adaptive. Organisms are not designed to respond perfectly to each and every context they find themselves in, and this is especially true of novel environmental contexts. These are things almost everyone should agree on, at least in the abstract. Behind those various nods of agreement, however, applying this principle and recognizing maladaptive or nonfunctional outputs often proves difficult in practice, for laymen and professionals alike. Some of these professionals, like Gintis et al (2003), even see fit to publish their basic errors.

Thankfully for the authors, the paper was peer reviewed by people who didn’t know what they were talking about either

There are two main points to discuss about this paper. The first is why the authors feel current theories are unable to account for certain behaviors; the second is the strength of the alternative explanation they put forth. I don’t think I’m spoiling anything by saying the authors profoundly err on both counts.

On the first point, the behavior in question – as it was in the initial post – is altruism. Gintis et al (2003) discuss the results of various economic games showing that people sometimes act nicely (or punitively) when niceness (or punishment) doesn’t end up ultimately benefiting them. From these maladaptive (or, as economists might put it, “irrational”) outcomes, the authors conclude that cognitive adaptations designed for reciprocal altruism or kin selection can’t account for the results. So right out of the gate they’re making the very error the undergraduates were making. While such findings would certainly be a problem for any theory holding that humans will always be nice when it pays more, will never be nice when it pays less, and can always correctly calculate which situation is which, neither theory assumes any of those things. Unfortunately for Gintis et al, their own paper does make some extremely problematic assumptions, but I’ll return to that point later.

The entirety of the argument that Gintis et al (2003) put forth rests on the assumption that the maladaptive outcomes obtained in these games cut against the adaptive hypothesis. As I covered previously, this is bad reasoning: brakes sometimes fail to stop a car because of contextual variables – like ice – but that doesn’t mean brakes aren’t designed to stop cars. One big issue with the maladaptive outcomes Gintis et al (2003) consider is that they largely arise from novel environmental contexts. Now, unlike the undergraduates whose tests I just graded, Gintis et al (2003) have the distinct benefit of having been handed the answer by their critics, an answer laid out, in the text, as follows:

Since the anonymous, nonrepeated interactions characteristic of experimental games were not a significant part of our evolutionary history, we could not expect subjects in experimental games to behave in a fitness-maximizing manner. Rather, we would expect subjects to construe the experimental environment in more evolutionarily familiar terms as a nonanonymous, repeated interaction, and to maximize fitness with respect to this reinterpreted environment.

My only critique of that section is the “fitness maximizing” terminology: we’re adaptation executers, not fitness maximizers. The extent to which adaptations maximize fitness in the current environment is an entirely separate question from how we’re designed to process information. That said, the authors reply to the critique thusly:

But we do not believe that this critique is correct. In fact, humans are well capable of distinguishing individuals with whom they are likely to have many future interactions, from others, with whom future interactions are less likely.

As in the last post, I’m going to rephrase the response in terms of arousal to pornography instead of altruism, to make the failings of that argument clearer: “In fact, humans are well capable of distinguishing [real] individuals with whom they are likely to have [sex], from [pornography], with [which] future [intercourse is] less likely.”

I suppose I should add a caveat about the probability of conception from intercourse…

Humans are well capable of distinguishing porn from reality. “A person” “knows” the difference between the two, so arousal to pornography should make as little sense as sexual arousal to any other inanimate object, like a chair or a wall. Yet people are routinely aroused by pornography. Are we to conclude, as Gintis et al might, that sexual arousal to pornography is therefore itself functional? The proposition seems doubtful. Likewise, when people are on birth control, if “they” “know” they can’t get pregnant, why do they persist in having sex?

A better explanation is that “a person” is not really a solitary unit at all, but a conglomeration of different modules, and not every module is going to “know” the same thing. A module generating arousal to visual depictions of intercourse might not “know” the depiction is just a simulation, as it was never designed to tell the difference; for most of our evolutionary history, there was no difference to tell. The same goes for sex and birth control. While the module that happens to do the talking can clearly articulate that it “knows” the sex on the screen isn’t real, or that it “knows” it can’t increase its fitness by having sex while using birth control, other modules, could they speak, would give a very different answer. It seems Gintis et al (2003) fail to properly understand, or at least account for, modularity.
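To make the modularity point concrete, here’s a minimal sketch in Python. The module names, inputs, and decision rules are entirely my own illustrative assumptions, not anyone’s published model; the point is only that two modules fed the same stimulus can return contradictory answers, because each “knows” only what it was designed to process.

```python
# A toy sketch of the modularity argument (my illustration, not a model
# from Gintis et al or anyone else). All names and rules are assumptions
# made purely for demonstration.

class ArousalModule:
    """Fires on visual cues of intercourse. It was never selected to check
    whether those cues come from a depiction, because for most of our
    evolutionary history there was no such thing as a depiction."""
    def respond(self, stimulus):
        return stimulus["visual_cues_of_intercourse"]

class VerbalModule:
    """Has access to explicit, articulable knowledge and reports it."""
    def report(self, stimulus):
        if stimulus["is_depiction"]:
            return "I know this isn't a real mating opportunity."
        return "This looks like a real interaction."

porn = {"visual_cues_of_intercourse": True, "is_depiction": True}

print(ArousalModule().respond(porn))  # True: arousal fires anyway
print(VerbalModule().report(porn))    # "I know this isn't a real mating opportunity."
# "The person" gives two contradictory answers because no single module
# is the person; each only "knows" what it was designed to process.
```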

Maybe people can reliably tell the difference between those with whom they’ll have future contact and those with whom they likely won’t. There is always the risk that such a module will miscalculate, given the uncertainty of the future, but that is at least a task a module could plausibly have been designed to do. What modules were unlikely to have been designed to do, however, is interact with people anonymously, much less anonymously under the specific set of rules laid out in these experimental conditions. Gintis et al (2003) completely avoid this point in their response. They are talking about novel environmental contexts and are somehow surprised when the mind doesn’t function perfectly in them. Not only do they fail to make proper use of modularity, they fail to account for novel environments as well.
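To illustrate what “maladaptive but working as designed” looks like, here’s a toy simulation, again with assumptions of my own throughout: the payoff matrix is a standard one-shot prisoner’s dilemma, and the 0.9 figure stands in for an evolved prior that interactions will be repeated. A reciprocity heuristic built for repeated, identifiable partners keeps cooperating in an anonymous one-shot game and earns less than pure one-shot logic would, without any part of it malfunctioning.

```python
# A minimal sketch (my own toy model, not Gintis et al's) of a reciprocity
# heuristic misfiring in a novel context. The payoffs and the 0.9 prior on
# future contact are illustrative assumptions.

import random

def reciprocity_heuristic(expects_future_contact):
    # Cooperate when payback seems likely; defect otherwise.
    # This is the rule operating exactly "as designed".
    return "C" if expects_future_contact else "D"

# Standard one-shot prisoner's dilemma payoffs for the row player.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

random.seed(1)
earned = 0.0
trials = 10_000
for _ in range(trials):
    # Ancestrally, most interactions were repeated, so the module's prior
    # says "future contact likely" even in a truly anonymous one-shot game.
    me = reciprocity_heuristic(random.random() < 0.9)
    them = reciprocity_heuristic(random.random() < 0.9)
    earned += PAYOFF[(me, them)]

print(earned / trials)  # roughly 2.9 per game
# Always defecting against this population would average about 4.6 per game,
# so the heuristic is maladaptive here without being broken.
```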

So the problem that Gintis et al see is not actually a problem: people don’t universally behave as Gintis et al (2003) think other models predict they should, but then, the other models don’t make those predictions. There’s an even larger issue looming, however: the solution to this non-problem that Gintis et al favor introduces a greater, actual problem. This is the big issue I alluded to earlier: the “strong reciprocity” trait that Gintis et al (2003) put forth does make some very problematic assumptions. A little juxtaposition will make one of them stand out; it’s the kind of thing a good peer reviewer should have noted:

One such trait, which we call strong reciprocity (Gintis, 2000b; Henrich et al., 2001), is a predisposition to cooperate with others and to punish those who violate the norms of cooperation, at personal cost, even when it is implausible to expect that these costs will be repaid either by others or at a later date…This is not because there are a few “bad apples” among the set of employees, but because only 26% of employees delivered the level of effort they promised! We conclude that strong reciprocators are inclined to compromise their morality to some extent, just as we might expect from daily experience. [emphasis mine]

So the trait being posited by the authors allows for cooperation even when cooperating doesn’t pay off. Leaving aside whether such a trait could plausibly have evolved, indifference to cost is supposed to be part of the design. It is thus rather strange that the authors themselves note that people tend to modify their behavior in ways that are sensitive to those costs. Indeed, only 1 in 4 of the people in the experiment they mention could even potentially fit the definition of a strong reciprocator, and even then only if the byproducts of reciprocal altruism modules counted for absolutely nothing.

25% of the time, it works 100% of the time

It’s worth noticing the trick Gintis et al (2003) are trying to use here as well: they’re counting the hits and ignoring the misses. Even though only a quarter of the people could even potentially (and I do stress potentially) be considered strong reciprocators indifferent to costs and benefits, they go ahead and label all the employees strong reciprocators anyway (just strong reciprocators who do things strong reciprocators aren’t supposed to do, like be sensitive to costs and benefits). Of course, those employees could more parsimoniously be labeled reciprocal altruists behaving maladaptively in a novel circumstance, but that’s apparently beyond consideration.

References: Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24(3), 153-172. DOI: 10.1016/S1090-5138(02)00157-5
