Back in May, I posed a question concerning why an organism would want to be a member of a group. On the one hand, an organism might join a group because it calculates, ultimately, that membership will likely deliver benefits it could not otherwise obtain; in other words, organisms would join a group for selfish reasons. On the other hand, an organism might join a group in order to deliver benefits to the entire group, not just to itself; in this latter case, the organism would be joining for more or less altruistic reasons. For reasons that escape my current understanding, there are people who continue to endorse this second account of group-joining as plausible, despite it being anathema to everything we currently know about how evolution works.
The debate over whether adaptations for cooperation and punishment were primarily forged by selection pressures at the individual or the group level has gone on for so long in part because much of the evidence brought to bear on the matter could be viewed as consistent with either theory – if one was creative enough in interpreting the results, anyway. The results of a new study by Krasnow et al. (2012) should do one of two things to the group selectionists: either make them reconsider their position or make them get far more creative in their interpreting.
The study by Krasnow et al. (2012) took the sensible route towards resolving the debate: they created contexts in which the two theories make opposing predictions. If adaptations for social exchange (cooperation, defection, punishment, reputation, etc.) were driven primarily by self-regarding interests (as the social exchange model holds), then information about how your partner behaved towards you should be more relevant than information about how your partner behaved towards others when you’re deciding how to treat them. In stark contrast, a group selection model would predict that those two types of information should be of similar value, since the function of these adaptations should be to provide group-wide gains, not selfish ones.
These contexts were created across two experiments. The first experiment was designed to demonstrate that people do, in fact, make use of what the authors called “third-party reputation”: a partner’s reputation for behaving a certain way towards others. Subjects were brought into the lab to play a trust game with partners who, unbeknownst to the subjects, were computer programs rather than real people. In a trust game, a player can either not trust their partner, resulting in an identical mid-range payoff for both (in this case, $1.20 each), or trust their partner. If the first player trusts, their partner can either cooperate – leading to an identical payoff for both players that’s higher than the mid-range payoff ($1.50 each) – or defect – leading to an asymmetrical payoff favoring the defector ($1.80 and $0.90). In the event that the player trusted and their partner defected, the player was given the option to pay to punish their partner, dropping both payoffs to a low level ($0.60 each).
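The payoff structure described above can be sketched as a small function. This is an illustrative model only (the function name and signature are my own, not from the paper); the dollar amounts are the ones reported in the study's description.

```python
def trust_game(trust: bool, partner_cooperates: bool, punish: bool = False):
    """Return (player_payoff, partner_payoff) for one round of the trust game.

    A hypothetical sketch of the payoffs as described in the text;
    amounts are in dollars.
    """
    if not trust:
        return (1.20, 1.20)   # no trust: identical mid-range payoff for both
    if partner_cooperates:
        return (1.50, 1.50)   # trust + cooperation: both do better than mid-range
    if punish:
        return (0.60, 0.60)   # costly punishment drags both payoffs down
    return (0.90, 1.80)       # trust + defection: asymmetry favors the defector
```

Laid out this way, the player's dilemma is clear: trusting risks the worst individual outcome ($0.90) in exchange for a shot at the best mutual one ($1.50 each), and punishment costs the punisher as well as the defector.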
Before the subjects played this trust game, they were presented with information about their partner’s third-party reputation. This information came in the form of answers to questions their partner had ostensibly filled out earlier, which assessed that partner’s willingness to cheat given freedom from detection. Perhaps unsurprisingly, subjects were less willing to trust a partner who had indicated they would be more likely to cheat, given a good opportunity. What this result tells us, then, is that people are perfectly capable of making use of third-party reputation information when they know nothing else about their partner. These results do not help us distinguish between group- and individual-level accounts, however, as both models predict that people should act this way; that’s where the second study came in.
The second study added the crucial variable: first-party reputation, or your partner’s past behavior towards you. This information was provided through the results of two prisoner’s dilemma games that were visible to the subject, one played between the subject and their partner and the other between the partner and a third party. This meant subjects encountered four kinds of partners: one who defected on both the subject and a third party, one who cooperated with both, and one who defected on one (either the subject or the third party) but cooperated with the other. Following these initial games, subjects then played a two-round trust game with their partners. This allowed the following question to be answered: when subjects have first-party reputation available, do they still make use of third-party reputation?
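The four partner types form a simple 2×2 design, which can be enumerated in a few lines. The labels here are mine, not the paper's; this is just a sketch of the design space.

```python
from itertools import product

# Each partner either cooperated or defected in two prior games:
# one with the subject (first-party) and one with a third party.
MOVES = ("cooperated", "defected")

partner_types = [
    {"first_party": first, "third_party": third}
    for first, third in product(MOVES, repeat=2)
]
# Yields four profiles: cooperate/cooperate, cooperate/defect,
# defect/cooperate, and defect/defect.
```

The key question is then which of the two dictionary keys above actually predicts the subject's behavior in the subsequent trust game.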
The answer could not have been a more resounding “no”. When subjects decided whether to trust their partner, third-party reputation did not predict the outcome at all, whereas first-party reputation did: unsurprisingly, subjects were less willing to trust a partner who had previously defected on them. Further, a third-party reputation for cheating did not make subjects any more likely to punish their partner, though first-party reputation didn’t have much predictive value there either. That said, the social exchange model does not predict that punishment should be enacted strictly on the grounds of being wronged; since punishment is costly, it should only be used when subjects hope to recoup its costs in subsequent exchanges. If subjects do not wish to renegotiate the terms of cooperation via punishment, they should simply refrain from interacting with their partner altogether.
That precise pattern of results was borne out: when a subject was defected on and then punished the defector, that same subject was also likely to cooperate with their partner in subsequent rounds. In fact, they were just as likely to cooperate with their partner as they were in cases where the partner had not initially defected. It’s worth repeating that subjects did this while, apparently, ignoring how their partner had behaved towards anyone else. Subjects only seemed to punish in order to persuade their partner to treat them better; they did not punish because their partner had hurt anyone else. Finally, first-party reputation, unlike third-party reputation, affected whether subjects were willing to cooperate with their partner on their first move in the trust game: people were more likely to cooperate with a partner who had cooperated with them, irrespective of how that partner had behaved towards anyone else.
To sum up: despite group selection models predicting that subjects should weigh first- and third-party information equally, or at least jointly, they did not. Subjects only appeared to be interested in information about how their partner behaved towards others to the extent that such information might predict how their partner would behave towards them. Since information about how their partner had actually behaved towards them is a superior cue, subjects used that first-party information, when available, to the exclusion of third-party reputation.
Now, one could argue that we shouldn’t expect subjects to make use of information about how their partners behaved towards other parties, because there is no guarantee that those other parties were members of the subject’s group. After all, according to group selection theories, altruism should only be directed at members of one’s own group specifically, so maybe these results do no damage to the group selectionist camp. I would be sympathetic to that argument, but there are two big problems to be dealt with before I extend that sympathy. First, it would require that group selectionists give up all the previously ambiguous evidence they have claimed as consistent with their theory, since almost none of that research explicitly deals with a subject’s in-group either; they don’t get to recognize evidence only where it’s convenient for their theory and ignore it where it’s not. The second issue is the one I raised back in May: “the group” is a concept that tends to lack distinct boundaries, and without nailing it down more concretely, it would be difficult to build any kind of stable theory around it. Once that concept had been developed more completely, it would then need to be shown that subjects act altruistically towards their group (and not others) irrespective of the personal payoff for doing so; demonstrating that people act altruistically in the hope of being benefited down the road is not enough.
Will this study be the final word on group selection? Sadly, probably not. On the bright side, it’s at least a step in the right direction.
References: Krasnow, M.M., Cosmides, L., Pedersen, E.J., & Tooby, J. (2012). What are punishment and reputation for? PLoS ONE, 7(9), e45662.