Group Selectionists Make Basic Errors (Again)

In my last post, I wrote about a basic error most people seem to make when thinking about evolutionary psychology: they confuse the ultimate adaptive function of a psychological module with the proximate functioning of said module. Put briefly, the outputs of an adapted module will not always be adaptive. Organisms are not designed to respond perfectly to each and every context they find themselves in. This is especially the case in novel environmental contexts. These are things that most everyone should agree on, at least in the abstract. Behind those various nods of agreement, however, we find that applying this principle and recognizing maladaptive or nonfunctional outputs is often difficult in practice, for laymen and professionals alike. Some of these professionals, like Gintis et al. (2003), even see fit to publish their basic errors.

Thankfully for the authors, the paper was peer reviewed by people who didn’t know what they were talking about either

There are two main points to discuss about this paper. The first is to consider why the authors feel current theories are unable to account for certain behaviors, and the second is to consider the strength of the alternative explanations put forth. I don’t think I’m spoiling anything by saying the authors profoundly err on both counts.

On the first point, the behavior in question – as it was in the initial post – is altruism. Gintis et al. (2003) discuss the results of various economic games showing that people sometimes act nicely (or punitively) when niceness (or punishment) doesn’t end up ultimately benefiting them. From these maladaptive (or what economists might call “irrational”) outcomes, the authors conclude that cognitive adaptations designed for reciprocal altruism or kin selection can’t account for the results. So right out of the gate they’re making the very error the undergraduates were making. Such findings would certainly be a problem for any theory holding that humans will always be nice when it pays more, will never be nice when it pays less, and are always able to correctly calculate which situation is which, but neither theory presumes any of those things. Unfortunately for Gintis et al., their paper does make some extremely problematic assumptions, but I’ll return to that point later.

The entirety of the argument that Gintis et al. (2003) put forth rests on the maladaptive outcomes obtained in these games cutting against the adaptive hypothesis. As I covered previously, this is bad reasoning; brakes on cars sometimes fail to stop the car because of contextual variables – like ice – but that doesn’t mean that brakes aren’t designed to stop cars. One big issue with the maladaptive outcomes Gintis et al. (2003) consider is that they are largely due to novel environmental contexts. Now, unlike the undergraduates whose tests I just graded, Gintis et al. (2003) have the distinct benefit of being handed the answer by their critics, which is laid out, in the text, as follows:

Since the anonymous, nonrepeated interactions characteristic of experimental games were not a significant part of our evolutionary history, we could not expect subjects in experimental games to behave in a fitness-maximizing manner. Rather, we would expect subjects to confuse the experimental environment in more evolutionarily familiar terms as a nonanonymous, repeated interaction, and to maximize fitness with respect to this reinterpreted environment.

My only critique of that section is the “fitness maximizing” terminology. We’re adaptation executioners, not fitness maximizers. The extent to which adaptations maximize fitness in the current environment is an entirely separate question from how we’re designed to process information. That said, the authors reply to the critique thusly:

But we do not believe that this critique is correct. In fact, humans are well capable of distinguishing individuals with whom they are likely to have many future interactions, from others, with whom future interactions are less likely

Like the last post, I’m going to rephrase the response in terms of arousal to pornography instead of altruism to make the failings of that argument clearer: “In fact, humans are well capable of distinguishing [real] individuals with whom they are likely to have [sex with], from [pornography], with [which] future [intercourse is] less likely.”

I suppose I should add a caveat about the probability of conception from intercourse…

Humans are well capable of distinguishing porn from reality. “A person” “knows” the difference between the two, so arousal to pornography should make as little sense as sexual arousal to any other inanimate object, like a chair or a wall. Yet people are routinely aroused by pornography. Are we to conclude from this, as Gintis et al. might, that sexual arousal to pornography is therefore itself functional? The proposition seems doubtful. Likewise, when people take birth control, if “they” “know” that they can’t get pregnant, why do they persist in having sex?

A better explanation is that “a person” is really not a solitary unit at all, but a conglomeration of different modules, and not every module is going to “know” the same thing. A module generating arousal to visual depictions of intercourse might not “know” the visual depiction is just a simulation, as it was never designed to tell the difference; for most of our evolutionary history, there never was a difference. The same goes for sex and birth control. While the module that happens to be talking to other people can clearly articulate that it “knows” the sex on the screen isn’t real, or that it “knows” it can’t increase its fitness by having sex while birth control is involved, other modules, could they speak, would give a very different answer. It seems Gintis et al. (2003) fail to properly understand, or at least account for, modularity.

Maybe people can reliably tell the difference between those with whom they’ll have future contact and those with whom they likely won’t. Of course, there is always a risk that such a module will miscalculate, given the uncertainty of the future, but that is at least a task a module could plausibly have been designed to do. What modules were unlikely to be designed to do, however, is interact with people anonymously, much less interact anonymously under the specific set of rules put forth in these experimental conditions. Gintis et al. (2003) completely avoid this point in their response. These experiments place the mind in novel environmental contexts, and the authors are somehow surprised when it doesn’t function perfectly in them. Not only do they fail to make proper use of modularity, they fail to account for novel environments as well.

So the problem that Gintis et al. see is not actually a problem. People don’t universally behave as Gintis et al. (2003) think other models predict they should; of course, the other models don’t make those predictions. But there’s an even larger issue looming: the solution to this non-problem that Gintis et al. favor introduces a genuine one. This is the big issue I alluded to earlier: the “strong reciprocity” trait that Gintis et al. (2003) put forth does make some very problematic assumptions. A little juxtaposition will make one of them stand out, the kind of thing a good peer reviewer should have noted:

One such trait, which we call strong reciprocity (Gintis, 2000b; Henrich et al., 2001), is a predisposition to cooperate with others and to punish those who violate the norms of cooperation, at personal cost, even when it is implausible to expect that these costs will be repaid either by others or at a later date…This is not because there are a few “bad apples” among the set of employees, but because only 26% of employees delivered the level of effort they promised! We conclude that strong reciprocators are inclined to compromise their morality to some extent, just as we might expect from daily experience. [emphasis mine]

So the trait being posited by the authors allows for cooperation even when cooperating doesn’t pay off. Leaving aside whether such a trait is plausibly something that could have evolved, indifference to cost is supposed to be part of the design. It is thus rather strange that the authors themselves note that people tend to modify their behavior in ways that are sensitive to those costs. Indeed, only 1 in 4 of the people in the experiment they mention could even potentially fit the definition of a strong reciprocator, even (and only) if the byproducts of reciprocal altruism modules counted for absolutely nothing.

25% of the time, it works 100% of the time

It’s worth noticing the trick that Gintis et al. (2003) are trying to use here as well: they’re counting the hits and not the misses. Even though only a quarter of the people could even potentially (and I do stress potentially) be considered strong reciprocators who are indifferent to the costs and benefits, they go ahead and label the employees strong reciprocators anyway (just strong reciprocators that do things strong reciprocators aren’t supposed to do, like be sensitive to costs and benefits). Of course, they could more parsimoniously be labeled reciprocal altruists who happen to be behaving maladaptively in a novel circumstance, but that’s apparently beyond consideration.

References: Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24(3), 153-172. DOI: 10.1016/S1090-5138(02)00157-5

The Difference Between Adaptive And Adapted

This is going to be something of a back-to-basics post, but a necessary one. Necessary, that is, if the comments I’ve been seeing lately are indicative of the thought processes of the population at large. It would seem that many people make a fundamental error when thinking about evolutionary explanations for behavior. The error involves thinking about the ultimate function of an adaptation, or the selection pressures responsible for its existence, rather than the adaptation’s input conditions, when considering whether said adaptation is responsible for generating some proximate behavior. (If that sounded confusing, don’t worry; it’ll get cleared up in a moment.) While I have seen the error made frequently among various lay people, it appears to be common even among those with some exposure to evolutionary psychology; out of the ninety undergraduate exams I just finished grading, only five students got the correct answer to a question dealing with the subject. That is somewhat concerning.

I hope that hook up was worth the three points it cost you on the test because you weren’t paying attention.

Here’s the question the students were posed:

People traveling through towns that they will never visit again nonetheless give tips to waiters and taxi drivers. Some have claimed that the theory of reciprocal altruism seems unable to explain this phenomenon because people will never be able to recoup the cost of the tip in a subsequent transaction with the waiter or the driver. Briefly explain the theory of reciprocal altruism, and indicate whether you think that this theory can or cannot explain this behavior. If you say it can, say why. If you say it cannot, provide a different explanation for this behavior.

The answers I received suggested that the students really did understand the function of reciprocal altruism: they were able to explain the theory itself, as well as some of the adaptive problems that needed to be solved in order for the behavior to be selected for, such as the ability to remember individuals and detect cheaters. So far, so good. However, almost all the students then indicated that the theory could not explain tipping behavior, since there was no chance that the tip could ever be reciprocated in the future. In other words, tipping in that context was not adaptive, so adaptations designed for reciprocal altruism could not be responsible for the behavior. The logic here is, of course, incorrect.

To understand why that answer is incorrect, let’s rephrase the question, but this time, instead of tipping strangers, let’s consider two people having sex:

People who do not want to have children still wish to have sex, so they engage in intercourse while using contraceptives. Some have claimed that the theory of sexual reproduction seems unable to explain this phenomenon because people will never be able to reproduce by having sex under those conditions. Briefly explain the theory of sexual reproduction, and indicate whether you think that this theory can or cannot explain this behavior. If you say it can, say why. If you say it cannot, provide a different explanation for this behavior.

No doubt, there are still many people who would get this question wrong as well; they might even suggest that the ultimate function of sex is just to “feel pleasure”, not reproduction, because feeling pleasure – in and of itself – is somehow adaptive (see Conley, 2011, for a demonstration that this error also extends to the published literature). Hopefully, however, at least one error should now appear a little clearer to most people: contraceptives are an environmental novelty, and our psychology did not evolve to deal with a world in which they exist. Without contraceptives, the desire to have children is irrelevant to whether or not some sexual act will result in pregnancy.

That desire is also irrelevant if you’re in the placebo group

Contraceptives are a lot like taxi drivers, in that both are environmental novelties. Encountering strangers that you were not liable to interact with again was probably the exception, rather than the rule, for most of human evolution. That said, even if contraceptives were taken out of the picture and our environment was as “natural” as possible, our psychology would still not be perfectly designed for each and every context we find ourselves in. Another example about sex easily demonstrates this point: a man and a woman only need to have sex once, in principle, to achieve conception. Additional copulations before or beyond that point are, essentially, wasted energy that could have been spent doing other things. I would wager, however, that for each successful pregnancy, most couples probably have sex dozens or hundreds of times. Whether because the woman is not and will not be ovulating, because one partner is infertile, or because the woman is currently pregnant or breastfeeding, there are plenty of reasons why intercourse does not always lead to conception. In fact, intercourse itself would probably not be adaptive in the vast majority of occurrences, despite it being the sole path to human reproduction (before the advent of IVF, of course).

Turning the focus back to reciprocal altruism, throughout their lives, people behave altruistically towards a great many people. In some cases, that altruism will be returned in such a way that the benefits received will outweigh the initial costs of the altruistic act; in other cases, that altruism will not be returned. What’s important to bear in mind is that the output of some module adapted for reciprocal altruism will not always be adaptive. The same holds for the output of any psychological module, since organisms aren’t fitness maximizers – they’re adaptation executioners. Adaptations that tended to increase reproductive success in the aggregate were selected for, even if they weren’t always successful. These sound like basic points (because they are), but they’re also points that tend to frequently trip people up, even if those people are at least somewhat familiar with all the basic concepts themselves. I can’t help but wonder if that mistake is made somewhat selectively, contingent on topic, but that’s a project for another day.

References: Conley, T. (2011). Perceived proposer personality characteristics and gender differences in acceptance of casual sex offers. Journal of Personality and Social Psychology, 100(2), 309-329. DOI: 10.1037/a0022152

Making Your Business My Business

“The government has no right to do what it’s doing, unless it’s doing what I want it to do” – Pretty much everyone everywhere.

As most people know by now, North Carolina recently voted on and approved an amendment to the state’s constitution that legally barred gay marriage. Many supporters of extending marriage rights to the homosexual community understandably found this news upsetting, which led to the predictable flood of opinions about how it’s none of the government’s business who wants to marry whom. I found the whole matter to be interesting on two major fronts: first, why would people support or oppose gay marriage in general, and, secondly, why on earth would people try to justify their stance using a line of reasoning that is (almost definitely) inconsistent with other views they hold?

Especially when they aren’t even running for political office.

Let’s deal with these issues in reverse order, starting with the matter of inconsistency. We all (or at least almost all) want sexual behavior legislated, and feel the government has the right to do that, despite many recent protests to the contrary. As this helpful map shows, there are, apparently, more states that allow first cousin marriage than gay marriage (assuming the information there is accurate). That map has been posted several times, presumably in support of gay marriage. The underlying message of the map, however, would seem to be that, since some people find first cousin marriage gross, it should be shocking that it’s more legal than gay marriage. What I don’t think the map was suggesting is that first cousin marriage ought to be even more widely legal because the government has no right to legislate sexuality. As Haidt’s research on moral dumbfounding shows, many people are convinced that incest is wrong even when they can’t find a compelling reason why, and many people likewise feel it should be made illegal.

On top of incest, there’s also the matter of age. Most people will agree that children below a certain age should not be having sex, and, typically, that agreement is followed with some justification about how children aren’t mature enough to understand the consequences of their actions. What’s odd about that justification is that people don’t go on to say that people should be allowed to have sex at any age, just so long as they can demonstrate that they understand the consequences of their actions through some test. Conversely, they also don’t say that people above the age of consent should be forbidden from having sex until they can pass such a test. There are two points to make about this. The first is that no such maturity test exists in the first place, so when people make judgments about maturity they’re just assuming that some people aren’t mature enough to make those kinds of decisions; in other words, children shouldn’t be allowed to consent to sex because people don’t think children should be allowed to consent to sex. The second point is, more importantly, that even if such a test existed, suggesting that people shouldn’t be allowed to have sex without passing it would still be legislating sexuality. It would still be the government saying who can and can’t have sex and under what circumstances.

Those are just two cases, and there are many more. It turns out people are pretty keen on legislating the sexual behavior of others after all. (We could have an argument about those not being cases of sexuality per se, but rather of harm, but it turns out people are pretty inconsistent about defining and legislating harm as well.) The point here, to clarify, is not that legalizing gay marriage would start us on a slippery slope to legalizing other, currently unacceptable, forms of sexuality; the point is that people try to justify their stances on matters of sexuality with inconsistently applied principles. Not only are these justifications inconsistent, but they may also have little or nothing to do with the actual reasons you or I come to whatever conclusions we do, despite what people may say. As it turns out, our powers of introspection aren’t all they’re cracked up to be.

Letting some light in might just help you introspect better; it is dark in there…

Nisbett and Wilson (1977) reviewed a number of examples concerning the doubtful validity of introspective accounts. One of these findings concerned a display of four identical nylon stockings. Subjects were asked which of the four pairs was the best quality and, after they had delivered their judgment, why they had picked the pair they did. The results showed that people, for whatever reason, tended to overwhelmingly prefer the garment on the right side of the display (they preferred it four times as often as the garment on the left side). When queried about their selection, unsurprisingly, zero of the 52 subjects made mention of the stockings’ position in the lineup. When subjects were asked directly whether the position of the pair of stockings had any effect on their judgment, again, almost all the subjects denied that it did.

While I will not re-catalog every example that Nisbett and Wilson (1977) present, the unmistakable conclusion arose that people have, essentially, little to no actual conscious insight into the cognitive processes underlying their thoughts and behavior. They often were unable to report that an experimental manipulation had any effect (when it did), or reported that irrelevant manipulations actually had (or would have had) some effect. In some cases, they were unable to even report that there was any effect at all, when there had in fact been one. As the authors put it:

… [O]thers have argued persuasively that “we can know more than we can tell,” by which it is meant that people can perform skilled activities without being able to describe what they are doing and can make fine discriminations without being able to articulate their basis. The research described above suggests that the converse is also true – that we sometimes tell more than we can know. More formally, people sometimes make assertions about mental events to which they may have no access and these assertions may bear little resemblance to the actual events.

This – coupled with the inconsistent use of principled justifications – casts serious doubts on the explicit reasons people often give for either supporting or opposing gay marriage. For instance, many people might support gay marriage because they think it would make gay people happier, on the whole. For the sake of argument, suppose that you discovered gay marriage actually made gay people unhappier, on the whole: would you then be in favor of keeping it illegal? Presumably, you would not be (if you were in favor of legalization to begin with, that is). While making people happy might seem like a plausible and justifiable reason for supporting something, it does not mean that it was the – or a – cause of your judgment.

Marriage: a known source of lasting happiness

If the typical justifications that people give for supporting or opposing gay marriage are not likely to reflect the actual cognitive processes that led to their decisions, what cognitive mechanisms might actually be underlying them? Perhaps the most obvious class of mechanisms are those that involve an individual’s mating strategy. Weeden et al. (2008) note that the decision to pursue a more short-term or more long-term mating strategy is a complicated matter, full of tradeoffs concerning local environmental, individual, and cultural factors. They put forth what they call the Reproductive Religiosity Model, which posits that a current function of religious participation is to help ensure the success of a certain type of mating strategy: a more monogamous, long-term, high-fertility mating style. Men pursuing this strategy tend to forgo extra-pair matings in exchange for an increase in paternity certainty, whereas women similarly tend to forgo extra-pair matings for better genes in exchange for increased levels of paternal investment.

As Chris Rock famously quipped, “A man is only as faithful as his options”, though the sentiment would apply equally well to women. It does the long-term mating strategy no good to have plenty of freely sexually available conspecifics hanging around. Thus, according to this model, participation in religious groups helps to curb the risks involved in this type of mating style. This is why certain religious communities might want to decrease the opportunities for promiscuity and increase the social costs of engaging in it. In order to decrease sexual availability, then, you might find religious groups opposing and seeking to punish people for engaging in divorce, birth control use, abortion, promiscuity, and, relevant to the current topic, sexual openness or novelty (like pornography, sexual experimentation, or homosexuality). In support of this model, Weeden et al. (2008) found that, controlling for non-reproductive variables, sexual variables were still predictive of religious attendance, but that, controlling for sexual variables, the non-reproductive variables were no longer predictive of religious attendance.

While the evidence is not definitively causal in nature, and there is likely more to this connection than a unidirectional arrow, it seems highly likely that the cognitive mechanisms responsible for determining one’s currently preferred mating strategy also play a role in determining one’s attitudes towards the acceptability of others’ behaviors. It is also highly likely that the reasons people tend to give for their attitudes will be inconsistent, given that they don’t often reflect the actual functioning of their minds. We all have an interest in making other people’s business our business, since other people’s behaviors tend to eventually have an effect on us – whether that effect is relatively distant or close in the causal chain, or whether it is relatively direct or indirect. We just tend to not consciously understand why.

References: Nisbett, R., & Wilson, T. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259. DOI: 10.1037//0033-295X.84.3.231

Weeden, J., Cohen, A., & Kenrick, D. (2008). Religious attendance as reproductive support. Evolution and Human Behavior, 29(5), 327-334. DOI: 10.1016/j.evolhumbehav.2008.03.004

A Sense Of Entitlement (Part 3)

Recently, I came across this clip online, and it was just too interesting not to share (which just goes to show not all my time spent mindlessly browsing the internet is a waste; the full talk can be found here). In this experiment (Brosnan & de Waal, 2003), five female capuchin monkeys are trained to do a simple task: exchange rock tokens with the experimenter for a food reward. The monkey gives the experimenter a token, and it gets a slice of cucumber. Typically, the monkey will then eat the snack without a fuss, which isn’t terribly surprising. However, if the monkey getting the cucumber now witnesses another monkey doing the same task, but getting rewarded with a much more desirable food item – in this case, a grape – not only does the initial monkey get visibly agitated when it receives a cucumber again, but it actually starts to refuse the cucumber slices. While the cucumber had been an acceptable reward for the task mere moments ago, it now appears to be insulting.

For the sake of comparison, when both monkeys were exchanging a token for a slice of cucumber, they would fail to return the token or reject the food only about 5% of the time. When the other monkey was getting a grape for the same exchange, the female getting the cucumber would now reject the food or not return the token roughly 45% of the time. Things got even worse if the monkey getting the grape was getting it for free, without having to even return a token; in that case, the rate of rejections and non-returns jumped to almost 80%. It should be noted that the monkey that got the grapes, however, would happily munch away on them, rather than reject them out of concern for their unfair, but beneficial, outcomes (van Wolkenten et al., 2007).

It would seem monkeys have a sense of what they ought to be getting from the exchange, and that sense is based, in part, on comparing their payoff to what others are getting for similar tasks. A monkey might get the sense that it deserves more, that the payoff isn’t fair, if it isn’t getting as much as another. Further, when a monkey’s sense of what it ought to be getting is violated, it seems to behave as if it were being harmed. Of course, giving a monkey a cucumber isn’t harm; it’s a previously acceptable benefit that seems to get re-conceptualized as harm, since the benefit isn’t as large as someone else’s.

Pictured above: people not being harmed.

Humans have shown a similar pattern of results in ultimatum games: when confronted with an offer that they don’t find fair enough, people will often reject it, ensuring that they (and their partner) leave the lab with nothing. Of course, if you just gave these subjects that same amount of money outside the context of the ultimatum game, rejecting it outright would be rare behavior. This is no doubt because free money isn’t normally conceptualized as harmful. As I wrote about in part 2 of this series, dictator and ultimatum games seem to evoke different responses to the same offer, as judged by the messages receivers write to selfish proposers: when confronted with an ultimatum game, receivers write many more negative messages to selfish proposers than they do in dictator games. While those contexts might not be exactly analogous to the monkeys’, we could imagine situations that are similar, such as finding out that you’re being paid less at work for doing the same job as a co-worker. Even if you were previously happy with your salary, your satisfaction with it may now drop somewhat, and public complaints would begin. However, if you found out you were actually making more than other co-workers, it would probably be rare for you to march into your boss’s office and demand to be paid less to make it fair. I imagine it would be equally rare for you to make that new-found knowledge as public as possible.

So how can we explain these results? Simply stating that some monkeys and people have a “concern for fairness” is clearly not enough. Not only would it not really be an explanation, but it would miss a key finding: concerns for fairness only seem to appear in the context of being disadvantaged. The monkeys receiving a grape did not throw it back at the experimenter because their other monkey partner was not receiving one as well. To drive this point home, consider some research done by Pillutla & Murnighan (1995). In one experiment, subjects were playing an ultimatum game. Participants in this experiment were dividing sums of money that ranged from a low of ten dollars to a high of seventy. However, the receiver of these offers either had complete information (knew how much money was being divided) or incomplete information (did not know the size of the pot being divided). This gave the subjects making the offer the potential for a strategic advantage. Did they use it? Well, out of 33 offerers, 31 used that information asymmetry to their advantage, so, “yes”. Much like the monkeys, people only seemed to have a concern for fairness when it benefited them.

van Wolkenten et al. (2007) propose the following explanation: these ostensible fairness concerns can be better conceptualized as a solution to cooperative effort problems – contexts in which organisms cooperate with each other in the service of achieving some goal without knowing ahead of time how the payoffs will be divided. One model for this might be a variant of the Stag Hunt. In this example, there are two hunters who have to decide between hunting rabbits or a stag. While each hunter can successfully hunt rabbits alone, the same cannot be said of the stag, and rabbits offer a smaller overall payoff (say, 1). If the hunters work together, they can bring down a stag, which offers a larger payoff (say, 6). However, in the context of cooperative effort problems, the two hunters won’t know how the stag will be divided until after the kill. If one hunter monopolizes most of the payoff (taking 4), the hunter who got the smaller share can exert pressure on their partner by refusing to cooperate again.

Just like how I threaten to take my game and go home unless my friend takes his hotels off the greens.

Now, while the hunter who got the smaller share in the example might still be better off cooperating and hunting stag rather than rabbit, by refusing to hunt stag until he gets a more equitable division he can impose an even greater loss on his partner. The hunter who got the smaller share will only lose out on 1 unit of payoff by not cooperating, while his partner would lose out on 3 because of that refusal. This asymmetry in relative payoffs and subsequent losses can be leveraged in social bargaining in order to net a better payoff in the long term by suffering a short-term cost. As I have written about previously, depression might have a similar function.
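To make that arithmetic concrete, here is a minimal sketch in Python of the bargaining logic just described. The payoff numbers are the ones from the example above; the function name and the framing in terms of "losses from refusal" are my own illustrative choices, not anything taken from van Wolkenten et al. (2007):

```python
# Toy illustration of the cooperative-effort bargaining logic described above.
# Payoffs follow the text: rabbits pay 1 per hunter, a jointly hunted stag pays
# 6 in total, and the division of the stag is decided only after the kill.

RABBIT_PAYOFF = 1   # what either hunter can get alone
STAG_TOTAL = 6      # total payoff from a cooperative stag hunt

def losses_from_refusal(my_share: int) -> tuple[int, int]:
    """If I received `my_share` of the stag and refuse to cooperate next time,
    return (my loss, partner's loss) relative to another stag hunt."""
    partner_share = STAG_TOTAL - my_share
    my_loss = my_share - RABBIT_PAYOFF        # I fall back to hunting rabbits
    partner_loss = partner_share - RABBIT_PAYOFF
    return my_loss, partner_loss

# The 4/2 split from the text: refusing costs me 1 unit, but it costs my
# greedy partner 3 units, and that asymmetry is what gives my refusal leverage.
mine, partners = losses_from_refusal(my_share=2)
print(f"My loss from striking: {mine}, partner's loss: {partners}")
```

The point of the sketch is simply that the shortchanged hunter buys a lot of bargaining power (a 3-unit cost imposed on the partner) at a relatively small price to himself (1 unit).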

That would imply that while it’s in the interests of an individual to maximize their own payoff, that same individual has an interest in maintaining the cooperation of others in the service of that goal. This requires balancing two competing sets of demands: appearing fair, so as to maintain cooperation, while simultaneously being as inequitable as possible. The importance of the “as possible” part of that last sentence really cannot be overstated. The greater the inequity in resource allocation, the greater the chance you might lose support in future ventures by triggering the fairness concerns of others. The costs of being too selfish might not end at ostracism either; selfishness might also invite violent retribution by others who feel cheated. Like all matters relating to selection, there are tradeoffs to be made; risks and rewards to be balanced. In some cases, a more proximately cooperative strategy can end up being more ultimately selfish.

Thankfully (or not so thankfully, depending on your place in a given situation), this feeling of what one ought to get from the payoff – what one deserves – normally depends on many fuzzy variables. For instance, not all cooperators necessarily put in the same amount of effort towards achieving a joint goal. If one cooperator takes on greater risks, or another attempts to free-ride by not investing as much energy as they could have, intuitions about how much an individual deserves tend to change accordingly. This leaves open multiple avenues for attempts to convince others that you deserve more, or others deserve less, as well as counter-adaptations to defend against such claims. It’s a delicate balancing act that we all engage in, playing multiple roles over time and circumstance, and one that’s guaranteed to generate more than a fair share of hypocrisy.

References: Brosnan, S., & de Waal, F. (2003). Monkeys reject unequal pay. Nature, 425(6955), 297-299. DOI: 10.1038/nature01963

Pillutla, M., & Murnighan, J. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38(5), 1408-1426. DOI: 10.2307/256863

van Wolkenten, M., Brosnan, S.F., & de Waal, F.B. (2007). Inequity responses of monkeys modified by effort. Proceedings of the National Academy of Sciences of the United States of America, 104(47), 18854-18859. PMID: 18000045

I Know I Am, But What Are You? Competitive Use Of Victimhood

It’s no secret; I’m a paragon of mankind. Beyond simply being a wildly-talented genius, I’m also in such peak physical form that it’s common for people to mistake me for a walking Statue of David with longer hair. As the now-famous Old Spice commercial says, “Sadly, you are not me”, but wouldn’t it be nice for you if you could convince other people that you were? There’s no need to answer that; of course it would be, but the chances of you successfully pulling such a feat off are slim to none.

The more general point here is that, in the social world, you can benefit yourself by strategically manipulating what and how other people think about you and those around you. Further, this manipulation is going to be easier to pull off the less objectively observable the object of that manipulation is. For instance, if I could convince you that my future prospects are good – that I would be a powerful social ally – you might be more inclined to invest in maintaining a relationship with me and giving me assistance in the hopes that I will repay you in kind at some later date. However, I would have a harder time trying to convince you I have blue eyes when you can easily verify that they are, in fact, brown.

He’s going to have a hell of a time convincing his boyfriend he’s gay now.

As I’ve written about before, one of those fuzzy concepts open to manipulation is victimhood. Given that legitimate victimhood status can be a powerful resource in the social world, and victimhood requires there be one or more perpetrators, it should come as no surprise that people often find themselves in disagreement about almost every facet of it: from harm, to intent, to blame, and far beyond. Different social contexts – such as morally condemning others vs. being morally condemned yourself – pose people with different adaptive problems to solve, and we should expect that people will process information in different ways, contingent on those contexts. A recent paper by Sullivan et al. (2012) examined the matter over the course of five studies, asking about people’s intuitions concerning the extent of their own victimhood in three contexts: one in which there was no harm being done, one in which a group they belong to was accused of doing harm, and one in which another group was accused of doing harm.

In the first study, 49 male undergrads were presented with a news story (though it was actually a fake news story because psychologists are tricksters) that had one of three conclusions: (a) men and women had equal opportunities in modern society, (b) women were discriminated against in modern society, but it was due to their own choices and biology, or (c) women were discriminated against, and this discrimination was intentionally perpetrated by men. Following this, the men indicated on a 7-point scale whether they thought men or women suffered more relative discrimination in modern society (where 1 indicated men suffered less, 4 indicated they suffered equally, and 7 indicated men suffered more). When confronted with the story where women were not discriminated against, men averaged a 1.69 on the scale; a similar set of results was found when women were depicted as suffering from self-inflicted discrimination, averaging a 1.87. However, when men were depicted as being the perpetrators of this discrimination, the ratings of perceived male oppression rose to 2.61. When men, as a group, were accused of causing harm, they reacted by suggesting they were themselves a victim of more discrimination, as if to suggest that the discrimination women faced wasn’t so bad.

What’s curious about those results is that men didn’t rate the discrimination they faced, relative to women, as more equal when the news article suggested equality in that domain. Rather, they only adjusted their ratings up when their group was painted as the perpetrators of discrimination. The information they were being given didn’t seem to faze them much until it got personal, which is pretty neat.

A similar pattern of findings arose for women in a following experiment. One hundred forty-two women read a fictional news story about how men were discriminated against when it came to hiring practices, and this discrimination came either from other men or from women. Following that, the women filled out the same 7-point scale as before. When men were depicted as responsible for the discrimination against other men, women averaged a 5.16 on the scale, but when women were depicted as being the cause of that discrimination, that number rose to 5.42. While this rise in ratings of victimhood was smaller than the rise seen with the men, it was still statistically significant. The difference in the scale of these results might be due to the subject samples (the males were undergrads, whereas the women were recruited on MTurk), or perhaps to the nature of the stories themselves, which were notably different across experiments.

Sexist male behavior is the cause of all the problems for women across society. On the other hand, 65% of women might not hire a man. Seems even-handed to me.

Two of the five experiments also examined whether one group discriminating against another in general was enough to trigger competitive victimhood, or whether one’s own group had to be the perpetrator of the discrimination to cause the behavior. Since they had similar results, I’ll focus on the one regarding race. In this experiment, 51 White students read a story about how Black students tended to be discriminated against when it came to university admissions, and this discrimination was perpetrated either predominantly by other White people or by Asian people. Following this, they filled out that same 7-point scale. When Blacks were being discriminated against by Asians, the White participants averaged a 2.0 on the scale, but when it was Whites discriminating against Black students, this average rose to 2.78. What these results demonstrate is that it’s not enough for some group to be claiming victimhood status; in order to trigger competitive victimhood, your group needs to be named as the perpetrator.

These results fit neatly with previous research demonstrating that when it comes to assigning blame, people are less likely to assign blame to a victim, relative to a non-victim or hero. When people are being blamed for causing some harm, they tend to see themselves as greater victims, likely in order to better dissuade others from engaging in punishment. However, when people are not being blamed, there is no need to deflect punishment, and, accordingly, the bias to see oneself as a victim diminishes.

There is one part of the paper that bothered me in a big way: the authors’ suggestions about which groups objectively face more victimization. As far as I can tell, there is no good way to measure victimhood objectively, and, as the results of this experiment show, subjective claims and assessments of victimhood are themselves likely to be modified by outside factors. For example, consider two cases: (a) a woman suggests that her boyfriend is physically abusing her, vs. (b) a man suggests that his girlfriend is physically abusing him. Strictly in terms of which claim is more likely to be believed – regardless of whether it’s true or not – I would put the man’s claim at a disadvantage. Further, if it is believed, there are likely different costs and benefits for men and women surrounding such a claim. Perhaps women would be more likely to receive support, where a man might just be painted as a wimp and lose status among both his male and female peers.

Whether that pattern itself actually holds is beside the point. The larger issue here is that this strategy of claiming victimhood may not work equally well for all people, and it’s important to consider that when assessing people’s judgments of their victimhood. The third parties assessing these claims are not merely passive pawns waiting to be manipulated by others; they have their own adaptive problems to solve when it comes to assessment. To the extent that these problems entailed reproductive costs and benefits, selection would have fashioned psychological mechanisms to deal with them. A man might have more of a vested interest in concerning himself with an attractive woman’s claim to victimhood than with a sexually unappealing man’s, as preferentially helping one of the two might tend to be more reproductively useful.

How often do you come across stories of knights rescuing strange “dudes in distress”, relative to strange damsels?

It should be noted that claiming victimhood is not the only way of deflecting punishment; shifting the blame back towards the victim would likely work as well. The results indicated that competitive victimhood was not triggered in those contexts, presumably because there was no need for it. That’s not to say that the two could not work together – i.e., you’re the cause of your own misfortune as well as the cause of mine – but rather to note that different strategies are available, and will likely be utilized differently by different groups, contingent on their relative costs and benefits. Further work is going to want to not only figure out what those other tactics are, but also assess their effectiveness, as rated by third parties.

I’d like to conclude by talking briefly about the quality of the “theory” put forth by the authors in this paper to explain their results: social identity theory. Here is how they define it in the introduction:

Individuals are motivated to maintain a positive moral evaluation of their social group…we argue that when confronted with accusations of in-group harm doing…individuals will defensively attempt to bolster the in-group’s moral status in order to diffuse the threat.

As Steven Pinker has noted, explanations like these are most certainly not theories; they are simply restatements of findings that need a theory to explain them. Unfortunately, non-evolutionary minded researchers will often resort to this kind of circularity as they lack any way of escaping it. To suggest that people have all these cognitive biases to just “feel good” about themselves or their group is nonsense (Kurzban, 2010). Feeling good, on its own, is not something that could possibly have been selected for in the first place, but even if it could have been, it would be curious why people wouldn’t simply just feel good about their social group, rather than going through cognitive gymnastics to try and justify it. I find the evolutionary framework to provide a much more satisfying answer to the question, as well as illuminating future directions for research. As far as I can tell, the “feel good” theory does not.

References: Kurzban, R. (2010). Why everyone (else) is a hypocrite. Princeton, NJ: Princeton University Press.

Sullivan, D., Landau, M.J., Branscombe, N.R., & Rothschild, Z.K. (2012). Competitive victimhood as a response to accusations of ingroup harm doing. Journal of Personality and Social Psychology, 102, 778-795.

Depressed To Impress

Reflecting on this morning’s usual breakfast cake brought to mind a thought that only people like myself think: the sugar in my breakfast is not sweet. Sweetness, while not a property of the sugar, is an experience generated by our mind when sugar is present on the tongue. We generally find the presence of sugar to be a pleasant experience, and oftentimes find ourselves seeking out similar experiences in the future. The likely function of this experience is to motivate people to preferentially seek out and consume certain types of foods, typically the high-calorie variety. As dense packages of calories can be very beneficial to an organism’s survival, especially when they’re rare, the tendency to experience a pleasant sweetness in the presence of sugar was selected for; individuals who were indifferent between eating sand and honey were no one’s ancestors.

On a related note, there’s nothing intrinsically painful about damage done to your body. Pain, like sweetness, is an important signal: pain signals that your body is being damaged, in turn motivating people to stop doing, or get away from, whatever is causing the harm and to avoid making current injuries worse. Pain feels so unpleasant because, if it didn’t, the proper motivation would not be provided. However, in order to feel pain, an organism must have evolved that ability; it’s not present as a default, as evidenced by the rare people born without the ability to feel pain. As one could imagine, those who were indifferent to the idea of having their leg broken rarely ended up reproducing as well as those who found the experience excruciating.

Walk it off.

Sensations like pain or sweetness can be explained neatly and satisfyingly through these functional accounts. With these accounts we can understand why things that feel pleasant – like gorging myself on breakfast cake – are not always a good thing (when calories are abundant), whereas unpleasant feelings – like the pain of sticking your arm in a wood-chipper – can be vital to our survival. Conversely, lacking these functional accounts can lead to poor outcomes. For instance, treating a fever as a symptom of an infection to be reduced, rather than as the body’s adaptive response to help fight the infection, can actually lead to a prolonging and worsening of said infection (Nesse & Williams, 1994). Before trying to treat something as a problem and make it go away just because it feels unpleasant, or declining to treat a problem because it might be enjoyable, it’s important to know what function those feelings might serve and what the costs and benefits of reducing or indulging them might be. This brings us to the omnipresent subject in psychology of unpleasant feelings that people want to make go away: depression.

Depression, I’m told, is rather unpleasant to deal with. Most commonly triggered by a major, negative life event, depression leads to a loss of interest and engagement in almost all activities, low energy levels, and, occasionally, even suicide. Despite these apparent costs, depression continues to be a fairly prevalent complaint the world over, and is far more common among women than men. Given its predictability and prevalence, might there be a function behind this behavior? Any functional account of depression would need to encompass these known facts, as well as propose subsequent gains that would tend to outweigh these negative consequences. As reviewed by Hagen (2003), previous models of depression suggested that sadness served as a type of psychic pain: when one is unsuccessful in navigating the social world in some way, it is better to disengage from a failing strategy than to continue to pursue it, as one would be wasting time and energy that could be spent elsewhere. However, such a hypothesis fails to account for major depression, positing instead that major depression is simply a maladaptive byproduct of an otherwise useful system. Certainly, activities like eating shouldn’t be forgone because an unrelated social strategy has failed, nor should one engage in otherwise harmful behaviors (potentially suicidal ones) for similar reasons; it’s unclear from the psychic pain models why these particular maladaptive byproducts would arise and persist in the first place. For example, touching a hot pan causes one to rapidly withdraw their hand, but it does not cause people to stop cooking food altogether for weeks on end.

Hagen (2003) puts forth the idea that depression functions primarily as a social bargaining mechanism. Given this function, Hagen suggests the following contexts should tend to provoke depressive episodes: a person should experience a perceived negative life event, the remedy to this event should be difficult or impossible to achieve on their own, and there must be conflict over other people’s willingness to provide assistance in achieving a remedy. Conflict is ubiquitous in the social realm of life; that much is uncontested. When confronted with a major negative life event, such as the death of a spouse or the birth of an unwanted child, social support from others can be at its most important. Unfortunately for those in need, other people are not always the most selfless when it comes to providing for those needs, so the needy require methods of eliciting that support. While violence is one way to make others do what you’d like, it is not always the most reliable or safest method, especially if the source you’re attempting to persuade is stronger than you or outnumbers you. Another route to compelling a more powerful other to invest in you is to increase the costs of not investing, and this can be done by simply withholding benefits that you can provide others until things change. Essentially, depression serves as a type of social strike, the goal of which is to credibly signal that one is not sufficiently benefiting from their current state, and is willing to stop providing benefits to others until the terms of their social contract have been renegotiated.

“What do we want? A more satisfying life. When do we want it? I’ll just be in bed until that time…whatever”

Counter-intuitive as it may sound, despite depression feeling harmful to the person suffering from it, the function of depression would be to inflict costs on others who have an interest in you being productive and helpful. By inflicting costs on yourself (or, rather, failing to provide benefits to others), you are thereby motivating others to help you so they can, in turn, help themselves by regaining access to whatever benefits you can provide. Then again, perhaps this isn’t as counter-intuitive as it may sound, taking the case of suicide as an example. Suicide definitely represents a cost to the other people in one’s life, from family members, to spouses, to children, to friends and trade partners. It’s much more profitable to have a live friend or spouse capable of providing benefits to you than a dead one. Prior to any attempt being made, suicidal people tend to warn others of their intentions and, if an attempt is made, it is frequently carried out in a way that is unreliably lethal. Further still, many people, whether family or clinicians, view suicidal thoughts and attempts as cries for help, rather than as a desire to die per se, suggesting people have some underlying intuitions about the ultimate intentions of such acts. That a suicide is occasionally completed likely represents a maladaptive outcome of an evolutionary arms race between the credibility of the signal and the skepticism with which others view the signal. Is the talk about suicide just that – cheap talk – or is it actually a serious threat?

There are two social issues that depression needs to deal with that can also be accounted for in this model. The first issue concerns how depressed individuals avoid being punished by others. If an individual is taking benefits from other group members, but not reciprocating those benefits (whether due to depression or selfishness), they are likely to activate the cheater-detection module of the mind. As we all know, people don’t take kindly to cheaters and do not tend to offer them more support to help out. Rather, cheaters tend to be punished, having further costs inflicted upon them. If the goal of depression is to gain social support, punishment is the last thing that would help achieve that goal. In order to avoid coming off as a cheater, a depressed individual may need to forgo accepting many benefits that others provide, which would help explain why depressed individuals often give up activities like eating or even getting out of bed. A more-or-less complete shutdown of behavior might be required in order to avoid coming off as a manipulative cheater.

The second issue concerns the benefits that a depressed individual can provide. Let’s use the example of a worker going on strike: if this worker is particularly productive, having him not show up to work will be a genuine cost to the employer. However, if this worker is either poorly skilled – thus able to deliver little, if any, benefit to the employer – or easily replaceable, not showing up to work won’t cause the employer any loss of sleep or money. Accordingly, in order for depression to be effective, the depressed individual needs to be socially valuable; the more valuable they are seen as being, and the more of a monopoly they hold over the benefits they provide, the more effective depression can be in achieving its goal. What this suggests is that depression would work better in certain contexts, perhaps when population sizes are smaller and groups more mutually dependent on one another – which would have been the social contexts under which depression evolved. What this might also suggest is that depression may become more prevalent and last longer the more socially replaceable people become due to certain novel features of our social environment; there’s little need to give a striking worker a raise if there are ten other capable people already lined up for his position.
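That replaceability logic can be summarized in a toy calculation. The sketch below is purely illustrative – the leverage formula, its parameters, and the example numbers are my own assumptions for exposition, not anything drawn from Hagen (2003):

```python
# Toy sketch of the "social strike" logic described above: the cost you can
# impose by withholding cooperation scales with the benefits you normally
# provide and shrinks as you become easier to replace. Illustrative only.

def strike_leverage(benefit_provided: float, replaceability: float) -> float:
    """Cost imposed on others by withholding one's benefits.
    replaceability: 0 = irreplaceable, 1 = trivially replaced."""
    return benefit_provided * (1.0 - replaceability)

# A valued, hard-to-replace group member has real bargaining power...
print(round(strike_leverage(benefit_provided=10.0, replaceability=0.1), 1))  # 9.0
# ...while someone with ten capable replacements lined up has almost none.
print(round(strike_leverage(benefit_provided=10.0, replaceability=0.9), 1))  # 1.0
```

On this reading, the same depressive "strike" that wins renegotiation in a small, mutually dependent group buys very little in a large, anonymous one.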

You should have seen the quality of resumes I was getting last time I was single.

That depression is more common among women would suggest, then, that depression is a more profitable strategy for women relative to men. There are several reasons this might be the case. First, women might be unable to engage in direct physical aggression as effectively as men, restricting their ability to use aggressive strategies to gain the support of others. Another good possibility is that, reproductively, women tend to be a more valuable resource (or rather, a limiting one) relative to men. Whereas almost all women have a valuable resource they could potentially restrict access to, not all men do. If men are more easily replaceable, they hold less bargaining power by threatening to strike. Another way of looking at the matter is that the costs men incur by being depressed and shutting down are substantially greater than those women incur, or that the costs men are capable of imposing on others aren’t as great. A depressed man may quickly fall in the status hierarchy, which would ultimately do more harm than the depressive benefits would be able to compensate for. It should also be noted that one of the main ways depression is alleviated is through a positive life change, like entering into a new relationship or getting a new job, which is precisely what the bargaining model would predict, lending it further support.

So given this likely function of depression, is it a mental illness that requires treatment? I would say no to the first part and maybe to the second. While generally an unpleasant experience, depression, in this model, is no more a mental illness than the experience of physical pain is. Whether or not it should be treated is, of course, up to the person suffering from it. There are very real costs to all parties involved when depression is active, and it's certainly understandable why people would want to make those costs go away. What this model suggests is that, much like suppressing a fever, simply making the symptoms of depression go away may have unintended social costs elsewhere, in either the short or long term. While keeping employees from striking certainly keeps them working, it also removes some of their ability to bargain for better pay or working conditions. Similarly, simply relieving depression may keep people happier and more productive, but it may also lead them to accept less fulfilling or supportive circumstances in their lives.

References: Hagen, E.H. (2003). The bargaining model of depression. In P. Hammerstein (Ed.), Genetic and Cultural Evolution of Cooperation. MIT Press, 95-123.

Nesse, R.M., & Williams, G.C. (1994). Why We Get Sick: The New Science of Darwinian Medicine. Vintage Books.

No, Really; Group Selection Doesn’t Work.

Group selection is kind of like the horror genre of movies: even if the movie was terrible, and even if the main villain gets killed off, you can bet there will still be a dozen sequels. Like the last surviving virgin among a group of teenage campers, it's now on my agenda to kill this idea once again, because it seems the first couple dozen times it was killed, the killing just didn't stick. Now, I have written about this group selection issue before, but only briefly. Since people continue to actually take the idea seriously, it's time to explicitly go after the fundamental assumptions made by group selection models. Hopefully, this will put the metaphorical stake through the heart of this vampire, saving me time in future discussions, as I can just link people here instead of rehashing the same points over and over again. It probably won't, as some people seem to like group selection for some currently unknown reason, but fingers crossed anyway.

Friday the 13th, part 23: At this point, you might as well just watch the first movie again, because it’s the same thing.

Recently, Jon Gotschall wrote an article for Psychology Today about how E.O. Wilson thinks the selfish gene metaphor is a giant mistake. As he didn't explicitly say this idea is nonsense – the proper response – I can only assume he is partially sympathetic to group selection. Et tu, Jon? There's one point from that article I'd like to tackle first, before moving on to other, larger matters. Jon writes the following:

In effect, this defined altruism – real and authentic selflessness – out of existence. On a planet ruled by selfish genes, "altruism" was just masked selfishness.

The first point is that I have no idea what Jon means when he's talking about "real" altruism. His comments there conflate proximate and ultimate explanations, which is a mistake frequently cautioned against in your typical introductory-level evolutionary psychology course. No one is saying that other-regarding feelings don't exist at a proximate level; they clearly do. The goal is to explain what the ultimate function of such feelings is. Parents tend to feel selfless and act altruistically towards their children. That feeling is quite genuine, and it happens to exist in no small part because that child carries half of that parent's genes. By acting altruistically towards their children, parents are helping their own genes reproduce; genes are benefiting copies of themselves that are found in other bodies. The ultimate explanation is not privileged over the proximate one in terms of which is "real". It makes no more sense to say what Jon did than for me to suggest that my desire to eat chocolate cake is really a reproductive desire because, eventually, that desire had to be the result of an adaptation designed to increase my genetic fitness. Selfish genes really can create altruistic behavior; they just only do so when the benefits of being altruistic tend to outweigh the costs in the long run.

Speaking of benefits outweighing the costs, it might be helpful to take a theoretical step back and consider why an organism would have any interest in joining a group in the first place. Here are two possible answers: (1) an organism can benefit in some way by entering into a coalition with other organisms, achieving goals it otherwise could not, or (2) an organism joins a group in order to benefit that group, with no regard for its own interests. The former option seems rather plausible, representing cases like reciprocal altruism and mutualism, whereas the latter does not appear very reasonable. Self-interest wins the day over selflessness when it comes to explaining why an organism would bother to join a group in the first place. Glad we've established that. However, to then go on to say that, once it has joined a coalition, an organism converts its selfish interests to selfless ones is to, basically, endorse the second explanation. It doesn't matter to what extent you think an organism is designed to do that, by the way; any extent is equally problematic.

You will need a final answer at some point: Selfish, or Selfless?

But organisms do sometimes seem to put their own interests aside to benefit members of their group, right? Well, that's going to depend on how you're conceptualizing their interests. Let's say I'm a member of a group that demands a monthly membership fee, and, for the sake of argument, this group totally isn't a pornography website. I would be better off if I could keep that monthly membership fee to myself, so I must be acting selflessly by giving it to the group. There's only one catch: if I opt not to pay that membership fee, there's a good chance I'll lose some or all of the benefits that the group provides, whatever form those benefits come in. Similarly, whether through withdrawal of social support or active punishment, groups can make leaving or not contributing costlier than staying and helping; lacking some sort of punishment mechanism, cooperation tends to fall apart. The larger point here is that if, by not paying a cost, you end up paying an even larger cost, that's not exactly selfless behavior requiring some special explanation.
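To put some toy numbers on that logic (these figures are made up purely for illustration, not drawn from any study), here's a quick sketch of the cost-benefit comparison:

```python
# Hypothetical payoffs for one month of group membership (all numbers invented).
membership_fee = 20   # the "selfless" contribution I hand over to the group
group_benefits = 50   # value of whatever the group provides its paying members
punishment = 10       # extra cost the group can impose on non-contributors

net_if_i_pay = group_benefits - membership_fee   # 50 - 20 = 30
net_if_i_skip = 0 - punishment                   # lose the benefits and eat the punishment: -10

print(net_if_i_pay > net_if_i_skip)  # True: paying the fee is the self-interested choice
```

Whenever the benefits plus the threat of punishment exceed the fee, contributing is simply the cheaper of two options, which is the point: no special, selfless explanation is required.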

Maybe that example isn’t fair though; what about cases like when a soldier jumps on a grenade to save his fellow soldiers? Well, there are a couple of points to make about the grenade-like examples: first, grenades are obviously an environmental novelty. Humans just aren’t adapted to an environment containing grenades and, I’m told, most of us don’t make a habit of jumping into dangerous situations to help others, blind to the probably of injury to death. That said, if you had a population of soldiers, some of which had a heritable tendency to jump on grenades to save others, while other soldiers had no such tendency, if grenades kept getting thrown at them, you could imagine which type would tend to out-reproduce the other, all else being equal. A second vital point to make is that every single output of an cognitive adaptation need not be adaptive; so long as whatever module led to such a decision tended to be beneficial overall, it would still spread and be maintained throughout the population, despite occasional maladaptive outcomes. Sometimes a peacock’s large tail spells doom for the bird who carries it as it is unable to escape from a predator, but that does mean, on the whole, any one bird would be better suited to just not bother growing their tail; it’s vital for attracting a mate, and surviving means nothing absent reproduction.

Now, on to the two major theoretical issues with group selection itself. The first is displayed by Jon in his article here:

Let’s run a quick thought experiment to see how biologists reached this conclusion. Imagine that long before people spread out of Africa there was a tribe called The Selfless People who lived on an isolated island off the African coast. The Selfless People were instinctive altruists, and their world was an Eden. 

The thought experiment is already getting ahead of itself in a big way. In this story, it's simply assumed that a group of people exists with these kinds of altruistic tendencies. Little attention is paid to how the members of this group came to have those tendencies in the first place, which is a rather major detail, especially because, as many note, within groups selfishness wins. Consider the following: in order to demonstrate group selection, you would need a trait that conferred group-level fitness benefits at individual-level fitness costs. If the trait benefited the individual bearer in any way, it would spread through standard selection and there would be no need to invoke group-level selection. So, given that we are, by definition, talking about a trait that actively hinders its own spread in order to benefit others, how does that trait spread throughout the population, resulting in a population of 'selfless people'? How do you manage to get from 1 to 2 by way of subtraction?

Perhaps it’s from all that good Karma you build up?

No model of group selection I've come across yet seems to deal with this very basic problem. Maybe there are accounts out there I haven't read that contain the answer to my question; maybe the accounts I have seen have an answer that I've just failed to understand. Maybe. Then again, maybe none of the accounts I've read have actually provided a satisfying answer because they start with the assumption that the trait whose existence they're seeking to explain already exists in some substantial way. That kind of strikes me as cheating. Jon's thought experiment certainly makes that assumption. The frequently cited paper by Boyd and Richerson (1990) seems to make that assumption as well; people who act selflessly in favor of their group just kind of exist. That trait needs an explanation; simply assuming it into existence and figuring out the benefits from that point is not good enough. There's a chance that the trait could spread by drift, but drift has, to the best of my knowledge, never been successfully invoked to explain the existence of any complex adaptation. Further, drift only really works when a trait is, more or less, reproductively neutral; a trait that is actively harmful would have a further hurdle to overcome.

Now, positing an adaptation designed to deliver fitness benefits to others at fitness costs to oneself might seem anathema to natural selection – because it is – but the problems don't stop there. There's still another big issue looming: how we are to define the group itself; you know, the thing that's supposed to be receiving these benefits. Like many other concepts, what counts as a group – or a benefit to a group – can be fuzzy and is often arbitrary. Depending on what context I currently find myself in, I could be said to belong to an almost incalculably large number of potential groups, and throughout the course of my life I will enter and leave many of them, explicitly and implicitly. Some classic experiments in psychology demonstrate just how readily group memberships can be created and defined. I would imagine that for group selection to be feasible, at the very least, group membership needs to be relatively stable; people should know who their "real" group is and act altruistically towards it, and not other groups. Accordingly, I'd imagine group membership should be a bit more difficult to just make up on the spot. People shouldn't start classifying themselves into groups on the basis of being told, "you are now in this group", any more than they should start thinking of a random woman as their mother because someone says, "this woman is now your mother" (nor would we expect this designated mother to start investing in this new person over her own child). That group membership is so easy to generate demonstrates, in my mind, that it is a fuzzy and fluid concept and, consequently, not the kind of thing that can be subject to selection.

Now perhaps, as Jon suggested, the selfless people will always win against the selfish people. It's a possible state of affairs, sure, but it's important to realize that it's an assumption being made, not a prediction being demonstrated. Such conditions can be artificially created in the lab, but whether they exist in the world and, if they do, how frequently they appear is another matter entirely. The more general point here is that group selection can work well in the world of theory, but that's because assumptions are made there that define it as working well. Using slightly tweaked sets of assumptions, selfless groups will always lose; they win when they are defined as winning, and lose when they are defined as losing. Using yet another set of assumptions, groups of people with psychic abilities win against groups without them. The key, then, is to see how these states of affairs hold up in real life. If people don't have psychic abilities, or if psychic abilities are impossible for one reason or another, no number of assumptions will change that reality.

Finally, the results of thought experiments like the footbridge dilemma seem to cut against the group selection hypothesis: purposely sacrificing one person's life to save the lives of five others is, in terms of the group, the better choice, yet people consistently reject this course of action (there, B = 5, C = 1). When someone jumps on a grenade, we praise them for it; when someone throws another person on a grenade, we condemn them, despite the latter outcome being better from the group's perspective (worst case, you've killed a non-altruist who wouldn't have jumped on it anyway; best case, you've helped an altruist do what he would have done regardless). Those outcomes conflict with group selection predictions, which, I'd think, should tend to favor more utilitarian calculations – the ones that are actually better for the group. I would also think it should predict that Communism would work out better than it tends to, or that people would really love to pay their taxes. Then again, group selection doesn't seem to be plausible in the first place, so perhaps results like these shouldn't be terribly surprising.

References: Boyd, R., & Richerson, P.J. (1990). Group selection among alternative evolutionarily stable strategies. Journal of Theoretical Biology, 145, 331-342.

You’ve Got Some (Base)Balls

Since Easter has rolled around, let's get into the season and consider, very briefly, part of the story of Jesus. The Sparknotes version of the story involves God symbolically sacrificing his son in order to in some way redeem mankind. There's something very peculiar about that line of reasoning, though: the idea that punishing someone for a different person's misdeed is acceptable. If Bill is driving his car and strikes a pedestrian in a crosswalk, I imagine many of us would find it very odd, if not morally repugnant, to then go and punish Kyle for what happened. Not only did Kyle not directly cause the act to take place, but Kyle didn't even intend for it to take place – two of the criteria typically used to assess blame – so it makes little sense to punish him. As it turns out, though, people who might very well disagree with punishing Kyle in the previous example can still be quite willing to accept that kind of outcome in other contexts.

Turns out that the bombing of Pearl Harbor during World War II was one of those contexts.

If Wikipedia is to be believed, following the bombing of Pearl Harbor, a large number of people of Japanese ancestry – most of whom were American citizens – were moved into internment camps. This move was prompted by fears of further Japanese attacks on the United States, amid concerns that the Japanese immigrants might side with their native country and act against the US. The Japanese, due to their perceived group membership, were punished because of acts perpetrated by others viewed as sharing that same group membership – not because they had done anything themselves, but because they might. Some years down the road, the US government issued an apology on behalf of those who committed the act, likely due to some collective sense of guilt about the whole thing. Not only was guilt spread to the Japanese immigrants because of the actions of other Japanese people, but the blame for the measures taken against those immigrants was also shared, by association, by those who did not enact them.

Another somewhat similar example concerns the US government's response following the attacks of September 11th, 2001. All the men directly responsible for the hijackings were dead and, as such, beyond further punishment. However, their supporters – the larger group to which they belonged – were very much still alive, and it was on that group (among others) that the military descended. Punishment of group members in this case is known as accomplice punishment: members of a group are seen as having contributed to the initial transgression in some way – what is typically known as conspiracy. In this case, people view those being punished as morally responsible for the act in question, so this type of punishment isn't quite analogous to the initial example of Bill and Kyle. Might there be an example that strips the moral responsibility of the person being punished out of the equation? Why yes, it turns out there is at least one: baseball.

In baseball, a batter will occasionally be hit by a ball thrown by a pitcher (known as getting beaned). Sometimes these hits are accidental, sometimes they’re intentional. Regardless, these hits can sometimes cause serious injury, which isn’t shocking considering the speed at which the pitches are thrown, so they’re nothing to take lightly. Cushman, Durwin, and Lively (2012) noted that sometimes a pitcher from one team will intentionally bean a player on the opposing team in response to a previous beaning. For instance, if the Yankees are playing the Red Sox, and a Red Sox pitcher hits a Yankee batter, the Yankee pitcher would subsequently hit a Red Sox batter. The researchers sought to examine the moral intuitions of baseball fans concerning these kinds of revenge beanings.

Serves someone else right!

The first question Cushman et al. asked was whether fans found this practice to be morally acceptable. One hundred forty-five fans outside of Fenway Park and Yankee Stadium were presented with a story in which the pitcher for the Cardinals intentionally hit a player for the Cubs, causing serious injury. In response, the pitcher for the Cubs hits a batter for the Cardinals. Fans were asked to rate the moral acceptability of the second pitcher's actions on a scale from 1 to 7. Those who rated the revenge beaning of an innocent player as at least somewhat morally acceptable accounted for 44% of the sample; 51% found it unacceptable, with 5% being unsure. In other words, about half of the sample saw punishing an innocent player by proxy as acceptable, simply because he was on the same team.

But was the batter hit in the revenge beaning actually viewed as innocent? To address this question, Cushman et al. asked a separate sample of 131 fans from online baseball forums whether or not they viewed the batter who was hit second as being morally responsible for the actions of the pitcher from his team. The answers here were quite interesting. First off, this sample was more in favor of revenge beanings, with 61% indicating the practice was at least somewhat acceptable. The next finding was that roughly 80% of the people surveyed agreed that, no, the batter being hit was not morally responsible. Yet many of these same people agreed that it was, in fact, OK to hit that innocent victim because he happened to belong to the same team.

The final finding from this sample was also enlightening. The order in which people were asked about moral responsibility and endorsement of revenge beaning was randomized, so in some cases people were asked whether the punishment was OK first, followed by whether the batter was responsible, and in other cases that order was reversed. When people endorsed vicarious punishment first, they subsequently rated the batter as having more moral responsibility; when rating moral responsibility first, there was no correlation between moral responsibility and punishment endorsement. What makes this finding so interesting is that it suggests people were rationalizing why someone should be punished after they had already decided to punish, not before. Having made the decision to punish, they then looked for a justification, and that justification in turn made the batter seem more morally responsible.

“See? Now that he has those handcuffs on his rock-solid alibi is looking weaker already.”

This finding ties in nicely with a previous point I’ve made about how notions of who’s a victim and who’s a perpetrator are fuzzy concepts. Indeed, Cushman et al. present another result along those same lines: when it’s actually their team doing the revenge beaning, people view the act as more morally acceptable. When the home team was being targeted for revenge beaning, 43% of participants said the beaning was acceptable; when it was the home team actually enacting the revenge, 67% of the subjects now said it was acceptable behavior. Having someone on your side of things get hurt appears to make people feel more justified in punishing someone, whether that someone is guilty or not. Simply being associated with the guilty party in name is enough.

Granted, when people have the option to enact punishment on the actual guilty party, they tend to prefer that. In the National League, pitchers also come up to bat, so the option of direct punishment exists in those cases. When the initial offending pitcher was beaned in the story, 70% of participants found this direct form of revenge morally acceptable. However, if direct punishment is not an option, vicarious punishment of a group member seems to remain a fairly appealing one. Further, this vicarious punishment should be directed at the offending team, and not an unrelated team: for example, if a Cubs pitcher hits a Yankee batter, only about 20% of participants would say it's then OK for a Yankee pitcher to hit a Red Sox batter the following night. I suppose you could say the silver lining here is that people tend to favor saner punishment when it's an option.

Whether or not people are adapted to punish others vicariously – and, if so, in what contexts such behavior is adaptive and why – is a question left untouched by this paper. I could imagine certain contexts where aggressing against the family or allies of someone who aggressed against you could be beneficial, but it would depend on a good deal of contingent factors. For instance, by punishing family members of someone who wronged you, you are still inflicting reproductive costs on the offending party, and by punishing the initial offender's allies, you make siding with and investing in said offender costlier. While the punishment might reach its intended target indirectly, it still reaches them. That said, there would be definite risks of strengthening alliances against you – as you are hurting others, which tends to piss people off – as well as possibly calling retaliation down on your own family and allies. Unfortunately, the results of this study are not broken down by gender, so there's no way to tell whether men and women differ in their endorsement of vicarious punishment. It seems these speculations will need to remain, well, speculative for now.

References: Cushman, F., Durwin, A.J., & Lively, C. (2012). Revenge without responsibility? Judgments about collective punishment in baseball. Journal of Experimental Social Psychology (in press).

Tucker Max, Hitler, And Moral Contagion.

Disgust is triggered off not primarily by the sensory properties of an object, but by ideational concerns about what it is, or where it has been…The first law, contagion, states that “things which have once been in contact with each other continue ever afterwards to act on each other”…When an offensive (or revered) person or animal touches a previously neutral object, some essence or residue is transmitted, even when no material particles are visible. – Haidt et al. (1997, emphasis theirs).

Play time is over; it’s time to return to the science and think about what we can learn of human psychology from the Tucker Max and Planned Parenthood incident. I’d like to start with a relevant personal story. A few years ago I was living in England for several months. During my stay, I managed to catch my favorite band play a few times. After one of their shows, I got a taxi back to my hotel, picked up my guitar from my room, and got back to the venue. I waited out back with a few other fans by the tour bus. Eventually, the band made their way out back, and I politely asked if they would mind signing my guitar. They agreed, on the condition that I not put it on eBay (which I didn’t, of course), and I was soon the proud owner of several autographs. I haven’t played the guitar since for fear of damaging it.

This is my guitar; there are many like it, but this one is mine…and some kind of famous people wrote on it once.

My behavior, and other behavior like it, is immediately and intuitively understandable to almost all people – especially anyone who enjoys the show Pawnstars – yet very few people take the time to reflect on just how strange it is. By getting the signatures on the guitar, I did little more than show it had been touched, very briefly, by people I hold in high esteem. Nothing I did fundamentally altered the guitar in any way, and yet somehow it was different; it was distinguished in some invisible way from the thousands of others just like it, and no doubt more valuable in the eyes of other fans. This example is fairly benign; what happened with Planned Parenthood and Tucker Max was not. In that case, the result of such intuitive thinking was that a helpful organization was out $500,000 and many men and women lost local access to its services. Understanding what's going on in both cases better will hopefully help people not make mistakes like that again. It probably won't, but wouldn't it be nice if it did?

The first order of business in understanding what happened is to take a step back and consider the universal phenomenon of disgust. One function of our disgust psychology is to deal with the constant threat of microbial and parasitic organisms. By avoiding ingesting or contacting potentially contaminated materials, the chances of contracting costly infections or harmful parasites are lowered. Further, if by sheer force of will or accident a disgusting object is actually ingested, it’s not uncommon for a vomiting reaction to be triggered, serving to expel as much of the contaminant as possible. While a good portion of our most visceral disgust reactions focus on food, animals, or bodily products, not all of them do; the reaction extends into the realm of behavior, such as deviant sexual behavior, and perceived physical abnormalities, like birth defects or open wounds. Many of the behaviors that trigger some form of disgust put us in no danger of infection or toxic exposure, so there must be more to the story than just avoiding parasites and toxins.

One way Haidt et al. (1997) attempt to explain the latter part of this disgust reaction is by referencing concerns about humans being reminded of their animal nature, or thinking of their body as a temple, which are, frankly, not explanations at all. All such an “explanation” does is push the question back a step to, “why would being reminded of our animal nature or profaning a temple cause disgust?” I feel there are two facts that stand out concerning our disgust reaction that help to shed a lot of light on the matter: (1) disgust reactions seem to require social interaction to develop, meaning what causes disgust varies to some degree from culture to culture, as well as within cultures, and (2) disgust reactions concerning behavior or physical traits tend to focus heavily on behaviors or traits that are locally abnormal in some way. So, the better question to ask is: “If the function of disgust is primarily related to avoidance behaviors, what are the costs and benefits to people being disgusted by whatever they are, and how can we explain the variance?” This brings us nicely to the topic of Hitler.

Now I hate V-neck shirts even more.

As Haidt et al. (1997) note, people tend to be somewhat reluctant to wear used clothing, even if that clothing has since been washed; it's why used clothing, even if undamaged, is always substantially cheaper than a new, identical article. If the used clothing in question belonged to a particularly awful person – in this case, Hitler – people are even less interested in wearing it. However, this tendency is reversed for items owned by well-liked figures, just as my initial example concerning my guitar demonstrated. I certainly wouldn't let a stranger draw on my guitar, and I'd be even less willing to let someone I personally disliked give it a signature. I could imagine being averse even to privately playing an instrument that had been signed by someone I disliked. So why this reluctance? What purpose could it possibly serve?

One very plausible answer is that the core issue here is signaling, as it was in the Tucker Max example. People are morally disgusted by, and subsequently try to avoid, objects or behaviors that could be construed as sending the wrong kind of signal. Inappropriate or offensive behavior can lead to social ostracism, the fitness consequences of which can be every bit as extreme as those from parasites. Likewise, behavior that signals inappropriate group membership can be socially devastating, so you need to be cautious about what signal you're sending. One big issue people need to contend with is that signals themselves can be interpreted many different ways. Let's say you go over to a friend's house and find a Nazi flag hanging in the corner of a room; how should you interpret what you're seeing? Perhaps he's a history buff, specifically interested in World War II; maybe a relative fought in that war and brought the flag home as a trophy; he might be a Nazi sympathizer; it might even be the case that he doesn't know what the flag represents and just liked the design. It's up to you to fill in the blanks, and such a signal comes with a large risk: not only could an unfavorable interpretation hurt your friend, it could hurt you as well, for being seen as complicit in his misdeed.

Accordingly, if that signaling model is correct, then I would predict that the strength and sign of the signal should tend to outweigh contagion concerns, especially if the signal can be interpreted negatively by whoever you're hoping to impress. Let's return to the Hitler example: the signaling model would predict that people should prefer to publicly wear Hitler's actual black V-neck shirt (as it doesn't send any obvious signals) over wearing a brand new shirt that reads "I Heart Hitler". This parallels the Tucker Max example: people were OK with the idea of him donating money so long as he did so in a manner that kept his name off the clinic. Tucker's money wasn't tainted because of its source so much as it was tainted because his conditions made sure the source was unambiguous. Since people didn't like the source and wanted to reject the perceived association, their only option was to reject the money.

This signaling explanation also sheds light on why the things that cause disgust are generally seen as, in some way, abnormal or deviant. Those who look physically abnormal may carry genes less suited to the current environment, or be physically compromised in such a way that it's better to avoid them than to invest in them. Those who behave in a deviant, inappropriate, or unacceptable manner could be signaling something important about their usefulness, friendliness, or their status as a cooperative individual, depending on the behavior. Disgust towards deviants, in this case, helps people pick which conspecifics they'd be most profitably served by and, more generally, helps people fit into their group. You want to avoid those who won't bring you much reward for your investment, and avoid doing things that get on other people's bad side. Moral disgust would seem to serve both functions well.

Which is why I now try and make new friends over mutual hatreds instead of mutual interests.

Now returning one final time to the Planned Parenthood issue, you might not like the idea of Tucker Max having his name on a clinic because you don’t like him. I understand that concern, as I wouldn’t like to play a guitar that was signed by members of the Westboro Baptist Church. On that level, by criticizing those who don’t like the idea of a Tucker Max Planned Parenthood clinic, I might seem like a hypocrite; I would be just as uncomfortable in a similar situation. There is a major difference between the two positions though, as a quick example will demonstrate.

Let's say there's a group of starving people in a city somewhere that you happen to be in charge of. You make all the calls concerning who gets to bring anything into your city, so anyone who wants to help needs to go through you. In response to the hunger problem, the Westboro Baptist Church offers to donate a truckload of food to those in need, but they have one condition: the truck that delivers the food will bear a sign reading "This food supplied courtesy of the Westboro Baptist Church". If you dislike the Church, as many people do, you have something of a dilemma: allow an association with them in order to help people out, or turn the food away on principle.

For what it’s worth, I would rather see people eat than starve, even if it means that the food comes from a source I don’t like. If your desire to help the starving people eat is trumped by your desire to avoid associating with the Church, don’t tell the starving people you’re really doing it for their own good, because you wouldn’t be; you’d be doing it for your own reasons at their expense, and that’s why you’d be an asshole.

References: Haidt, J., Rozin, P., McCauley, C., & Imada, S. (1997). Body, psyche, and culture: The relationship between disgust and morality. Psychology and Developing Societies, 9, 107-131.

Tucker Max V. Planned Parenthood

My name is Tucker Max, and I am an asshole. I get excessively drunk at inappropriate times, disregard social norms, indulge every whim, ignore the consequences of my actions, mock idiots and posers, sleep with more women than is safe or reasonable, and just generally act like a raging dickhead. -Tucker Max

It should come as no surprise that there are more than a few people in this world who don't hold Tucker Max in high esteem. He makes no pretenses of being what most would consider a nice person, and makes no apologies for his behavior – behavior which is apparently rewarded with tons of sex and money. Recently, however, this reputation prevented him from making a $500,000 donation to Planned Parenthood. Naturally, this generated something of a debate, full of plenty of moral outrage and inconsistent arguments. Since I've been thinking and writing about reasoning and arguing lately, I decided to treat myself and indulge a little. I'll do my best to make this educational as well as personal, but I make no promises; this is predominantly intellectual play for me.

Sometimes you just have to kick back and treat yourself in a way that avoids going outside enjoying the nice weather.

So here's the background, as it's been recounted: Tucker finds himself with a tax burden that can be written off to some extent if he donates money charitably. Enterprising guy that he is, he also wants to donate the money in such a way that it can help generate publicity for his new book. After some deliberation, he settles on a donation of $500,000 to Planned Parenthood, as he describes himself as always having been pro-choice, having been helped by Planned Parenthood throughout his life, and, perhaps, finding the prospect funny. His condition for the donation is that he wants his name on a clinic, which is apparently something Planned Parenthood will consider if you donate enough money. A meeting is scheduled to hammer out the details, but it's cancelled a few hours before it's set to take place – as Tucker is driving to it – because Planned Parenthood suddenly becomes concerned about Tucker's reputation and backs out without offering any alternative options.

I’ll start by stating my opinion: Planned Parenthood made a bad call, and those who are arguing that Planned Parenthood made the correct call don’t have a leg to stand on.

Here’s what wasn’t under debate: whether Planned Parenthood needed money. Their funding was apparently cut dramatically in Texas, where the donation was set to take place, and the money was badly needed. So if Planned Parenthood needed money and turned down such a large sum of it, one can only imagine they had some reasons to do so. One could also hope those reasons were good. From the various articles and comments on the articles that I’ve read defending Planned Parenthood’s actions, there are two sets of reasons why they feel this decision was the right one. The first set I’ll call the explicit arguments – what people say – and the second I’ll call the implicit motivations – what I infer (or people occasionally say) the motivations behind the explicit arguments are.

…but didn’t have access to any reproductive care, as the only Planned Parenthood near me closed.

The explicit arguments contain two main points. The first thrust of the attack is that Tucker's donation is selfish; his major goal is writing off his taxes and generating publicity, and this taints his action. That much is true, but from there this argument flounders. No one is demanding that Planned Parenthood only accept truly selfless donations. Planned Parenthood itself did not suggest that Tucker's self-interest had anything at all to do with why they rejected the offer. This explicit argument serves only one real purpose, and that's character assassination by way of framing Tucker's donation in the worst possible light. One big issue with this is that I find it rather silly to try and malign Tucker's character, as he does a fine job of that himself; his self-regarding personality is responsible for a good deal of why he's famous. Another big issue is that Tucker could have donated that money to any non-profit he wanted, and I doubt Planned Parenthood was the only way he could have achieved his main goals. Just because caring about Planned Parenthood might not have been his primary motive for the donation does not mean it played no part in the decision. Similarly, just because someone's primary motivation for working at their job is money does not mean money is the only reason they chose the job they did, out of all the possible jobs they could have picked.

The second explicit argument is the more substantial half. Since Tucker Max is a notable asshole, many people voiced concerns that putting his name on a clinic would do Planned Parenthood a good deal of reputational damage, causing other people to withdraw or withhold their financial or political support. Ultimately, the costs of this reputational damage would end up outweighing Tucker's donation, so really, it was a smart economic (and political, and moral) move. In fact, one author goes so far as to suggest that taking Tucker's donation could have put the future of Planned Parenthood as a whole in jeopardy. This argument, at its core, suggests that Planned Parenthood lost the battle (Tucker's donation) to win the war (securing future funding).

There are two big problems with this second argument. Most importantly, the negative outcome of accepting Tucker's donation is purely imagined. It might have happened, it might not have, and there's absolutely no way of confirming whether it would have. That does not stop people from assuming the worst would have happened, as making that assumption gives those defending Planned Parenthood an unverifiable potential victim. As I've mentioned before, having a victim on your side of the debate is crucial for engaging the moral psychology of others, and when people are making moral pronouncements they do actively search for victims. The other big problem with this second argument is that it's staggeringly inconsistent with the first. Remember, people were very critical of Tucker's motivations for the donation. One of the most frequently trotted out lines was, "If Tucker really cared about Planned Parenthood, he would have made the donation anonymously anyway. Then he could have helped the women out and avoided the reputational harm he would have done to Planned Parenthood. Since he didn't donate anonymously (or at least, I think he didn't; that's kind of the rub with anonymous donations), he's just a total asshole".

“I was going to refill my birth control prescription here, but if Tucker Max helped keep this clinic open, maybe I’ll just get pregnant instead”

The inconsistency is as follows: people assume that other donors would avoid or politically attack Planned Parenthood if Tucker Max were associated with it. Perhaps some women would even avoid the clinic itself, because it would make them feel upset. Again, maybe that would happen, maybe it wouldn't. Assuming that it would, one could make the case that if those other supporters really cared about Planned Parenthood, they shouldn't let something like the association of a single clinic with Tucker Max dissuade them. The only reasons someone who previously supported Planned Parenthood would be put off are personal, self-interested ones – the very same kind of motivation people criticized Tucker for initially. Instead of bloggers and commenters writing well-reasoned posts about how people shouldn't stop supporting Planned Parenthood just because Tucker Max has his name on one clinic, they instead praise the exclusion of his sizable donation. One would think anyone who truly supported Planned Parenthood would err on the side of making arguments for why people should continue to support it, not for why it would be justifiable to pull their support for fear of association with someone they don't like.

Which brings us very nicely to the implicit motivations. The core issue here can be best summed up by Tucker himself:

Most charities are not run to help people, they are run because they are ways for people to signal status about themselves to other people…I wasn’t the “right type” of person to take money from so they’d rather close clinics. It’s the worst kind of elitism, the kind that cloaks itself in altruism. They care more about the perception of themselves and their organization than they care about its effectiveness at actually serving the reproductive needs of women.

People object to Tucker Max’s donation on two main fronts: (1) they don’t want to do anything that benefits Tucker in any way, and (2) they don’t personally want to be associated with Tucker Max in any way. Those two motivations are implicitly followed by a, “…and that’s more important to me than ensuring Planned Parenthood can continue to serve the women and men of their communities”. It looks a lot like a costly display on the part of those who supported the decision. They’re demonstrating their loyalty to their group, or to their ideals, and they’re willing to endure a very large, very real cost to do so. At least, they’re willing to let other people suffer that cost, as I don’t assume all, or even most, of the bloggers and commenters will be directly impacted by this decision.

Whatever ideal it is that they're committed to, whatever group they're displaying for, it is not Planned Parenthood. Perhaps they feel they're fighting to end what they perceive as sexism, or misogyny, or avenging a personal slight because Tucker wrote something about fat girls they found insulting. What they're fighting for specifically is irrelevant. What is relevant is that they're willing to see Planned Parenthood clinics close, and men and women lose access to their services, before they're willing to compromise whatever it is they're primarily fighting for. They might dress their objections up to make it look like they aren't self-interested or fighting some personal battle, but the disguise is thin indeed. One could make the case that such behavior – co-opting the suffering of another group to bolster your own cause – is rather selfish; the kind of thing a real asshole would do.