Differentiating Between Effects And Functions

A few days ago, I had the misfortune of forgetting my iPod when I got to the gym. As it turns out, I hadn’t actually forgotten it; it had merely fallen out of my bag in the car without my noticing, but the point is that I didn’t have it on me. Without the music that normally accompanies my workout, I found the experience far less enjoyable than usual; I would even go so far as to say it was more difficult to lift weights I normally handle without much problem. When I mentioned the incident to a friend of mine, she expressed surprise that I actually stuck around to finish my workout without it; in fact, on the rare occasions I arrive at the gym without any source of music, I typically don’t end up working out at all, which demonstrates the point nicely.

“If you didn’t want that bar to be crushing your windpipe, you probably shouldn’t have forgotten your headphones…”

In my experience, listening to music most certainly has the effect of allowing me to enjoy my workout more and push myself harder. The question remains, however, as to whether such effects are part of the function of music; that is, do we have some cognitive adaptation(s) designed to generate that outcome from certain given inputs? On a somewhat related note, I recently got around to reading George C. Williams’ book, Adaptation and Natural Selection (1966). While I was already familiar with most of what he talks about, it never hurts to go back and read the classics. Throughout the book, Williams makes much of the above distinction between effects and functions; what we might also label byproducts and adaptations, respectively. A simple example demonstrates the point: while a pile of dung might serve as a valuable resource for certain species of insects, the animals which produce such dung are not doing so because it benefits the insects; the effect in this case (benefiting insects) is not the function of the behavior (excreting wastes).

This is an important theoretical point, and one which Williams repeatedly brings to bear against the group selection arguments that people were putting forth at the time he was writing. Just because populations of organisms tend to have relatively stable sizes – largely by virtue of available resources and predation – that effect does not imply there is a functional group-size-regulation adaptation actively generating the outcome. While effects might be suggestive of functions, or at least preliminary requirements for demonstrating function, they are not by themselves sufficient evidence for them. Adapted functionality is often a difficult thing to demonstrate conclusively, which is why Williams offered his now-famous quote about adaptation being an onerous concept.

This finally brings us to a recent paper by Dunbar et al (2012) in which the authors find an effect of performing music on pain tolerance; specifically, it’s the performance of music per se, not the act of passively listening to it, that results in an increased pain tolerance. While it’s certainly a neat effect, effects are a dime a dozen; the question of relevance would seem to be whether this effect bears on a possible function for music. While Dunbar et al (2012) seem to think it does, or at least that it might, I find myself disagreeing with that suggestion rather strongly; what they found strikes me more as an effect without any major theoretical implications.

If that criticism stings too much, might I recommend some vigorous singing?

First, a quick overview of the paper: subjects were tested twice for their pain tolerance (as measured by how long they could withstand the application of increasing pressure or hold cold objects), both before and after a situation in which they either performed music (singing, drumming, dancing, or practicing) or passively listened to it (at varying tempos). In most cases it was the active performance of music, rather than the listening, which led to a subsequent increase in pain tolerance. The exception to that set of findings was that the groups simply practicing in a band setting did not show this increase, a finding which Dunbar et al (2012) suggest has to do with the vigor, likely the physical kind, with which the musicians were engaged in their task, not the performance of music per se.

Admittedly, that last point is rather strange from the perspective of trying to build a functional account for music. If it’s the physical activity that causes an increase in pain tolerance, that would not make the performance of music special with respect to any other kind of physical activity. In other words, one might be able to make a functional account for pain sensitivity, but it would be orthogonal to music. For example, in their discussion, the authors note that laughter can also lead to an increase in pain tolerance. So really, there isn’t much in this study that can speak to a function of music specifically. Taking this point further, Dunbar et al (2012) also fail to provide a good theoretical account as to how one goes from an increased pain tolerance following music production to increases in reproductive success. From my point of view, I’m still unclear as to why they bothered to examine the link between music production and pain in the first place (or, for that matter, why they included dancing: while dancing can accompany music, it is not itself a form of music, just as my exercise can accompany music without being music-related itself).

Dunbar et al (2012) also mention in passing at the end of their paper that music might help entrain synchronized behavior, which in turn could lead to increases in group cooperation, which, presumably, they feel would be a good thing, adaptively speaking, for the individuals involved in said group. Why this is in the paper is also a bit confusing to me, since it appears to have nothing to do with anything they were talking about or researching up to that point. While it would appear to be, at least on its face, a possible theoretical account for a function of music (or at least a more plausible one than their non-existent reason for examining pain tolerance), nothing in the paper seems to speak to it, directly or indirectly.

And believe you me, I know a thing or two about not being spoken to…

While this paper serves as an excellent example of some of the difficulties in going from effect to function, another point worth bearing in mind is how little gets added to such an account by sketching out the underlying physical substrates through which the effect is generated. Large sections of the Dunbar et al paper are dedicated to physiological outlines of the effect without much apparent payoff. Don’t get me wrong: I’m not saying that exploring the physiological pathways through which adaptations act is a useless endeavor; it’s just that such sketches do not add anything to an account that’s already deficient in the first place. They’re the icing on top of the cake, not its substance. Physiological accounts, while they can be neat if they’re your thing, are not sufficient for demonstrating functionality for exactly the same reasons that effects aren’t: all physiological accounts are, essentially, detailed accounts of effects, and byproducts and adaptations alike have effects.

While this review of the paper might have been cursory, there are some valuable lessons to take from it: (1) always try to start your research with some clearly stated theoretical basis, (2) finding effects does not mean you’ve found a function, (3) sketching effects in greater detail at a physiological level does not always help in developing a functional account, and (4) try to make sure the research you’re doing maps onto your theoretical basis, as tacking an unrelated functional account onto the end of your paper is not good policy; that account should come first, not as an afterthought.

References: Dunbar, R.I., Kaskatis, K., Macdonald, I., & Barra, V. (2012). Performance of music elevates pain threshold and positive affect: Implications for the evolutionary function of music. Evolutionary Psychology, 10(4), 688-702. PMID: 23089077

Williams, G.C. (1966). Adaptation and natural selection: A critique of some current evolutionary thought. Princeton, NJ: Princeton University Press.

No, Really; Domain General Mechanisms Don’t Work (Either)

Let’s entertain a hypothetical situation in which your life path has led you down the road to becoming a plumber. Being a plumber, your livelihood depends on both knowing how to fix certain plumbing-related problems and having the right tools for getting the job done: these tools would include a plunger, a snake, and a set of clothes you don’t mind never wearing again. Now let’s contrast being a plumber with being an electrician. Being an electrician also involves specific knowledge and the right tools, but those sets do not overlap well with the plumber’s (I think, anyway; I don’t know too much about either profession, but you get the idea). A plumber who shows up for a job with a soldering iron and wire-strippers is going to be seriously disadvantaged at getting that job done, just as a plunger and a snake are going to be relatively ineffective at helping you wire up the circuits in a house. The same can be said for your knowledge base: knowing how to fix a clogged drain will not tell you much about how to wire a circuit, and vice versa.

Given that these two jobs make very different demands, it would be surprising indeed to find a set of tools and knowledge that worked equally well for both. If you wanted to branch out from being a plumber to also being an electrician, you would subsequently need new additional tools and training.

And/Or a very forgiving homeowner’s insurance policy…

Of course, there is not always, or even often, a 1-to-1 relationship between the intended function of a tool and the applications to which it can be put. For example, if your job involves driving in a screw and you happen not to have a screwdriver handy, you could improvise and use, say, a knife’s blade to turn the screw instead. That a knife can be used in such a fashion, however, does not mean it would be preferable to do away with screwdrivers altogether and just carry knives. As anyone who has ever attempted such a stunt can attest, knives often do not make the job very quick or easy; they’re generally inefficient at achieving that goal, given their design features, relative to a more functionally-specific tool. While a knife might work well as a cutting tool and less well as a screwdriver, it would function worse still if used as a hammer. What we see here is that as tools become more efficient at one type of task, they often become less efficient at others, to the extent that those tasks do not overlap in their demands. This is why it’s basically impossible to design a tool that simply “does useful things”; the request is massively underspecified, and the demands of one task often do not correlate highly with the demands of another. You first need to narrow the request by defining what those useful things are, and then figure out ways of effectively achieving those more specific goals.

It should have been apparent well before this point that my interest is not in jobs and tools per se, but rather in how these examples can be used to understand the functional design of the mind. I previously touched briefly on why it would be a mistake to assume that domain-general mechanisms would lead to plasticity in behavior. Today I hope to expand on that point and explain why we should not expect domain-general mechanisms – cognitive tools that are supposed to be jacks-of-all-trades and masters of none – to exist at all. This will largely be accomplished by pointing out some of the ways that Chiappe & MacDonald (2005) err in their analysis of domain-general and domain-specific modules. While there is a lot wrong with their paper, I will only focus on certain key conceptual issues, the first of which involves the idea, again, that domain-specific mechanisms are incapable of dealing with novelty (in much the same way that a butter knife is clearly incapable of doing anything that doesn’t involve cutting and spreading butter).

Chiappe & MacDonald claim that a modular design in the mind should imply inflexibility: specifically, that organisms with modular minds should be unable to solve novel problems or solve non-novel problems in novel ways. A major problem that Chiappe & MacDonald’s account encounters is a failure to recognize that all problems organisms face are novel, strictly speaking. To clarify that point, consider a predator/prey relationship: while rabbits might be adapted for avoiding being killed by foxes, generally speaking, no rabbit alive today is adapted to avoid being killed by any contemporary fox. These predator-avoidance systems were all designed by selection pressures on past rabbit populations. Each fox that a rabbit encounters in its life is a novel fox, and each situation that fox is encountered in is a novel situation. However, since there are statistical similarities between past foxes and contemporary ones, as well as between the situations in which they’re encountered, these systems can still respond to novel stimuli effectively. This evaporates the novelty concern rather quickly; domain-specific modules can, in fact, only solve novel problems, since novel problems are the only kinds of problems that an organism will encounter. How well they will solve those problems will depend in large part on how much overlap there is between past and current scenarios.

Swing and a miss, novelty problem…

A second large problem in the account involves Chiappe and MacDonald’s failure to distinguish between the specificity of inputs and the specificity of functions. For example, the authors suggest that our abilities for working memory should be classified as domain-general because many different kinds of information can be stored in working memory. This strikes me as a rather silly argument, as it could be used to classify all cognitive mechanisms as domain-general. Let’s return to our knife example; a knife can be used for cutting all sorts of items: it could cut bread, fabric, wood, bodies, hair, paper, and so on. From this, we could conclude that a knife is a domain-general tool, since its function can be applied to a wide variety of problems that all involve cutting. On the other hand, as mentioned previously, the set of things a knife can do efficiently is far smaller than the set of things it can’t: knives are awful hammers, fire extinguishers, water purifiers, and information-storage devices. The knife has a relatively specific function which can be effectively applied to many problems that all require the same general solution – cutting (provided, of course, the materials can be cut by the knife itself; that I might wish to cut through a steel door does not mean my kitchen knife is up to the task). To tie this back to working memory, our cognitive systems that dabble in working memory might be efficient at holding many different sorts of information in short-term storage, but they’d be worthless at tasks like regulating breathing, perceiving the world, or deciphering meaning. While the system can accept a certain range of different kinds of inputs, its function remains constant and domain-specific.

Finally, there is the largest issue their model encounters. I’ll let Chiappe & MacDonald spell it out themselves:

A basic problem [with domain-general modules] is that there are no problems that the system was designed to solve. The system has no preset goals and no way to determine when goals are achieved, an example of the frame problem discussed by cognitive scientists…This is the problem of relevance – the problem of determining which problems are relevant and what actions are relevant for solving them. (p.7)

Though they mention this problem in the beginning of their paper, the authors never actually take any steps to address that series of rather large issues. No part of their account deals with how their hypothetical domain-general mechanisms generate solutions to novel problems. As far as I can tell, you could replace the processes by which their domain-general mechanisms identify problems, figure out which information is and isn’t useful in solving said problems, figure out how to use that information to solve the problems, and figure out when the problem has been solved, with the phrase “by magic” and not really affect the quality of their account much. Perhaps “replace” is the wrong word, however, as they don’t actually put forth any specifics as to how these tasks are accomplished under their perspective. The closest they seem to come is when they write things along the lines of “learning happens” or “information is combined and manipulated” or “solutions are generated”. Unfortunately for their model, leaving it at that is not good enough.

A lesson that I thought South Park taught us a long time ago.

In summary, their novelty problem isn’t one, their “domain-general” systems are not general-purpose at the functional level at all, and the ever-present frame problem is ignored rather than addressed. That does not leave much of an account left. While, as the authors suggest, being able to adaptively respond to non-recurrent features of our environment would probably be, well, adaptive, so would the ability of our lungs to become more “general-purpose” in the event we found ourselves having to breathe underwater. Just because such abilities would be adaptive, however, does not mean that they will exist.

As the classic quote goes, there are far more ways of being dead than there are of being alive. Similarly, there are far more ways of not generating adaptive behavior than there are of behaving adaptively. Domain-general information processors that don’t “know” what to do with the information they receive will tend to get things wrong far more often than they’ll get them right on those simple statistical grounds. Sure, domain-specific information processors won’t always get the right answer either, but the pressing question is, “compared to what?”. If that comparison is made to a general-purpose mechanism, then there wouldn’t appear to be much of a contest.

References: Chiappe, D., & MacDonald, K. (2005). The evolution of domain-general mechanisms in intelligence and learning. The Journal of General Psychology, 132(1), 5-40. DOI: 10.3200/GENP.132.1.5-40

No, Really; Group Selection Still Doesn’t Work

Back in May, I posed a question concerning why an organism would want to be a member of a group. On the one hand, an organism might want to join a group because, ultimately, it calculates that joining would likely lead to benefits for itself that it would not otherwise obtain; in other words, organisms would join groups for selfish reasons. On the other hand, an organism might want to join a group in order to deliver benefits to the entire group, not just itself; in this latter case, the organism would be joining for, more or less, altruistic reasons. For reasons that escape my current understanding, there are people who continue to endorse the second reason for group-joining as plausible, despite it being anathema to everything we currently know about how evolution works.

The debate over whether adaptations for cooperation and punishment were primarily forged by selection pressures at the individual or group level has gone on for so long because, in part, much of the evidence that was brought to bear on the matter could have been viewed as being consistent with either theory – if one was creative enough in their interpretation of the results, anyway. The results of a new study by Krasnow et al (2012) should do one of two things to the group selectionists: either make them reconsider their position or make them get far more creative in their interpreting.

Though I think I have a good guess which route they’ll end up taking.

The study by Krasnow et al (2012) took the sensible route towards resolving the debate: they created contexts in which the two theories make opposing predictions. If adaptations for social exchange (cooperation, defection, punishment, reputation, etc.) were driven primarily by self-regarding interests (as the social exchange model holds), information about how your partner behaved towards you should be more relevant than information about how your partner behaved towards others when you’re deciding how to behave towards them. In stark contrast, a group selection model would predict that those two types of information should be of similar value when deciding how to treat others, since the function of these adaptations should be to provide group-wide gains, not selfish ones.

These contexts were created across two experiments. The first experiment was designed to demonstrate that people do, in fact, make use of what the authors called “third-party reputation”, defined as a partner’s reputation for behaving a certain way towards others. Subjects were brought into the lab to play a trust game with a partner who, unbeknownst to the subjects, was a computer program and not a real person. In a trust game, a player can either not trust their partner, resulting in an identical mid-range payoff for both (in this case, $1.20 each), or trust their partner. If the first player trusts, their partner can either cooperate – leading to an identical payoff for both players that’s higher than the mid-range payoff ($1.50 each) – or defect – leading to an asymmetrical payoff favoring the defector ($1.80 and $0.90). In the event that the player trusted and their partner defected, the player was given the option to pay to punish their partner, dropping both payoffs to a low level ($0.60 each).
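To make that payoff structure concrete, here’s a minimal sketch in Python; the dollar amounts come from the description above, while the function and parameter names are my own hypothetical labels:

```python
# Payoffs (player, partner) for one round of the trust game described
# above. Dollar amounts are from Krasnow et al (2012); the function and
# parameter names are hypothetical labels for illustration only.
def trust_game_payoffs(trusted, partner_cooperated=False, punished=False):
    if not trusted:
        return (1.20, 1.20)  # no trust: identical mid-range payoffs
    if partner_cooperated:
        return (1.50, 1.50)  # trust repaid: both do better than baseline
    if punished:
        return (0.60, 0.60)  # costly punishment: both end up worst off
    return (0.90, 1.80)      # defection: asymmetric payoff favoring the defector
```

Note the ordering: trusting a cooperator ($1.50) beats not trusting ($1.20), which beats being defected on ($0.90). That ordering is what makes the partner’s likely behavior worth predicting in the first place.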

Before the subjects played this trust game, they were presented with information about their partner’s third-party reputation. This information came in the form of answers to questions that their partner had ostensibly filled out earlier, which assessed that partner’s willingness to cheat given freedom from detection. Perhaps unsurprisingly, subjects were less willing to trust a partner who had indicated they would be likely to cheat, given a good opportunity. What this result tells us, then, is that people are perfectly capable of making use of third-party reputation information when they know nothing else about their partner. These results do not help us distinguish between group- and individual-level accounts, however, as both models predict that people should act this way; that’s where the second study came in.

“Methods: We took 20 steps, turned, and fired”

The second study added in the crucial variable: first-party reputation, or your partner’s past behavior towards you. This information was provided through the results of two prisoner’s dilemma games that were visible to the subject: one played between the subject and their partner, and the other played between that partner and a third party. This led to subjects encountering four kinds of partners: one who defected on both the subject and a third party, one who cooperated with both, and one who defected on one (either the subject or the third party) but cooperated with the other. Following this initial game, subjects again played a two-round trust game with their partners. This allowed the following question to be answered: when subjects have first-party reputation available, do they still make use of third-party reputation?
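A quick sketch of that design as I read it; the labels and the gloss on each model’s prediction are mine, not the authors’ formal statement:

```python
from itertools import product

# The 2x2 space of partner types in the second study: each partner
# either cooperated with or defected on (a) the subject themselves
# and (b) an unrelated third party. Labels are hypothetical.
partner_types = [
    {"toward_subject": a, "toward_third_party": b}
    for a, b in product(["cooperated", "defected"], repeat=2)
]

# Rough gloss on the opposing predictions being tested:
# - Social exchange model: once "toward_subject" (first-party reputation)
#   is known, trust should track it and largely ignore "toward_third_party".
# - Group selection model: both cues index the partner's value to the
#   group, so they should carry comparable weight.
```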

The answer could not have been a more resounding “no”. When deciding whether to trust their partner, third-party reputation did not predict the outcome at all, whereas first-party reputation did: unsurprisingly, subjects were less willing to trust a partner who had previously defected on them. Further, a third-party reputation for cheating did not make subjects any more likely to punish their partner, though first-party reputation didn’t have much predictive value there either. That said, the social exchange model does not predict that punishment should be enacted strictly on the grounds of being wronged; since punishment is costly, it should only be used when subjects hope to recoup its costs in subsequent exchanges. If subjects do not wish to renegotiate the terms of cooperation via punishment, they should simply opt to refrain from interacting with their partner altogether.

That precise pattern of results was borne out: when a subject was defected on and then punished the defector, that same subject was also likely to cooperate with their partner in subsequent rounds. In fact, they were just as likely to cooperate with that partner as they were in cases where the partner had not initially defected. It’s worth repeating that subjects did this while, apparently, ignoring how their partner had behaved towards anyone else. Subjects only seemed to punish in order to persuade their partner to treat them better; they did not punish because their partner had hurt anyone else. Finally, first-party reputation, unlike third-party reputation, had an effect on whether subjects were willing to cooperate with their partner on the first move of the trust game. People were more likely to cooperate with a partner who had cooperated with them, irrespective of how that partner behaved towards anyone else.

Let’s see you work that into your group selection theory.

To sum up, despite group selection models predicting that subjects should make use of first- and third-party information equally, or at least jointly, they did not. Subjects only appeared to be interested in information about how their partner behaved towards others to the extent that such information might predict how their partner would behave towards them. However, since information about how their partner had behaved towards them is a superior cue, subjects made use of that first-party information when it was available to the exclusion of third-party reputation.

Now, one could make the argument that you shouldn’t expect to see subjects making use of information about how their partners behaved towards other parties because there is no guarantee that those other parties were members of the subject’s group. After all, according to group selection theories, altruism should only be directed at members of one’s own group specifically, so maybe these results don’t do any damage to the group selectionist camp. I would be sympathetic to that argument, but there are two big problems to be dealt with before I extend that sympathy: first, it would require that group selectionists give up all the previously ambiguous evidence they have said is consistent with their theory, since almost all of that research does not explicitly deal with a subject’s in-group either; they don’t get to recognize evidence only in cases where it’s convenient for their theory and ignore it when it’s not. The second issue is the one I raised back in May: “the group” is a concept that tends to lack distinct boundaries. Without nailing down this concept more concretely, it would be difficult to build any kind of stable theory around it. Once that concept had been developed more completely, then it would need to be shown that subjects will act altruistically towards their group (and not others) irrespective of the personal payoff for doing so; demonstrating that people act altruistically with the hopes that they will be benefited down the road from doing so is not enough.

Will this study be the final word on group selection? Sadly, probably not. On the bright side, it’s at least a step in the right direction.

References: Krasnow, M.M., Cosmides, L., Pederson, E.J., & Tooby, J. (2012). What are punishment and reputation for? PLoS ONE, 7.

Altruism Is Not The Basis Of Morality

“Biologists call this behavior altruism, when we help someone else at some cost to ourselves. If you think about it, altruism is the basis of all morality. So the larger question is, Why are we moral?” – John Horgan [emphasis mine]

John Horgan, a man not known for a reputation as a beacon of understanding, recently penned the above thought that expresses what I feel to be an incorrect sentiment. Before getting to the criticism of that point, however, I would like to first commend John for his tone in this piece: it doesn’t appear as outwardly hostile towards the field of evolutionary psychology as several of his past pieces have been. Sure, there might be the odd crack about “hand-wavy speculation” and the reminder about how he doesn’t like my field, but progress is progress; baby steps, and all that.

Just try and keep those feet pointed in the right direction and you’ll do fine.

I would also like to add at the outset that Horgan states that:

“Deceit is obviously an adaptive trait, which can help us (and many other animals) advance our interest” [Emphasis mine]

I find myself interested as to why Horgan seems to feel that deceit is obviously adaptive, but violence (in at least some of its various forms) obviously isn’t. Both certainly can advance an organism’s interests, and both generally advance those interests at the expense of other organisms. Given that Horgan seems to offer nothing in the way of insight into how he arbitrates between adaptations and non-adaptations, I’ll have to chalk his process up to mere speculation. Why one seems obvious and the other does not might have something to do with Trivers accepting Horgan’s invitation to speak at his institution last December, but that might be venturing too far into that other kind of “hand-wavy speculation” Horgan says he dislikes so much. Anyway…

The claim that altruism is the basis of all morality might seem innocuous enough – the kind of thing ostensibly thoughtful people would nod their head at – but an actual examination of the two concepts will show that the sentiment only serves to muddy the waters of our understanding of morality. Perhaps that revelation could have been reached had John attempted to marshal more support for the claim beyond saying, “If you think about it” (which is totally not speculation…alright; I’ll stop), but I suppose we can’t hope for too much progress at once. So let’s begin, then, by considering the ever-quotable line from Adam Smith:

“It is not from the benevolence of the butcher, the brewer, or the baker, that we can expect our dinner, but from their regard to their own interest”

Smith is describing a scenario we’re all familiar with: when you want a good or service that someone else can provide, you generally have to make it worth their while to provide it. This trading of benefits-at-a-cost is known as reciprocal altruism. However, when I go to the mall and give Express money so they will give me a new shirt, this exchange is generally not perceived as two distinct altruistic acts (I endure the cost of losing money to benefit Express, and Express endures the cost of losing a shirt to benefit me) that just happen to occur in close temporal proximity, nor is it viewed as a particularly morally praiseworthy act. In fact, such exchanges are often viewed as two selfish acts, given that the ostensible altruism on the behavioral level is a means for achieving benefits, not an end in and of itself. One could also consider the example with regard to fishing: if you sit around all day waiting for a fish to altruistically jump into your boat so you can cook it for dinner, you’ll likely be waiting a long time; better to try and trick the fish by offering it a tasty morsel on the end of a hook. You suffer a cost (the loss of the bait and your time spent sitting in a boat) and deliver a benefit to the fish (it gets a meal), whereas the fish suffers a cost (it gets eaten soon after) that benefits you, but neither you nor the fish was attempting to benefit the other.

There’s a deeper significance to that point, though: reciprocally altruistic relationships tend to break down in the event that one party fails to return benefits to the other (i.e., when one party’s ledger over time actually comes to resemble altruism). Let’s say my friend helps me move, giving up the one day off a month he has in the process. This gives me a benefit and him a cost. At some point in the future, my friend is the one moving. In the event I fail to reciprocate his altruism, there are many who might well say I behaved immorally, most notably my friend himself. This does, however, raise the inevitable question: if my friend was expecting his altruistic act to come back and benefit him in the future (as evidenced by his frustration when it did not), wasn’t his initial act selfish on precisely the same level as my shopping and fishing examples above?

Pictured above: not altruism

What these examples serve to show is that, depending on how you conceptualize altruism, the same act can be viewed as selfish or altruistic, which throws a wrench into the suggestion that all morality is based on altruism. One needs to define one’s terms well for that statement to even mean anything worthwhile. As the examples also show, precisely how people behave towards each other (whether selfishly or altruistically) is often a topic of moral consideration, but just because altruism can be a topic of moral consideration, that does not mean it’s the basis of moral judgments. To demonstrate that altruism is not the basis of our moral judgments, we can also consider a paper by DeScioli et al (2011) examining the different responses people have to moral omissions and moral commissions.

In this study, subjects were paired up and played a reverse dictator game. In this game, person A starts with a dollar and person B has a choice between taking 10 cents of that dollar or 90 cents. However, if person B didn’t make a choice within 15 seconds, the entire dollar would automatically be transferred to them, with 15 cents subtracted for running out of time. So, person B could be altruistic and take only 10 cents (meaning the payoffs would be 90/10 for players A and B, respectively), be selfish and take 90 cents (a 10/90 payoff), or do nothing, making the payoffs 0/85 (something of a mix of selfishness and spite). Clearly, failing to act (an omission) produced the worst payoffs for all parties involved and was the least “altruistic” of the options. If moral judgments use altruism as their basis, one should expect that, when given the option, third parties would punish the omissions more harshly than either of the other two conditions (or, at the very least, punish the latter two conditions equally harshly). However, those who took the 90 cents were the ones who got punished the most: roughly 21 cents, compared to the 14 cents of punishment for those who failed to act. An altruism-based account of morality would appear to have a very difficult time making sense of that finding.
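For clarity, here’s the payoff structure and the punishment pattern laid out in a small sketch; the numbers are the ones reported above, while the option labels are mine:

```python
# Payoffs (person A, person B) in the reverse dictator game described
# above, in dollars. The option labels are hypothetical; the amounts
# come from DeScioli et al (2011) as summarized in the text.
PAYOFFS = {
    "take_10_cents": (0.90, 0.10),  # "altruistic" commission
    "take_90_cents": (0.10, 0.90),  # selfish commission
    "time_out":      (0.00, 0.85),  # omission: dollar transfers, minus 15 cents
}

# Approximate mean third-party punishment observed, in dollars. The
# selfish commission drew more punishment than the omission, even
# though the omission left everyone worse off.
MEAN_PUNISHMENT = {"take_90_cents": 0.21, "time_out": 0.14}
```

An altruism-first account predicts the punishment ordering should follow the harm ordering; the observed ordering instead tracks the commission/omission distinction.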

Further still, an altruism-based account of morality would fail to provide a compelling explanation for the often strong moral judgments people have in reaction to acts that don’t distinctly harm or benefit anyone, such as others having a homosexual orientation or deciding not to have children. More damningly still, an altruism basis for moral judgments would have a hell of a time trying to account for why people morally support courses of action that are distinctly selfish: people don’t tend to routinely condemn others for failing to give up their organs, even non-vital ones, to save the lives of strangers, and most people would condemn a doctor for making the decision to harvest organs from one patient against that patient’s will in order to save the lives of many more people.

“The patient in 10B needs a kidney and we’re already here…”

An altruism account would similarly fail to explain why people can’t seem to agree on many moral issues in the first place. Sure, it might be altruistic for a rich person to give up some money in order to feed a poor person, but it would also be altruistic for that poor person to forgo eating in order to not impose that cost on the rich one. Saying that morality is based in altruism doesn’t provide much information about how precisely that altruism will be enacted or how moral interactions will play out, nor does it seem to lend any useful or novel predictions more generally. Then again, maybe an altruism-based account of morality can obviously deal with these objections if I just think about it…

References: DeScioli, P., Christner, J., & Kurzban, R. (2011). The omission strategy. Psychological Science, 22, 442-446. DOI: 10.1177/0956797611400616