Conscience Does Not Explain Morality

“We may now state the minimum conception: Morality is, at the very least, the effort to guide one’s conduct by reason…while giving equal weight to the interests of each individual affected by one’s decision” (emphasis mine).

The above quote comes to us from Rachels & Rachels’ (2010) introductory chapter, entitled “What is morality?” It is readily apparent that their account of what morality is happens to be a conscience-centric one, focusing on self-regulatory behaviors (i.e. what you, personally, ought to do). These conscience-based accounts are exceedingly popular among academics and non-academics alike, perhaps owing to their intuitive appeal: it certainly feels like we avoid doing certain things because they feel morally wrong, so understanding morality through conscience seems like the natural starting point. With all due respect to the philosopher pair and the intuitions of people everywhere, however, they seem to have begun their analysis of morality on entirely the wrong foot.

So close to the record too…

Now, without a doubt, understanding conscience can help us more fully understand morality, and no account of morality would be complete without explaining conscience; it’s just not an ideal starting point for beginning our analysis (DeScioli & Kurzban, 2009; 2013). This is because moral conscience does not, in and of itself, explain our moral intuitions well. Specifically, it fails to highlight the difference between what we might consider ‘preferences’ and ‘moral rules’. To better understand this distinction, consider the following two statements: (1) “I have no interest in having homosexual intercourse”, and (2) “Homosexual intercourse is immoral”. These two statements are distinct utterances, aimed at expressing different thoughts. The first expresses a preference, and that preference would appear sufficient for guiding one’s behavior, all else being equal; the latter statement, however, appears to express a different sentiment altogether. That second sentiment appears to imply that others ought not to have homosexual intercourse, regardless of whether you (or they) want to engage in the act.

This is the key distinction, then: moral conscience (regulating one’s own behavior) does not appear to straightforwardly explain moral condemnation (regulating the behavior of others). Despite this, almost every expressed moral rule or law involves punishing others for how they behave – at least implicitly. While the specifics of what gets punished and how much punishment is warranted vary to some degree from individual to individual, the general form of moral rules does not. Were I to say I do not wish to have homosexual intercourse, I’m only expressing a preference, a bit like stating whether I would like my sandwich on white or wheat bread. Were I to say homosexuality is immoral, I’m expressing the idea that those who engage in the act ought to be condemned for doing so. By contrast, I would not be interested in punishing people for making the ‘wrong’ choice about bread, even if I think they could have made a better choice.

While we cannot necessarily learn much about moral condemnation via moral conscience, the reverse is not true: we can understand moral conscience quite well through moral condemnation. Provided that there are groups of people who will tend to punish you for doing something, this provides ample motivation to avoid engaging in that act, even if you otherwise highly desire to do so. Murder is a simple example here: there tend to be some benefits to removing specific conspecifics from one’s world. Whether because those others inflict costs on you or prevent the acquisition of benefits, there is little question that murder might occasionally be adaptive. If, however, the would-be target of your homicidal intentions happens to have friends and family members that would rather not see them dead, thank you very much, the potential costs those allies might inflict need to be taken into account. Provided those costs are appreciably great, and certain actions are punished with sufficient frequency over time, a system for representing those condemned behaviors and their potential costs – so as to avoid engaging in them – could easily evolve.

“Upon further consideration, maybe I was wrong about trying to kill your mom…”

That is likely what our moral conscience represents. To the extent that behaviors like stealing from or physically harming others tended to be condemned and punished, we should expect to have evolved a cognitive system to represent that fact. Now perhaps that all seems a bit perverse. After all, many of us simply experience the sensation that an act is morally wrong or not; we don’t necessarily think about our actions in terms of the likelihood and severity of punishment (we do think such things some of the time, but that’s typically not what appears to be responsible for our feeling of “that’s morally wrong”. People think things are morally wrong regardless of whether one is caught doing them). That all may be true enough, but remember, the point is to explain why we experience those feelings of moral wrongness; not to just note that we do experience them and that they seem to have some effect on our behavior. While our behavior might be proximately motivated by those feelings of moral wrongness, those feelings came to exist because they were useful in guiding our behavior in the face of punishment. That does raise a rather important question, though: why do we still feel certain acts are immoral even when the probability of detection or punishment is rather close to zero?

There are two ways of answering that question, neither of which is mutually exclusive with the other. The first is that the cognitive systems which compute things like the probability of being detected and estimate the likely punishment that will ensue are always working under conditions of uncertainty. Because of this uncertainty, it is inevitable that the system will, on occasion, make mistakes: sometimes one could get away without repercussions when behaving immorally, and one would be better off if they took those chances than if they did not. One needs to consider the reverse error as well, though: if you assess that you will not be caught or punished when you actually will, you would have been better off not behaving immorally. Provided the costs of punishment are sufficiently high (the loss of social allies, abandonment by sexual partners, the potential loss of your life, etc.), it might pay in some situations to still avoid behaving in morally unacceptable ways even when you’re almost positive you could get away with it (Delton et al., 2012). The point here is that it doesn’t just matter if you’re right or wrong about whether you’re likely to be punished: the costs of making each mistake need to be factored into the cognitive equation as well, and those costs are often asymmetric.
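
To make that asymmetry concrete, here is a minimal expected-value sketch in Python. All of the numbers (the benefit of the act, the cost of being punished, the detection probabilities) are invented purely for illustration; the only point is that when punishment costs dwarf the benefits, refraining wins even at very low detection probabilities.

```python
# Hypothetical payoffs, purely for illustration.
def expected_value_of_acting(p_caught, benefit, punishment_cost):
    """Expected payoff of committing the act, given some chance of being punished."""
    return (1 - p_caught) * benefit - p_caught * punishment_cost

benefit = 10            # modest gain from getting away with the act
punishment_cost = 500   # severe cost of being punished (lost allies, mates, life)

for p_caught in (0.01, 0.05, 0.10, 0.25):
    ev = expected_value_of_acting(p_caught, benefit, punishment_cost)
    print(f"P(caught) = {p_caught:.2f} -> expected value of acting = {ev:+.1f}")

# Refraining has a payoff of 0, so acting only pays when the expected value is
# positive. With these numbers that requires P(caught) below about 2% (10 / 510):
# given asymmetric costs, "almost certainly safe" is still not safe enough.
```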

The second way of approaching that question is to suggest that the conscience system is just one cognitive system among many, and these systems don’t always need to agree with one another. That is, a conscience system might still represent an act as morally unacceptable while other systems (those designed to get certain benefits and assess costs) might output an incompatible behavioral choice (e.g. cheating on a committed partner despite knowing that doing so is morally condemned, as the potential benefits are perceived as being greater than the costs). To the extent that these systems are independent, then, it is possible for each to hold opposing representations about what to do at the same time. Examples of this happening in other domains are not hard to find: the checkerboard illusion, for instance, allows us to hold both the representation that A and B are different colors and that A and B are the same color in our mind at once. We need not be of one mind about all such matters because our mind is not one thing.

“Well, shoot; I’ll get the glue gun…”

Now, to be sure, there are plenty of instances where people will behave in ways deemed to be immoral by others (or even by themselves, at different times) without feeling the slightest sensation of their conscience telling them “what you’re doing is wrong”. How the conscience develops, and which input conditions are likely to trigger it – or fail to do so – are interesting matters. In order to make better progress on researching them, however, it would benefit researchers to begin with an understanding of why moral conscience exists. Once the function of conscience – avoiding condemnation – has been determined, figuring out what questions to ask about conscience becomes an altogether easier task. We might expect, for instance, that moral conscience is less likely to be triggered when others (the target and their allies) are perceived to be incapable of effective retaliation. While such a prediction might appear eminently sensible when beginning with condemnation, it is not entirely clear how one could deliver such a prediction if they began their analysis with conscience instead.

References: Delton, A., Krasnow, M., Cosmides, L., & Tooby, J. (2012). Evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters.  Proceedings of the National Academy of Sciences, 108, 13335-13340.

DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281-299. PMID: 19505683

DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139(2), 477-496. PMID: 22747563

Rachels, J., & Rachels, S. (2010). The Elements of Moral Philosophy. New York, NY: McGraw Hill.

The Popularity Of Popularity

Whatever your personal views on the book, one would have a hard time denying the popularity of 50 Shades of Grey, which has sold more than 70 million copies worldwide. The Harry Potter series manages to dwarf that popularity, having sold over 450 million copies worldwide. While books like these have garnered incredible amounts of attention and fandom, hundreds of thousands of other books linger on in obscurity, while others never even see print. It’s nearly impossible to overstate the magnitude of that difference in popularity. Similar patterns hold across other domains of cultural products, like music and academia; some albums or papers are just vastly more notable than the rest. For anyone looking to create, then, whether that creation is a book, music, a scholarly paper, or a clothing brand, being aware of factors that separate out the bestsellers from the bargain bin can potentially make or break the endeavor.

“We’d like to thank our fan for coming out to see us tonight”

One of the problems that continuously vexes those who decide which cultural products to invest in is the matter of figuring out which investments are their best bets. For every Harry Potter, Snuggie, or Justin Bieber, there are millions of cultural products and creators who will likely end up consigned to the dustbin of history (if the seemingly-endless lines of people auditioning for shows like American Idol and (ahem) writing blogs are any indication). Despite the tremendous monetary incentive to find the next big thing, predicting who or what it will turn out to be is incredibly difficult. Some of these cultural products seem to take on a life all their own, despite initially being passed over: for instance, the Beatles were at one point told they had “no future in show business” before rocketing to international and intergenerational fame. Whoops. Others draw tremendous amounts of investment, only to come up radically short (the issues surrounding the video game Kingdoms of Amalur serve as a good example).

What could have inspired such colossal mistakes? The answer is probably not that the people doing the investing are stupid, wicked, or hate money (in general, at least; some of them may well be any of these things individually); a more likely reason can be summed up by what’s called the Matthew effect. Named after a passage from the Gospel of Matthew, the basic principle is that the rich get richer, while the poor get poorer (also summarized in the Alice Cooper song “Lost in America”). Framed in terms of cultural products, the principle is that as something – be it a book, song, or anything else – becomes popular, it will become more popular in turn because of that popularity. Obviously this becomes a problem for anyone trying to predict the success of a cultural product, because when a product starts out, that key variable can’t be readily assessed with much precision; it’s simply not one of the inherent characteristics of the product itself. Even if the product itself is solid in every measurable respect, that’s not a guarantee of success.
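
As a rough illustration of that feedback loop – and nothing more than that; this is not a model of any real market – a simple rich-get-richer simulation shows how products that are identical in every measurable respect can still end up with wildly unequal, and unpredictable, popularity once popularity itself attracts new buyers:

```python
import random

def simulate_market(n_products=10, n_buyers=10_000, seed=None):
    """Each buyer picks a product with probability proportional to its current
    popularity (downloads so far, plus one so unchosen products stay possible).
    Every product is identical in 'quality'; only the feedback loop operates."""
    rng = random.Random(seed)
    downloads = [0] * n_products
    for _ in range(n_buyers):
        weights = [d + 1 for d in downloads]              # popularity feeds back
        winner = rng.choices(range(n_products), weights=weights)[0]
        downloads[winner] += 1
    return sorted(downloads, reverse=True)

# Two runs of the very same market, differing only in who happened to buy first:
print(simulate_market(seed=1))
print(simulate_market(seed=2))
# Typical result: a small handful of (objectively identical) products hoovers up
# most of the downloads, and which products those are differs from run to run.
```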

While such a proximate explanation sounds appealing, solid experimental evidence of such an effect can be difficult to come by; reality is a large and messy place, so parceling out the effect of popularity can be tricky. Thankfully, some particularly clever researchers managed to leverage the power of the internet to study the effect. Salganik, Dodds, & Watts (2006) began their project by asking people on the internet to do one of the things they do best: download music (taking the coveted third-place spot of popular online activities, behind viewing pornography and complaining). The participants (all 14,341 of them) were invited to listen to any of 48 songs from new artists and, afterwards, rate and download them for free if they so desired (they couldn’t download until they listened). In the first condition, people were just presented with the songs in a random grid or list format and left to explore. In the second condition (the social world), everything was the same as the first, except the listeners were given one other key piece of information: how often each song had been downloaded by others.

And you want to like the same music as all those strangers, don’t you?

Further – and this is the rather clever part – there were eight of these different social worlds. This resulted in each world developing different early download counts; the songs that were most downloaded early on in one world were not necessarily the most popular in another, depending on who listened and downloaded first. Salganik et al (2006) were able to report a number of interesting conclusions from these different worlds: first, the “intrinsic” quality of the songs themselves – as assessed by their download count in the asocial world – did tend to predict that song’s popularity in the social worlds. Those songs which tended to do well without the social information also tended to do well with it. That said, the social information mattered quite a bit. As the authors put it:

the “best” songs never do very badly, and the “worst” songs never do extremely well, but almost any other result is possible.

When the social information was present, the ultimate popularity of a song became harder to predict; this was especially the case when the information was presented as an ordered list, from most popular to least (as bestseller lists often are), relative to the grid format. On top of the greater unpredictability generated by the social information, there was also an increase in the inequality of popularity. That is, the most popular songs were substantially more popular than the least popular in the social conditions, relative to the asocial ones. In short, the information about other people’s preferences generated more of a winner-takes-all type of environment, and who won became increasingly difficult to predict.
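
Inequality of this kind is commonly summarized with a Gini coefficient, which is, if I recall correctly, the measure the authors themselves relied on. Here is a small sketch of that computation over some purely hypothetical download counts, just to show how the number behaves in a fairly even world versus a winner-takes-all one:

```python
def gini(counts):
    """Gini coefficient of a list of non-negative counts:
    0 = perfectly equal shares, values near 1 = winner-takes-all."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula based on the rank-weighted sum of the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical download counts for the same eight songs in two conditions:
asocial_world = [30, 28, 25, 22, 20, 18, 15, 12]   # fairly even spread
social_world  = [120, 35, 12, 8, 6, 5, 3, 1]       # winner-takes-all pattern

print(f"Gini, asocial world: {gini(asocial_world):.2f}")   # ~0.16
print(f"Gini, social world:  {gini(social_world):.2f}")    # ~0.67
```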

This study was followed up by a 2008 paper by Salganik & Watts. In this next study, the design was largely the same, but the authors attempted to add a new twist: the possibility of a self-fulfilling prophecy. Might a “bad” song be made popular by manipulating other people’s beliefs about how many others had downloaded it? Another 12,207 subjects were recruited to again listen to, rate, and download songs. Once the social worlds were initially seeded with download information from the first 2,000 people, the researchers then went in and inverted the ordering: the most and least popular songs switched their download counts, then the second most and second least popular, and so on until the whole list was reversed. The remaining 10,000 subjects saw the new download counts, but everything else was left the same.

In the initial, unmodified worlds, each person listened to 7.5 of the 48 songs and downloaded 1.4 of them, on average. Unsurprisingly, people tended to download the songs they gave the highest ratings to. Somewhat more surprisingly, the fake download counts also pushed the previously-unpopular songs towards a much higher degree of popularity: when the ranks weren’t inverted, there was a good correlation between the pre- and post-manipulation download counts (r = 0.84); when the ranks were inverted, that correlation dropped off to near non-existence (r = 0.16). The mere illusion of popularity was not absolute, though: over time, the initially “better” songs tended to regain some of their footing after inversion. Further, the songs in the inverted world tended to be listened to (average 6.3) and downloaded (1.1) less overall, suggesting that people weren’t as thrilled about the “lower-quality” songs they were now being exposed to in greater numbers.

“If none of us applaud, maybe we won’t encourage them to keep playing…”

Towards the end of their paper, the authors also dip into some strategic thinking, likening this effect to a tragedy-of-the-commons-style signaling problem: each creator has an interest in sending inflated signals about the popularity of their product, so as to garner more popularity, but as the number of these signals increases, their value decreases, and art as a whole suffers (a topic I touched on lately). I think such a speculation is a well-founded beginning, and I would like to see that line of reasoning taken further. For instance, there are some bands who might settle for a more niche-level popularity at the expense of broader appeal (i.e. the big fish in the small pond, talking about how the bigger fish in the bigger pond are all selling out, churning out mass-produced products for crowds of mindless sheep; or just imagine someone going on endlessly about how bad Twilight is at every opportunity they get). Figuring out why people might embrace or reject certain kinds of popularity could open the door for new avenues of profitable research.

References: Salganik, M., Dodds, P., & Watts, D. (2006). Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market. Science, 311, 854-856 DOI: 10.1126/science.1121066

Salganik, M., & Watts, D. (2008). Leading the Herd Astray: An Experimental Study of Self-fulfilling Prophecies in an Artificial Cultural Market. Social Psychology Quarterly, 71, 338-355. DOI: 10.1177/019027250807100404

More About Memes

Some time ago, I briefly touched on why I felt the concept of a meme didn’t help us understand some apparent human (and nonhuman) predispositions for violence. I don’t think my concerns about the idea that memes are analogs to genes – both being replicators that undergo a selective process, resulting in what one might call evolution by natural selection – were done full justice there. Specifically, I only scratched the surface of one issue, without explicitly getting down to the deeper, theoretical concerns with the ‘memes-as-replicators’ idea. As far as I can see at the moment, memetics proves to be too underspecified in many key regards to profitably help us understand human cognition and behavior. By extension, the concept of cultural group selection faces many of the same challenges. None of what I’m about to say discredits the notion that people can often end up with similar ideas in their heads: I didn’t think up the concepts of evolution by natural selection, genes, or memes, yet here I am discussing them (with people who will presumably understand them to some degree as well). The point is that those ideas probably didn’t end up in our heads because the ideas themselves were good replicators.

Good luck drawing the meme version of the tree of life.

The first of these conceptual issues concerns the problem of discreteness. This is the basic question: what, exactly, are the particulate units of inheritance being replicated? Let’s use the example provided by Wikipedia:

A meme has no given size. Susan Blackmore writes that melodies from Beethoven’s symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven’s Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.

So, are those first four notes to be regarded as an independent meme or part of a larger meme? The answer, unhelpfully, seems to be, “yes”. To see why this answer is unhelpful, consider a biological context: organisms are collections of traits, traits are collections of proteins, proteins are coded for by genes, and genes come in alternative variants (alleles). By contrast, this post (a meme) is made up of paragraphs (memes), which are made up of sentences (memes), which are made up of words (memes), which are made up of letters (memes), all of which are intended to express abstract ideas (also memes). In the biological sense, then, the units of heredity (alleles/genes) can be conceived of and spoken about in a distinct manner from their products (proteins, traits, and organisms). The memetic sense blurs this distinction; the hypothetical units of heredity (memes) are the same as their products (memes), and can be broken down into effectively-limitless combinations (words, letters, notes, songs, speeches, cultures, etc.). If the definition of a meme can be applied to accommodate almost anything, it adds nothing to our understanding of ideas.

This definitional obscurity has other conceptual downsides as well that begin to tip the idea that ‘memes replicate‘ into the realm of unfalsifiability. Let’s return to the biological domain: here, two organisms can have identical sets of genes, yet display different phenotypes, as their genetic relatedness is a separate concept from their phenotypic relatedness. The reverse can also hold: two organisms can have phenotypically similar traits – like wings – despite not inheriting that trait from a genetic common ancestor (think bats and pigeons). What these examples tell us is that phenotypic resemblance – or lack thereof – is not necessarily a good cue for determining biological relatedness. In the case of memes, there is no such conceptual dividing line using parallel concepts: the phenotype of a meme is its genotype. This makes it very difficult to do things like measure relatedness between memes or determine if they have a common ancestor. To make this example more concrete, imagine you have come up with a great idea (or a truly terrible one; the example works regardless of quality). When you share this idea with your friend, your friend appears stunned, for just the other day they had precisely the same idea.

Assuming both of you have identical copies of this idea in your respective heads, does it make sense to call one idea a replication of the other? It would seem not. Though they might resemble one another in every regard, one is not the offspring of another. To shift the example back to biology, were a scientist to create a perfect clone of you, that clone would not be a copy of you by descent; you would not share any common ancestors, despite your similarities. The conceptualization of memes appears to blur this distinction, as there is currently no way of separating out descent from a common ancestor from separate creation events in regards to ideas. Without this distinction, the potential application of natural selection to memes is weakened substantially. One could make the argument that memes, like adaptations, are too improbably organized to arise spontaneously, which would imply they represent replications with mutations/modifications, rather than independent creation events. That argument would be deficient on at least two counts.

One case in which there is a real controversy.

The first problem with that potential counterargument is that there are two competing accounts for special design: evolution and creationism. In the case of biology, that debate is (or at least ought to be) largely over. In the case of memes, however, the creationism side has a lot going for it; not in the supernatural sense, mind you, but rather in the information-processing sense. Our minds are not passive receptors for sensory information, attempting to bring perceptions from ‘out there’ inside; they actively process incoming information, structuring it in predictable ways to create our subjective experience of the world (Michael Mills has an excellent post on that point). Brains are designed to organize and represent incoming information in particular ways and, importantly, this organization is often not recoverable from the information itself. There is nothing about certain wavelengths of light that would lead to their automatic perception as “green” or “red”, and nothing intrinsic about speech that makes it grammatical. This would imply that at least some memes (like grammatical rules) need to be created in a more or less de novo fashion; others need to be given meaning not found in the information itself: while a parrot can be taught to repeat certain phrases, it is unlikely that the phrases trigger the same set of representations inside the parrot’s head as they do in ours.

The second response to the potential rebuttal concerns the design features of memes more generally, and again returns us to their definitional obscurity. Biological replicators which create more copies of themselves become more numerous, relative to replicators that do a worse job; that much is a tautology. The question of interest is how they manage to do so. There are scores of adaptive problems that need to be successfully solved for biological organisms to reproduce. When we look for evidence of special design, we are looking for evidence of adaptations designed to solve those kinds of problems. To do so requires (a) the identification of an adaptive problem, (b) a trait that solves the problem, and (c) an account of how it does so. As the basic structure of memes has not been formally laid out, it becomes impossible to pick out evidence of memetic design features that came to be because they solved particular adaptive problems. I’m not even sure whether proper adaptive problems faced by memes specifically, and not adaptive problems faced by their host organism, have been articulated.

One final fanciful example that highlights both these points is the human ability to (occasionally) comprehend scrambled words with ease:

I cdn’uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae.

In the above passage, what is causing some particular meme (the word ‘taht’) to be transformed into a different meme (the word ‘that’)? Is there some design feature of the word “that” which is particularly good at modifying other memes to make copies of itself? Probably not, since no one read “cluod” in the above passage as “that”. Perhaps the meme ‘taht’ is actually composed of 4 different memes, ‘t’, ‘a’, ‘h’, and ‘t’, which have some affinity for each other. Then again, probably not, since I doubt non-English speakers would spontaneously turn the four into the word ‘that’. The larger points here are that (a) our minds are not passive recipients of information, but rather actively represent and create it, and (b) if one cannot speak meaningfully about different features of memes (like design features, or heritable units) beyond, “I know it when I see it”, the enterprise of discussing memes seems to more closely resemble a post hoc fitting of any observed set of data to the theory, rather than the theory driving predictions about unknown data.

“Oh, it’ll fit alright…”

All of this isn’t to say that memetics will forever be useless in furthering our understanding of how ideas are shaped and spread; but in order to be useful, a number of key concepts would, at a minimum, need to be deeply clarified. A similar analysis applies to other types of explanations, such as cultural ones: it’s beyond doubt that local conditions – like cultures and ideas – can shape behavior. The key issue, however, is not noting that these things can have effects, but rather developing theories that deliver testable predictions about the ways in which those effects are realized. Adaptationism and evolution by natural selection fit the bill well, and in that respect I would praise memetics: it recognizes the potential power of such an approach. The problem, however, lies in the execution. If these biological theories are used loosely to the point of metaphor, their conceptual power to guide research wanes substantially.

Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often penetrates into lay discussions of psychology, and has been the subject of many popular books, such as the Predictably Irrational series. Part of the reason that people might give these conflicting views of human intelligence – either in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb which focus only on limited sets of information when making decisions. A simple, perhaps hypothetical example of a heuristic might be something like a “beauty heuristic”. This heuristic might go something along the lines of: when deciding who to get into a relationship with, pick the most physically attractive available option; other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one’s goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. The benefits of collecting additional bits of information are outweighed by the costs of doing so past a certain point, and there are many, many potential sources of information to choose from. Even to the extent that additional information would help one make a better choice, gathering all of it – and thus making the best objective choice – is often a practical impossibility. In this view, heuristics trade off accuracy with effort, leading to ‘good-enough’ decisions. A related, but somewhat more nuanced benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it’s drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you select 20 random men and women from your block to measure, your estimate of that difference will likely not be precisely 5 inches; it might be lower or higher, and the degree of that error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you’re only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) than when you’re sampling 20, or 50, or a million. Importantly, the issue of sampling error crops up for each source of information you’re using. So unless you’re sampling large enough quantities of information capable of balancing that error out across all the information sources you’re using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by the heuristics might well be less predictively-troublesome than the degree of error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
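
For those who want to see that logic in action, here is a small simulation along the lines of the argument (the cue weights, sample sizes, and noise level are all made up for illustration): a “use only the single best cue” heuristic is pitted against an ordinary least-squares model that uses all five cues, with both judged on fresh data. With tiny training samples, the full model tends to fit noise and lose; with large samples, it wins.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise=1.0):
    """Five cues: one strongly predictive, four weakly predictive, plus noise."""
    true_weights = np.array([1.0, 0.2, 0.2, 0.2, 0.2])
    X = rng.normal(size=(n, 5))
    y = X @ true_weights + rng.normal(scale=noise, size=n)
    return X, y

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

def compare(train_n, test_n=1000, trials=200):
    """Average out-of-sample error of the single-cue heuristic vs. all-cue least squares."""
    heuristic_err, full_err = [], []
    for _ in range(trials):
        Xtr, ytr = make_data(train_n)
        Xte, yte = make_data(test_n)
        # "Optimization": least-squares weights estimated over all five cues.
        w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
        full_err.append(mse(Xte @ w, yte))
        # Heuristic: keep only the cue that best predicted the training data.
        best = int(np.argmax([abs(np.corrcoef(Xtr[:, j], ytr)[0, 1]) for j in range(5)]))
        coeffs = np.polyfit(Xtr[:, best], ytr, 1)
        heuristic_err.append(mse(np.polyval(coeffs, Xte[:, best]), yte))
    return np.mean(heuristic_err), np.mean(full_err)

for n in (5, 10, 100):
    h, f = compare(n)
    print(f"training n = {n:>3}: single-cue error = {h:.2f}, all-cue error = {f:.2f}")
```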

While that seems to be all well and good, the astute reader will have noticed the boundary conditions required for heuristics to be of value: one needs to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you play an optimization strategy and have sufficient amounts of information about each source, you’ll make the best possible prediction. In the face of limited information, a heuristic strategy can do better provided you know you don’t have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically attend to at random, though, you’d end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn’t have sufficient amounts of information when you actually did, you’ve also made a worse prediction than the optimizer 100% of the time.

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast and frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you’re trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of the heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of being an organ donor between countries: while only 4.3% of Danish people are donors, nearly everyone in Sweden is (approximately 85%). Since the explicit attitudes about the willingness to be a donor don’t seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an ‘opt in’ policy to be a donor, whereas the Swedes have an ‘opt out’ one. The default option appears to be responsible for driving most of the variance in rates of organ donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) rather than an explanation of it (why does it matter, and under what circumstances might we expect it to not?). Though I have no data on this, I imagine if you brought subjects into the lab and presented them with an option to give the experimenter $5 or have the experimenter give them $5, but highlighted the first option as default, you would probably find that nearly everyone ignored the default. Why, then, might the default heuristic be so persuasive at getting people to be or fail to be organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer’s hypothesized function for the default heuristic – group coordination – doesn’t help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven’t explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other “one-word explanations” (like ‘culture’, ‘norms’, ‘learning’, ‘the situation’, or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (what Gigerenzer suggested formed the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do) without telling us anything more. Such descriptions, it seems, could even drop the word ‘heuristic’ altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it’s unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each of which dabbles in a different source of information that the others ignore, seem to resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove to be a fast and frugal way of stating results, it ends up being a poor method of explaining them or yielding much in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2, 528-554. DOI: 10.1111/j.1756-8765.2010.01094.x

Do People Try To Dishonestly Signal Fairness?

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.” – Louis CK

This quote appeared in a post of mine around the middle of last month, in which I wanted to draw attention to the fact that a great deal of caution is warranted in inferring preferences for fairness per se from the end-states of economic games. Just because people behaved in ways that resulted in inequality being reduced, it does not necessarily follow that people were consciously acting in those ways to reduce inequality, or that humans have cognitive adaptations designed to do so – that is, to achieve fairness. In fact, I have the opposite take on the matter: since achieving equality per se doesn’t necessarily do anything useful, we should not expect to find cognitive mechanisms designed for achieving that end state. In this view, concerns for fairness are byproducts of cognitive systems designed to do other useful things. Fairness, after all, would be – and indeed can only be – an additional restriction to tack onto the range of possible, consequentially-useful outcomes. As the Louis CK quote makes clear, concerns for fairness might involve doing things which are actively detrimental, like destroying someone else’s property to maintain some kind of equal distribution of resources. As his quote also makes clear, people are, in fact, sometimes willing to do just that.

Which brings us nicely to the topic of fairness and children.

There has been some research on children where an apparent preference for fairness (the Louis CK kind) has been observed. In the first study in a paper by Shaw et al (2013), children, ages 6 to 8, were asked a series of questions so as to be rewarded with colorful erasers (a valued resource for children). The experimenter also told the child that another, non-present child had finished a similar task and, as such, had also earned some erasers. Initially, the experimenter divided four erasers equally between the two, then left the room to retrieve a final eraser they had ostensibly forgotten. The experimenter returned to the room, and asked the child what they should do with the fifth eraser: should the child themselves get it, should the non-present child get it, or should it be thrown away? A remarkable 80% of children suggested that the eraser should be thrown away, rather than taking it for themselves or giving it to the other child. The first thing worth noting here is that children appeared willing to achieve equality through welfare destruction; equality made no one better off here, and at least one person worse off. This is what I meant when I said that achieving equality only limits the possible range of behaviors. The more interesting finding, though, is what happened when children had the option of behaving unfairly in a non-transparent way.

The two other conditions in this first study tracked the possibility that children only wanted to appear fair, without actually being fair. In these conditions, the erasers were placed inside envelopes, so as to be concealed from view. In the first of these two conditions, the child was given one eraser while the other, non-present child was given two. When the experimenter left the room to retrieve the last eraser, a confederate came in and placed an additional eraser inside the child’s envelope and told the child to keep it secret. Then, the experimenter returned with the final eraser and asked the child what they should do with it. In this condition, only 25% of children said the eraser should be thrown away, with the rest opting instead to keep it for themselves; an unfair distribution. The second version of this condition was the same, except it was the non-present child who got the one eraser initially, with the confederate adding the same secret eraser to the non-present child’s envelope. In that condition, 60% of children suggested the experimenter should throw away the last eraser, with the remaining 40% keeping it for themselves (making them appear indifferent between a fair distribution and a selfish, unfair one).

So, just to recap, children will publicly attempt to achieve a fair outcome, even though doing so results in worse consequentialist outcomes (there is no material benefit to either child in throwing away an otherwise-valued eraser). However, privately, children are perfectly content to behave unfairly. The proffered explanation for these findings is that children wanted to send a signal of fairness to others publicly, but actually preferred to behave unfairly, and when they had some way of obscuring that they were doing so, they would make use of it. Indeed, findings along these same lines have been demonstrated across a variety of studies in adults as well – appear publicly fair and privately selfish – so the patterns of behavior appear sound. While I think there is certainly something to the signaling model proposed by Shaw et al (2013), I also think the signaling explanation requires some semantic and conceptual tweaking in order to make it work since, as it stands, it doesn’t make good sense. These alterations focus on two main areas: the nature of communication itself and the necessary conditions for signals to evolve, and how to precisely conceptualize what signal is – or rather isn’t – being sent, as well as why we ought to expect that state of affairs. Let’s begin by talking about honesty.

“Liar, Liar, I’m bad at poetry and you have third degree burns now.”

The first issue with the signaling explanation involves a basic conceptual point about communication more generally: in order for a receiver to care about a signal from a sender in the first place, the signal needs to (generally) be honest. If I publicly proclaim that I’m a millionaire when I’m actually not, it would behoove listeners to discount what it is I have to say. A dishonest signal is of no value to the receiver. The same logic holds throughout the animal kingdom, which is why ornaments that signal an animal’s state – like the classic peacock tail – are generally very costly to grow, maintain, and survive with. These handicaps ensure the signal’s honesty and make it worth the peahen’s while to respond to. If, on the other hand, the peacocks could display a train without actually being in better condition, the signal value of the trait is lost, and we should expect peahens to eventually evolve in the direction of no longer caring about the signal. The fairness signaling explanation, then, seems to be phrased rather awkwardly: in essence, it would appear to say that, “though people are not actually fair, they try to signal that they are because other people will believe them”. This requires positing that the signal itself is a dishonest one and that receivers care about it. That’s a conceptual problem.

The second issue is that, even if one was actually fair in terms of resource distribution both publicly and privately, it seems unclear to me that one would benefit in any way by signaling that fact about themselves. Understanding why should be fairly easy: partial friends – ones who are distinctly and deeply interested in promoting your welfare specifically – are more desirable allies than impartial ones. Someone who would treat all people equally, regardless of preexisting social ties, appears to offer no distinct benefits as an association partner. Imagine, for instance, how desirable a romantic partner would be who is just as interested in taking you out for dinner as they are in taking anyone else out. If they don’t treat you special in any way, investing your time in them would be a waste. Similarly, a best friend who is indifferent between spending time with you or someone they just met works as well for the purposes of this example. Signaling you’re truly fair, then, is signaling that you’re not a good social investment. Further, as this experiment demonstrated, achieving fairness can often mean worse outcomes for many. Since the requirement of fairness is a restriction on the range of possible behaviors one can engage in, fairness per se cannot lead to better utilitarian outcomes. Not only would signaling true fairness make you seem like a poor friend, it would also tell others that you’re the type of person who will tend to make worse decisions, overall. This doesn’t paint a pretty picture of fair individuals.

So what are we to make of the children’s curious behavior of throwing out an eraser? My intuition is that children weren’t trying to send a signal of fairness so much as they were trying to avoid sending a signal of partiality. This is a valuable distinction to make, as it makes the signaling explanation immediately more plausible: now, instead of a dishonest signal that needs to be believed, we’re left with the lack of a distinct signal that need not be considered either honest or dishonest. The signal is what’s important, but the children’s goal is to avoid letting signals leak, rather than actively sending them. This raises the somewhat-obvious question of why we might expect people to sometimes forgo personal benefits to themselves or others so as to avoid sending a signal of partiality. This is an especially important consideration, as not throwing away a resource can (potentially) be beneficial no matter where it ends up: either directly beneficial in terms of gaining a resource for yourself, or beneficial in terms of initiating or maintaining alliances if generously given to others. Though I don’t have a more definite response to that concern, I do have some tentative suggestions.

Most of which sadly request that I, “Go eat a…”, well, you know.

An alternative possibility is that people might wish to, at times, avoid giving other people information pertaining to the extent of their existing partial relationships. If you know, for instance, that I am already deeply invested in friendships with other people, that might make me look like a bad potential investment, as I have, proportionately, fewer available resources to invest in others than if I didn’t have those friendships; I would also have less of a need for additional friends (as I discussed previously). Further, social relationships can come with certain costs or obligations, and there are times where initiating a new relationship with someone is not in your best interests: even if that person might treat you well, associating with them might carry costs from others your new partner has mistreated in the past. Though these possibilities might not necessarily explain why the children behave the way they do with respect to erasers, they at least give us a better theoretical grounding from which to start considering the question. What I feel we can be confident about is that the strategy the children are deploying resembles that of poker players trying to avoid letting other people see what cards they’re holding, rather than trying to lie to other people about what those cards are. There is some information that it’s not always wise to send out into the world, and some information it’s not worth listening to. Since communication is a two-way street, it’s important not to think about either side in isolation in these matters.

References: Shaw, A., Montinari, N., Piovesan, M., Olson, K. R., Gino, F., & Norton, M. I. (2013). Children develop a veil of fairness. Journal of Experimental Psychology: General. PMID: 23317084

Why Would Bad Information Lead To Better Results?

There are some truly strange arguments made in the psychological literature from time to time. Some might even be so bold as to call that frequency “often”, while others might dismiss the field of psychology as a variety of pseudoscience and call it a day. Now, were I to venture some guesses as to why strange arguments seem so popular, I’d have two main possibilities in mind: first, there’s the lack of a well-grounded theoretical framework that most psychologists tend to suffer from, and, second, there’s a certain pressure put on psychologists to find and publish surprising results (surprising in that they document something counter-intuitive or some human failing. I blame this one for the lion’s share of these strange arguments). These two factors might come together to result in rather nonsensical arguments being put forth fairly regularly and their not being spotted for what they are. One of these strange arguments that has come across my field of vision fairly frequently in the past few weeks is the following: that our minds are designed to actively create false information, and because of that false information we are supposed to be able to make better choices. Though it comes in various guises across different domains, the underlying logic is always the same: false beliefs are good. On the face of it, such an argument seems silly. In all fairness, however, it only seems that way because, well, it is that way.

If only all such papers came with gaudy warning hats…

Given the strangeness of these arguments, it’s refreshing to come across papers critical of them that don’t pull any rhetorical punches. For that reason, I was immediately drawn towards a recent paper entitled, “How ‘paternalistic’ is spatial perception? Why wearing a heavy backpack doesn’t – and couldn’t – make hills steeper” (Firestone, 2013; emphasis his). The general idea that the paper argues against is the apparently-popular suggestion that our perception essentially tells us – the conscious part of us, anyway – many little lies to get us to do or not do certain things. As the namesake of the paper implies, one argument goes that wearing a heavy backpack will make hills actually look steeper. Not just feel harder to climb, mind you, but actually look visually steeper. The reason some researchers posited this might be the case is that they realized, correctly, that wearing a heavy backpack makes hills harder to climb. In order to dissuade us from climbing them under such conditions, then, our perceptual system is thought to make the hill look harder to climb than it actually is, so we don’t try. Additionally, such biases are said to make decisions easier by reducing the cognitive processing required to make them.

Suggestions like these do violence to our intuitive experience of the world. Were you looking down the street unencumbered, for instance, your perception of the street would not visibly lengthen before your eyes were you to put on a heavy backpack, despite the distance now being harder to travel. Sure; you might be less inclined to take that walk down the street with the heavy backpack on, but that’s a different matter as to whether you would see the world any differently. Those who favor the embodied model might (and did) counter that it’s not that the distances themselves change, but rather the units on the ruler used to measure one’s position relative to them that does (Proffitt, 2013). In other words, since our measuring tool looks different, the distances look different. I find such an argument wanting, as it appears to be akin to suggesting that we should come to a different measurement of a 12-foot room contingent on whether we’re using foot-long or yard-long measuring sticks, but perhaps I’m missing some crucial detail.

In any case, there are many other problems with the embodied account that Firestone (2013) goes through, such as the magnitude of the effect sizes – which can be quite small – being insufficient to accurately adjust behavior, there being little to no objective way of scaling one’s relative abilities to certain kinds of estimates, and, perhaps most damningly, that many of these effects fail to replicate or can be eliminated by altering the demand characteristics of the experiments in which they’re found. Apparently subjects in these experiments seemed to make some connection – often explicitly – between the fact that they had just been asked to put on a heavy backpack and the estimate of the steepness of a hill they were then asked to make. They’re inferring what the experimenter wants and then adjusting their estimates accordingly.

While Firestone (2013) makes many good points in suggesting why the paternalistic (or embodied) account probably isn’t right, there are some I would like to add to the list. The first of these additions is that, in many cases, the embodied account seems to be useless for discriminating between even directly-comparable actions. Consider the following example in which such biases might come into play: you have a heavy load to transport from point A to point B, and you want to figure out the easiest way of doing so. One route takes you over a steep hill; another route takes you the longer distance around the hill. How should we expect perceptual estimates to be biased in order to help you solve the task? On the one hand, they might bias you to avoid the hill, as the hill now looks steeper; on the other hand, they might bias you to avoid the more circuitous route, as distances now look longer. It would seem the perceptual bias resulting from the added weight wouldn’t help you make a seemingly simple decision. At best, such biases might make you decide to not bother carrying the load in the first place, but the moment you put it down, the perceptions of these distances ought to shrink, making the task seem more manageable. All such a biasing system would seem to do in cases like this, then, is add extra cognitive processing into the mix in the form of whatever mechanisms are required to bias your initial perceptions.

“It’s symbolic; things don’t always have to “do” things. Now help me plug it into the wall”

The next addition I’d like to make also concerns the embodied account not being useful: the account, at least at times, would seem to get causality backwards. Recall that the hypothesized function of these ostensible perceptual distortions is to guide actions. Provided I’m understanding the argument correctly, then, these perceptual distortions ought to occur before one decides what to do, not after the decision has already been made. The problem is that they don’t seem to be able to work in that fashion, and here’s why: these biasing systems would be unable to know in which direction to bias perceptions prior to a decision being made. If, for instance, some part of your mind is trying to bias your perception of the steepness of a hill so as to dissuade you from climbing it, that would seem to imply that some part of your mind has already made the decision about whether to attempt the climb. If the decision hadn’t been made, the direction and extent of the bias would remain undetermined. Essentially, these biasing rules are being posited to turn your perceptual systems into superfluous yes-men.

On that point, it’s worth noting that we are talking about biasing existing perceptions. The proposition on the table seems to be the following chain of events: first, we perceive the world as it is (or at least as close to that state as possible; what I’ll call the true belief). This leaves most of the cognitive work already done, as I mentioned above. Then, from those perceptions, an action is chosen based on some expected cost/benefit analysis (i.e. don’t climb the hill because it will be too hard). Following this, our mind takes the true belief it already used to make the decision and turns it into a false one. This false belief then biases our behavior so as to get us to do what we were going to do anyway. Since the decision can be made on the basis of the initially-calculated true information, the false belief seems to have no apparent benefit for the immediate decision. The real effects of these false beliefs, then, ought to show up in subsequent decisions. This raises yet another troubling possibility for the model: in the event that some perception – like steepness – is used to generate estimates of multiple variables (such as energy expenditure, risk, and so on), a biased perception will similarly bias all of those estimates.

A quick example should highlight some of the potential problems with this. Let’s say you’re a camper returning home with a heavy load of gear on your back. Because you’re carrying a heavy load, you mistakenly perceive that your camping group is farther away than they actually are. Suddenly, you notice a rather hungry-looking predator approaching you. What do you do? You could try and run back to the safety of your group, or you could try and fight it off (forgoing other behavioral options for the moment). Unfortunately, because you mistakenly believe that your group is farther away than they are, you miscalculate the probability of making it to them before the predator catches up with you and opt to fight it off instead. Since the basis for that decision is false information, the odds of it being the best choice are diminished. This analysis works in the opposite direction as well. There are two types of errors you might make: thinking you can make the distance when you can’t, or thinking you can’t make it when you can. Both of these are errors to be avoided, and avoiding errors is awfully hard when you’re working with bad information.

Especially when you just disregarded the better information you had
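To put some entirely made-up numbers on the camper example, here is a small sketch of how acting on an inflated distance estimate can flip a flee-or-fight decision even when fleeing would actually have worked (the speeds, distances, and bias below are all hypothetical):

```python
# Illustrative values only: one decision rule, evaluated once with the true distance
# and once with the same distance inflated by the posited load bias.

def can_reach_group(distance_to_group, my_speed, predator_gap, predator_speed):
    """True if I reach the group before the predator covers its head start plus my distance."""
    my_time = distance_to_group / my_speed
    predator_time = (predator_gap + distance_to_group) / predator_speed
    return my_time < predator_time

true_distance = 80.0       # meters to the group, as things actually stand
biased_distance = 120.0    # the same distance, inflated by the posited bias

for label, distance in (("true belief", true_distance), ("biased belief", biased_distance)):
    flee = can_reach_group(distance, my_speed=4.0, predator_gap=60.0, predator_speed=6.5)
    print(f"{label}: distance looks like {distance:.0f} m -> {'flee to the group' if flee else 'stand and fight'}")
```

With the true belief, running works; with the biased one, the camper stays to fight a predator it could have escaped.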

It seems hard to find the silver lining in these false-belief models. They don’t seem to save any cognitive load, as they require the initially true beliefs to already be present in the mind somewhere. They don’t seem to help us make a decision either. At best, false beliefs lead us to do the same thing we would do in the presence of true beliefs anyway; at worst, false beliefs lead us to make worse decisions than we otherwise would. These models appear to require that our minds take the best available state of information and then expend additional effort distorting it. Despite these (perhaps not-so) clear shortcomings, false-belief models appear to be remarkably popular, and are used to explain topics from religious beliefs to ostensible misperceptions of sexual interest. Given that people generally seem to understand that it’s beneficial to see through the lies of others and not be manipulated with false information, it seems peculiar that they have a harder time recognizing that it’s similarly beneficial to avoid lying to ourselves.

References: Firestone, C. (2013). How “Paternalistic” Is Spatial Perception? Why Wearing a Heavy Backpack Doesn’t – and Couldn’t – Make Hills Look Steeper. Perspectives on Psychological Science, 8, 455-473.

Proffitt, D. (2013). An Embodied Approach to Perception: By What Units Are Visual Perceptions Scaled? Perspectives on Psychological Science, 8, 474-483.

PZ Myers; Again…

Since my summer vacation is winding itself to a close, it’s time to relax with a fun, argumentative post that doesn’t deal directly with research. PZ Myers, an outspoken critic of evolutionary psychology – or at least an imaginary version of the field, which may bear little or no resemblance to the real thing – is back at it again. After Jerry Coyne and Steven Pinker recently defended the field against PZ’s rather confused comments, PZ has now responded to Pinker. Now, presumably, PZ feels like he did a pretty good job here. This is somewhat unfortunate, as PZ’s response basically plays by every rule in the pop anti-evolutionary-psychology game: he asserts, incorrectly, what evolutionary psychology holds as a discipline, fails to mention any examples of this going on in print (though he does reference blogs, so there’s that…), and then expresses wholehearted agreement with many of the actual theoretical commitments put forth by the field. So I wanted to take this time to briefly respond to PZ’s recent response and defend my field. This should be relatively easy, since it takes PZ a full two sentences into his response proper to say something incorrect.

Gotta admire the man’s restraint…

Kicking off his reply, PZ has this to say about why he dislikes the methods of evolutionary psychology:

PZ: “That’s my primary objection, the habit of evolutionary psychologists of taking every property of human behavior, assuming that it is the result of selection, building scenarios for their evolution, and then testing them poorly.”

Familiar as I am with the theoretical commitments of the field, I find it strange that I overlooked the part that demands evolutionary psychologists assume every property of human behavior is the result of selection. It might have been buried amidst all those comments about things like “byproducts”, “genetic drift”, “maladaptiveness”, and “randomness” by the very people who, more or less, founded the field. Strangely, most every paper using the framework that I’ve come across in the primary literature seems to write things like “…the current data is consistent with the idea that [trait X] might have evolved to [solve problem Y], but more research is needed”, or might posit that “…if [trait X] evolved to [solve problem Y], we ought to expect [design feature Z]”. There is, however, a grain of truth to what PZ writes, and it is this: hypotheses about adaptive function tend to make better predictions than non-adaptive ones. I highlighted this point in my last response to a post by PZ, but I’ll recreate the quote by Tooby and Cosmides here:

“Modern selectionist theories are used to generate rich and specific prior predictions about new design features and mechanisms that no one would have thought to look in the absence of these theories, which is why they appeal so strongly to the empirically minded….It is exactly this issue of predictive utility, and not “dogma”, that leads adaptationists to use selectionist theories more often than they do Gould’s favorites, such as drift and historical contingency. We are embarrassed to be forced, Gould-style, to state such a palpably obvious thing, but random walks and historical contingency do not, for the most part, make tight or useful prior predictions about the unknown design features of any single species.”

All of that seems to be beside the point, however, because PZ evidently doesn’t believe that we can actually test byproduct claims in the first place. You see, it’s not enough to just say that [trait X] is a byproduct; you need to specify what it’s a byproduct of. Male nipples, for instance, seem to be a byproduct of functional female nipples; female orgasm may be a byproduct of a functional male orgasm. Really, a byproduct claim is more a negative claim than anything else: it’s a claim that [trait X] has (or rather, had) no adaptive function. Substantiating that claim, however, requires one to be able to test for and rule out potential adaptive functions. Here’s what PZ had to say in his comments section about doing so:

PZ: “My argument is that most behaviors will NOT be the product of selection, but products of culture, or even when they have a biological basis, will be byproducts or neutral. Therefore you can’t use an adaptationist program as a first principle to determine their origins.”

Overlooking the peculiar contrasting of “culture” and “biological basis” for the moment, if one cannot use an adaptationist paradigm to test for possible functions in the first place, then it seems one would be hard-pressed to make any claim at all about function – whether that claim is that there is or isn’t one. One could, as PZ suggests, assume that all traits are non-functional until demonstrated otherwise, but, again, since we apparently cannot use an adaptationist analysis to determine function, this would leave us assuming things like “language is a byproduct”. This is somewhat at odds with PZ’s suggestion that “there is an evolved component of human language”, but since he doesn’t tell us how he reached that conclusion – presumably not through some kind of adaptationist program – I suppose we’ll all just have to live with the mystery.

Methods: Concentrated real hard, then shook five times.

Moving on, PZ raises the following question about modularity in the next section of his response:

PZ: “…why talk about ‘modules’ at all, other than to reify an abstraction into something misleadingly concrete?”

Now this isn’t really a criticism of the field so much as a question about it, but that’s fine; questions are generally welcomed. In fact, I happen to think that PZ answered this question himself, without any awareness of it, when he previously discussed spleen function:

PZ: “What you can’t do is pick any particular property of the spleen and invent functions for it, which is what I mean by arbitrary and elaborate.”

While PZ is happy with the suggestion that the spleen itself serves some adaptive function, he overlooks the fact (and, indeed, would probably take it for granted) that it’s meaningful to talk about the spleen as a distinct part of the body in the first place. To put PZ’s comment in context, imagine some anti-evolutionary physiologist suggesting that it’s nonsensical to try and “pick any particular” part of the body and talk about “its specific function” as if it’s distinct from any other part (I imagine the exchange might go like this: “You’re telling me the upper half of the chest functions as a gas exchanger and the lower half functions to extract nutrients from food? What an arbitrary distinction!”). Of course, we know it does make sense to talk about different parts of the body – the heart, the lungs, and the spleen – and we do so because each is viewed as having a different function. Modularity essentially does the same thing for the brain. Though the brain might outwardly appear to be a single organ, it is actually a collection of functionally-distinct pieces. The parts of your brain that process taste information aren’t good at solving other problems, like vision. Similarly, a system that processes sexual arousal might do terribly at generating language. This is why brain damage tends to cause rather selective deficits in cognitive abilities, rather than global or unpredictable ones. We insist on modularity of the mind for the same reason PZ insists on modularity of the body.

PZ also brings the classic trope of dichotomizing “learned/cultural” and “evolved/genetic” to bear, writing:

PZ: “…I suspect it’s most likely that they are seeing cultural variations, so trying to peg them to an adaptive explanation is an exercise in futility.”

I will only give the fairly-standard reply to such sentiments, since they’ve been voiced so often before that it’s not worth spending much time on. Yes, cultures differ, and yes, culture clearly has effects on behavior and psychology. I don’t think any evolutionary psychologist would tell you differently. However, these cultural differences do not just come from nowhere, and neither do our consistent patterns of responses to those differences. If, for instance, local sex-ratios have some predictable effects on mating behavior, one needs to explain why that is the case. This is like the byproduct point above: it’s not enough to say “[trait X] is a product of culture” and leave it at that if you want an explanation of trait X that helps you understand anything about it. You need to explain why that particular bit of environmental input is having the effect that it does. Perhaps the effect is the result of psychological adaptation for processing that particular input, or perhaps the effect is a byproduct of mechanisms not designed to process it (which still requires identifying the responsible psychological adaptations), or perhaps the consistent effect is just a rather-unlikely run of random events all turning out the same. In any case, to reach any of these conclusions, one needs an adaptationist approach – or PZ’s magic 8-ball.

Also acceptable: his magic Ouija board.

The final point I want to engage with concerns two rather interesting comments from PZ. The first comes from his initial reply to Coyne and the second from his reply to Pinker:

PZ: “I detest evolutionary psychology, not because I dislike the answers it gives, but on purely methodological and empirical grounds…Once again, my criticisms are being addressed by imagining motives.”

While PZ continues to stress that, of course, he could not possibly have ulterior motives, conscious or unconscious, for rejecting evolutionary psychology, he then makes a rather strange comment in the comments section:

PZ: “Evolutionary psychology has a lot of baggage I disagree with, so no, I don’t agree with it. I agree with the broader principle that brains evolved.”

Now it’s hard to know precisely what PZ meant to imply with the word “baggage” there because, as usual, he’s rather light on the details. When I think of the word “baggage” in that context, however, my mind immediately goes to unpleasant social implications (as in, “I don’t identify as a feminist because the movement has too much baggage”). Such a conclusion would imply there are non-methodological concerns that PZ has about something related to evolutionary psychology. Then again, perhaps PZ simply meant some conceptual, theoretical baggage that can be remedied with some new methodology that evolutionary psychology currently lacks. Since I like to assume the best (you know me), I’ll be eagerly awaiting PZ’s helpful suggestions as to how the field can be improved by shedding its baggage as it moves into the future.

Why Do People Adopt Moral Rules?

First dates and large social events, like family reunions or holiday gatherings, can leave people wondering about which topics should be off-limits for conversation, or even dreading which topics will inevitably be discussed. There’s nothing quite like the discomfort brought on by a drunken uncle who feels the need to let you know precisely what he thinks about the proper way to craft immigration policy, or about gay marriage. Similarly, it might not be a good idea to open up a first date with an in-depth discussion of your deeply held views on abortion and racism in the US today. People realize, quite rightly, that such morally-charged topics have the potential to be rather divisive, and can quickly alienate new romantic partners or cause conflict within otherwise cohesive groups. Alternatively, in the event that you happen to be in good agreement with others on such topics, they can prove to be fertile grounds for beginning new relationships or strengthening old ones; sayings like “the enemy of my enemy is my friend” attest to that. All this means you need to be careful about where and how you spread your views on these topics. Moral stances are kind of like manure in that way.

Great on the fields; not so great for tracking around everywhere you walk.

Now these are pretty important things to consider if you’re a human, since a good portion of your success in life is going to be determined by who your allies are. One’s own physical prowess is no longer sufficient to win conflicts when you’re fighting against increasingly larger alliances, not to mention that allies also do wonders for your available options regarding other cooperative ventures. Friends are useful, and this shouldn’t be news to anyone. This would, of course, drive selection pressures for adaptations that help people build and maintain healthy alliances. However, not everyone ends up with a strong network of alliances capable of helping them protect or achieve their interests. Friends and allies are a zero-sum resource, as the time they spend helping one person (or one group of people) is time not spent with another. The best allies are a very limited and desirable resource, and only a select few will have access to them: those who have something of value to offer in return. So what are the people towards the bottom of the alliance hierarchy to do? Well, one potential answer is the obvious, and somewhat depressing, outcome: not much. They tend to get exploited by others, often ruthlessly so. They either need to increase their desirability as a partner to others in order to make friends who can protect them, or face those severe and persistent social costs.

Any available avenue for those exploited parties that helps them avoid such costs and protect their interests, then, ought to be extremely appealing. A new paper by Petersen (2013) proposes that one of these avenues might be for those lacking in the alliance department to be more inclined to use moralization to protect their interests. Specifically, the proposition on offer is that if one lacks the private ability to enforce one’s own interests, in the form of friends, one might be increasingly inclined to turn towards public means of enforcement: recruiting third-party moralistic punishers. If you can create a moral rule that protects your self-interest, third parties – even those who otherwise have no established alliance with you – ought to become your de facto guardians whenever those interests are threatened. Accordingly, the argument goes that those lacking in friends ought to be more likely to support existing rules that protect them against exploitation, whereas those with many friends, who are capable of exploiting others, ought to feel less interest in supporting moral rules that prevent said exploitation. In support of this model, Petersen (2013) notes that there is a negative correlation – albeit a rather small one – between proxies for moralization and friend-based social support (as opposed to familial or religious support, which tended to correlate as well, but in the positive direction).

So let’s run through a hypothetical example to clarify this a bit: you find yourself back in high school and relatively alone in that world, socially. The school bully, with his pack of friends, has been hounding you and taking your lunch money; the classic bully move. You could try and stand up to the bullies to prevent the loss of money, but such attempts are likely to be met with physical aggression, and you’d only end up getting yourself hurt on top of then losing your money anyway. Since you don’t have enough friends who are willing and able to help tip the odds in your favor, you could attempt to convince others that it ought to be immoral to steal lunch money. If you’re successful in your efforts, the next time the bullies attempt to inflict costs on you, they would find themselves opposed by the other students who would otherwise just stay out of it (provided, of course, that they’re around at the time). While these other students might not be your allies at other times, they are your allies, temporarily, when you’re being stolen from. Of course, moralizing stealing prevents you from stealing from others – as well as having it done to you – but since you weren’t in a position to be stealing from anyone in the first place, it’s really not that big of a loss for you, relative to the gain.

Phase Two: Try to make wedgies immoral.

While such a model posits a potentially interesting solution for those without allies, it leaves many important questions unaddressed. Chief among these questions is what’s in it for the third parties. Why should other people adopt your moral rules, as opposed to their own, let alone be sure to intervene even if they do share the rule? While third-party support is certainly a net benefit for the moralizer who initially can’t defend their own interests, it’s a net cost to the people who actually have to enforce the moral rule. If those bullies are trying to steal from you, the costs of deterring them, and fighting them off if necessary, fall on the shoulders of others who would probably rather avoid such risks. These costs are magnified further because a moral rule against stealing lunch money ought to require people to punish any and all instances of such bullying, not just your specific one. As punishing people is generally not a great way to build or maintain relationships with them, supporting this moral rule could prevent the punishers from forming what might be otherwise-useful alliances with the bullying parties. Losing potential friendships to temporarily support someone you’re not actually friends with, and won’t become friends with, doesn’t sound like a very good investment.

The costs don’t even end there, though. Let’s say, hypothetically, that most people do agree that the stealing of lunch money ought to be stopped and are willing to accept the moral rule in the first place. There are costs involved in enforcing the rule, and it’s generally in everyone’s best interest not to suffer those costs personally. So, while people might be perfectly content with there being a rule against stealing, they don’t want to be the ones who have to enforce it; they would rather free-ride on other people’s punishment efforts. Unfortunately, the moral rule requires a large number of potential punishers for it to be effective. This means that those willing to punish would need to incentivize non-punishers to start punishing as well. These incentives, of course, aren’t free to deliver. This leads to punishers needing, in essence, not only to punish those who commit the immoral act, but also to punish those who fail to punish people who commit the immoral act (which leads to punishing those who fail to punish those who fail to punish, and so on; the recursion gets hard to keep track of). As the costs of enforcement continue to mount, in the absence of compensating benefits it’s not at all clear to me why third parties should become involved in the disputes of others, or try to convince other people to get involved. Punishing an act “because it’s immoral” is only a semantic step away from punishing something “just because”.
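For what it’s worth, a back-of-the-envelope sketch (purely my own toy numbers, not anything from Petersen, 2013) shows how these enforcement costs stack up once punishers also have to punish the non-punishers:

```python
# Toy accounting: each order of punishment adds a new crop of targets (those who
# failed to punish at the order below), modeled here, arbitrarily, as a fixed
# fraction of the previous order's targets.

def cumulative_enforcement_cost(violations, cost_per_punishment, failure_rate, orders):
    total, targets = 0.0, float(violations)
    for _ in range(orders):
        total += targets * cost_per_punishment  # cost of punishing this order's targets
        targets *= failure_rate                 # those who shirk become the next targets
    return total

for orders in (1, 2, 3, 4):
    cost = cumulative_enforcement_cost(violations=10, cost_per_punishment=1.0,
                                       failure_rate=0.6, orders=orders)
    print(f"orders of punishment enforced: {orders}  cumulative cost to punishers: {cost:.1f}")
```

All of those costs land on the willing punishers, and none of them are offset by any benefit in this accounting, which is precisely the puzzle.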

A more plausible model, I feel, would be an alliance-based model for moralization: people might be more likely to adopt moral rules in the interest of increasing their association value to specific others. Let’s use one of the touchy initial subjects – abortion – as a test case here: if I adopt a moral stance opposing the practice, I would make myself a less-appealing alliance partner for anyone who likes the idea of abortions being available, but I would also make myself a more-appealing partner to anyone who dislikes the idea (all else being equal). Now that might seem like a wash in terms of costs and benefits on the whole – you open yourself up to some friends and foreclose on others – but there are two main reasons I would still favor the alliance account. The first is the most obvious: it locates some potential benefits for the rule-adopters. While it is true that there are costs to taking a moral stance, there aren’t only costs anymore. The second benefit of the alliance account is that the key issue here might not be whether you make or lose friends on the whole, but whether a given stance can ingratiate you with specific people. If you’re trying to impress a particular potential romantic partner or ally, rather than all romantic partners or allies more generally, it might make good sense to tailor your moral views to that specific audience. As was noted previously, friendship is a zero-sum game, and you don’t get to be friends with everyone.

Basically, these two aren’t trying to impress each other.

It goes without saying that the alliance model is far from complete in terms of having all its specific details fleshed out, but it gives us some plausible places from which to start our analysis: considerations of what specific cues people might use to assess relative social value, or how those cues interact with current social conditions to determine the degree of moral support. I feel the answers to such questions will help us shed light on many additional ones, such as why almost all people will agree with the seemingly-universal rule stating “killing is morally wrong” and then go on to expand upon the many, many non-universal exceptions to that moral rule over which they don’t agree (such as when killing in self-defense, or when you find your partner having sex with another person, or when killing a member of certain non-human species, or killing unintentionally, or when killing a terminally ill patient rather than letting them suffer, and so on…). The focus, I feel, should not be on how powerful a force third-party punishment can be, but rather on why third parties might care (or fail to care) about the moral violations of others in the first place. Just because I think murder is morally wrong, it doesn’t mean I’m going to react the same way to any and all cases of murder.

References: Petersen, M. (2013). Moralization as protection against exploitation: Do individuals without allies moralize more? Evolution and Human Behavior, 34, 78-85. DOI: 10.1016/j.evolhumbehav.2012.09.006

The Inferential Limits Of Economic Games

Having recently returned from the Human Behavior & Evolution Society’s (HBES) conference, I would like to take a moment to let everyone know what an excellent time I had there. Getting to meet some of my readers in person was a fantastic experience, as was the pleasure of being around the wider evolutionary research community and reconnecting with old friends. The only negative parts of the conference involved making my way through the flooded streets of Miami on the first two mornings (which very closely resembled this scene from the Simpsons) and the pool party at which I way over-indulged in drinking. Though there was a diverse array of research presented spanning many different areas, I ended up primarily in the seminars on cooperation, as that topic is the one closest to my current research projects. I would like to present two of my favorite findings from those seminars, which serve as excellent cautionary tales concerning what conclusions one can draw from economic games. Despite the popular impression, there’s a lot more to evolutionary psychology than sex research.

Though the Sperm-Sun HBES logo failed to adequately showcase that diversity.

The first game to be discussed is the classic dictator game. In this game, two participants are brought into the lab and assigned the role of either ‘dictator’ or ‘recipient’. The dictator is given a sum of money (say, $10) and the option to divide it however they want between the pair. If the dictator were maximally selfish – as standard economic rationality might suggest – they would consistently keep all the money and give none to the recipient. Yet this is frequently not what we see: dictators tend to give at least some of the money to the other person, and an even split is often made. While giving these participants anonymity from one another does tend to reduce offers, even ostensibly anonymous dictators continue to give. This result clashes somewhat with our everyday experience: after all, provided we have money in our pocket, we’re faced with possible dictator-like situations every time we pass someone on the street, whether they’re homeless and begging for money or apparently well-off. Despite the near-constant opportunities during which we could transfer money to others, we frequently do not. So how do we reconcile the experimental and everyday results?

One possibility is to suggest that the giving in dictator games is largely induced by experimental demand effects: subjects are being placed into a relatively odd situation and are behaving rather oddly because of it (more specifically, because they are inferring what the experimenter “wants” them to do). Of course, it’s not so easy to replicate the context of the dictator game (a sudden windfall of a divisible asset and a potential partner to share it with) without subjects knowing they’re taking part in an experiment. Winking & Mizer (2013) managed to find a way around these problems in Las Vegas. In this field experiment, a confederate would be waiting at a bus stop when the unwitting subject approached. Once the subject was waiting for the bus as well, the confederate would pretend to take a phone call and move slightly away from the area with their back turned to the subject. It was at this point that the experimenter approached, talking on his cell phone and ostensibly in a hurry. As the experimenter passed the subject, he gave them $20 in poker chips, saying that he was late for his ride to the airport and didn’t have time to cash them in. The casino chips were an excellent stimulus, as they provided a good cover story for why they were being handed over: they only have value when cashed in, and the experimenter didn’t have time to do so. Using actual currency wouldn’t work well, as it might raise suspicions about the setup, since currency travels well from place to place.

In the first condition, the experimenter left and the confederate returned without further instruction; in the second condition, the experimenter said, “I don’t know. You can split them with that guy however you want” while gesturing at the confederate before he ran off. A third condition involved an explicit version of the dictator game with poker chips, during which anonymity was granted. In the standard version of the experiment – when the subjects knew about the game explicitly – 83% of subjects offered at least some of the chips to the other person, with a median offer around $5, resembling previous experimental results fairly well. How about the other two conditions? Well, of the 60 participants who were not explicitly told they were taking part in a game, every single one kept all the money. This suggests very strongly that all – or at least most – of the giving we observe in dictator games is grounded in the nature of the experiment itself. Indeed, many of the subjects in the first condition, where the instruction to split was not given, seemed rather perplexed by the purpose of the study during the debriefing, wondering precisely why in the world they would split the money with the confederate in the first place. Like all of us walking down the street with money on our person, the idea that they would just give that money to other people seemed rather strange.

“I’m still not following: you want to do what with all this money, again?”

The second paper of interest looked at behavior in another popular game: the public goods game. In these games, subjects are typically placed together in groups of four and provided with a sum of money. During each round, players can invest any amount of their money in the public pot and keep the rest. All the money in the pot is then multiplied by some factor and divided equally amongst all the participants. In this game, the rational economic move is typically to not put any money in, as for each dollar you put in, you receive less than a dollar back (since the multiplier is below the number of subjects in the group); not a great investment. On the other hand, the group-maximizing outcome is for all the subjects to donate all their money, so everyone ends up richer than when they started. Again, we find that subjects in these games tend to donate some of their money to the public pot, and many researchers have inferred from this giving that people have prosocial preferences (i.e. making other people better off per se increases my subjective welfare). If such an inference were correct, we ought to expect subjects to give more money to the public good when they know how much good they’re doing for others.
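For readers unfamiliar with the payoff structure, here is a quick worked example (the endowment and multiplier are my own illustrative choices; a multiplier below the group size is the standard setup described above):

```python
# Public-goods payoff: keep what you don't contribute, plus an equal share of the
# multiplied pot. With multiplier < group size, each unit contributed returns less
# than a unit to the contributor, even though it enlarges the group's total.

def payoff(my_contribution, others_contributions, endowment=40, multiplier=1.6, group_size=4):
    pot = my_contribution + sum(others_contributions)
    return (endowment - my_contribution) + pot * multiplier / group_size

others = [20, 20, 20]  # contributions of the other three players, held fixed

for mine in (0, 20, 40):
    print(f"I contribute {mine:>2} -> I earn {payoff(mine, others):.1f}")
# Each unit I contribute returns multiplier / group_size = 0.4 units to me personally,
# so my own payoff falls as I give more, while the group's combined payoff rises.
```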

To examine this inference, Burton-Chellew & West (2013) put subjects into a public goods game in three different conditions. First, there was the standard condition, described above. Second was a condition like the standard game, except subjects received an additional piece of information in the form of how much the other players in the game earned. Finally, there was a third condition in which subjects didn’t even know the game was being played with other people; subjects were merely told they could donate some fraction of their money (from 0 to 40 units) to a “black box” which would perform a transformation on the money received and give them a non-negative payoff (the same average benefit they received in the game when playing with other people, though they didn’t know that). In total, 236 subjects played in one of the first two conditions and also in the black box condition, with the order of the games counterbalanced (subjects were told the two were entirely different experiments).

How did contributions change between the standard condition and the black box condition over time? They didn’t. Subjects who knew they were playing a public goods game donated approximately as much during each round as the subjects who were just putting payments into the black box and getting some payment out: donations started out relatively high and declined over time (presumably as subjects learned they tended to get less money by contributing). The one notable difference was in the additional-information condition: when subjects could see the earnings of others relative to their contributions, they started to contribute less money to the public good. As a control, all three of the above games were replicated with a multiplication rule that made the profit-maximizing strategy donating all of one’s available money, rather than none. In these conditions, donations in the standard and black box conditions again failed to differ significantly, and contributions were still lower in the enhanced-information condition. Further, in all these games subjects tended to fail to make the profit-maximizing decision, irrespective of whether that decision was to donate all their money or none of it. Despite this strategy being deemed relatively “easy” to figure out by researchers, it apparently was not.

Other people not included, or required

Both of these experiments pose some rather stern warnings about the inferences we might draw from the behavior of people playing economic games. Some of our experiments might end up inducing certain behaviors and preferences, rather than revealing them. We’re putting people into evolutionarily-strange situations in these experiments, and so we might expect some evolutionarily-strange outcomes. It is also worth noting that just because you observe some prosocial outcome – like people giving money apparently altruistically or contributing to the good of others – it doesn’t follow that these outcomes are the direct result of cognitive modules designed to bring them about. Sure, my behavior in some of these games might end up reducing inequality, for instance, but it doesn’t follow that people’s psychology was selected to do such things. There are definite limits to how far these economic games can take us inferentially, and it’s important to be aware of them. Do these studies show that such games are worthless tools? I’d say not, as behavior in them is certainly not random. We just need to be mindful of their limits when we try to draw conclusions from them.

References: Burton-Chellew, M. N., & West, S. A. (2013). Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences, 110(1), 216-221. PMID: 23248298

Winking, J., & Mizer, N. (2013). Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002

Equality-Seeking Can Lift (Or Sink) All Ships

There’s a saying in economics that goes, “A rising tide lifts all ships”. The basic idea behind the saying is that the benefits that accrue from people exchanging goods and services are good for everyone involved – and even for some who are not directly involved – in much the same way that all the boats in a body of water rise or fall together as the overall water level does. While there is an element of truth to the saying (trade can be good for everyone, and the resources available to the poor today can, in some cases, be better than those available to even the wealthy in generations past), economies, of course, are not like bodies of water that rise and fall uniformly; some people can end up radically better- or worse-off than others as economic conditions shift, and inequality is a persistent factor in human affairs. Inequality – or, more aptly, the perception of it – is also commonly used as a justification for furthering certain social or moral goals. There appears to be something (or somethings) about inequality that just doesn’t sit well with people.

And I would suggest that those people go and eat some cake.

People’s ostensible discomfort with inequality has not escaped the eyes of many psychological researchers. There are some who suggest that humans have a preference for avoiding inequality; an inequality aversion, if you will. Phrased slightly differently, there are some who suggest that humans have an egalitarian motive (Dawes et al, 2007) that is distinct from other motives, such as enforcing cooperation or gaining benefits. Provided I’m parsing the meaning of the phrase correctly, then, the suggestion being made by some is that people should be expected to dislike inequality per se, rather than dislike inequality for other, strategic reasons. Demonstrating evidence of a distinct preference for inequality aversion, however, can be difficult. There are two reasons for this, I feel: the first is that inequality is often confounded with other factors (such as someone not cooperating or suffering losses). The second reason is that I think it’s the kind of preference that we shouldn’t expect to exist in the first place.

Taking these two issues in order, let’s first consider the paper by Dawes et al (2007) that sought to disentangle some of these confounding issues. In their experiment, 120 subjects were brought into the lab in groups of 20. These groups were further divided into anonymous groups of 4, such that each participant played in five rounds of the experiment, but never with the same people twice. The subjects also did not know about anyone’s past behavior in the experiment. At the beginning of each round, every subject in each group received a random number of payment units between some unmentioned specific values, and everyone was aware of the payments of everyone else in their group. Naturally, this tended to create some inequality in payments. Subjects were given means by which to reduce this inequality, however: they could spend some of their payment points to either add to or subtract from other people’s payments at a ratio of 3 to 1 (in other words, I could spend one unit of my payment to either reduce your payment by three points or add three points to your payment). These additions and deductions were all decided on in private and enacted simultaneously, so as to avoid retribution and cooperation factors. It wasn’t until the end of each round that subjects saw how many additions and reductions they had received. In total, each subject had 15 chances to either add to or deduct from someone else’s payment (3 people per round over 5 rounds).
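To make the mechanics concrete, here is a small sketch of that 3-to-1 rule using made-up starting payments (the actual payment ranges in the experiment aren’t reported above):

```python
# One unit spent by the actor removes (deduction) or grants (addition) three units
# to the target; deductions therefore shrink the group's total, while additions grow it.

payments = {"A": 30, "B": 18, "C": 12, "D": 8}  # hypothetical starting payments

def deduct(payments, spender, target, points):
    payments[spender] -= points      # spender pays the cost...
    payments[target] -= 3 * points   # ...and the target loses three times as much

def add(payments, spender, target, points):
    payments[spender] -= points      # spender pays the cost...
    payments[target] += 3 * points   # ...and the target gains three times as much

deduct(payments, spender="D", target="A", points=2)  # low earner pulls the top earner down
add(payments, spender="A", target="D", points=2)     # top earner tops the low earner up

print(payments, "group total:", sum(payments.values()))
```

Both moves shrink the gap between A and D, but the deduction destroys value while the addition creates it; a bare preference for equality can't tell you which route people will take, a point taken up again below.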

The results showed that most subjects paid to either add to or deduct from someone else’s payment at least once: 68% of people reduced the payment of someone else at least once, whereas 74% increased someone’s payment at least once. It wasn’t what one might consider a persistent habit, though: only 28% reduced people’s payments more than five times, while 33% added, and only 6% reduced more than 10 times, whereas 10% added. This, despite there being inequality to be reduced in all cases. Further, an appreciable number of the modifications didn’t go in the inequality-reducing direction: 29% of reductions went to below-average earners, and 38% of the additions went to above-average earners. Of particular interest, however, is the precise way in which subjects ended up reducing inequality: the people who earned the least in each round tended to spend 96% more on deductions than top earners. In turn, top earners averaged spending 77% more on additions than the bottom earners. This point is of interest because positing a preference for avoiding inequality does not necessarily help one predict the shape that equality will ultimately take.

You could also cut the legs off the taller boys in the left picture so no one gets to see.

The first thing worth pointing out here, then, is that about half of all the inequality-reducing behaviors that people engaged in ended up destroying overall welfare. These are behaviors in which no one is left materially better off. I’m reminded of part of a standup routine by Louis CK concerning that idea, in which he recounts the following story (starting at about 1:40):

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.”

It’s important to note this so as to point out that achieving equality itself doesn’t necessarily do anything useful. It is not as if equality automatically makes everyone – or anyone – better off. So what kind of useful outcomes might such spiteful behavior result in? To answer that question, we need to examine the ways people reduced inequality. Any player in this game could reduce the overall amount of inequality by either deducting from high earners’ payments or adding to low earners’. This holds for both the bottom and top earners, which means there are several ways of reducing inequality available to all players. Low earners, for instance, could reduce inequality by engaging in spiteful reductions towards everyone above them until they’re all down at the same low level; they could also reduce the overall inequality by benefiting everyone above them, until everyone (but them) is at the same high level. Alternatively, they could engage in a mixture of these strategies, benefiting some people and harming others. The same holds for high earners, just in the opposite directions. Which path people would take depends on what their set point for ‘equal’ is. Strictly speaking, then, a preference for equality doesn’t tell us which method people should opt for, nor does it tell us what levels of inequality will be tolerated, at which point efforts to achieve equality will cease.

There are, however, other possibilities for explaining these results beyond an aversion to inequality per se. One particularly strong alternative is that people use perceptions of inequality as inputs for social bargaining. Consider the following scenario: two people are working together to earn a joint prize, like a $10 reward. If they work together, they get the $10 to split; if they do not work together, neither will receive anything. Further, let’s assume one member of this pair is greedy and, in round one, after they cooperate, takes $9 of the pot for themselves. Now, strictly speaking, the person who received $1 is better off than if they had received nothing at all, but that doesn’t mean they ought to accept that distribution, and here’s why: if the person with $1 refuses to cooperate during the next round, they only lose that single dollar; the selfish player would lose out on nine times as much. This asymmetry in losses puts the poorer player in a stronger bargaining position, as they have far less to lose from not cooperating. It is from bargaining situations like this that our sense of fairness likely emerged.

So let’s apply this analysis back to the results of the experiment: people all start off with different amounts of money and are in positions to benefit or harm each other. Everyone wants to leave with as much benefit as possible, which means contributing nothing and getting additions from everyone else. However, since everyone is seeking this same outcome and they can’t all have it, certain compromises need to be reached. Those in high-earning positions face a different set of problems in that compromise than those in low-earning positions: while the high earners are doing something akin to trying to maintain cooperation by increasing the share of resources other people get (as in the previous example), low earners are faced with the problem of negotiating for a better payoff, threatening to cut off cooperation in the process. Both parties seem to anticipate this, with low earners disproportionately punishing high earners, and high earners disproportionately benefiting low earners. That there is no option for cooperation or bargaining present in this experiment is, I think, beside the point, as our minds were not designed to deal with the specific context presented in the experiment. Along those same lines, simply telling people that “you’re now anonymous” doesn’t mean that their mind will automatically function as if it were positive no one could observe its actions, and telling people their computer can’t understand their frustration won’t stop them from occasionally yelling at it.

“Listen only to my voice: you are now anonymous. You are now anonymous”

As a final note, one should be careful about inferring a motive or preference for equality just because inequality was sometimes reduced. A relatively simple example should demonstrate why: consider an armed burglar who enters a store, points their gun at the owner, and demands all the money in the register. If the owner hands over the money, they have delivered a benefit to the burglar at a cost to themselves, but most of us would not understand this as an act of altruism on the part of the owner; the owner’s main concern is not getting shot, and they are willing to pay a small cost (the loss of money) so as to avoid a larger one (possible death). Other research has found, for instance, that when given the option to pay a fixed cost (a dollar) to reduce another person’s payment by any amount (up to $12), people who engage in reduction are highly likely to generate inequality that favors themselves (Houser & Xiao, 2010). It would be inappropriate to infer from such an experiment that people are equality-averse, however, and, more to the point, doing so wouldn’t further our understanding of human behavior much, if at all. We want to understand why people do certain things; not simply that they do them.

References: Dawes, C. T., Fowler, J. H., Johnson, T., McElreath, R., & Smirnov, O. (2007). Egalitarian motives in humans. Nature, 446(7137), 794-796. PMID: 17429399

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008