PZ Myers; Again…

Since my summer vacation is winding to a close, it’s time to relax with a fun, argumentative post that doesn’t deal directly with research. PZ Myers, an outspoken critic of evolutionary psychology – or at least of an imaginary version of the field, which may bear little or no resemblance to the real thing – is back at it again. After Jerry Coyne and Steven Pinker recently defended the field against PZ’s rather confused comments, PZ has now responded to Pinker’s comments. Now, presumably, PZ feels like he did a pretty good job here. This is somewhat unfortunate, as PZ’s response basically plays by every rule outlined in the pop anti-evolutionary-psychology game: he asserts, incorrectly, what evolutionary psychology holds to as a discipline; fails to mention any examples of this going on in print (though he does reference blogs, so there’s that…); and then expresses wholehearted agreement with many of the actual theoretical commitments put forth by the field. So I wanted to take this time to briefly respond to PZ’s recent response and defend my field. This should be relatively easy, since it takes PZ a full two sentences into his response proper to say something incorrect.

Gotta admire the man’s restraint…

Kicking off his reply, PZ has this to say about why he dislikes the methods of evolutionary psychology:

PZ: “That’s my primary objection, the habit of evolutionary psychologists of taking every property of human behavior, assuming that it is the result of selection, building scenarios for their evolution, and then testing them poorly.”

Familiar as I am with the theoretical commitments of the field, I find it strange that I overlooked the part that demands evolutionary psychologists assume that every property of human behavior is the result of selection. It might have been buried amidst all those comments about things like “byproducts”, “genetic drift”, “maladaptiveness”, and “randomness” by the very people who, more or less, founded the field. Almost every paper using the framework in the primary literature I’ve come across, strangely, seems to say things like “…the current data are consistent with the idea that [trait X] might have evolved to [solve problem Y], but more research is needed”, or to posit that “…if [trait X] evolved to [solve problem Y], we ought to expect [design feature Z]”. There is, however, a grain of truth to what PZ writes, and it is this: hypotheses about adaptive function tend to make better predictions than non-adaptive ones. I highlighted this point in my last response to a post by PZ, but I’ll recreate the quote by Tooby and Cosmides here:

“Modern selectionist theories are used to generate rich and specific prior predictions about new design features and mechanisms that no one would have thought to look for in the absence of these theories, which is why they appeal so strongly to the empirically minded….It is exactly this issue of predictive utility, and not “dogma”, that leads adaptationists to use selectionist theories more often than they do Gould’s favorites, such as drift and historical contingency. We are embarrassed to be forced, Gould-style, to state such a palpably obvious thing, but random walks and historical contingency do not, for the most part, make tight or useful prior predictions about the unknown design features of any single species.”

All of that seems to be beside the point, however, because PZ evidently doesn’t believe that we can actually test byproduct claims in the first place. You see, it’s not enough to just say that [trait X] is a byproduct; you need to specify what it’s a byproduct of. Male nipples, for instance, seem to be a byproduct of functional female nipples; female orgasm may be a byproduct of a functional male orgasm. Really, a byproduct claim is more a negative claim than anything else: it’s a claim that [trait X] has (or rather, had) no adaptive function. Substantiating that claim, however, requires one to be able to test for and rule out potential adaptive functions. Here’s what PZ had to say in his comments section about doing so:

PZ: “My argument is that most behaviors will NOT be the product of selection, but products of culture, or even when they have a biological basis, will be byproducts or neutral. Therefore you can’t use an adaptationist program as a first principle to determine their origins.”

Overlooking the peculiar contrasting of “culture” and “biological basis” for the moment, if one cannot use an adaptationist paradigm to test for possible functions in the first place, then it seems one would be hard-pressed to make any claim at all about function – whether that claim is that there is or isn’t one. One could, as PZ suggests, assume that all traits are non-functional until demonstrated otherwise, but, again, since we apparently cannot use an adaptationist analysis to determine function, this would leave us assuming things like “language is a byproduct”. This is somewhat at odds with PZ’s suggestion that “there is an evolved component of human language”, but since he doesn’t tell us how he reached that conclusion – presumably not through some kind of adaptationist program – I suppose we’ll all just have to live with the mystery.

Methods: Concentrated real hard, then shook five times.

Moving on, PZ raises the following question about modularity in the next section of his response:

PZ: “…why talk about ‘modules’ at all, other than to reify an abstraction into something misleadingly concrete?”

Now this isn’t really a criticism of the field so much as a question about it, but that’s fine; questions are generally welcomed. In fact, I happen to think that PZ answers this question himself, without any awareness of it, in his earlier discussion of spleen function:

PZ: “What you can’t do is pick any particular property of the spleen and invent functions for it, which is what I mean by arbitrary and elaborate.”

While PZ is happy with the suggestion that the spleen itself serves some adaptive function, he overlooks the fact – and indeed, would probably take it for granted – that it’s meaningful to talk about the spleen as being a distinct part of the body in which it’s found. To put PZ’s comment in context, imagine some anti-evolutionary physiologist suggesting that it’s nonsensical to try and pick any particular part of the body and talk about its specific function as if it’s distinct from any other part (I imagine the exchange might go like this: “You’re telling me the upper half of the chest functions as a gas exchanger and the lower half functions to extract nutrients from food? What an arbitrary distinction!”). Of course, we know it does make sense to talk about different parts of the body – the heart, the lungs, and the spleen – and we do so because each is viewed as having a different function. Modularity essentially does the same thing for the brain. Though the brain might outwardly appear to be a single organ, it is actually a collection of functionally-distinct pieces. The parts of your brain that process taste information aren’t good at solving other problems, like vision. Similarly, a system that processes sexual arousal might do terribly at generating language. This is why brain damage tends to cause rather selective deficits in cognitive abilities, rather than global or unpredictable ones. We insist on modularity of the mind for the same reason PZ insists on modularity of the body.

PZ also brings the classic trope of dichotomizing “learned/cultural” and “evolved/genetic” to bear, writing:

PZ: “…I suspect it’s most likely that they are seeing cultural variations, so trying to peg them to an adaptive explanation is an exercise in futility.”

I will only give the fairly standard reply to such sentiments, since they’ve been voiced so often before that it’s not worth spending much time on them. Yes, cultures differ, and yes, culture clearly has effects on behavior and psychology. I don’t think any evolutionary psychologist would tell you differently. However, these cultural differences do not just come from nowhere, and neither do our consistent patterns of responses to those differences. If, for instance, local sex ratios have some predictable effects on mating behavior, one needs to explain why that is the case. This is like the byproduct point above: it’s not enough to say “[trait X] is a product of culture” and leave it at that if you want an explanation of trait X that helps you understand anything about it. You need to explain why that particular bit of environmental input is having the effect that it does. Perhaps the effect is the result of a psychological adaptation for processing that particular input, or perhaps the effect is a byproduct of mechanisms not designed to process it (which still requires identifying the responsible psychological adaptations), or perhaps the consistent effect is just a rather unlikely run of random events all turning out the same. In any case, to reach any of these conclusions, one needs an adaptationist approach – or PZ’s magic 8-ball.

Also acceptable: his magic Ouija board.

The final points I want to engage with are two rather interesting comments from PZ. The first comment comes from his initial reply to Coyne and the second from his reply to Pinker:

PZ: “I detest evolutionary psychology, not because I dislike the answers it gives, but on purely methodological and empirical grounds…Once again, my criticisms are being addressed by imagining motives.”

While PZ continues to stress that, of course, he could not possibly have ulterior motives, conscious or unconscious, for rejecting evolutionary psychology, he then makes a rather strange remark in the comments section:

PZ: “Evolutionary psychology has a lot of baggage I disagree with, so no, I don’t agree with it. I agree with the broader principle that brains evolved.”

Now it’s hard to know precisely what PZ meant to imply with the word “baggage” there because, as usual, he’s rather light on the details. When I think of the word “baggage” in that context, however, my mind immediately goes to unpleasant social implications (as in, “I don’t identify as a feminist because the movement has too much baggage”). Such a conclusion would imply that PZ has non-methodological concerns about something related to evolutionary psychology. Then again, perhaps PZ simply meant some conceptual, theoretical baggage that can be remedied with some new methodology that evolutionary psychology currently lacks. Since I like to assume the best (you know me), I’ll be eagerly awaiting PZ’s helpful suggestions as to how the field can be improved by shedding its baggage as it moves into the future.

Why Do People Adopt Moral Rules?

First dates and large social events, like family reunions or holiday gatherings, can leave people wondering which topics should be off-limits for conversation, or even dreading which topics will inevitably be discussed. There’s nothing quite like the discomfort a drunken uncle can bring when he feels the need to let you know precisely what he thinks about the proper way to craft immigration policy, or about gay marriage. Similarly, it might not be a good idea to open up a first date with an in-depth discussion of your deeply held views on abortion and racism in the US today. People realize, quite rightly, that such morally-charged topics have the potential to be rather divisive, and can quickly alienate new romantic partners or cause conflict within otherwise cohesive groups. On the other hand, in the event that you happen to be in good agreement with others on such topics, they can prove to be fertile grounds for beginning new relationships or strengthening old ones; “the enemy of my enemy is my friend” and similar such sayings attest to that. All this means you need to be careful about where and how you spread your views on these topics. Moral stances are kind of like manure in that way.

Great on the fields; not so great for tracking around everywhere you walk.

Now these are pretty important things to consider if you’re a human, since a good portion of your success in life is going to be determined by who your allies are. One’s own physical prowess is no longer sufficient to win conflicts when you’re fighting against increasingly larger alliances, not to mention the fact that allies also do wonders for your available options regarding other cooperative ventures. Friends are useful, and this shouldn’t be news to anyone. This would, of course, drive selection pressures for adaptations that help people build and maintain healthy alliances. However, not everyone ends up with a strong network of alliances capable of helping them protect or achieve their interests. Friends and allies are a zero-sum resource, as the time they spend helping one person (or one group of people) is time not spent with another. The best allies are a very limited and desirable resource, and only a select few will have access to them: those who have something of value to offer in return. So what are the people towards the bottom of the alliance hierarchy to do? Well, one potential answer is the obvious, and somewhat depressing, one: not much. They tend to get exploited by others, often ruthlessly so. They either need to increase their desirability as a partner to others in order to make friends who can protect them, or face those severe and persistent social costs.

Any available avenue that helps those exploited parties avoid such costs and protect their interests, then, ought to be extremely appealing. A new paper by Petersen (2013) proposes that one of these avenues might be for those lacking in the alliance department to be more inclined to use moralization to protect their interests. Specifically, the proposition on offer is that if one lacks the private ability to enforce one’s own interests, in the form of friends, one might be increasingly inclined to turn towards public means of enforcement: recruiting third-party moralistic punishers. If you can create a moral rule that protects your self-interest, third parties – even those who otherwise have no established alliance with you – ought to become your de facto guardians whenever those interests are threatened. Accordingly, the argument goes that those lacking in friends ought to be more likely to support existing rules that protect them against exploitation, whereas those with many friends, who are capable of exploiting others, ought to feel less interest in supporting moral rules that prevent said exploitation. In support of this model, Petersen (2013) notes that there is a negative correlation – albeit a rather small one – between proxies for moralization and friend-based social support (as opposed to familial or religious support, which also tended to correlate, but in the positive direction).

So let’s run through a hypothetical example to clarify this a bit: you find yourself back in high school and relatively alone in that world, socially. The school bully, with his pack of friends, has been hounding you and taking your lunch money; the classic bully move. You could try to stand up to the bullies to prevent the loss of money, but such attempts are likely to be met with physical aggression, and you’d only end up getting yourself hurt on top of losing your money anyway. Since you don’t have enough friends who are willing and able to help tip the odds in your favor, you could attempt to convince others that it ought to be immoral to steal lunch money. If you’re successful in your efforts, the next time the bullies attempt to inflict costs on you, they would find themselves opposed by the other students who would otherwise just stay out of it (provided, of course, that they’re around at the time). While these other students might not be your allies at other times, they are your allies, temporarily, when you’re being stolen from. Of course, moralizing stealing prevents you from stealing from others – as well as having it done to you – but since you weren’t in a position to be stealing from anyone in the first place, it’s really not that big of a loss for you, relative to the gain.

Phase Two: Try to make wedgies immoral.

While such a model posits a potentially interesting solution for those without allies, it leaves many important questions unaddressed. Chief among these is the matter of what’s in it for third parties. Why should other people adopt your moral rules, as opposed to their own, let alone be sure to intervene even if you share the moral rule? While third-party support is certainly a net benefit for the moralizer who initially can’t defend their own interests, it’s a net cost to the people who actually have to enforce the moral rule. If those bullies are trying to steal from you, the costs of deterring, and if necessary, fighting them off, fall on the shoulders of others who would probably rather avoid such risks. These costs are magnified further because a moral rule against stealing lunch money ought to require people to punish any and all instances of the bullying; not just your specific one. As punishing people is generally not a great way to build or maintain relationships with them, supporting this moral rule could prevent the punishers from forming what might be otherwise-useful alliances with the bullying parties. Losing potential friendships to temporarily support someone you’re not actually friends with and won’t become friends with doesn’t sound like a very good investment.

The costs don’t even end there, though. Let’s say, hypothetically, that most people do agree that the stealing of lunch money ought to be stopped and are willing to accept the moral rule in the first place. There are costs involved in enforcing the rule, and it’s generally in everyone’s best interest not to suffer those costs personally. So, while people might be perfectly content with there being a rule against stealing, they don’t want to be the ones who have to enforce it; they would rather free-ride on other people’s punishment efforts. Unfortunately, the moral rule requires a large number of potential punishers for it to be effective. This means that those willing to punish would need to incentivize non-punishers to start punishing as well. These incentives, of course, aren’t free to deliver. This leads to punishers needing to, in essence, not only punish those who commit the immoral act, but also punish those who fail to punish people who commit the immoral act (which leads to punishing those who fail to punish those who fail to punish as well, and so on; the recursion can be hard to keep track of). As the costs of enforcement continue to mount, in the absence of compensating benefits it’s not at all clear to me why third parties should become involved in the disputes of others, or try to convince other people to get involved. Punishing an act “because it’s immoral” is only a semantic step away from punishing something “just because”.

A more plausible model, I feel, would be an alliance-based model of moralization: people might be more likely to adopt moral rules in the interests of increasing their association value to specific others. Let’s use one of the touchy initial subjects – abortion – as a test case here: if I adopt a moral stance opposing the practice, I would make myself a less-appealing alliance partner for anyone who likes the idea of abortions being available, but I would also make myself a more-appealing partner to anyone who dislikes the idea (all else being equal). Now that might seem like a wash in terms of costs and benefits on the whole – you open yourself up to some friends and foreclose on others – but there are two main reasons I would still favor the alliance account. The first is the most obvious: it locates some potential benefits for the rule-adopters. While it is true that there are costs to taking a moral stance, there aren’t only costs anymore. The second benefit of the alliance account is that the key issue here might not be whether you make or lose friends on the whole, but rather that it can ingratiate you to specific people. If you’re trying to impress a particular potential romantic partner or ally, rather than all romantic partners or allies more generally, it might make good sense to tailor your moral views to that specific audience. As was noted previously, friendship is a zero-sum game, and you don’t get to be friends with everyone.

Basically, these two aren’t trying to impress each other.

It goes without saying that the alliance model is far from complete in terms of having all its specific details fleshed out, but it gives us some plausible places to start our analysis: considerations of what specific cues people might use to assess relative social value, or how those cues interact with current social conditions to determine the degree of current moral support. I feel the answers to such questions will help us shed light on many additional ones, such as why almost all people will agree with the seemingly-universal rule stating “killing is morally wrong” and then go on to expand upon the many, many non-universal exceptions to that moral rule over which they don’t agree (such as when killing in self-defense, or when you find your partner having sex with another person, or when killing a member of certain non-human species, or killing unintentionally, or when killing a terminally ill patient rather than letting them suffer, and so on…). The focus, I feel, should not be on how powerful a force third-party punishment can be, but rather on why third parties might care (or fail to care) about the moral violations of others in the first place. Just because I think murder is morally wrong, it doesn’t mean I’m going to react the same way to any and all cases of murder.

References: Petersen, M. (2013). Moralization as protection against exploitation: Do individuals without allies moralize more? Evolution and Human Behavior, 34, 78-85 DOI: 10.1016/j.evolhumbehav.2012.09.006

The Inferential Limits Of Economic Games

Having recently returned from the Human Behavior & Evolution Society’s (HBES) conference, I would like to take a moment to let everyone know what an excellent time I had there. Getting to meet some of my readers in person was a fantastic experience, as was the pleasure of being around the wider evolutionary research community and reconnecting with old friends. The only negative parts of the conference involved making my way through the flooded streets of Miami on the first two mornings (which very closely resembled this scene from the Simpsons) and the pool party at which I way over-indulged in drinking. Though there was a diverse array of research presented spanning many different areas, I ended up primarily in the seminars on cooperation, as that topic bears most directly on my current research projects. I would like to present two of my favorite findings from those seminars, which serve as excellent cautionary tales concerning what conclusions one can draw from economic games. Despite the popular impression, there’s a lot more to evolutionary psychology than sex research.

Though the Sperm-Sun HBES logo failed to adequately showcase that diversity.

The first game to be discussed is the classic dictator game. In this game, two participants are brought into the lab and assigned the role of either ‘dictator’ or ‘recipient’. The dictator is given a sum of money (say, $10) and the option to divide it however they want between the pair. If the dictator were maximally selfish – as standard economic rationality might suggest – they would consistently keep all the money and give none to the recipient. Yet this is not what we frequently see: dictators tend to give at least some of the money to the other person, and an even split is often made. While giving these participants anonymity from one another does tend to reduce offers, even ostensibly anonymous dictators continue to give. This result clashes somewhat with our everyday experience: after all, provided we have money in our pocket, we’re faced with possible dictator-like situations every time we pass someone on the street, whether they’re homeless and begging for money or apparently well-off. Despite the near-constant opportunities during which we could transfer money to others, we frequently do not. So how do we reconcile the experimental and everyday results?
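The game’s payoff structure is simple enough to sketch out (a minimal illustration with a hypothetical $10 stake; the function name and numbers are mine, not any study’s actual materials):

```python
def dictator_payoffs(stake, offer):
    """Return (dictator, recipient) payoffs given how much the dictator offers."""
    if not 0 <= offer <= stake:
        raise ValueError("offer must lie between 0 and the full stake")
    return stake - offer, offer

# The maximally selfish split keeps the whole stake for the dictator...
print(dictator_payoffs(10, 0))
# ...yet even splits like this one are commonly observed in the lab.
print(dictator_payoffs(10, 5))
```

The puzzle, of course, is not computing the payoffs but explaining why real dictators so often choose offers greater than zero.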

One possibility is to suggest that the giving in dictator games is largely induced by experimental demand effects: subjects are being placed into a relatively odd situation and are behaving rather oddly because of it (more specifically, because they are inferring what the experimenter “wants” them to do). Of course, it’s not so easy to replicate the context of the dictator game (a sudden windfall of a divisible asset and a potential partner to share it with) without subjects knowing they’re taking part in an experiment. Winking & Mizer (2013) managed to find a way around these problems in Las Vegas. In this field experiment, a confederate would be waiting at a bus stop when the naive subject approached. Once the subject was waiting for the bus as well, the confederate would pretend to take a phone call and move slightly away from the area with their back turned to the subject. It was at this point that the experimenter approached on his cell phone, ostensibly in a hurry. As the experimenter passed the subject, he gave them $20 in poker chips, saying that he was late for his ride to the airport and didn’t have time to cash them in. The casino chips were an excellent stimulus, as they provided a good cover story for why they were being handed over: they only have value when cashed in, and the experimenter didn’t have time to do so. Using actual currency wouldn’t have worked as well, since currency travels well from place to place and might have raised suspicions about the setup.

In the first condition, the experimenter left and the confederate returned without further instruction; in the second condition, the experimenter said, “I don’t know. You can split them with that guy however you want” while gesturing at the confederate before he ran off. A third condition involved an explicit version of the dictator game experiment with poker chips, during which anonymity was granted. In the standard version of the experiment – when the subjects knew about the game explicitly – 83% of subjects offered at least some of the chips to other people with a median offer around $5, resembling previous experimental results fairly well. How about the other two conditions? Well, of the 60 participants who were not told they were explicitly taking part in the game, all of them kept all the money. This suggests very strongly that all – or at least most – of the giving we observe in dictator games is grounded in the nature of the experiment itself. Indeed, many of the subjects in the first condition, where the instruction to split was not given, seemed rather perplexed by the purpose of the study during the debriefing. The subjects wondered precisely why in the world they would split the money with the confederate in the first place. Like all of us walking down the street with money on our person, the idea that they would just give that money to other people seemed rather strange.

“I’m still not following: you want to do what with all this money, again?”

The second paper of interest looked at behavior in another popular game: the public goods game. In these games, subjects are typically placed together in groups of four and provided with a sum of money. During each round, players can invest any amount of their money in the public pot and keep the rest. All the money in the pot is then multiplied by some amount and divided equally amongst all the participants. In this game, the rational economic move is typically to not put any money in, as for each dollar you put in, you receive less than a dollar back (since the multiplier is below the number of subjects in the group); not a great investment. On the other hand, the group-maximizing outcome is for all the subjects to donate all their money, so everyone ends up richer than when they started. Again, we find that subjects in these games tend to donate some of their money to the public pot, and many researchers have inferred from this giving that people have prosocial preferences (i.e. making other people better off per se increases my subjective welfare). If such an inference were correct, then we ought to expect that subjects would give more money to the public good provided they know how much good they’re doing for others.
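The payoff arithmetic here is worth making concrete (a sketch with hypothetical parameters – a group of four, $10 endowments, and a multiplier of 2 – which need not match the exact values used in any particular study):

```python
def public_goods_payoffs(endowment, contributions, multiplier):
    """One-shot public goods game: each player keeps whatever they didn't
    contribute, plus an equal share of the multiplied public pot."""
    n = len(contributions)
    pot_share = sum(contributions) * multiplier / n
    return [endowment - c + pot_share for c in contributions]

# With a multiplier (2) below group size (4), each contributed dollar
# returns only 2/4 = $0.50 to the contributor: a lone cooperator loses out...
print(public_goods_payoffs(10, [10, 0, 0, 0], 2))
# ...but universal contribution beats universal free-riding ($20 vs. $10 each).
print(public_goods_payoffs(10, [10, 10, 10, 10], 2))
```

This is the tension the game is built around: contributing is always individually costly, yet everyone contributing leaves everyone richer.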

To examine this inference, Burton-Chellew & West (2013) put subjects into a public goods game under three different conditions. First, there was the standard condition, described above. Second was a condition like the standard game, except subjects received an additional piece of information: how much the other players in the game earned. Finally, there was a third condition in which subjects didn’t even know the game was being played with other people; they were merely told they could donate some fraction of their money (from 0 to 40 units) to a “black box” which would perform a transformation on the money received and return a non-negative payoff (which was the same average benefit they received in the game when playing with other people, though they didn’t know that). In total, 236 subjects played in one of the first two conditions and also in the black box condition, with the order of the games counterbalanced (they were informed the two were entirely different experiments).

How did contributions change between the standard condition and the black box condition over time? They didn’t. Subjects who knew they were playing a public goods game donated approximately as much during each round as subjects who were just putting payments into the black box and getting some payment out: donations started out relatively high and declined over time (presumably as subjects learned they tended to get less money by contributing). The one notable difference was in the additional-information condition: when subjects could see the earnings of others relative to their contributions, they started to contribute less money to the public good. As a control, all three of the above games were replicated with a multiplication rule that made the profit-maximizing strategy donating all of one’s available money, rather than none. In these conditions, donations in the standard and black box conditions again failed to differ significantly, and contributions were still lower in the enhanced-information condition. Further, in all these games subjects tended to fail to make the profit-maximizing decision, irrespective of whether that decision was to donate all their money or none of it. Despite this strategy being deemed relatively “easy” to figure out by researchers, it apparently was not.

Other people not included, or required

Both of these experiments pose some rather stern warnings about the inferences we might draw from the behavior of people playing economic games. Some of our experiments might end up inducing certain behaviors and preferences, rather than revealing them. We’re putting people into evolutionarily-strange situations in these experiments, and so we might expect some evolutionarily-strange outcomes. It is also worth noting that just because you observe some prosocial outcome – like people giving money apparently altruistically or contributing to the good of others – it doesn’t follow that these outcomes are the direct result of cognitive modules designed to bring them about. Sure, my behavior in some of these games might end up reducing inequality, for instance, but it doesn’t follow that people’s psychology was selected to do such things. There are definite limits to how far these economic games can take us inferentially, and it’s important to be aware of them. Do these studies show that such games are worthless tools? I’d say certainly not, as behavior in them is far from random. We just need to be mindful of their limits when we try to draw conclusions from them.

References: Burton-Chellew MN, & West SA (2013). Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences of the United States of America, 110 (1), 216-21 PMID: 23248298

Winking, J., & Mizer, N. (2013). Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002

Equality-Seeking Can Lift (Or Sink) All Ships

There’s a saying in economics that goes, “A rising tide lifts all ships”. The basic idea behind the saying is that the marginal benefits that accrue from people exchanging goods and services are good for everyone involved – and even for some who are not directly involved – in much the same way that all the boats in a body of water rise or fall in height together as the overall water level does. While there is an element of truth to the saying (trade can be good for everyone, and the resources available to the poor today can, in some cases, be better than those available to even the wealthy of generations past), economies, of course, are not like bodies of water that rise and fall uniformly; some people can end up radically better- or worse-off than others as economic conditions shift, and inequality is a persistent factor in human affairs. Inequality – or, more aptly, the perception of it – is also commonly used as a justification for furthering certain social or moral goals. There appears to be something (or somethings) about inequality that just doesn’t sit well with people.

And I would suggest that those people go and eat some cake.

People’s ostensible discomfort with inequality has not escaped the eyes of many psychological researchers. There are some who suggest that humans have a preference for avoiding inequality; an inequality aversion, if you will. Phrased slightly differently, there are some who suggest that humans have an egalitarian motive (Dawes et al, 2007) that is distinct from other motives, such as enforcing cooperation or gaining benefits. Provided I’m parsing the meaning of the phrase correctly, then, the suggestion being made by some is that people should be expected to dislike inequality per se, rather than dislike inequality for other, strategic reasons. Demonstrating evidence of a distinct aversion to inequality, however, can be difficult. There are two reasons for this, I feel: the first is that inequality is often confounded with other factors (such as someone failing to cooperate or suffering losses). The second is that I think it’s the kind of preference we shouldn’t expect to exist in the first place.

Taking these two issues in order, let’s first consider the paper by Dawes et al (2007) that sought to disentangle some of these confounding issues. In their experiment, 120 subjects were brought into the lab in groups of 20. These groups were further divided into anonymous groups of 4, such that each participant played five rounds of the experiment, but never with the same people twice. The subjects also did not know anything about anyone’s past behavior in the experiment. At the beginning of each round, every subject in each group received a random number of payment units between some unmentioned specific values, and everyone was aware of the payments of everyone else in their group. Naturally, this tended to create some inequality in payments. Subjects were given a means by which to reduce this inequality, however: they could spend some of their payment points to either add to or subtract from other people’s payments at a ratio of 3 to 1 (in other words, I could spend one unit of my payment to either reduce your payment by three points or add three points to it). These additions and deductions were all decided on in private and enacted simultaneously, so as to rule out retribution and cooperation. It wasn’t until the end of each round that subjects saw how many additions and reductions they had received. In total, each subject had 15 chances to either add to or deduct from someone else’s payment (3 people per round over 5 rounds).
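The modification mechanic can be sketched as follows. Only the 3-to-1 ratio comes from the study; the payment numbers in the example are hypothetical:

```python
RATIO = 3  # each payment unit spent changes the target's payment by 3

def modify(payments, spender, target, units, deduct=False):
    """Spend `units` of the spender's payment to add RATIO*units to the
    target's payment (or deduct RATIO*units, if `deduct` is True).
    Returns a new list of payments."""
    new = list(payments)
    new[spender] -= units
    change = RATIO * units
    new[target] += -change if deduct else change
    return new

# Hypothetical round: the lowest earner spitefully deducts from the top earner.
payments = [10, 25, 14, 18]                            # total: 67
after = modify(payments, spender=0, target=1, units=2, deduct=True)
# The spender pays 2 and the target loses 6: [8, 19, 14, 18] (total: 59).
# The gap shrinks, but 8 units of welfare simply vanish from the group.
```

Note that a deduction always destroys four units of total welfare per unit spent (one from the spender, three from the target), while an addition creates two net units; this asymmetry matters for the welfare point discussed below.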

The results showed that most subjects paid to either add to or deduct from someone else’s payment at least once: 68% of people reduced the payment of someone else at least once, whereas 74% increased someone’s payment at least once. It wasn’t what one might consider a persistent habit, though: only 28% reduced someone’s payment more than five times (while 33% added more than five times), and only 6% reduced more than 10 times (whereas 10% added that often). This, despite there being inequality to be reduced in all cases. Further, an appreciable number of the modifications didn’t go in the equality-reducing direction: 29% of reductions went to below-average earners, and 38% of additions went to above-average earners. Of particular interest, however, is the precise way in which subjects ended up reducing inequality: the people who earned the least in each round spent, on average, 96% more on deductions than top earners did. In turn, top earners averaged 77% more spending on additions than bottom earners. This point is of interest because positing a preference for avoiding inequality does not, by itself, help one predict the shape that equality will ultimately take.

You could also cut the legs off the taller boys in the left picture so no one gets to see.

The first thing worth pointing out here, then, is that about half of all the inequality-reducing behaviors that people engaged in ended up destroying overall welfare. These are behaviors through which no one is made materially better off. I’m reminded of part of a standup routine by Louis CK on that very idea, in which he recounts the following story (starting at about 1:40):

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.”

It’s important to note this so as to point out that achieving equality itself doesn’t necessarily do anything useful. It is not as if equality automatically makes everyone – or anyone – better off. So what kind of useful outcomes might such spiteful behavior result in? To answer that question, we need to examine the ways people reduced inequality. Any player in this game could reduce the overall amount of inequality by either deducting from high earners’ payments or adding to low earners’. This holds for both bottom and top earners, meaning there are several ways of reducing inequality available to all players. Low earners, for instance, could reduce inequality by engaging in spiteful reductions towards everyone above them until all are down at the same low level; they could also reduce the overall inequality by benefiting everyone above them, until everyone (but them) is at the same high level. Alternatively, they could engage in a mixture of these strategies, benefiting some people and harming others. The same holds for high earners, just in the opposite direction. Which path people take depends on what their set point for ‘equal’ is. Strictly speaking, then, a preference for equality doesn’t tell us which method people should opt for, nor at what level of inequality efforts to achieve equality will cease.
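The point can be made with a toy example (the numbers are purely illustrative): leveling everyone down and leveling everyone up both eliminate inequality entirely, yet they leave the group in very different positions, so a bare preference for equality cannot distinguish between them.

```python
def spread(payments):
    """A crude inequality measure: the gap between top and bottom earners."""
    return max(payments) - min(payments)

start = [6, 10, 14]          # spread: 8
level_down = [6, 6, 6]       # deduct everyone down to the bottom earner
level_up = [14, 14, 14]      # add everyone up to the top earner

# Both strategies reach perfect equality (spread of 0)...
spread(level_down) == spread(level_up) == 0    # True
# ...but total welfare differs drastically: sum of 18 vs. 42.
```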

There are, however, other possibilities for explaining these results beyond an aversion to inequality per se. One particularly strong alternative is that people use perceptions of inequality as inputs for social bargaining. Consider the following scenario: two people are working together to earn a joint prize, like a $10 reward. If they work together, they get the $10 to split; if they do not work together, neither will receive anything. Further, let’s assume one member of this pair is greedy and, in round one, after they cooperate, takes $9 of the pot for themselves. Now, strictly speaking, the person who received $1 is better off than if they had received nothing at all, but that doesn’t mean they ought to accept that distribution, and here’s why: if the person with $1 refuses to cooperate during the next round, they only lose that single dollar; the selfish player would lose nine times as much. This asymmetry in losses puts the poorer player in a stronger bargaining position, as they have far less to lose from not cooperating. It is from situations similar in structure to this that our sense of fairness likely emerged.
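The asymmetry is easy to put in numbers, using the hypothetical $9/$1 split from the scenario above:

```python
pot = 10
greedy_share, poor_share = 9, 1   # the round-one split of the joint prize

# If the poorer player withholds cooperation next round, neither earns anything:
poor_cost = poor_share      # the poorer player forgoes only $1
greedy_cost = greedy_share  # the greedy player forgoes $9

# The threat of walking away costs the poorer player a ninth of what it
# costs the greedy one, which is what makes it a credible bargaining chip.
greedy_cost / poor_cost     # 9.0
```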

So let’s apply this analysis back to the results of the experiment: people all start off with different amounts of money, and everyone is in a position to benefit or harm everyone else. Everyone wants to leave with as much benefit as possible, which means contributing nothing and getting additions from everyone else. However, since everyone is seeking this same outcome and they can’t all have it, certain compromises need to be reached. Those in high-earning positions face a different set of problems in that compromise than those in low-earning positions: while the high earners are doing something akin to trying to maintain cooperation by increasing the share of resources other people get (as in the previous example), low earners are faced with the problem of negotiating for a better payoff, threatening to cut off cooperation in the process. Both parties seem to anticipate this, with low earners disproportionately punishing high earners, and high earners disproportionately benefiting low earners. That there is no option for cooperation or bargaining present in this experiment is, I think, beside the point, as our minds were not designed to deal with the specific context presented in the experiment. Along those same lines, simply telling people that “you’re now anonymous” doesn’t mean that their minds will automatically function as if it were certain no one could observe their actions, just as telling people their computer can’t understand their frustration won’t stop them from occasionally yelling at it.

“Listen only to my voice: you are now anonymous. You are now anonymous”

As a final note, one should be careful about inferring a motive or preference for equality just because inequality was sometimes reduced. A relatively simple example should demonstrate why: consider an armed burglar who enters a store, points their gun at the owner, and demands all the money in the register. If the owner hands over the money, they have delivered a benefit to the burglar at a cost to themselves, but most of us would not understand this as an act of altruism on the part of the owner; the owner’s main concern is not getting shot, and they are willing to pay a small cost (the loss of money) so as to avoid a larger one (possible death). Other research has found, for instance, that when given the option to pay a fixed cost (a dollar) to reduce another person’s payment by any amount (up to a total of $12), people who engage in reduction are highly likely to generate inequality that favors themselves (Houser & Xiao, 2010). It would be inappropriate to suggest that people are equality-averse from such an experiment, however, and, more to the point, doing so wouldn’t further our understanding of human behavior much, if at all. We want to understand why people do certain things; not simply that they do them.

References: Dawes CT, Fowler JH, Johnson T, McElreath R, & Smirnov O (2007). Egalitarian motives in humans. Nature, 446 (7137), 794-6 PMID: 17429399

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008

Why Would You Ever Save A Stranger Over A Pet?

The relationship between myself and my cat has been described by many as a rather close one. Whenever I return after leaving my house for almost any amount of time, I’m greeted by what appears to be a rather excited animal that will meow and purr excessively, all while rubbing on and rolling around my feet. In turn, I feel a great deal of affection for my cat, and derive feelings of comfort and happiness from taking care of and petting her. Like the majority of Americans, I happen to be a pet owner, and these experiences and ones like them will all sound perfectly normal and relatable. I would argue, however, that they are, in fact, very strange feelings, biologically speaking. Despite the occasional story of cross-species fostering, other animals do not seem to behave in ways that indicate they seek out anything resembling pet ownership. It’s often not until the idea of other species not making a habit of having pets is raised that one realizes how strange a phenomenon pet ownership can be. Finding that bears, for instance, reliably took care of non-bears, providing them with food and protection, would be a biological mystery of the first degree.

And that I get most of my work done like this seems normal to me.

So why do people seem to be so fond of pets? My guess is that the psychological mechanisms that underlie pet ownership in humans are not designed for that function per se. I would say that for a few reasons, notable among them the factors of time and resources. First, psychological adaptations take a good deal of time to be shaped by selective forces, which means long periods of co-residence between animals and people would be required for any dedicated adaptations to have formed. Though it’s no more than a guess on my part, I would assume that the conditions making extended periods of co-residence probable would likely not have arisen prior to the advent of agriculture and geographically-stable human populations. The second issue involves the cost/benefit ratios: pets require a good deal of investment, at least in terms of food. In order for there to have been any selective pressure to keep pets, the benefits provided by the pets would have needed to more than offset the costs of their care, and I don’t know of any evidence in that regard. Dogs might have been able to pull their weight in terms of assisting in hunting and protection, but it’s uncertain; other pets – such as cats, birds, lizards, or even the occasional insect – probably did not. While certain pets (like cats) might well have been largely self-sufficient, they don’t seem to offer much in the way of direct benefits to their owners either. No benefits means no distinct selection, which means no dedicated adaptations.

Given that there are unlikely to be dedicated pet modules in our brain, what other systems are good candidates for explaining the tendency towards seeking out pets? The most promising one that comes to mind is our already-existing set of systems designed for the care of our own, highly-dependent offspring. Positing that pet-care is a byproduct of our infant-care systems would manage to skirt both the issues of time and resources: our minds were designed to endure such costs to deliver benefits to our children. It would also allow us to better understand certain facets of the ways people behave towards their pets, such as the “aww” reaction people often have to pets (especially young ones, like kittens and puppies) and babies, as well as the frequent use of motherese (baby-talk) when talking to pets and children (to compare speech directed at pets and babies, see here and here; note as well that you don’t often hear adults talking to each other in this manner). Of course, were you to ask people whether their pets are their biological offspring, many would give the correct response of “no”. These verbal responses, however, do not indicate that other modules of the brain – ones that aren’t doing the talking – “know” that pets aren’t actually your offspring, in much the same way that the parts of the brain dedicated to arousal don’t “know” that generating arousal to pornography isn’t going to end up being adaptive.

There is another interesting bit of information concerning pet ownership that I feel can be explained through the pets-as-infants model, but to get to it we need to first consider some research on moral dilemmas by Topolski et al (2013). This dilemma is a favorite of mine, and of the psychological community more generally: a variant of the trolley dilemma. In this study, 573 participants were asked to respond to a series of 12 similar moral dilemmas, all of which had the same basic setup: a speeding bus is about to hit either a person or an animal, both of which have wandered out into the street. The subject only has time to save one of them, and is asked which they would prefer to save. (Note: each subject responded to all 12 dilemmas, which might result in some carryover effects; a between-subjects design would have been stronger here. Anyway…) The identities of the animal and person in the dilemma were varied across conditions: the animal was either the subject’s pet (subjects were asked to imagine one if they didn’t currently have one) or someone else’s pet, and the person was either a foreign tourist, a hometown stranger, a distant cousin, a best friend, a sibling, or a grandparent.

The study also starred Keanu Reeves.

In terms of saving someone else’s pet, people generally didn’t seem terribly interested: from a high of about 12% of subjects choosing someone else’s pet over a foreign tourist to a low of approximately 2% picking the stranger’s pet over their own sibling. The willingness to save the animal in question rose substantially when it was the subject’s own pet being considered, however: while people were still about as unlikely to save their own pet in cases involving a grandparent or sibling, approximately 40% of subjects indicated they would save their pet over a foreign tourist or a hometown stranger (for the curious, about 23% would save their pet over a distant cousin, and only about 5% would save their pet over a close friend. For the very curious, I could see myself saving my pet over the strangers or distant cousin). The strength of the relationship between pet owners and their animals appears to be strong enough to, quite literally, make almost half of them throw another human stranger under the bus to save their pet’s life.

This is a strange response to give, but not for the obvious reasons: given that our pets are being treated as our children by certain parts of our brain, this raises the question of why anyone, let alone a majority of people, would be willing to sacrifice the life of their pet to save a stranger. I don’t expect, for instance, that many people would be willing to let their baby get hit by the bus to save a tourist, so why the discrepancy? Three potential reasons come to mind: first, the pets are only “fooling” certain psychological systems. While some parts of our psychology might be treating pets as children, other parts may well not be (children do not typically look like cats or dogs, for instance). The second possible reason involves the clear threat of moral condemnation. As we saw, people are substantially more interested in saving their own pets, relative to a stranger’s pet. By extension, it’s probably safe to assume that other, uninvolved parties wouldn’t be terribly sympathetic to your decision to save an animal over a person, so the costs of saving the pet might well be perceived as higher. Similarly, the potential benefits of saving an animal may typically be lower than those of saving another person, as saved individuals and their allies are more likely to do things like reciprocate help, relative to a non-human. Sure, the pet’s owner might reciprocate, but the pet itself would not.

The final potential reason that comes to mind concerns that interesting bit of information I alluded to earlier: women were more likely to indicate they would save the animal in all conditions, and often substantially so. Why might this be the case? The most probable answer to that question again returns to the pets-as-children model: whereas women have not had to face the risk of genetic uncertainty in their children, men have. This risk makes males generally less interested in investing in children and could, by extension, make them less willing to invest in pets over people. The classic phrase, “Momma’s babies; Daddy’s maybes” could apply to this situation, albeit in an under-appreciated way (in other words, men might be harboring doubts about whether the pet is actually ‘theirs’, so to speak). Without reference to parental investment theory – which the study does not contain – explaining this sex difference in willingness to pick animals over people would be very tricky indeed. Perhaps it should come as no surprise, then, that the authors do not do a good job of explaining their findings, opting instead to redescribe them in a crude and altogether useless distinction between “hot” and “cold” types of cognitive processing.

“…and the third type of cognitive processing was just right”

In a very real sense, some parts of our brain treat our pets as children: they love them, care for them, invest in them, and wish to save them from harm. Understanding how such tendencies develop, and what cues our minds use to make distinctions between our own offspring, the offspring of others, our pets, and non-pet animals, are very interesting matters which are likely to be furthered by considering parental investment theory. Are people raised with pets from a young age more likely to view them as fictive offspring? How might hormonal changes during pregnancy affect women’s interest in pets? Might cues of a female mate’s infidelity make her male partner less interested in taking care of pets they jointly own? Under what conditions might pets be viewed as a deterrent or an asset to starting new romantic relationships, in the same way that children from a past relationship might be? The answers to these questions require placing pet care in its proper context, and you’re going to have quite a hard time doing that without the right theory.

References: R. Topolski, J.N. Weaver, Z. Martin, & J. McCoy (2013). Choosing between the Emotional Dog and the Rational Pal: A Moral Dilemma with a Tail. ANTHROZOÖS, 26, 253-263 DOI: 10.2752/175303713X13636846944321

Washing Hands In A Bright Room

Part of academic life in psychology – and a rather large part at that – centers around publishing research. Without a list of publications on your resume (or CV, if you want to feel different), your odds of being able to do all sorts of useful things, such as getting and holding onto a job, can be radically decreased. That said, people doing the hiring do not typically care to read through the published research of every candidate applying for the position. This means that career advancement involves not only publishing plenty of research, but publishing it in journals people care about. Though it doesn’t affect the quality of the research in any way, publishing in the right places can be suitably impressive to some. In some respects, then, your publications are a bit like recommendations, and some journals’ names carry more weight than others. On that subject, I’m somewhat disappointed to note that a manuscript of mine concerning moral judgments was recently rejected from one of these prestigious journals, adding to the ever-lengthening list of prestigious things I’ve been rejected from. Rejection, I might add, appears to be another rather large part of academic life in psychology.

After the first dozen or so times, you really stop even noticing.

The decision letter said, in essence, that while they were interesting, my results were not groundbreaking enough for publication in the journal. Fair enough; my results were a bit on the expected side of things, and journals presumably do have standards for such things. Being entirely not bitter about the whole experience of not having my paper placed in the esteemed outlet, I’ve decided to turn my attention to two recent articles published in a probably-unrelated journal within psychology, Psychological Science (proud home of the trailblazing paper entitled “Leaning to the Left Makes the Eiffel Tower Seem Smaller“). Both papers examine what could be considered to fall within the realm of moral psychology, and both present what one might consider to be novel – or at least cute – findings. Somewhat curiously, both papers also lean a bit heavily on the idea of metaphors being more than metaphors, perhaps owing to their propensity for using the phrase “embodied cognition”. The first paper deals with the association between light and dark and good and evil, while the second concerns the association between physical cleanliness and moral cleanliness.

The first paper, by Banerjee, Chatterjee, & Sinha (2012), sought to examine whether recalling abstract concepts of good and evil could make participants perceive the room they were in as brighter or darker, respectively. They predicted this, as far as I can tell, on the basis of embodied cognition suggesting that metaphorical representations are hooked up to perceptual systems and, though they aren’t explicit about this, they also seem to suggest that this connection is instantiated in such a way as to make people perceive the world incorrectly. That is to say that thinking about a time they behaved ethically or unethically ought to make people’s perceptions of the brightness of the world less accurate, which is a rather strange thing to predict if you ask me. In any case, 40 subjects were asked to think about a time they were ethical or unethical (so 20 per group), and then to estimate the brightness of the room they were in, from 1 to 7. The mean brightness rating of the ethical group was 5.3, and the rating in the unethical group was 4.7. Success; it seemed that metaphors really are embodied in people’s perceptual systems.

Not content to rest on that empirical success, Banerjee et al (2012) pressed forward with a second study to examine whether subjects recalling ethical or unethical actions were more likely to prefer objects that produced light (like a candle or a flashlight), relative to objects which did not (such as an apple or a jug). Seventy-four students were again split into two groups, asked to recall an ethical or unethical action in their life, asked to indicate their preference for the objects, and asked to estimate the brightness of the room in watts. The subjects in the unethical condition again estimated the room as being dimmer (M = 74 watts) than the ethical group did (M = 87 watts). The unethical group also tended to show a greater preference for light-producing objects. The authors suggest that this might be the case either because (a) the subjects thought the room was too dim, or (b) participants were trying to reduce their negative feelings of guilt about acting unethically by making the room brighter. This again sounds like a rather peculiar type of connection to posit (the connection between guilt and wanting things to be brighter), and it manages to miss anything resembling a viable functional account of what I think the authors are actually looking at (but more on that in a minute).

Maybe the room was too dark, so they couldn’t “see” a better explanation.

The second paper comes to us from Schnall, Benton, & Harvey (2008), and it examines an aspect of the disgust/morality connection. The authors noted that previous research had found a connection between increasing feelings of disgust and more severe moral judgments, and they wanted to see if they could get that connection to run in reverse: specifically, they wanted to test whether priming people with cleanliness would cause them to deliver less-severe moral judgments about the immoral behaviors of others. The first experiment involved 40 subjects (20 per cell seemed to be a popular number) who were asked to complete a scrambled-sentence task, with half of the subjects being posed with neutral sentences and the other half with sentences related to cleanliness. Immediately afterwards, they were asked to rate the severity of six different actions typically judged to be immoral on a 10-point scale. On average, the participants primed with the cleanliness words rated the scenarios as being less wrong (M = 5) than those given neutral primes (M = 5.8). While the overall difference was significant, only one of the six actions was rated as being significantly different between conditions, despite all showing the same pattern. In any case, the authors suggested that this may be due to the disgust component of moral judgments being reduced by the primes.

To test this explanation, the second experiment involved 44 subjects watching a scene from Trainspotting to induce disgust, and then having half of them wash their hands immediately afterwards. Subjects were then asked to rate the same set of moral scenarios. The group that washed their hands again had a lower overall rating of immorality (M = 4.7), relative to the group that did not (M = 5.3), with the same pattern as experiment 1 emerging. To explain this finding, the authors say that moral cleanliness is more than a metaphor (restating their finding) and then reference the idea that humans are trying to avoid “animal reminder” disgust, which is a pretty silly idea for a number of reasons that I need not get into here (the short version is that it doesn’t sound like the type of thing that does anything useful in the first place).

Both studies, it seems, make some novel predictions and present a set of results that might not automatically occur to people. Novelty only takes us so far, though: neither study seems to move our understanding of moral judgments forward much, if at all, and neither one even manages to put forth a convincing explanation for its findings. Taking these results at face value (with such small sample sizes, it can be hard to say whether these are definitely ‘real’ effects, and some research on priming hasn’t been replicating so well these days), there might be some interesting things worth noting here, but the authors don’t manage to nail down what those things are. Without going into too much detail, the first study seems to be looking at what would be a byproduct of a system dedicated to assessing the risk of detection and condemnation for immoral actions. Simply put, the risks involved in immoral actions go down as the odds of being identified do, so when something lowers the odds of being detected – such as it being dark, or the anonymity that something like the internet or a mask can provide – one could expect people to behave in a more immoral fashion as well.

The internet can make monsters of us all.

In terms of the second study, the authors would likely be looking at another byproduct, this time of a system designed to avoid the perception of association with morally-blameworthy others. As cleaning oneself can do things like remove evidence of moral wrongdoing, and thus lower the odds of detection and condemnation, one might feel a slightly reduced pressure to morally condemn others (as there is the perception of less concrete evidence of an association). With respect to the idea of detection and condemnation, then, both studies might be considered to be looking at the same basic kind of byproduct. Of course, phrased in this light (“here’s a relatively small effect that is likely the byproduct of a system designed to do other things and probably has little to no lasting effect on real-world behavior”), neither study seems terribly “trailblazing”. For a journal that can boast about receiving roughly 3,000 submissions a year and accepting only 11% of them for publication, I would think they could pass on such submissions in favor of research to which the labels “groundbreaking” or “innovative” could be more accurately applied (unless these actually were the most groundbreaking of the bunch, that is). It would be a shame for any journal if genuinely good work was passed on because it seemed “too obvious” in favor of research that is cute, but not terribly useful. It also seems silly that the journal in which one’s research is published matters in the first place, career-wise, but so does washing your hands in a bright room so as to momentarily reduce the severity of moral judgments to some mild degree.

References: Banerjee P, Chatterjee P, & Sinha J (2012). Is it light or dark? Recalling moral behavior changes perception of brightness. Psychological science, 23 (4), 407-9 PMID: 22395128

Schnall S, Benton J, & Harvey S (2008). With a clean conscience: cleanliness reduces the severity of moral judgments. Psychological science, 19 (12), 1219-22 PMID: 19121126