PZ Myers; Again…

Since my summer vacation is winding to a close, it’s time to relax with a fun, argumentative post that doesn’t deal directly with research. PZ Myers, an outspoken critic of evolutionary psychology – or at least of an imaginary version of the field, which may bear little or no resemblance to the real thing – is back at it again. After a recent defense of the field against PZ’s rather confused comments by Jerry Coyne and Steven Pinker, PZ has now responded to Pinker’s comments. Now, presumably, PZ feels like he did a pretty good job here. This is somewhat unfortunate, as PZ’s response basically plays by every rule outlined in the pop anti-evolutionary-psychology game: he asserts, incorrectly, what evolutionary psychology holds as a discipline, fails to mention any examples of this going on in print (though he does reference blogs, so there’s that…), and then expresses wholehearted agreement with many of the actual theoretical commitments put forth by the field. So I wanted to take this time to briefly respond to PZ’s recent response and defend my field. This should be relatively easy, since it takes PZ a full two sentences into his response proper to say something incorrect.

Gotta admire the man’s restraint…

Kicking off his reply, PZ has this to say about why he dislikes the methods of evolutionary psychology:

PZ: “That’s my primary objection, the habit of evolutionary psychologists of taking every property of human behavior, assuming that it is the result of selection, building scenarios for their evolution, and then testing them poorly.”

Familiar as I am with the theoretical commitments of the field, I find it strange that I overlooked the part that demands evolutionary psychologists assume that every property of human behavior is the result of selection. It might have been buried amidst all those comments about things like “byproducts”, “genetic drift”, “maladaptiveness”, and “randomness” by the very people who, more or less, founded the field. Most every paper using the framework in the primary literature I’ve come across, strangely, seems to write things like, “…the current data are consistent with the idea that [trait X] might have evolved to [solve problem Y], but more research is needed”, or might posit that, “…if [trait X] evolved to [solve problem Y], we ought to expect [design feature Z]”. There is, however, a grain of truth to what PZ writes, and it is this: hypotheses about adaptive function tend to make better predictions than non-adaptive ones. I highlighted this point in my last response to a post by PZ, but I’ll recreate the quote by Tooby and Cosmides here:

“Modern selectionist theories are used to generate rich and specific prior predictions about new design features and mechanisms that no one would have thought to look in the absence of these theories, which is why they appeal so strongly to the empirically minded….It is exactly this issue of predictive utility, and not “dogma”, that leads adaptationists to use selectionist theories more often than they do Gould’s favorites, such as drift and historical contingency. We are embarrassed to be forced, Gould-style, to state such a palpably obvious thing, but random walks and historical contingency do not, for the most part, make tight or useful prior predictions about the unknown design features of any single species.”

All of that seems to be beside the point, however, because PZ evidently doesn’t believe that we can actually test byproduct claims in the first place. You see, it’s not enough to just say that [trait X] is a byproduct; you need to specify what it’s a byproduct of. Male nipples, for instance, seem to be a byproduct of functional female nipples; female orgasm may be a byproduct of a functional male orgasm. Really, a byproduct claim is more a negative claim than anything else: it’s a claim that [trait X] has (or rather, had) no adaptive function. Substantiating that claim, however, requires one to be able to test for and rule out potential adaptive functions. Here’s what PZ had to say in his comments section about doing so:

PZ: “My argument is that most behaviors will NOT be the product of selection, but products of culture, or even when they have a biological basis, will be byproducts or neutral. Therefore you can’t use an adaptationist program as a first principle to determine their origins.”

Overlooking the peculiar contrasting of “culture” and “biological basis” for the moment: if one cannot use an adaptationist paradigm to test for possible functions in the first place, then it seems one would be hard-pressed to make any claim at all about function – whether that claim is that there is or isn’t one. One could, as PZ suggests, assume that all traits are non-functional until demonstrated otherwise but, again, since we apparently cannot use an adaptationist analysis to determine function, this would leave us assuming things like “language is a byproduct”. This is somewhat at odds with PZ’s suggestion that “there is an evolved component of human language”, but since he doesn’t tell us how he reached that conclusion – presumably not through some kind of adaptationist program – I suppose we’ll all just have to live with the mystery.

Methods: Concentrated real hard, then shook five times.

Moving on, PZ raises the following question about modularity in the next section of his response:

PZ: “…why talk about ‘modules’ at all, other than to reify an abstraction into something misleadingly concrete?”

Now this isn’t really a criticism of the field so much as a question about it, but that’s fine; questions are generally welcomed. In fact, I happen to think that PZ answered this question himself, without any awareness of it, when he was previously discussing spleen function:

PZ: “What you can’t do is pick any particular property of the spleen and invent functions for it, which is what I mean by arbitrary and elaborate.”

While PZ is happy with the suggestion that the spleen itself serves some adaptive function, he overlooks the fact – and indeed, would probably take it for granted – that it’s meaningful to talk about the spleen as a distinct part of the body in which it’s found. To put PZ’s comment in context, imagine some anti-evolutionary physiologist suggesting that it’s nonsensical to try and “pick any particular” part of the body and talk about “its specific function” as if it’s distinct from any other part (I imagine the exchange might go like this: “You’re telling me the upper half of the chest functions as a gas exchanger and the lower half functions to extract nutrients from food? What an arbitrary distinction!”). Of course, we know it does make sense to talk about different parts of the body – the heart, the lungs, and the spleen – and we do so because each is viewed as having a different function. Modularity essentially does the same thing for the brain. Though the brain might outwardly appear to be a single organ, it is actually a collection of functionally-distinct pieces. The parts of your brain that process taste information aren’t good at solving other problems, like vision. Similarly, a system that processes sexual arousal might do terribly at generating language. This is why brain damage tends to cause rather selective deficits in cognitive abilities, rather than global or unpredictable ones. We insist on modularity of the mind for the same reason PZ insists on modularity of the body.

PZ also brings the classic trope of dichotomizing “learned/cultural” and “evolved/genetic” to bear, writing:

PZ: “…I suspect it’s most likely that they are seeing cultural variations, so trying to peg them to an adaptive explanation is an exercise in futility.”

I will only give the fairly-standard reply to such sentiments, since they’ve been voiced so often before that it’s not worth spending much time on. Yes, cultures differ, and yes, culture clearly has effects on behavior and psychology. I don’t think any evolutionary psychologist would tell you differently. However, these cultural differences do not just come from nowhere, and neither do our consistent patterns of responses to those differences. If, for instance, local sex-ratios have some predictable effects on mating behavior, one needs to explain why that is the case. This is like the byproduct point above: it’s not enough to say “[trait X] is a product of culture” and leave it at that if you want an explanation of trait X that helps you understand anything about it. You need to explain why that particular bit of environmental input is having the effect that it does. Perhaps the effect is the result of psychological adaptation for processing that particular input, or perhaps the effect is a byproduct of mechanisms not designed to process it (which still requires identifying the responsible psychological adaptations), or perhaps the consistent effect is just a rather-unlikely run of random events all turning out the same. In any case, to reach any of these conclusions, one needs an adaptationist approach – or PZ’s magic 8-ball.

Also acceptable: his magic Ouija board.

The final point I want to engage with concerns two rather interesting comments from PZ. The first comment comes from his initial reply to Coyne and the second from his reply to Pinker:

PZ: “I detest evolutionary psychology, not because I dislike the answers it gives, but on purely methodological and empirical grounds…Once again, my criticisms are being addressed by imagining motives.”

While PZ continues to stress that, of course, he could not possibly have ulterior, conscious or unconscious, motives for rejecting evolutionary psychology, he then makes a rather strange comment in the comments section:

PZ: “Evolutionary psychology has a lot of baggage I disagree with, so no, I don’t agree with it. I agree with the broader principle that brains evolved.”

Now it’s hard to know precisely what PZ meant to imply with the word “baggage” there because, as usual, he’s rather light on the details. When I think of the word “baggage” in that context, however, my mind immediately goes to unpleasant social implications (as in, “I don’t identify as a feminist because the movement has too much baggage”). Such a conclusion would imply there are non-methodological concerns that PZ has about something related to evolutionary psychology. Then again, perhaps PZ simply meant some conceptual, theoretical baggage that can be remedied with some new methodology that evolutionary psychology currently lacks. Since I like to assume the best (you know me), I’ll be eagerly awaiting PZ’s helpful suggestions as to how the field can be improved by shedding its baggage as it moves into the future.

Why Do People Adopt Moral Rules?

First dates and large social events, like family reunions or holiday gatherings, can leave people wondering which topics should be off-limits for conversation, or even dreading which topics will inevitably be discussed. There’s nothing quite like the discomfort brought on by a drunken uncle who feels the need to let you know precisely what he thinks about the proper way to craft immigration policy, or what he thinks about gay marriage. Similarly, it might not be a good idea to open up a first date with an in-depth discussion of your deeply-held views on abortion and racism in the US today. People realize, quite rightly, that such morally-charged topics have the potential to be rather divisive, and can quickly alienate new romantic partners or cause conflict within otherwise cohesive groups. Alternatively, however, in the event that you happen to be in good agreement with others on such topics, they can prove to be fertile grounds for beginning new relationships or strengthening old ones; “the enemy of my enemy is my friend” and similar sayings attest to that. All this means you need to be careful about where and how you spread your views on these topics. Moral stances are kind of like manure in that way.

Great on the fields; not so great for tracking around everywhere you walk.

Now these are pretty important things to consider if you’re a human, since a good portion of your success in life is going to be determined by who your allies are. One’s own physical prowess is no longer sufficient to win conflicts when you’re fighting against increasingly larger alliances, not to mention that allies also do wonders for your available options regarding other cooperative ventures. Friends are useful, and this shouldn’t be news to anyone. This would, of course, drive selection pressures for adaptations that help people build and maintain healthy alliances. However, not everyone ends up with a strong network of alliances capable of helping them protect or achieve their interests. Friends and allies are a zero-sum resource, as the time they spend helping one person (or one group of people) is time not spent with another. The best allies are a very limited and desirable resource, and only a select few will have access to them: those who have something of value to offer in return. So what are the people towards the bottom of the alliance hierarchy to do? Well, one potential answer is the obvious, and somewhat depressing, outcome: not much. They tend to get exploited by others, often ruthlessly so. They either need to increase their desirability as a partner in order to make friends who can protect them, or face those severe and persistent social costs.

Any available avenue that helps those exploited parties avoid such costs and protect their interests, then, ought to be extremely appealing. A new paper by Petersen (2013) proposes that one of these avenues might be for those lacking in the alliance department to be more inclined to use moralization to protect their interests. Specifically, the proposition on offer is that if one lacks the private ability to enforce one’s own interests, in the form of friends, one might be increasingly inclined to turn towards public means of enforcement: recruiting third-party moralistic punishers. If you can create a moral rule that protects your self-interest, third parties – even those who otherwise have no established alliance with you – ought to become your de facto guardians whenever those interests are threatened. Accordingly, the argument goes that those lacking in friends ought to be more likely to support existing rules that protect them against exploitation, whereas those with many friends, who are capable of exploiting others, ought to feel less interest in supporting moral rules that prevent said exploitation. In support of this model, Petersen (2013) notes that there is a negative correlation – albeit a rather small one – between proxies for moralization and friend-based social support (as opposed to familial or religious support, which also tended to correlate, but in the positive direction).

So let’s run through a hypothetical example to clarify this a bit: you find yourself back in high school and relatively alone in that world, socially. The school bully, with his pack of friends, has been hounding you and taking your lunch money; the classic bully move. You could try to stand up to the bullies to prevent the loss of money, but such attempts are likely to be met with physical aggression, and you’d only end up getting yourself hurt on top of losing your money anyway. Since you don’t have enough friends who are willing and able to help tip the odds in your favor, you could attempt to convince others that it ought to be immoral to steal lunch money. If you’re successful in your efforts, the next time the bullies attempt to inflict costs on you, they would find themselves opposed by the other students who would otherwise just stay out of it (provided, of course, that they’re around at the time). While these other students might not be your allies at other times, they are your allies, temporarily, when you’re being stolen from. Of course, moralizing stealing also prevents you from stealing from others – not just from having it done to you – but since you weren’t in a position to be stealing from anyone in the first place, it’s really not that big of a loss for you, relative to the gain.

Phase Two: Try to make wedgies immoral.

While such a model posits a potentially interesting solution for those without allies, it leaves many important questions unaddressed. Chief among these questions is the matter of what’s in it for third parties. Why should other people adopt your moral rules, as opposed to their own, let alone be sure to intervene even if they share the moral rule? While third-party support is certainly a net benefit for the moralizer who initially can’t defend their own interests, it’s a net cost to the people who actually have to enforce the moral rule. If those bullies are trying to steal from you, the costs of deterring – and, if necessary, fighting off – the bullies fall on the shoulders of others who would probably rather avoid such risks. These costs are magnified further because a moral rule against stealing lunch money ought to require people to punish any and all instances of the bullying, not just your specific one. As punishing people is generally not a great way to build or maintain relationships with them, supporting this moral rule could prevent the punishers from forming what might be otherwise-useful alliances with the bullying parties. Losing potential friendships to temporarily support someone you’re not actually friends with – and won’t become friends with – doesn’t sound like a very good investment.

The costs don’t even end there, though. Let’s say, hypothetically, that most people do agree that the stealing of lunch money ought to be stopped, and are willing to accept the moral rule in the first place. There are costs involved in enforcing the rule, and it’s generally in everyone’s best interest not to suffer those costs personally. So, while people might be perfectly content with there being a rule against stealing, they don’t want to be the ones who have to enforce it; they would rather free-ride on other people’s punishment efforts. Unfortunately, the moral rule requires a large number of potential punishers for it to be effective. This means that those willing to punish would need to incentivize non-punishers to start punishing as well. These incentives, of course, aren’t free to deliver. This now leads to punishers needing to, in essence, not only punish those who commit the immoral act, but also punish those who fail to punish people who commit the immoral act (which leads to punishing those who fail to punish those who fail to punish as well, and so on; the recursion can be hard to keep track of). As the costs of enforcement continue to mount, in the absence of compensating benefits it’s not at all clear to me why third parties should become involved in the disputes of others, or try to convince other people to get involved. Punishing an act “because it’s immoral” is only a semantic step away from punishing something “just because”.

A more plausible model, I feel, would be an alliance-based model for moralization: people might be more likely to adopt moral rules in the interests of increasing their association value to specific others. Let’s use one of the touchy initial subjects – abortion – as a test case here: if I adopt a moral stance opposing the practice, I make myself a less-appealing alliance partner for anyone who likes the idea of abortions being available, but I also make myself a more-appealing partner to anyone who dislikes the idea (all else being equal). Now that might seem like a wash in terms of costs and benefits on the whole – you open yourself up to some friends and foreclose on others – but there are two main reasons I would still favor the alliance account. The first is the most obvious: it locates some potential benefits for the rule-adopters. While it is true that there are costs to taking a moral stance, there aren’t only costs anymore. The second benefit of the alliance account is that the key issue here might not be whether you make or lose friends on the whole, but rather that it can ingratiate you to specific people. If you’re trying to impress a particular potential romantic partner or ally, rather than all romantic partners or allies more generally, it might make good sense to tailor your moral views to that specific audience. As was noted previously, friendship is a zero-sum game, and you don’t get to be friends with everyone.

Basically, these two aren’t trying to impress each other.

It goes without saying that the alliance model is far from complete in terms of having all its specific details fleshed out, but it gives us some plausible places from which to start our analysis: considerations of what specific cues people might use to assess relative social value, or how those cues interact with current social conditions to determine the degree of current moral support. I feel the answers to such questions will help us shed light on many additional ones, such as why almost all people will agree with the seemingly-universal rule stating “killing is morally wrong” and then go on to expand upon the many, many non-universal exceptions to that rule over which they don’t agree (such as when killing in self-defense, or when you find your partner having sex with another person, or when killing a member of certain non-human species, or killing unintentionally, or when killing a terminally-ill patient rather than letting them suffer, and so on…). The focus, I feel, should not be on how powerful a force third-party punishment can be, but rather on why third parties might care (or fail to care) about the moral violations of others in the first place. Just because I think murder is morally wrong, it doesn’t mean I’m going to react the same way to any and all cases of murder.

References: Petersen, M. (2013). Moralization as protection against exploitation: Do individuals without allies moralize more? Evolution and Human Behavior, 34, 78-85 DOI: 10.1016/j.evolhumbehav.2012.09.006

The Inferential Limits Of Economic Games

Having recently returned from the Human Behavior & Evolution Society’s (HBES) conference, I would like to take a moment to let everyone know what an excellent time I had there. Getting to meet some of my readers in person was a fantastic experience, as was the pleasure of being around the wider evolutionary research community and reconnecting with old friends. The only negative parts of the conference involved making my way through the flooded streets of Miami on the first two mornings (which very closely resembled this scene from the Simpsons) and the pool party at which I way over-indulged in drinking. Though there was a diverse array of research presented spanning many different areas, I ended up primarily in the seminars on cooperation, as the topic tends most towards my current research projects. I would like to present two of my favorite findings from those seminars, which serve as excellent cautionary tales concerning what conclusions one can draw from economic games. Despite the popular impression, there’s a lot more to evolutionary psychology than sex research.

Though the Sperm-Sun HBES logo failed to adequately showcase that diversity.

The first game to be discussed is the classic dictator game. In this game, two participants are brought into the lab and assigned the role of either ‘dictator’ or ‘recipient’. The dictator is given a sum of money (say, $10) and the option to divide it however they want between the pair. If dictators were maximally selfish – as standard economic rationality might suggest – they would consistently keep all the money and give none to the recipient. Yet this is not what we frequently see: dictators tend to give at least some of the money to the other person, and an even split is often made. While giving these participants anonymity from one another does tend to reduce offers, even ostensibly anonymous dictators continue to give. This result clashes somewhat with our everyday experience: after all, provided we have money in our pockets, we’re faced with possible dictator-like experiences every time we pass someone on the street, whether they’re homeless and begging for money or apparently well-off. Despite the near-constant opportunities during which we could transfer money to others, we frequently do not. So how do we reconcile the experimental and everyday results?
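The game’s payoff structure is simple enough to sketch in a few lines of Python (a toy illustration, not code from any of the studies discussed here, using the $10 endowment from the example above):

```python
def dictator_payoffs(offer, endowment=10):
    """Payoffs in a one-shot dictator game.

    The dictator keeps whatever they don't transfer; the recipient
    gets the offer. The $10 endowment follows the example in the text.
    """
    if not 0 <= offer <= endowment:
        raise ValueError("offer must be between 0 and the endowment")
    return endowment - offer, offer

print(dictator_payoffs(0))  # (10, 0): the payoff-maximizing choice
print(dictator_payoffs(5))  # (5, 5): the even split often seen in the lab
```

Nothing in the payoffs rewards giving, which is exactly what makes positive offers in the lab interesting in the first place.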

One possibility is to suggest that the giving in dictator games is largely induced by experimental demand effects: subjects are being placed into a relatively odd situation and are behaving rather oddly because of it (more specifically, because they are inferring what the experimenter “wants” them to do). Of course, it’s not so easy to replicate the context of the dictator game (a sudden windfall of a divisible asset and a potential partner to share it with) without subjects knowing they’re taking part in an experiment. Winking & Mizer (2013) managed to find a way around these problems in Las Vegas. In this field experiment, a confederate would be waiting at a bus stop when the ignorant subject approached. Once the subject was waiting for the bus as well, the confederate would pretend to take a phone call and move slightly away from the area with their back turned to the subject. It was at this point that the experimenter approached on his cell phone, ostensibly in a hurry. As the experimenter passed the subject, he gave them $20 in poker chips, saying that he was late for his ride to the airport and didn’t have time to cash them in. These casino chips are an excellent stimulus, as they provided a good cover story for why they were being handed over: they only have value when cashed in, and the experimenter didn’t have time to do so. Using actual currency wouldn’t have worked as well, as it might have raised suspicions about the setup, since currency travels well from place to place.

In the first condition, the experimenter left and the confederate returned without further instruction; in the second condition, the experimenter said, “I don’t know. You can split them with that guy however you want” while gesturing at the confederate before he ran off. A third condition involved an explicit version of the dictator game experiment with poker chips, during which anonymity was granted. In the standard version of the experiment – when the subjects knew about the game explicitly – 83% of subjects offered at least some of the chips to other people with a median offer around $5, resembling previous experimental results fairly well. How about the other two conditions? Well, of the 60 participants who were not told they were explicitly taking part in the game, all of them kept all the money. This suggests very strongly that all – or at least most – of the giving we observe in dictator games is grounded in the nature of the experiment itself. Indeed, many of the subjects in the first condition, where the instruction to split was not given, seemed rather perplexed by the purpose of the study during the debriefing. The subjects wondered precisely why in the world they would split the money with the confederate in the first place. Like all of us walking down the street with money on our person, the idea that they would just give that money to other people seemed rather strange.

“I’m still not following: you want to do what with all this money, again?”

The second paper of interest looked at behavior in another popular game: the public goods game. In these games, subjects are typically placed together in groups of four and are provided with a sum of money. During each round, players can invest any amount of their money in the public pot and keep the rest. All the money in the pot is then multiplied by some amount and then divided equally amongst all the participants. In this game, the rational economic move is typically to not put any money in, as for each dollar you put in, you receive less than a dollar back (since the multiplier is below the number of subjects in the group); not a great investment. On the other hand, the group-maximizing outcome is for all the subjects to donate all their money, so everyone ends up richer than when they started. Again, we find that subjects in these games tend to donate some of their money to the public pot, and many researchers have inferred from this giving that people have prosocial preferences (i.e. making other people better off per se increases my subjective welfare). If such an inference was correct, then we ought to expect that subjects should give more money to the public good provided they know how much good they’re doing for others.
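The individual-versus-group tension in that payoff structure is easy to make concrete with a minimal Python sketch (the endowment and multiplier are assumed values for the example, not parameters from the study; what matters is that the multiplier is smaller than the group size):

```python
def payoffs(contributions, endowment=40, multiplier=1.5):
    """Each player's earnings in a one-shot public goods game.

    Every player keeps (endowment - contribution) and receives an equal
    share of the multiplied pot. Endowment and multiplier are
    illustrative values only.
    """
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# If everyone contributes everything, everyone ends up richer...
print(payoffs([40, 40, 40, 40]))  # each earns 60
# ...but a lone free-rider does better still, at the contributors' expense:
print(payoffs([0, 40, 40, 40]))   # free-rider earns 85, contributors 45
```

Because each contributed unit returns only multiplier/n units to the contributor (0.375 here), withholding is always individually profitable, even though universal contribution leaves everyone better off.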

Towards examining this inference, Burton-Chellew & West (2013) put subjects into a public goods game in three different conditions. First, there was the standard condition, described above. Second was a condition like the standard game, except subjects received an additional piece of information in the form of how much the other players in the game earned. Finally, there was a third condition in which subjects didn’t even know the game was being played with other people; subjects were merely told they could donate some fraction of their money (from 0 to 40 units) to a “black box” which would perform a transformation on the money received and give them a non-negative payoff (which was the same average benefit they received in the game when playing with other people, but they didn’t know that). In total, 236 subjects played in one of the first two conditions and also in the black box condition, counterbalancing the order of the games (they were informed the two were entirely different experiments).

How did contributions change between the standard condition and the black box condition over time? They didn’t. Subjects who knew they were playing a public goods game donated approximately as much during each round as the subjects who were just putting payments into the black box and getting some payment out: donations started out relatively high, and declined over time (presumably as subjects learned they tended to get less money by contributing). The one notable difference was in the additional-information condition: when subjects could see the earnings of others relative to their contributions, subjects started to contribute less money to the public good. As a control, all three of the above games were replicated with a multiplication rule that made the profit-maximizing strategy to donate all of one’s available money, rather than none. In these conditions, the change in donations between the standard and black box conditions again failed to differ significantly, and contributions were still lower in the enhanced-information condition. Further, in all these games subjects tended to fail to make the profit-maximizing decision, irrespective of whether that decision was to donate all their money or none of it. Despite this strategy being deemed relatively “easy” to figure out by researchers, it apparently was not.

Other people not included, or required

Both of these experiments pose some rather stern warnings about the inferences we might draw from the behavior of people playing economic games. Some of our experiments might end up inducing certain behaviors and preferences, rather than revealing them. We’re putting people into evolutionarily-strange situations in these experiments, and so we might expect some evolutionarily-strange outcomes. It is also worth noting that just because you observe some prosocial outcome – like people giving money apparently altruistically or contributing to the good of others – it doesn’t follow that these outcomes are the direct result of cognitive modules designed to bring them about. Sure, my behavior in some of these games might end up reducing inequality, for instance, but it doesn’t follow that people’s psychology was selected to do such things. There are definite limits to how far these economic games can take us inferentially, and it’s important to be aware of them. Do these studies show that such games are worthless tools? I’d say certainly not, as behavior in them is certainly not random. We just need to be mindful of their limits when we try to draw conclusions from them.

References: Burton-Chellew, M.N., & West, S.A. (2013). Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences of the United States of America, 110 (1), 216-221. PMID: 23248298

Winking, J., & Mizer, N. (2013). Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002

Equality-Seeking Can Lift (Or Sink) All Ships

There’s a saying in economics that goes, “A rising tide lifts all ships”. The basic idea behind the saying is that the marginal benefits that accrue from people exchanging goods and services are good for everyone involved – and even for some who are not directly involved – in much the same way that all the boats in a body of water rise or fall in height together as the overall water level does. While there is an element of truth to the saying (trade can be good for everyone, and the resources available to the poor today can, in some cases, be better than those available to even the wealthy in generations past), economies, of course, are not like bodies of water that rise and fall uniformly; some people can end up radically better- or worse-off than others as economic conditions shift, and inequality is a persistent factor in human affairs. Inequality – or, more aptly, the perception of it – is also commonly used as a justification for furthering certain social or moral goals. There appears to be something (or somethings) about inequality that just doesn’t sit well with people.

And I would suggest that those people go and eat some cake.

People’s ostensible discomfort with inequality has not escaped the eyes of many psychological researchers. There are some who suggest that humans have a preference for avoiding inequality; an inequality aversion, if you will. Phrased slightly differently, there are some who suggest that humans have an egalitarian motive (Dawes et al, 2007) that is distinct from other motives, such as enforcing cooperation or gaining benefits. Provided I’m parsing the meaning of the phrase correctly, then, the suggestion being made by some is that people should be expected to dislike inequality per se, rather than dislike inequality for other, strategic reasons. Demonstrating evidence of a distinct inequality aversion, however, can be difficult. There are two reasons for this, I feel: the first is that inequality is often confounded with other factors (such as someone not cooperating or suffering losses). The second reason is that I think it’s the kind of preference we shouldn’t expect to exist in the first place.

Taking these two issues in order, let’s first consider the paper by Dawes et al (2007) that sought to disentangle some of these confounding issues. In their experiment, 120 subjects were brought into the lab in groups of 20. These groups were further divided into anonymous groups of 4, such that each participant played in five rounds of the experiment, but never with the same people twice. The subjects also did not know about anyone’s past behavior in the experiment. At the beginning of each round, every subject in each group received a random number of payment units between some unmentioned specific values, and everyone was aware of the payments of everyone else in their group. Naturally, this tended to create some inequality in payments. Subjects were given means by which to reduce this inequality, however: they could spend some of their payment points to either add to or subtract from other people’s payments at a ratio of 3 to 1 (in other words, I could spend one unit of my payment to either reduce your payment by three points or add three points to your payment). These additions and deductions were all decided on in private and enacted simultaneously, so as to avoid retribution and cooperation factors. It wasn’t until the end of each round that subjects saw how many additions and reductions they had received. In total, each subject had 15 chances to either add to or deduct from someone else’s payment (3 people per round over 5 rounds).
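To make the mechanics of that 3-to-1 rule concrete, here is a small sketch; the payment numbers are invented for illustration, and only the ratio itself comes from the study’s design. It also makes plain an arithmetic point that matters later: every unit spent on a deduction destroys four units of total group welfare, while every unit spent on an addition creates two.

```python
# Sketch of the 3-to-1 modification rule from Dawes et al.'s design.
# The payment numbers below are invented for illustration; only the
# ratio itself comes from the study.
RATIO = 3  # each payment unit spent changes the target's payment by 3

def deduct(payments, actor, target, units):
    """Actor spends `units` to reduce the target's payment by 3 * units."""
    payments[actor] -= units
    payments[target] -= units * RATIO

def add(payments, actor, target, units):
    """Actor spends `units` to increase the target's payment by 3 * units."""
    payments[actor] -= units
    payments[target] += units * RATIO

group = {"low": 10, "mid": 15, "high": 25}
deduct(group, "low", "high", 2)  # low spends 2; high loses 6
# group is now {"low": 8, "mid": 15, "high": 19}: the gap shrank, but
# total welfare fell from 50 to 42. Each unit spent on deduction
# destroys 4 units of welfare; each unit spent on addition creates 2.
```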

The results showed that most subjects paid to either add to or deduct from someone else’s payment at least once: 68% of people reduced the payment of someone else at least once, whereas 74% increased someone’s payment at least once. It wasn’t what one might consider a persistent habit, though: only 28% reduced people’s payments more than five times (while 33% added that often), and only 6% reduced more than 10 times (whereas 10% added that often). This, despite there being inequality to be reduced in all cases. Further, an appreciable number of the modifications didn’t go in the inequality-reducing direction: 29% of reductions went to below-average earners, and 38% of the additions went to above-average earners. Of particular interest, however, is the precise way in which subjects ended up reducing inequality: the people who earned the least in each round tended to spend 96% more on deductions than top earners. In turn, top earners averaged spending 77% more on additions than the bottom earners. This point is of interest because positing a preference for avoiding inequality does not necessarily help one predict the shape that equality will ultimately take.

You could also cut the legs off the taller boys in the left picture so no one gets to see.

The first thing worth pointing out here, then, is that about half of all the inequality-reducing behaviors that people engaged in ended up destroying overall welfare. These are behaviors in which no one is materially better off. I’m reminded of part of a standup routine by Louis CK concerning that idea, in which he recounts the following story (starting at about 1:40):

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.”

It’s important to note this so as to point out that achieving equality itself doesn’t necessarily do anything useful. It is not as if equality automatically makes everyone – or anyone – better off. So what kind of useful outcomes might such spiteful behavior result in? To answer that question, we need to examine the ways people reduced inequality. Any player in this game could reduce the overall amount of inequality by either deducting from high earners’ payments or adding to low earners’. This holds for both the bottom and top earners, meaning there are several ways of reducing inequality available to all players. Low earners, for instance, could reduce inequality by engaging in spiteful reductions towards everyone above them until they’re all down at the same low level; they could also reduce the overall inequality by benefiting everyone above them, until everyone (but them) is at the same high level. Alternatively, they could engage in a mixture of these strategies, benefiting some people and harming others. The same holds for high earners, just in the opposite directions. Which path people take depends on what their set point for ‘equal’ is. Strictly speaking, then, a preference for equality doesn’t tell us which method people should opt for, nor does it tell us at what levels of inequality people will be relatively satisfied and efforts to achieve equality will cease.

There are, however, other possibilities for explaining these results beyond an aversion to inequality per se. One particularly strong alternative is that people use perceptions of inequality as inputs for social bargaining. Consider the following scenario: two people are working together to earn a joint prize, like a $10 reward. If they work together, they get the $10 to split; if they do not work together, neither will receive anything. Further, let’s assume one member of this pair is greedy and, in round one, after they cooperate, takes $9 of the pot for themselves. Now, strictly speaking, the person who received $1 is better off than if they had received nothing at all, but that doesn’t mean they ought to accept that distribution, and here’s why: if the person with $1 refuses to cooperate during the next round, they only lose that single dollar; the selfish player would lose out on nine times as much. This asymmetry in losses puts the poorer player in a stronger bargaining position, as they have far less to lose from not cooperating. It is from bargaining situations similar in structure to this one that our sense of fairness likely emerged.
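The asymmetry in that example is worth spelling out numerically. This toy sketch uses the $10 split from the example above; the function itself is purely illustrative.

```python
# Toy sketch of the bargaining asymmetry from the $10-split example
# above; the function and numbers are purely illustrative.
def defection_losses(split):
    """If cooperation breaks down next round, each player forfeits
    whatever share they were taking from the joint prize."""
    return dict(split)

split = {"greedy": 9, "poor": 1}  # the lopsided $9/$1 division
losses = defection_losses(split)

# Refusing to cooperate costs the poor player $1 but costs the greedy
# player $9: the low earner's threat is nine times cheaper to make,
# which is the source of their bargaining leverage.
ratio = losses["greedy"] / losses["poor"]  # 9.0
```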

So let’s apply this analysis back to the results of the experiment: people all start off with different amounts of money and people are in positions to benefit or harm each other. Everyone wants to leave with as much benefit as possible, which means contributing nothing and getting additions from everyone else. However, since everyone is seeking this same outcome and they can’t all have it, certain compromises need to be reached. Those in high-earning positions face a different set of problems in that compromise than those in low-earning positions: while the high earners are doing something akin to trying to maintain cooperation by increasing the share of resources other people get (as in the previous example), low earners are faced with the problem of negotiating for a better payoff, threatening to cut off cooperation in the process. Both parties seem to anticipate this, with low earners disproportionately punishing high earners, and high earners disproportionately benefiting low earners. That there is no option for cooperation or bargaining present in this experiment is, I think, beside the point, as our minds were not designed to deal with the specific context presented in the experiment. Along those same lines, simply telling people that “you’re now anonymous” doesn’t mean that their mind will automatically function as if it were positive no one could observe its actions, and telling people their computer can’t understand their frustration won’t stop them from occasionally yelling at it.

“Listen only to my voice: you are now anonymous. You are now anonymous”

As a final note, one should be careful about inferring a motive or preference for equality just because inequality was sometimes reduced. A relatively simple example should demonstrate why: consider an armed burglar who enters a store, points their gun at the owner, and demands all the money in the register. If the owner hands over the money, they have delivered a benefit to the burglar at a cost to themselves, but most of us would not understand this as an act of altruism on the part of the owner; the owner’s main concern is not getting shot, and they are willing to pay a small cost (the loss of money) so as to avoid a larger one (possible death). Other research has found, for instance, that when given the option to pay a fixed cost (a dollar) to reduce another person’s payment by any amount (up to a total of $12), people who engage in reduction are highly likely to generate inequality that favors themselves (Houser & Xiao, 2010). It would be inappropriate to suggest that people are equality-averse on the basis of such an experiment, however, and, more to the point, doing so wouldn’t further our understanding of human behavior much, if at all. We want to understand why people do certain things; not simply that they do them.

References: Dawes CT, Fowler JH, Johnson T, McElreath R, & Smirnov O (2007). Egalitarian motives in humans. Nature, 446 (7137), 794-6 PMID: 17429399

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008

Why Would You Ever Save A Stranger Over A Pet?

The relationship between myself and my cat has been described by many as a rather close one. After I leave my house for almost any amount of time, I’m greeted upon my return by what appears to be a rather excited animal that will meow and purr excessively, all while rubbing on and rolling around my feet. In turn, I feel a great deal of affection towards my cat, and derive feelings of comfort and happiness from taking care of and petting her. Like the majority of Americans, I happen to be a pet owner, and these experiences and ones like them will all sound perfectly normal and relatable. I would argue, however, that they are, in fact, very strange feelings, biologically-speaking. Despite the occasional story of cross-species fostering, other animals do not seem to behave in ways that indicate they seek out anything resembling pet ownership. It’s often not until the idea of other species not making habits of having pets is raised that one realizes how strange a phenomenon pet ownership can be. Finding that bears, for instance, reliably took care of non-bears, providing them with food and protection, would be a biological mystery of the first degree.

And that I get most of my work done like this seems normal to me.

So why do people seem to be so fond of pets? My guess is that the psychological mechanisms that underlie pet ownership in humans are not designed for that function per se. I would say that for a few reasons, notable among them being the time and resource factors. First, psychological adaptations take a good deal of time to be shaped by selective forces, which means long periods of co-residence between animals and people would be required for any dedicated adaptations to have formed. Though it’s no more than a guess on my part, I would assume that conditions that made extended periods of co-residence more probable would likely not have arisen prior to the advent of agriculture and geographically-stable human populations. The second issue involves the cost/benefit ratios: pets require a good deal of investment, at least in terms of food. In order for there to have been any selective pressure to keep pets, the benefits provided by the pets would have needed to more than offset the costs of their care, and I don’t know of any evidence in that regard. Dogs might have been able to pull their weight in terms of assisting in hunting and protection, but it’s uncertain; other pets – such as cats, birds, lizards, or even the occasional insect – probably did not. While certain pets (like cats) might well have been largely self-sufficient, they don’t seem to offer much in the way of direct benefits to their owners either. No benefits means no distinct selection, which means no dedicated adaptations.

Given that there are unlikely to be dedicated pet modules in our brain, what other systems are good candidates for explaining the tendency towards seeking out pets? The most promising one that comes to mind is our already-existing set of systems designed for the care of our own, highly-dependent offspring. Positing that pet care is a byproduct of our infant care would manage to skirt both the issues of time and resources; our minds were designed to endure such costs to deliver benefits to our children. It would also allow us to better understand certain facets of the ways people behave towards their pets, such as the “aww” reaction people often have to pets (especially young ones, like kittens and puppies) and babies, as well as the frequent use of motherese (baby-talk) when talking to pets and children (to compare speech directed at pets and babies, see here and here; note as well that you don’t often hear adults talking to each other in this manner). Of course, were you to ask people whether their pets are their biological offspring, many would give the correct response of “no”. These verbal responses, however, do not indicate that other modules of the brain – ones that aren’t doing the talking – “know” that pets aren’t actually your offspring, in much the same way that parts of the brain dedicated to arousal don’t “know” that generating arousal to pornography isn’t going to end up being adaptive.

There is another interesting bit of information concerning pet ownership that I feel can be explained through the pets-as-infants model, but to get to it we need to first consider some research on moral dilemmas by Topolski et al (2013). This dilemma is a favorite of mine, and of the psychological community more generally: a variant of the trolley dilemma. In this study, 573 participants were asked to respond to a series of 12 similar moral dilemmas, all of which had the same basic setup: there is a speeding bus that is about to hit either a person or an animal, both of which have wandered out into the street. The subject only has time to save one of them, and is asked which they would prefer to save. (Note: each subject responded to all 12 dilemmas, which might result in some carryover effects; a between-subjects design would have been stronger here. Anyway…) The identities of the animal and person in the dilemma were varied across the conditions: the animal was either the subject’s pet (subjects were asked to imagine one if they didn’t currently have one) or someone else’s pet, and the person was either a foreign tourist, a hometown stranger, a distant cousin, a best friend, a sibling, or a grandparent.

The study also starred Keanu Reeves.

In terms of saving someone else’s pet, people generally didn’t seem terribly interested: willingness ranged from a high of about 12% of subjects choosing someone else’s pet over a foreign tourist to a low of approximately 2% picking the strange pet over their own sibling. The willingness to save the animal in question rose substantially when it was the subject’s own pet being considered, however: while people were still about as unlikely to save their own pet in cases involving a grandparent or sibling, approximately 40% of subjects indicated they would save their pet over a foreign tourist or a hometown stranger (for the curious, about 23% would save their pet over a distant cousin and only about 5% would save their pet over a close friend. For the very curious, I could see myself saving my pet over the strangers or the distant cousin). The strength of the relationship between pet owners and their animals appears to be strong enough to, quite literally, make almost half of them throw another human stranger under the bus to save their pet’s life.

This is a strange response to give, but not for the obvious reasons: given that our pets are being treated as our children by certain parts of our brain, this raises the question as to why anyone, let alone a majority of people, would be willing to sacrifice the lives of their pets to save a stranger. I don’t expect, for instance, that many people would be willing to let their baby get hit by the bus to save a tourist, so why the discrepancy? Three potential reasons come to mind: first, the pets are only “fooling” certain psychological systems. While some parts of our psychology might be treating pets as children, other parts may well not be (children do not typically look like cats or dogs, for instance). The second possible reason involves the clear threat of moral condemnation. As we saw, people are substantially more interested in saving their own pets, relative to a stranger’s pet. By extension, it’s probably safe to assume that other, uninvolved parties wouldn’t be terribly sympathetic to your decision to save an animal over a person, so the costs of saving the pet might well be perceived as higher. Similarly, the potential benefits of saving an animal may typically be lower than those of saving another person, as saved individuals and their allies are more likely to do things like reciprocate help, relative to a non-human. Sure, the pet’s owner might reciprocate, but the pet itself would not.

The final potential reason that comes to mind concerns that interesting bit of information I alluded to earlier: women were more likely to indicate they would save the animal in all conditions, and often substantially so. Why might this be the case? The most probable answer to that question again returns to the pets-as-children model: whereas women have never had to face the risk of genetic uncertainty in their children, men have. This risk makes males generally less interested in investing in children and could, by extension, make them less willing to invest in pets over people. The classic phrase, “Momma’s babies; Daddy’s maybes” could apply to this situation, albeit in an under-appreciated way (in other words, men might be harboring doubts about whether the pet is actually ‘theirs’, so to speak). Without reference to parental investment theory – which the study does not contain – explaining this sex difference in willingness to pick animals over people would be very tricky indeed. Perhaps it should come as no surprise, then, that the authors do not do a good job of explaining their findings, opting instead to redescribe them in a crude and altogether useless distinction between “hot” and “cold” types of cognitive processing.

“…and the third type of cognitive processing was just right”

In a very real sense, some parts of our brain treat our pets as children: they love them, care for them, invest in them, and wish to save them from harm. Understanding how such tendencies develop, and what cues our minds use to make distinctions between our own offspring, the offspring of others, our pets, and non-pet animals, are very interesting matters which are likely to be furthered by considering parental investment theory. Are people raised with pets from a young age more likely to view them as fictive offspring? How might hormonal changes during pregnancy affect women’s interest in pets? Might cues of a female mate’s infidelity make her male partner less interested in taking care of pets they jointly own? Under what conditions might pets be viewed as a deterrent or an asset to starting new romantic relationships, in the same way that children from a past relationship might be? The answers to these questions require placing pet care in its proper context, and you’re going to have quite a hard time doing that without the right theory.

References: R. Topolski, J.N. Weaver, Z. Martin, & J. McCoy (2013). Choosing between the Emotional Dog and the Rational Pal: A Moral Dilemma with a Tail. ANTHROZOÖS, 26, 253-263 DOI: 10.2752/175303713X13636846944321

This Is Water: Making The Familiar Strange

In the fairly-recent past, there was a viral video being shared across various social media sites called “This is Water”, by David Foster Wallace. The beginning of the speech tells a story of two fish who are oblivious to the water in which they exist, in much the same way that humans come to take the existence of the air they breathe for granted. The water is so ubiquitous that the fish fail to notice it; it’s just the way things are. The larger point of the video – for my present purposes – is that the inferences people make in their day-to-day lives are so automatic as to be taken for granted. David correctly notes that there are many, many different inferences one could make about the people we see in our everyday lives: is the person in the SUV driving it because they fear for their safety, or are they selfish for driving that gas-guzzler? Is the person yelling at their kids not usually like that, or are they an abusive parent? There are two key points in all of this. The first is the aforementioned habit people have of taking for granted the very ability to draw these kinds of inferences in the first place; what Cosmides & Tooby (1994) call instinct blindness. Seeing, for instance, is an incredibly complex and difficult-to-solve task, but the only effort we perceive when it comes to vision involves opening our eyes: the seeing part just happens. The second, related point is the more interesting part to me: it involves the underdetermination of the inferences we draw from the information we’re provided. That is to say that no part of the observations we make (the woman yelling at her child) intrinsically provides us with good information to make inferences with (what she is like at other times).

Was Leonidas really trying to give them something to drink?

There are many ways of demonstrating underdetermination, but visual illusions – like this one – prove to be remarkably effective in quickly highlighting cases where the automatic assumptions your visual system makes about the world cease to work. Underdetermination isn’t just a problem that needs to be solved with respect to vision, though: our minds make all sorts of assumptions about the world that we rarely find ourselves in a position to appreciate or even notice. In this instance, we’ll be considering some of the information our mind automatically fills in concerning the actions of other people. Specifically, we perceive our world along a dimension of intentionality. Not only do we perceive that individuals acted “accidentally” or “on purpose”, we also perceive that individuals acted to achieve certain goals; that is, we perceive “motives” in the behavior of others.

Knowing why others might act is incredibly useful for predicting and manipulating their future behavior. The problem that our minds need to solve, as you can no doubt guess by this point, is that intentions and motives are not readily observable from actions. This means that we need to do our best to approximate them from other cues, and that entails making certain assumptions about observable actions and the actors who bring them about. Without these assumptions, we would have no way to distinguish between someone killing in self-defense, killing accidentally, or killing just for the good old fashion fun of it. The questions for consideration, then, concern which kinds of assumptions tend to be triggered by which kinds of cues under what circumstances, as well as why they get triggered by that set of cues. Understanding what problems these inferences about intentions and motives were designed to solve can help us more accurately predict the form that these often-unnoticed assumptions will likely take.

While attempting to answer that question about what cues our minds use, one needs to be careful not to lapse into the automatically-generated inferences our minds typically make and remain instinct-blind. The reason one ought to avoid doing this – in regards to inferences about intentions and motives – is made very well by Gawronski (2009):

“…how [do] people know that a given behavior is intentional or unintentional[?] The answer provided…is that a behavior will be judged as intentional if the agent (a) desired the outcome, (b) believed that the action would bring about the outcome, (c) planned the action, (d) had the skill to accomplish the action, and (e) was aware of accomplishing the outcome…[T]his conceptualization implies the risk of circularity, as inferences of intentionality provide a precondition for inferences about aims and motives, but at the same time inferences of intentionality depend on a perceiver’s inferences about aims and motives.”

In other words, people often attempt to explain whether or not someone acted intentionally by referencing motives (“he intended to harm X because he stood to benefit”), and they also often attempt to explain someone’s motives on the basis of whether or not they acted intentionally (“because he stood to benefit by harming X, he intended harm”). On top of that, you might also notice that inferences about motives and intentions are themselves derived, at least in part, from other, non-observable inferences about talents and planning. This circularity manages to help us avoid something resembling a more-complete explanation for what we perceive.

“It looks three-dimensional because it is, and it is 3-D because it looks like it”

Even if we ignore this circularity problem for the moment and just grant that inferences about motives and intentions can influence each other, there is also the issue of the multiple possible inferences which could be drawn about a behavior. For instance, if you observe a son push his father down the stairs and kill him, one could make several possible inferences about motives and intentions. Perhaps the son wanted money from an inheritance, resulting in his intending to push his father to cause death. However, pushing his father not only kills close kin, but also carries the risk of punishment. Since the son might have wanted to avoid punishment (and might well have loved his father), this would suggest he did not intend to push his father and cause his death (i.e. maybe he tripped, which is what caused the push). Then again, unlikely as it may sound, perhaps the son actively sought punishment, which is why he intended to push. This could go on for some time. The point is that, in order to reach any one of these conclusions, the mind needs to add information that is not present in the initial observation itself.

This leads us to ask what information is added, and on what basis? The answer to this question, I imagine, depends on the specific inferential goals of the perceiver. One goal could be accuracy: people wish to try to infer the “actual” motivations and intentions of others, to the extent it makes sense to talk about such things. If it’s true, for instance, that people are more likely to act in ways that avoid something like their own bodily harm, our cognitive systems could be expected to pick up on that regularity and avoid drawing the inference that someone was intentionally seeking it. Accuracy only gets us so far, however, due to the aforementioned issue of multiple potential motives for acting: there are many different goals one might be intending to achieve and many different costs one might be intending to avoid, and these are not always readily distinguishable from one another. The other complication is that accuracy can sometimes get in the way of other useful goals. Our visual system, for instance, while not always accurate, might well be classified as honest. That is to say that though our visual system might occasionally get things wrong, it doesn’t tend to do so strategically; there would be no benefit to sometimes perceiving a shirt as blue and other times as red under the same lighting conditions.

That logic doesn’t always hold for perceptions of intentions and motives, though: intentionally committed moral infractions tend to receive greater degrees of moral condemnation than unintentional ones, and can make one seem like a better or worse social investment. Given that there are some people we might wish to see receive less punishment (ourselves, our kin, and our allies) and some we might wish to see receive more (those who inflict costs on us or our allies), we ought to expect our intentionality systems to perceive identical sets of actions very differently, contingent on the nature of the actor in question. In other words, if we can persuade others about our intentions and motives, or about the intentions and motives of others, and alter their behavior accordingly, we ought to expect perceptual biases that assist in those goals to start cropping up. This, of course, rests on the idea that other parties can be persuaded to share your sense of these things, posing us with related problems, such as under what circumstances it benefits other parties to develop one set of perceptions or another.

How fun this party is can be directly correlated to the odds of picking someone up.

I don’t pretend to have all the answers to questions like these, but they should serve as a reminder that our minds need to add a lot of structure to the information they perceive in order to do many of the things of which they are capable. Explanations for how and why we do things like perceive intentionality and motive need to be divorced from the feeling that such perceptions are just “natural” or “intuitive”; what we might consider the experience of the word “duh”. This is an especially large concern when you’re dealing with systems that are not guaranteed to be accurate or honest in their perceptions. The cues that our minds use to determine what motives people had when they acted and what they intended to do are by no means always straightforward, so saying that inferences are generated by “the situation” is unlikely to be of much help, on top of just being wrong.

References: Cosmides, L. & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41-77.

Gawronski, B. (2009). The Multiple Inference Model of Social Perception: Two Conceptual Problems and Some Thoughts on How to Resolve Them. Psychological Inquiry, 20, 24-29 DOI: 10.1080/10478400902744261

Why Are They Called “Spoilers”?

Imagine you are running experiments with mice. You deprive the mice of food until they get hungry and then you drop them into a maze. Now obviously the hungry mice are pretty invested in the idea of finding the food; you have been starving them and all. You’re not really that evil of a researcher, though: for one group, you color-code the maze so the mice always know where to go to find the reward. The mice, I expect, would not be terribly bothered by your providing them with information and, if they could talk, I doubt many of them would complain about your “spoiling” the adventure of finding the food themselves. In fact, I would also expect most people would respond the same way when they were hungry: they would rather you provide them with the information they sought directly instead of having to make their own way through the pain of a maze (or do some equally-annoying psychological task) before they could eat. We ought to expect this because, at least in this instance, as well as many others, having access to greater quantities of accurate information allows you to do more useful things with your time. Knowing where food is cuts down on your required search time, which allows you to spend that time in other, more fruitful ways (like doing pretty much anything that undergraduates can do that doesn’t involve serving as a participant for psychologists). So what are we to make of cases where people seem to actively avoid such information and claim they find it aversive?

Spoiler warning: If you would rather formulate your own ideas first, stop reading now.

The topic arose for me lately in the context of the upcoming E3 event, where the next generation of video games will be previewed. There happens to be one video game in particular that I find myself heavily invested in and, for whatever reason, I find myself wary of tuning into E3 due to the risk of inadvertently exposing myself to any more content from the game. I don’t want to know what the story is; I don’t want to see any more game play; I want to remain as ignorant as possible until I can experience the game firsthand. I’m also far from alone in that experience: of the approximately 40,000 people who have voiced their opinions, a full half reported that they found spoilers unpleasant. Indeed, the word that refers to the leaking of crucial plot details itself implies that the experience of learning them can actually ruin the pleasure that finding them out for yourself can bring, in much the same way that microorganisms make food unpalatable or dangerous to ingest. Am I, along with the other 20,000, simply mistaken? That is, do spoilers actually make the experience of reading some book or playing some video game any less pleasant? At least two people think that answer is “yes”.

Leavitt & Christenfeld (2011) suggest that spoilers, in fact, do not make the experience of a story any less pleasant. After all, the authors mention people are perfectly willing to experience stories again, such as by rereading a book, without any apparent loss of pleasure from the story (curiously they cite no empirical evidence on this front, making it an untested assumption). Leavitt & Christenfeld also suggested that perceptual fluency (in the form of familiarity) with a story might make it more pleasant because the information subsequently becomes easier to process. Finally, the pair appear all but entirely uninterested in positing any reasons as to why so many people might find spoilers unpleasant. The most they offer up is the possibility that suspense might have something to do with it, but we’ll return to that point later. The authors, like your average person discussing spoilers, didn’t offer anything resembling a compelling reason as to why people might not like them. They simply note that many people think spoilers are unpleasant and move on.

In any case, to test whether spoilers really spoiled things, they recruited approximately 800 subjects to read a series of short stories, some of which came with a spoiler, some of which came without, and some in which the spoiler was presented as the opening paragraph of the short story itself. These stories were short indeed: between 1,400 and 4,200 words apiece, which amounts to somewhere between the approximate length of this post and about three of them. I think this happens to be another important detail to which I’ll return later (as I have no intention of spoiling my ideas fully yet). After the subjects had read each story, they rated how much they enjoyed it on a scale of 1 to 10. Across all three types of stories that were presented – mysteries, ironic twists, and literary ones – subjects actually reported liking the spoiled stories somewhat more than the non-spoiled ones. The difference was slight, but significant, and certainly not in the spoilers-are-ruining-things direction. From this, the authors suggest that people are, in fact, mistaken in their beliefs about whether spoilers have any adverse impact on the pleasure one gets from a story. They also suggest that people might like birthday presents more if they were wrapped in clear cellophane.

Then you can get the disappointment over with much quicker.

Is this widespread avoidance of spoilers just another example of quirky, “irrational” human behavior, then, born from the fact that people tend to not have side-by-side exposure to both spoiled and non-spoiled versions of a story? I think Leavitt & Christenfeld are being rather hasty in their conclusion, to put it mildly. Let’s start with the first issue: when it comes to my concern over watching the E3 coverage, I’m not worried about getting spoilers for any and all games. I’m worried about getting spoilers for one specific game, and it’s a game from a series I already have a deep emotional commitment to (Dark Souls, for the curious reader). When Harry Potter fans were eagerly awaiting the moment they got to crack open the next new book in the series, I doubt they would care much one way or the other if you told them about the plot of the latest Die Hard movie. Similarly, a hardcore Star Wars fan would probably not have enjoyed someone leaving the theater in 1980 blurting out that Darth Vader was Luke’s father; by comparison, someone who didn’t know anything about Star Wars probably wouldn’t have cared. In other words, the subjects likely have absolutely no emotional attachment to the stories they were reading and, as such, the information they were being given was not exactly a spoiler. If the authors weren’t studying what people would typically consider aversive spoilers in the first place, then their conclusions about spoilers more generally are misplaced.

One of the other issues, as I hinted at before, is that the stories themselves were all rather short. It would take no more than a few minutes to read even the longest of them. This lack of investment of time could cause a major issue for the study but, as the authors didn’t posit any good reasons for why people might not like spoilers in the first place, they didn’t appear to give the point much, if any, consideration. Those who care about spoilers, though, seem to be those who consider themselves part of some community surrounding the story; people who have made some lasting emotional connection with it, along with at least a moderately deep investment of time and energy. At the very least, people have generally selected for themselves the story to which they’re about to be exposed (which is quite unlike being handed a preselected story by an experimenter).

If the phenomenon we’re considering appears to be a costly act with no apparent compensating benefits – like actively avoiding information that would otherwise require a great deal of temporal investment to obtain – then it seems we’re venturing into the realm of costly signaling theory (Zahavi, 1975). Perhaps people are avoiding the information ahead of time so they can display their dedication to some person or group, or signal something about themselves, by obtaining the information personally. If the signal is too cheap, its information value can be undermined, and that’s certainly something people might be bothered by.

So, given the length of these stories, there didn’t seem to be much that one could actually spoil. If one doesn’t need to invest any real time or energy in obtaining the relevant information, spoilers would not be likely to cause much distress, even in cases where someone was already deeply committed to the story. At worst, the spoilers have ruined what would have been 5 minutes of effort. Further, as I previously mentioned, people don’t seem to dislike receiving all kinds of information (“spoilers” about the location of food or plot details from stories they don’t care about, for instance). In fact, we ought to expect people to crave these “spoilers” with some frequency, as information gain for cheap or free is, on the whole, generally a good thing. It is only when people are attempting to signal something with their conspicuous ignorance that we ought to expect “spoilers” to actually be spoilers, because it is only then that they have the potential to spoil anything. In this case, they would be ruining an attempt to signal some underlying quality of the person who wants to find out for themselves.

Similar reasoning helps explain why it’s not enough for them to just hate people privately.

In two short pages, then, the paper by Leavitt & Christenfeld (2011) demonstrates a host of problems that can be found in the field of psychological research. In fact, this might be the largest number of problems I’ve seen crammed into such a small space. First, they appear to fundamentally misunderstand the topic they’re ostensibly researching. It seems, to me, anyway, as if they’re trying to simply find a new “irrational belief” that people hold, point it out, and say, “isn’t that odd?”. Of course, simply finding a bias or mistaken belief doesn’t explain anything about it, and there’s little to no apparent effort made to understand why people might hold said odd belief. The best the authors offer is that the tension in a story might be heightened by spoilers, but that only comes after they had previously suggested that such suspense might detract from enjoyment by diverting a reader’s attention. While these two claims aren’t necessarily opposed, they seem at least somewhat conflicting and, in any case, neither claim is ever tested.

There’s also a conclusion that vastly over-reaches the scope of the data and is phrased without the necessary cautions. They go from saying that their data “suggest that people are wasting their time avoiding spoilers” to intuitions about spoilers just being flat-out “wrong”. I will agree that people are most definitely wasting their time by avoiding spoilers. I would just also add that, well, that waste is probably the entire point.

References: Leavitt, J.D., & Christenfeld, N.J. (2011). Story spoilers don’t spoil stories. Psychological Science, 22(9), 1152-1154. PMID: 21841150

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.

Why Psychology 101 Should Be Evolutionary Psychology

In two recent posts, I have referenced a relatively-average psychologist (again, this psychologist need not bear any resemblance to any particular person, living or dead). I found this relatively-average psychologist to be severely handicapped in their ability to think about psychology – human and non-human psychology alike – because they lacked a theoretical framework for doing so. What this psychologist knows about one topic, such as self-esteem, doesn’t help this psychologist think about any other topic which is not self-esteem, by and large. Even if this psychologist managed to be an expert on the voluminous literature on the subject, it would probably not tell them much about, say, learning, or sexual behavior (save the few times where those topics directly overlapped as measured or correlated variables). The problem became magnified when topics shifted outside of humans into other species. Accordingly, I find the idea of teaching students about an evolutionary framework to be more important than teaching them about any particular topic within psychology. Today, I want to consider a paper from one of my favorite side-interests: Darwinian Medicine – the application of evolutionary theory to understanding diseases. I feel this paper will serve as a fine example for driving the point home.

As opposed to continuing to drive with psychology as it usually does.

The paper, by Smallegange et al (2013), was examining malarial transmission between humans and mosquitoes. Malaria is a vector-borne parasite, meaning that it travels from host to host by means of an intermediate source. The source by which the disease is spread is known as a vector, and in this case, that vector is mosquito bites. Humans are infected with malaria by mosquitoes and the malaria reproduces in its human host. That host is subsequently bitten by other mosquitoes who transmit some of the new parasites to future hosts. One nasty side-effect of vector-borne diseases is that they don’t require the hosts to be mobile to spread. In the case of other parasites like, say, HIV, the host needs to be active in order to spread the disease to others, so the parasites have a vested interest in not killing or debilitating their hosts too rapidly. On the other hand, if the disease is spread through mosquito bites, the host doesn’t need to be moving to spread it. In fact, it might even be better – from the point of view of the parasite – if the host was relatively disabled; it’s harder to defend against mosquitoes if one is unable to swat them away. Accordingly, malaria (along with other vector-borne diseases) ends up being a rather nasty killer.

Since malaria is transmitted from human to human by way of mosquito bites, it would stand to reason that the malaria parasites would prefer, so to speak, that mosquitoes preferentially target humans as food sources: more bites equals more chances to spread. The problem, from the malaria’s perspective, is that mosquitoes might not be as inclined to preferentially feed from humans as the malaria would. So, if the malaria parasite could alter the mosquitoes’ behavior in some way, so as to assist in its spread by making the mosquitoes preferentially target humans, this would be highly adaptive from the malaria’s point of view. In order to test whether the malaria parasites did so, Smallegange et al (2013) collected some human odor samples using a nylon matrix. This matrix, along with a control matrix, was presented to caged mosquitoes, and the researchers measured how frequently the mosquitoes – either infected with malaria or not – landed on each. The results showed that mosquitoes, whether infected or uninfected, didn’t seem particularly interested in the control matrix. When it came to the human odor matrix, however, the mosquitoes infected with malaria were substantially more likely to land on it and attempt to probe it than the non-infected ones (the human odor matrix received about four times the attention from infected mosquitoes that it did from the uninfected).

While this result is pretty neat, what can it tell us about the field of psychology? For starters, in order to alter mosquito behavior, the malaria parasite would need to do so via some aspect of the mosquitoes’ psychology. One could imagine a mosquito infected with malaria suddenly feeling overcome with the urge to have human for dinner (if it is proper to talk about mosquitoes having similar experiences, that is) without having the faintest idea why. A mosquito psychologist, unaware of the infection-behavior link, might posit that preferences for food sources naturally vary along a continuum in mosquitoes, and there’s nothing particularly strange about mosquitoes that seem to favor humans excessively; it’s just part of normal mosquito variation. (The parallels to human sexual orientation seem apparent, in some respects.) This mosquito psychologist might also suggest that there was something present in mosquito culture that made some mosquitoes more likely to seek out humans. Maybe the mosquitoes that prefer humans were insecurely attached to their mother. Maybe they have particularly high self-esteem. Though we know such explanations are likely wrong – it seems to be the malaria driving the behavior here – without reference to evolutionary theory and an understanding of pathogen-host relationships, our mosquito psychologists would be at a relative loss to understand what’s going on.

Perhaps mosquitoes are just deficient in empathy towards human hosts and should go vegan.

What this example boils down to (for my purposes here, anyway) is that thinking about the function(s) of behavior – and of psychology by extension – helps us understand it immensely. Imagine mosquito psychologists who insisted on not “limiting themselves” to evolutionary theory for understanding what they’re studying. They might have a hard time understanding food preferences and aversions (like, say, pregnancy-related ones) in general, much less variations in them. The same would seem probable to hold for sexual behavior and preferences. Mosquito doctors who failed to try and understand function might occasionally (or frequently) try to “treat” natural bodily defense mechanisms against infections and toxins (like, say, reducing fever or pregnancy sickness, respectively) and end up causing harm to their patients inadvertently. Mosquito-human-preference advocates might suggest that the malaria hypothesis purporting to explain their behavior is insulting, morally offensive, and not worthy of consideration. After all, if it were true, preferences might be alterable by treating some infection, resulting in a loss of some part of their rich and varied culture.

If, however, doctors and psychologists were trained to think about evolved functions from day one, some of these issues might be avoidable. Someone versed in evolutionary theory could quickly understand the relevance of findings across the two fields. The doctors would be able to consider findings from psychology, and the psychologists findings from medicine, because they were both working within the same conceptual framework; playing by the same rules. On top of that, the psychologists would be better able to communicate with each other, picking out possible errors or strengths in each others’ research projects, as well as making additions, without having to be experts in the fields first (though it certainly wouldn’t hurt). A perspective that offers satisfactory explanations within a discipline and between disciplines, tying them all together, is far more valuable than any set of findings within those fields. It’s more interesting too, especially when considered against the islands-of-findings model that currently seems to predominate in the teaching of psychology. At this point, I feel those who would make a case for not starting with evolutionary theory ought to be burdened by, well, making that case and making it forcefully. That we currently don’t start teaching psychology with evolution is, in my mind, no argument to continue not doing so.

References: Smallegange, R., van Gemert, G., van de Vegte-Bolmer, M., Gezan, S., Takken, W., Sauerwein, R., & Logan, J. (2013). Malaria Infected Mosquitoes Express Enhanced Attraction to Human Odor. PLoS ONE, 8(5). DOI: 10.1371/journal.pone.0063602

Welcome To Introduction To Psychology

In my last post, I mentioned a hypothetical relatively-average psychologist (caveat: the term doesn’t necessarily apply to any specific person, living or dead). I found him to be a bit strange, since he tended to come up with hypotheses that were relatively theory-free; there was no underlying conceptual framework he was using to draw his hypotheses. Instead, most of his research was based on some hunch or personal experience. Perhaps this relatively-average psychologist might also have made predictions on the basis of what previous research had found. For instance, if one relatively-average psychologist found that priming people to think about the elderly made them walk marginally slower, another relatively-average psychologist might predict that priming people to think of a professor would make them marginally smarter. I posited that these relatively-average psychologists might run into some issues when it comes to evaluating published research because, without a theoretical framework with which to understand the findings, all one can really consider are the statistics; without a framework, relatively-average psychologists have a harder time thinking about why some finding might make sense or not.

If you’re not willing to properly frame something, it’s probably not wall-worthy.

So, if a population of these relatively-average psychologists are looking to evaluate research, what are they supposed to evaluate it against? I suppose they could check and see if the results of some paper jibe with their set of personal experiences, hunches, or knowledge of previous research, but that seems to be a bit dissatisfying. Those kinds of practices would seem to make evaluations of research look more like Justice Stewart trying to define pornography: “I know [good research] when I see it”. Perhaps good research would involve projects that delivered results highly consistent with people’s general personal experiences; perhaps good research would be a project that found highly counter-intuitive or surprising results; perhaps good research would be something else still. In any case, such a practice – if widespread enough – would make the field of psychology look like a grab bag of seemingly scattered and random findings. Learning how to think about one topic in psychology (say, priming) wouldn’t be very helpful when it came to learning how to think about another topic (say, learning). That’s not to say that the relatively-average psychologists have nothing helpful at all to add, mind you; just that their additions aren’t being driven by anything other than those same initial considerations, such as hunches or personal experience. Sometimes people have good guesses; in the land of psychology, however, it can be difficult to differentiate between good and bad ones a priori in many cases.

It seems like topic-to-topic issues would be hard enough for our relatively-average psychologists to deal with, but that problem becomes magnified once the topics shift outside of what’s typical for one’s local culture, and even further when topics shift outside of one’s species. Sure; maybe male birds will abandon a female partner after a mating season if the pair is unable to produce any eggs because the male birds feel a threat to their masculinity that they defend against by reasserting their virility elsewhere. On the flip side, maybe female birds leave the pair because their sense of intrinsic motivation was undermined by the extrinsic reward of a clutch of eggs. Maybe male ducks force copulations on seemingly unwilling female ducks because male ducks use rape as a tactic to keep female ducks socially subordinate and afraid. Maybe female elephant seals aren’t as combative as their male counterparts because of sexist elephant seal culture. Then again, maybe female elephant seals don’t fight as much as males because of their locus of control or stereotype threat. Maybe all of that is true, but my prior on such ideas is that they’re unlikely to end up holding much explanatory water. Applied to non-human species, their conceptual issues seem to pop out a bit better. Your relatively-average psychologist, then, ends up being rather human-centric, if not a little culture- and topic-centric as well. Their focus is on what’s familiar to them, largely because what they know doesn’t help them think much about what they do not.

So let’s say that our relatively-average psychologist has been tasked with designing a college-level introduction to psychology course. This course will be the first time many of the students are being formally exposed to psychology; for the non-psychology majors in the class, it may also be their last time. This limits what the course is capable of doing, in several regards, as there isn’t much information you can take for granted. The problems don’t end there, however: the students, having a less-than-perfect memory, will generally forget many, if not the majority, of the specifics they will be taught. Further, students may never again in their life encounter the topics they learned about in the intro course, even if they do retain the knowledge about them. If you’re like most of the population, knowing the structure of a neuron or who William James was will probably never come up in any meaningful way unless you find yourself at a trivia night (and even then it’s pretty iffy). Given these constraints, how is our relatively-average psychologist supposed to give their students an education of value? Our relatively-average psychologist could just keep pouring information out, hoping some of it sticks and is relevant later. They could also focus on some specific topics, boosting retention, but at the cost of breadth and, accordingly, chance of possible relevance. They could even try to focus on a series of counter-intuitive findings in the hopes of totally blowing their students’ minds (to encourage students’ motivation to show up and stay awake), or perhaps some intended to push a certain social agenda – they might not learn much about psychology, but at least they’ll have some talking points for the next debate they find themselves in. 
Our relatively-average psychologist could do all that, but what they can’t seem to do well is to help students learn how to think about psychology; even if the information is retained, relevant, and interesting, it might not be applicable to any other topics not directly addressed.

“Excuse me, professor: how will classical conditioning help me get laid?”

I happen to feel that we can do better than our relatively-average psychologists when designing psychology courses – especially introductory-level ones. If we can successfully provide students with a framework for thinking about psychology, we don’t have to necessarily concern ourselves with whether one topic or another was covered or whether they remember some specific list of research findings, as such a framework can be applied to any topic the students may subsequently encounter. Seeing how findings “fit” into something bigger will also make the class seem that much more interesting. Granted, covering more topics in the same amount of depth is generally preferable to covering fewer, but there are very real time constraints to consider. With that limited time, I feel that giving students tools for thinking about psychological material is more valuable than providing them findings within various areas of psychology. Specific topics or findings within psychology should be used predominantly as vehicles for getting students to understand that framework; trying to do things the other way around simply isn’t viable. This will not come as a surprise to any regular reader, but the framework that I feel we ought to be teaching students is the functionalist perspective guided by an understanding of evolution by natural selection. Teaching students how to ask and evaluate questions of “what is this designed to do” is a far more valuable skill than teaching them about who Freud was or some finding that failed to replicate but is still found in the introductory textbooks.

On that front, there is both reason to be optimistic and disappointed. According to a fairly exhaustive review of introductory psychology textbooks available from 1975 to 2004 (Cornwell et al, 2005), evolutionary psychology has been gaining greater and more accurate representation: whereas the topic was almost non-existent in 1975, in the 2000s, approximately 80% of all introductory texts discussed the subject at some point. Further, the tone that the books take towards the subject has become more neutral or positive as well, with approximately 70% of textbooks treating the topic as such. My enthusiasm for the evolutionary perspective’s representation is dampened somewhat by a few other complicating factors, however. First, many of the textbooks analyzed contained inaccurate information when the topic was covered (approximately half of them overall, and the vast majority of the more recent texts that were considered, even if those inaccuracies might appear to have become more subtle over the years). Another concern is that, even when representations of evolutionary psychology were present within the textbooks, the discussion of the topic appeared relatively confined. Specifically, it didn’t appear that many important concepts (like kin selection or parental investment theory) received more than one or two paragraphs on average, if they even got that much space. In fact, the only topic that received much coverage seemed to be David Buss’s work on mating strategies; his citation count alone was greater than that of all other authors within evolutionary psychology combined. As Cornwell et al (2005) put it:

These data are troubling when one considers undergraduates might conclude that EP is mainly a science of mating strategies studied by David Buss. (p. 366)

So, the good news is that introductory psychology books are acknowledging that evolutionary psychology exists in greater and greater numbers. The field is also less likely to be harshly criticized for being something it isn’t (like genetic determinism). That’s progress. The bad news is that this information is, like many topics in introductory books appear to be, cursory, often inaccurate in at least some regards, and largely restricted to the work of one researcher within the field. Though Cornwell et al (2005) don’t specifically mention it, another factor to consider is where the information is presented within the texts. Though I have no data on hand beyond my personal sample of introductory books I’ve seen in recent years (I’d put that number around a dozen or so), evolutionary psychology is generally found somewhere in the middle of the book when it is found at all (remember, approximately 1-in-5 texts didn’t seem to even acknowledge the topic). Rather than being presented as a framework that can help students understand any topic within psychology, it seems to be presented more as just another island within psychology. In other words, it doesn’t tend to stand out.

So not exactly the portrayal I had hoped for…

Now I have heard some people who aren’t exactly fans (though not necessarily opponents, either) of evolutionary psychology suggest that we wouldn’t want to prematurely close off any alternative avenues of theoretical understanding in favor of evolutionary psychology. The sentiment seems to suggest that we really ought to be treating evolutionary psychology as just another lonely island in the ocean of psychology. Of course, I would agree in the abstract: we wouldn’t want to prematurely foreclose on any alternative theoretical frameworks. If a perspective existed that was demonstrably better than evolution by natural selection and the functionalist view in some regards – perhaps for accounting for the data, understanding it, and generating predictions – I’d be happy to make use of it. I’m trying to further my academic career as much as the next person, and good theory can go a long way. However, psychology, as a field, has had about 150 years with which to come up with anything resembling a viable alternative theoretical framework – or really, a framework at all that goes beyond description – and seems to have resoundingly failed at that task. Perhaps that shouldn’t be surprising, since evolution is currently the only good theory we have for explaining complex biological design, and psychology is biology. So, sure, I’m on board with not foreclosing on alternative ideas, just as soon as those alternatives can be said to exist.

References: Cornwell, R., Palmer, C., Guinther, P., & Davis, H. (2005). Introductory psychology texts as a view of sociobiology/evolutionary psychology’s role in psychology. Evolutionary Psychology, 3, 355-374

He’s Climbing In Your Windows; He’s Snatching Your People Up

One topic addressed by evolutionary psychologists that managed to draw a good deal of ire was rape. Given the sensitive nature of the issue, the theorizing about it drew criticisms that were largely undeserved, reflecting, perhaps, a human tendency to mistake explanation for exculpation. Needless to say, at this point, sexual assault will be the topic for examination today, so if it’s the kind of thing that bothers you to read about, I suggest clicking away. Now that the warning has been made, if you’re still reading, we can move forward. There has been some debate among evolutionary-minded researchers as to whether there are any rape-specific cognitive adaptations in humans, or whether rape represents a byproduct of other mating mechanisms. The debate remains unresolved for lack of unambiguous predictions and data: as the available evidence could be interpreted as consistent with both sides, the question remains a slippery and contentious one.

So do be careful if you decide to try and pick it up.

A paper by Felson & Cundiff (2012) claims to have found some data that, the authors say, supports the byproduct view of rape. While I find myself currently favoring the byproduct explanation, I also find their interpretation of the evidence they bring to bear on the matter underwhelming. In fact, I find their interpretation of several matters off, but we’ll get to that later. First, let’s consider the research itself. The authors sought to examine existing data on robberies committed by a lone male aged 12 or older in which a lone female was present at the time. From the robbery data, the authors were further interested in examining the subset that also involved a report of sexual assault. Towards this end, Felson & Cundiff (2012) reported data from approximately 45,000 robberies spanning 2000-2007. Of those robberies, roughly 2% also involved a sexual assault, yielding about 900 cases for examination. As an initial note, the 2% figure would seem to suggest, to me anyway, that in most instances of robbery/sexual assault, the assaults tended not to be preplanned; they look more opportunistic.

From this sample, the authors first examined what effect the female victim’s age had on the likelihood of a sexual assault being reported during the robbery. As it turns out, the age of the woman was a major determinant: women at the highest risk of being assaulted were in the 15-29 age range (with the peak within the 20-24 age range), where the average risk of a sexual assault was around 2.5%. Before this age range, the risk of assault is substantially lower, around 1.3%. After 29, the rate begins to decline, dropping markedly after 40, down to an average of around 0.5%. In terms of opportunistic sexual assaults, then, male robbers appear to target women in their fertile years at disproportionate frequencies, presumably partially or largely on the basis of the victims’ physical attractiveness. This finding appears consistent with previous work which had found that the average age of a female victim of a robbery alone was 35, while the average age of a robbery/assault victim was 27.9 – a difference of about 7 years. Any theory of rape that assumes the act is motivated by power and not by sex would seem to have a very difficult time accounting for this pattern in the data.

Next, the authors turned their attention towards characteristics of the male robbers that predicted whether or not an assault was reported. The results showed that the likelihood of a sexual assault increased as the males reached sexual maturity and continued to increase steadily until about their mid-thirties, after which it began to decline. Further, regardless of their age, the robbers didn’t show much variance in the age of the women they tended to target. That is to say, whether a man was in his late teens or his late forties, robbers seemed to preferentially target younger women nearer to their peak fecundity. The one exception to this pattern was the males aged 12-17, who seemed to even more disproportionately prefer women in their teens and early twenties. Felson & Cundiff (2012) note that this pattern of preferences is not typically observed in consensual relationships, where men and women tend to pair up at similar ages. This suggests that older men’s pattern of entering relationships with older women likely reflects the relative aversion of younger women to older males, not a genuine preference on the part of men for older women per se.

Though it’s difficult to imagine why older men aren’t preferred…

That’s not to say that older men may not have a preference for pursuing relatively older women, just that such a preference wouldn’t be driven by the woman’s age. Such a preference might well be driven by other factors, such as the relative willingness of a woman to enter into a relationship with the man in question. There’s not much point in a man pursuing women he’s unlikely to ever attain success with, even if those women are highly attractive; better to spend that time and energy in domains more likely to pay off. Louis C.K. sums the issue up neatly in one of his stand-up routines: “to me, you’re not a woman until you’ve had a couple of kids and your life is in the toilet…[if you're a younger girl] I don’t want to fuck you…[alright] I do want to fuck you, but you won’t fuck me, so fuck you”. When such tradeoffs can be circumvented – as is the case in sexual assault – a person’s underlying preferences for certain characteristics can be more readily assessed.

This brings us to my complaints with the paper. As I mentioned initially, there’s an ongoing debate as to whether men have cognitive mechanisms designed specifically for rape, or whether rape is generated as a byproduct of mechanisms designed for other purposes. Felson & Cundiff (2012) suggest that their data support the byproduct interpretation. Why? Because they found that women in the 15-29 age range who were sexually assaulted were less likely to be raped than older women. This pattern of data is supposed to support the byproduct hypothesis because, I think, the authors are positing some specific motivation for sex acts that could result in conception, rather than a more general interest in sexual behavior. It’s hard to say, since the authors fail to lay out the theory behind their hypothesis with precision. This strikes me as a somewhat strange argument, though, as it would essentially posit that an interest in sexual acts unlikely to result in conception (such as oral or anal sex) is motivated by a different set of cognitive mechanisms than an interest in vaginal sex. While that might potentially be the case, I’ve never seen a case made for it, and there isn’t a strong one to be found in the paper.

The other complaint I have is that the authors use a phrase that’s a particular pet peeve of mine: “…our results are consistent with the predictions from evolutionary psychology”. This phrase always troubles me because evolutionary psychology, as a field, does not make a set of uniform predictions about sexual behavior. Their results may well be consistent with some sub-theories derived by psychologists using an evolutionary framework – such as sexual strategies theory – but they are not derived from evolutionary psychology more broadly. To say that a result is consistent or inconsistent with evolutionary psychology is to imply that such a finding supports or fails to support the foundational assumptions of the field; assumptions which have to do with the nature of information-processing mechanisms. While this might seem like a minor semantic point at first, I feel it’s actually a rather deep issue. It’s a frequent mistake that many of evolutionary psychology’s critics make when attempting to write off the entire field on the basis of a single idea they don’t like. To the extent that such inaccurate generalizations serve to hinder people’s engagement with the field, there’s a problem to be addressed.

And if you’re not willing to engage with me, I’d like the ring back.

As evolutionary psychology more broadly doesn’t deliver specific predictions about rape, neither the hypothesis that rape is an adaptation nor the hypothesis that it is a byproduct should rightly be considered the official evolutionary psychology perspective on the topic; this would be the case regardless of whether the evidence strongly supported one side or the other, I might add. While the current research doesn’t speak distinctly to either of these possibilities, it does manage to speak against the idea that rape isn’t about sex, adding to the already substantial evidence that such a view is profoundly mistaken. Of course, the not-sex explanation was always more of a political slogan than a scientific one, so the lack of empirical support for it might not prove terribly troubling for its supporters.

References: Felson, R., & Cundiff, P. (2012). Age and sexual assault during robberies. Evolution and Human Behavior, 33(1), 10-16. DOI: 10.1016/j.evolhumbehav.2011.04.002