Is Choice Overload A Real Thing?

Within the world of psychology research, time is often not kind to empirical findings. This unkindness was highlighted recently by the results of the Reproducibility Project, which found that the majority of psychological findings tested did not replicate particularly well. There are a number of reasons this happens, including that psychological research tends to be conducted rather atheoretically (allowing large numbers of politically-motivated or implausible hypotheses to be successfully floated) and that researchers have the freedom to analyze their data in rather creative ways (allowing them to find evidence of effects where none actually exist). Researchers engage in these practices because positive findings tend to be published more often than null results. In fact, even if researchers do everything right, that’s still no guarantee of repeatable results; sometimes people just get lucky with their data. Accordingly, it is a fairly common occurrence for me to revisit some research I learned about during my early psychology education only to find out that things are not quite as straightforward or sensible as they had been presented to be. I’m happy to report that today is (sort of) one of those days. The topic in question has been called a few different things, but for my present purposes I will be referring to it as choice overload: the idea that having access to too many choices actually makes decisions more difficult and less satisfying. In fact, if too many options are presented, people might even avoid making a decision altogether. What a fascinating idea.

Here’s to hoping time is kind to it…

The first time I heard of this phenomenon, it was in the context of exotic jams. The summary of the research goes as follows: Iyengar & Lepper (2000) set up shop in a grocery store, creating a tasting booth for either six or 24 varieties of jam (from which the more-standard flavors, like strawberry, had been removed). Shoppers were invited to stop by the booth and try as many of the jams as they wanted; they were then given a $1-off coupon for that brand’s jam and sent on their way. The table with the more extensive variety did attract more customers (60% of those who walked by), relative to the table with fewer selections (40%), suggesting that the availability of more options was, at least initially, appealing to people. Curiously, however, there was no difference in the average number of jams sampled: whether the table had 6 flavors or 24, people only sampled about 1.5 of them, on average, and apparently no one ever sampled more than two flavors (maybe they didn’t want to seem rude or selfish). More interestingly still, because the customers were given coupons, their purchases could be tracked. Of those who stopped at the table with only six flavors, about 30% ended up later purchasing jam; when the table had 24 flavors, a mere 3% of customers ended up buying one.
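To make the gap concrete, we can fold the stop rates and purchase rates together into overall conversion. Here is a minimal back-of-envelope sketch; the percentages come from the study as described above, but the number of passersby is a made-up round figure for illustration:

```python
# Back-of-envelope conversion rates from the jam study (Iyengar & Lepper, 2000).
# Only the percentages come from the study; the shopper count is hypothetical.
passersby = 1000  # hypothetical number of shoppers walking past the display

for flavors, stop_rate, buy_rate in [(6, 0.40, 0.30), (24, 0.60, 0.03)]:
    stopped = passersby * stop_rate  # shoppers drawn to the table
    bought = stopped * buy_rate      # of those, shoppers who later bought jam
    print(f"{flavors:>2} flavors: {stopped:.0f} stopped, {bought:.0f} bought "
          f"({bought / passersby:.1%} of all passersby)")
```

Run through, the small display converts about 12% of all passersby into buyers versus under 2% for the large display: the extra foot traffic at the 24-flavor table comes nowhere near offsetting its dismal purchase rate.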

There are a couple of potential issues with this study, of course, owing to its naturalistic design; issues which were noted by the authors. For instance, it is possible that people who were fairly uninterested in buying jam might have been attracted to the 24-flavor booth nevertheless, simply out of curiosity, whereas those with a greater interest in buying jam would have remained interested in sampling it even when a smaller number of options existed. To try and get around these issues, Iyengar & Lepper (2000) designed another two experiments, one of which I want to cover here. This experiment was carried out in a more standard lab setting (to help avoid some of the possible issues with the jam results) and involved tasting chocolate. There were three groups of participants in this case: the first group (n = 33) got to select and sample a chocolate from an array of six possible options, the second group (n = 34) got to select and sample a chocolate from an array of 30 possible options, and a final group (n = 67) was randomly assigned a chocolate to sample rather than choosing one themselves. In the interest of minimizing familiarity-based preferences, only people who enjoyed chocolate but did not have experience with that particular brand were selected for the study. After filling out a few survey items and completing the sampling task, the participants were presented with their payment options: either $5 in cash, or a box of chocolates from that brand worth $5.

In accordance with the previous findings, participants who selected from 30 different options were somewhat more likely to say they had been presented with “too many” options (M = 4.88) compared with those who only had 6 possible choices (M = 3.61, on a seven-point scale ranging from “too few” choices at 1 to “too many” choices at 7). Despite the subjects in the extensive-choice group saying that deciding which chocolate to sample was more difficult, however, there was no correlation between how difficult participants found the decision and how much they reported enjoying making it. It seemed people could enjoy making more difficult choices. Additionally, participants in the limited-choice group were more satisfied with their choice (M = 6.28) than those in the extensive-choice group (M = 5.46), who were in turn more satisfied than those in the no-choice group (M = 4.92). Of particular interest are the compensation findings: those in the limited-choice group were more likely to accept a box of chocolate in lieu of cash (48%) than those in either the extensive-choice (12%) or no-choice conditions (10%). It seems that having some options was preferable to having no options, but having too many options seemed to cause people difficulty in making decisions. The researchers concluded that, to use the term, people could be overloaded by choices, hindering their decision-making process.

“If it can’t be settled via coin flip, I’m not interested”

While such findings are indeed quite interesting, there is no guarantee they will hold up over time; as I mentioned initially, lots of research fails to hold up. This is where meta-analyses – the kind of research in which results from many different studies are examined jointly – can help. Scheibehenne et al (2010) set out to conduct one of their own on the research surrounding choice overload, noting that some of the research on the phenomenon does not point in the same direction. They note a few examples, such as field research in which reducing the number of available items resulted in decreases or no changes to sales, rather than the uptick that choice overload would have predicted. Indeed, the lead author reports that his own attempt at replicating the jam study for his 2008 dissertation failed, as did the second author’s attempt to replicate the chocolate experiment. These failures to replicate the original research might indicate that the initial results of choice overload were something of a fluke, and so a wider swath of research needs to be examined to determine if that’s the case.

Toward this end, Scheibehenne et al (2010) collected 50 experiments from the literature on the subject, representing about 5,000 participants in 13 published and 16 unpublished papers from 2000-2009. In total, the average estimated effect size for choice overload across all the experiments was a mere D = 0.02; the effect was all but non-existent. Further analysis revealed that the differences in effect sizes between studies did not seem to be randomly distributed; there were likely relevant differences between these papers determining what kind of results they found. To examine this issue further, Scheibehenne et al (2010) began by trimming off the 6 largest effects from both the top and the bottom ends of the reported research. In the trimmed data set, there was little evidence of heterogeneity among the remaining studies, suggesting that most of the variation between studies was being driven by those unusually large positive and negative effects.
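To illustrate what that trimming step does, here is a small sketch; the effect sizes below are invented for demonstration purposes (the paper reports only summary statistics), but the logic of removing the six most extreme effects from each tail is the same:

```python
# Sketch of the trimming step: drop the 6 most extreme effects from each end
# and re-examine the spread. Effect sizes here are invented for illustration.
import statistics

effects = sorted([-0.9, -0.7, -0.6, -0.5, -0.4, -0.35, -0.1, -0.05, 0.0, 0.0,
                  0.05, 0.05, 0.1, 0.1, 0.15, 0.3, 0.4, 0.5, 0.6, 0.8])

trimmed = effects[6:-6]  # remove the 6 largest effects from both tails

print(f"full:    mean = {statistics.mean(effects):+.3f}, "
      f"sd = {statistics.stdev(effects):.3f}")
print(f"trimmed: mean = {statistics.mean(trimmed):+.3f}, "
      f"sd = {statistics.stdev(trimmed):.3f}")
```

The mean barely moves in either version (it stays near zero), but the spread collapses once the tails are removed, which is the shape of the result reported: the apparent disagreement between studies lived almost entirely in a handful of extreme findings.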

Returning to the complete, untrimmed data set, Scheibehenne et al (2010) started to pick apart how several moderating variables might be affecting the reported results. In line with the intuitions of Iyengar & Lepper (2000), preexisting preferences or expertise did indeed have an effect on the choice overload issue: people with existing preferences were not as troubled by additional items when making a choice, relative to those without such preferences. However, there was also an effect of publication – such that published papers were somewhat more likely to report an effect of choice overload, relative to unpublished ones – as well as a small effect of year – such that papers published more recently were a bit less likely to report choice-overloading effects. In sum, the results of the meta-analysis indicated that the average effect size of choice overload was nearly zero; that older studies which saw publication reported larger effects than those that came later or were not published; and that well-defined, preexisting preferences likely remove the negative effects of having too many options (to the extent they actually existed in the first place). Crucially, what should have been an important variable – the number of different options participants were presented with on the high end – explained essentially none of the variance. That is to say, whether the extensive condition offered 18 items or 30 or more didn’t seem to make any difference.

“Well, there are too many different chip options; guess I’ll just starve”

While this does not rule out choice overload as being a real thing, it does cast doubt on the phenomenon being as pervasive or important as some might have given it credit for. Instead, it appears probable that such choice effects might be limited to particular contexts, assuming they reliably exist in the first place. Such contexts might include how easily the products can be compared to one another (i.e., it’s harder to decide when faced with two equally attractive, but quite distinct options), or whether people are able to use mental shortcuts (known as heuristics) to rapidly whittle down the number of options they actually consider (so as to avoid spending too much time making fairly unimportant choices). While future examination would be required to test some of these ideas, the larger message here extends beyond the choice overload literature to most of psychology research: it is probably fair to assume that, as things currently stand, the first thing you hear about the existence or importance of an effect will likely not resemble the last thing you do.

References: Iyengar, S. & Lepper, M. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality & Social Psychology, 79, 995-1006.

Scheibehenne, B., Greifeneder, R., & Todd, P. (2010). Can there ever be too many options? A meta-analytic review of choice overload. Journal of Consumer Research, 37, 409-424.


Savvy Shoppers Seeking Sex

There exists an idea in economics known as revealed preference theory. People are often said to have preferences for this or that, but preferences are not the kind of thing that can be directly observed (just as much of our psychology cannot). As such, you need to find a way to infer information about these underlying preferences through something observable. In the case of revealed preferences, the general idea is that people’s decisions about what to buy and how much to spend are capable of revealing that information. For instance, if you would rather buy a Honda instead of a Ford for the same price, I have learned that your preferences – at least in the current moment – favor Hondas; if I were interested in determining the degree of that preference, I could see how much more you were willing to pay for the Honda. There are some criticisms of this approach – such as the issue that people sometimes prefer A to B when the two are compared directly, but prefer B to A when presented with a third, irrelevant option – but the general principle behind it seems sound: people’s willingness to purchase goods and services positively correlates with their desires, despite some peculiarities. The more someone is willing to pay for something, the more valuable they perceive it to be.

“Marrying you is worth about $1,500 to me”

Now this is by no means groundbreaking information; it’s a facet of our psychology we are all already intimately familiar with. It does, however, yield an interesting method for examining people’s mating preferences when it’s turned on prostitution. In this case, a new paper by Sohn (2016) sought to examine how well men’s self-reported mating preferences for youthful partners were reflected in the prostitution market, where encounters are often short in duration, fairly anonymous, and people can seek out what they’re interested in, so long as they can afford it. It is worth mentioning at the outset that seeking youth per se is not exactly valuable in the adaptive sense of the word; instead, youth is valued (at least in humans) because of how it relates to both reproductive potential and fertility. Reproductive potential refers to how many expected years of future reproduction a woman has remaining before she reaches menopause and loses that capability. As such, this value is highest around the time she reaches menarche (signaling the onset of her reproductive ability) in her mid-teens and decreases over time until it reaches zero at menopause. Fertility, by contrast, refers to a woman’s likelihood of successful conception following intercourse, and tends to peak around her early twenties, being lower both prior to and after that point.

Since the type of intercourse sought by men visiting prostitutes is usually short-term in nature, we ought to expect the male preference for traits that cue high fertility to be revealed by the relative price they’re willing to pay for sex with women displaying them (since short-term encounters are typically aimed at immediate successful reproduction, rather than monopolizing a woman’s future reproductive potential). As such fertility cues tend to peak at the same ages as fertility itself, we would predict that women in their early twenties should command the highest price on the sexual market, and that this value should decline as women get older or younger. There are some issues with studying the subject matter, of course: sex with minors – much like prostitution in general – is often subject to social and legal sanctions. While the former issue cannot (and, really, should not) be skirted, the latter issue can be. One way of getting around the legal sanctions on prostitution in general is to study it in areas of the world where it is legal. In this instance, Sohn (2016) reports on a data set derived from approximately 8,600 prostitutes in Indonesia, ranging in age from 17 to 40, where, we are told, prostitution is quasi-legal.

The variable of interest in this data set concerns how much money the prostitutes received during their last act of commercial sex. This single-act method was employed in the hopes of minimizing any kinds of reporting inaccuracies that might come with trying to estimate how much money is being earned on average over long periods of time. While this choice necessarily limits the scope of the emerging picture concerning the price of sex, I believe it to be a justifiable one. Age was the primary predictor of this sex-related income, but a number of other variables were included in the analysis, such as frequency of condom use, years of schooling, age of first sex, and time selling sex. Overall, these predictor variables were able to account for over half of the variance in the price of sex, which is quite good.

“Priced to move!”

Supporting the hypothesis that men really do value these cues of fertility, the price of sex nominally rose from age 17 until it peaked at 21 (though this rise was not too appreciable), tracking fertility rather than reproductive potential. Following that peak, the price of sex began to quickly and continuously decline through age 40, though the decline slowed past 30. Descriptively, the price of sex at its minimum value was only about half the price of sex at peak fertility (which is a helpful tip for all you bargain-seekers out there…). Indeed, when age alone was considered, each additional year reduced the price of sex, on average, by about 4.5%; the size of the decrease uniquely attributable to age was reduced to about 2% per year when other factors were added into the equation, but both numbers tell the same story. A more detailed examination of this decrease grouped women into blocks of roughly five-year age periods. When considering age alone, there was no statistical difference between women in the 17-19 and 20-25 ranges. After that period, however, differences emerged: those in the 26-30 range earned 22% less, on average; a figure which fell to 42% less in the 30-34 group, and about 53% less in the 35-40 group.
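As a quick consistency check on those figures, we can compound the ~4.5% age-only decline from the peak at 21 out to age 40 and see whether it lands near the “about half” figure reported descriptively (a sketch; the paper’s actual model is a regression, not this simple compounding):

```python
# Compound the reported ~4.5% annual decline from the price peak at age 21 to 40.
peak_age, end_age, annual_decline = 21, 40, 0.045

relative_price = (1 - annual_decline) ** (end_age - peak_age)
print(f"Price at {end_age} relative to the peak: {relative_price:.0%}")  # ~42%
```

Nineteen years of 4.5% annual declines leaves a bit over 40% of the peak price, which sits comfortably in the neighborhood of “about half.”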

This decrease in the price of sex over a woman’s lifespan is the opposite of how income usually works in non-sexual careers, where income rises with time and experience. It would be quite strange to work at a job where you saw your pay get cut by 2% each year you were with the company. It is likely for this reason that prostitutes in the 20-25 range were the most common (representing 32.6% of the sample), and those in older age groups were represented less heavily (27.6% in the 26-30 group, all the way down to 12% in the 35-40 range). When shopping for sex, then, men were not necessarily seeking the most experienced candidate for the position(s), but rather the most fertile one. As fertility declined, so too did the price. As price declined, women tended to leave the market. 

There were a few other findings of note, though the ‘whys’ explaining them are less straightforward. First, more educated prostitutes commanded a higher average asking price than their less educated peers, to the tune of about a 5% increase in price per extra year of schooling. As men and women both value intelligence highly in long-term partners, it is possible that cues of intelligence remain attractive even in short-term contexts. Second, controlling for age, each year of selling sex tended to decrease the average price by about 1.5%. It is possible that the effects of prostitution visibly wear down the cues that men find appealing over time. Third, prostitutes who had ever used drugs or drunk alcohol earned 12% more than their peers who abstained. Though I don’t know precisely why, it’s unlikely to be a coincidence that moral views about recreational drug use happen to be well predicted by views about the acceptability of casual sex (data from OKCupid, for instance, tells us the single best predictor of a woman’s interest in casual sex is whether she enjoys the taste of beer). Finally, prostitutes who proposed using condoms more often earned about 10% more than those who never did. I agree with Sohn’s (2016) assessment that this probably has to do with more desirable prostitutes being attractive enough to effectively bargain for condom use, whereas less attractive women compromise there in order to bring in clients. While men prefer sex without condoms, they appear willing to put that preference aside in the face of an attractive-enough prospect.

“Disappointment now sold in bulk”

So what has been revealed about men’s preferences for sex with these data? Unfortunately, interpreting prices is less straightforward than simply examining the raw numbers: their correspondence to other sources of data and theory should be considered. For instance, at least when seeking short-term encounters, men seem to value fertility highly, and are willing to pay a premium to get it. This “real world” data accords well with the self-reports of men in survey and laboratory settings and, as such, seems to be easily interpretable. On the other hand, men usually prefer sex without condoms, so the price premium among prostitutes who always suggest they be used would seem, at face value, to ‘reveal’ the wrong preference. Instead, it is more likely that prostitutes who already command a high price are capable of bargaining effectively for their use. In order to test such an explanation, you would need to pit the prospect of sex with the same prostitute with and without a condom against each other, both at the same price. Further, more educated prostitutes seemed to command a higher price on the sexual market: is this because men value intelligence in short-term encounters, because educated women are more effective at bargaining, because intelligence correlates with other cues of fertility or developmental stability (and thus attractiveness), or because of some other alternative? While one needs to step outside the raw pricing data obtained from these naturalistic observations to answer such questions effectively, the idea of using price data in general seems like a valuable method of analysis; whether it is more accurate, or a “truer” representation of our preferences than our responses to surveys, is debatable but, thankfully, this need not be an either/or type of analysis.

References: Sohn, K. (2016). Men’s revealed preferences regarding women’s ages: Evidence from prostitution. Evolution & Human Behavior, DOI: http://dx.doi.org/10.1016/j.evolhumbehav.2016.01.002 

Clues To The Function Of Moralistic Punishment

One of the major questions I’ve spent the better part of the last few years trying to work up an answer to is the matter of why – in the adaptive sense – people punish others moralistically as third parties. That is to ask why people condemn and punish others for engaging in behavior perceived to be “wrong”. This is not the same question as why people take revenge on others for their behavior; that kind of punishment isn’t terribly mysterious. Chimps, for instance, will punish those who cheated them, but they seem fairly uninterested in punishing those who cheated others (Riedl, Jensen, Call, & Tomasello, 2012). Humans are a bit different in that regard: not only do we involve ourselves in the disputes of others, but we do so on behalf of people who aren’t kin or even social allies. This is rather strange behavior, even though we do not usually perceive it to be so. I’m not the only one who has attempted an evolutionary explanation for moralistic punishment, of course; in my hubris, however, I do feel that the explanation I ended up floating fits the existing data better than other, contemporary theories. Today, I wanted to try and demonstrate that fit by considering some relatively recent empirical research.

“I don’t know, man; I think it might be a little baggy. Try a size down”

The paper in question (Kurzban, DeScioli, & O’Brien, 2007) concerns audience effects on punishment. Specifically, it addresses the following question: do people punish others more (as third parties) when they are being watched and their decisions are being made public, relative to when they’re anonymous? The question was examined across two experiments. In the first of these, 43 participants viewed the outcome of a previous trust game in which one player had the choice between two different outcomes: $39 for himself and $1 for the other player, or $20 for both. In this case, the player had selected the former, selfish option. The participants in question were provided with $7 and told they could spend any amount of that money – from $0 to $7 – to deduct from the selfish player. For each dollar they spent on deductions, the selfish player would lose $3. Importantly, the participants were either making this decision in conditions under which their spending would be unknown to anyone but themselves (n = 24), or in conditions where the experimenter would know how much each particular participant spent (n = 19). In the anonymous condition, 38% of participants spent at least some money on punishment, whereas in the public condition, 47% opted to punish. Importantly, however, the amount punished varied between these conditions: those in the public condition tended to spend more on punishment than those in the anonymous condition when they did punish. For instance, though precise numbers are not reported, approximately 20% of the participants in the public condition paid the full $7 to punish the selfish player, as compared with 0% in the private condition.
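To make the deduction mechanics concrete, here is a minimal sketch of the payoff rules as described above; the function is just my rendering of those rules, not code from the study:

```python
# Third-party punishment stage: each $1 the participant spends removes $3
# from the selfish player's $39 take (amounts as described in the text).
def punishment_outcome(spent, endowment=7.0, target_payoff=39.0, rate=3.0):
    assert 0 <= spent <= endowment, "spending is capped at the endowment"
    # returns (what the punisher keeps, what the selfish player keeps)
    return endowment - spent, target_payoff - rate * spent

for spent in (0, 3, 7):
    punisher, target = punishment_outcome(spent)
    print(f"spend ${spent}: punisher keeps ${punisher:.0f}, "
          f"selfish player keeps ${target:.0f}")
```

Notice that even the maximal $7 punishment only pulls the selfish player down from $39 to $18 – just below the $20 he would have had from choosing the even split – and it costs the punisher everything to do it.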

The second experiment examined people’s responses to a different kind of economic game – a prisoner’s dilemma – where one player attempted to cooperate with their partner but was met with defection. In terms of the outcome, this means that rather than both players getting $25 (as they would have if both had cooperated), the cooperative partner got $0 while the defector got $30. The participants in this experiment were shown this outcome and given $10. As before, the participants could spend any amount of this $10 to deduct from the defecting player at the 3-to-1 rate. In this case, the punishment decisions would be known by either (a) no one else but the participant (n = 31), (b) the experimenter alone (n = 26), or (c) the entire group of participants (n = 30). Again, fewer participants opted to punish at all when punishment was anonymous (42%) relative to when punishment was known to either the experimenter (65%) or the whole group (67%); the amount spent on punishment increased as well (Ms = $1.06, $2.54, and $3.17, respectively). Further, those in the public conditions also tended to display more anger, disgust, and contempt at the cheating, suggesting that they were more upset by the transgression when other people were watching (or were at least pretending to be).

The existence of audiences seemed to have an important impact on moralistic punishment: not only did the presence of other people affect the percentage of third parties willing to punish at all, but it also positively influenced how much they punished. In a sentence, we could say that the presence of observers was being used as an input by the cognitive systems determining moralistic sentiments. While this may sound like a result that could have been derived without needing to run the experiments, the simplicity and predictability of these findings by no means make them trivial on a theoretical level when it comes to answering the question, “what is the adaptive value of punishment?” Any theory seeking to explain morality in general – and moral punishment in particular – needs to be able to present a plausible explanation for why cues to anonymity (or the lack thereof) are being used as inputs by our moral systems. What benefits arise from public punishment that fail to materialize in anonymous cases?

“If you’re good at something, never do it for free…or anonymously”

The first theoretical explanation for morality that these results cut against is the idea that our moral systems evolved to deliver benefits to others per se. One of the common forms of this argument is that our moral systems evolved because they delivered benefits to the wider group (in the form of maintaining beneficial cooperation between members) even if doing so was costly in terms of individual fitness. This argument clearly doesn’t work for explaining the present data, as the potential benefits that could be delivered to others by deterring cheating or selfishness do not (seem to) change contingent on anonymity, yet moral punishment does.

These results also cut against some aspects of mutualistic theories of morality. This class of theory suggests that, broadly speaking, our moral sense responds primarily to behavior perceived to be costly to the punisher’s personal interests. In short, third parties do not punish perpetrators because they have any interest in the welfare of the victim, but rather because punishers can enforce their own interests through that punishment, however indirectly. To place that idea into a quick example, I might want to see a thief punished not because I care about the people he harmed, but rather because I don’t want to be stolen from, and punishing the thief for his behavior reduces that probability for me. Since my interests in deterring certain behaviors do not change contingent on my anonymity, the mutualistic account might feel some degree of threat from the present data. As a rebuttal to that point, the mutualistic theories could make the argument that my punishment being made public would deter others from stealing from me to a greater extent than if they did not know I was the one responsible for punishing. “Because I punished theft in a case where it didn’t affect me,” the rebuttal goes, “this is a good indication I would certainly punish theft which did affect me. Conversely, if I fail to punish transgressions against others, I might not punish them when I’m the victim.” While that argument seems plausible at face value, it’s not bulletproof either. Just because I might fail to go out of my way to punish someone else who was, say, unfaithful in their relationship, that does not necessarily mean I would tolerate infidelity in my own. This rebuttal would require an appreciable correspondence between my willingness to punish those who transgress against others and those who do so against me. As much of the data I’ve seen suggests a weak-to-absent link on that front in both humans and non-humans, that argument might not hold much empirical water.

By contrast, the present evidence is perfectly consistent with the association-management explanation posited in my theory of morality. In brief, this theory suggests that our moral sense helps us navigate the social world, identifying good and bad targets of our limited social investment, and uses punishment to build and break relationships with them. Morality, essentially, is an ingratiation mechanism; it helps us make friends (or, alternatively, not alienate others). Under this perspective, the role of anonymity makes quite a bit of sense: if no one will know how much you punished, or whether you did at all, your ability to use punishment to manage your social associations is effectively compromised. Accordingly, third-party punishment drops off in a big way. On the other hand, when people will know about their punishment, participants become more willing to invest in it in the face of better estimated social return. This social return need not necessarily reside with the actual person being harmed, either (who, in this case, was not present); it can also come from other observers of punishment. The important part is that your value as an associate can be publicly demonstrated to others.

The first step isn’t to generate value; it’s to demonstrate it

The lines between these accounts can seem a bit fuzzy at times: good associates are often ones who share your values, providing some overlap between mutualistic and association accounts. Similarly, punishment, at least from the perspective of the punisher, is altruistic: they are suffering a cost to provide someone else with a benefit. This provides some overlap between the association and altruistic accounts as well. The important point for differentiating these accounts, then, is to look beyond their overlap into domains where they make different predictions in outcomes, or predict the same outcome will obtain, but for different reasons. I feel the results of the present research not only help do that (inconsistent with group selection accounts), but also present opportunities for future research directions as well (such as the search for whether punishment as a third party appreciably predicts revenge).

References: Kurzban, R., DeScioli, P., & O’Brien, E. (2007). Audience effects on moralistic punishment. Evolution & Human Behavior, 28, 75-84.

Riedl, K., Jensen, K., Call, J., & Tomasello, M. (2012). No third-party punishment in chimpanzees. Proceedings of the National Academy of Sciences, 109, 14824-14829.

Exaggerating With Statistics (About Rape)

“As a professional psychology researcher, it’s my job to lie to the participants in my experiments so I can lie to others with statistics using their data”. -On understanding the role of deception in psychology research

In my last post, I discussed the topic of fear: specifically, how social and political agendas can distort the way people reason about statistics. The probable function of such distortions is to convince other people to accept a conclusion which is not exactly well supported by the available evidence. While such behavior is not exactly lying – inasmuch as the people making these claims don’t necessarily know they’re engaged in such cognitive distortions – it is certainly on the spectrum of dishonesty, as they would (and do) reject such reasoning otherwise. In the academic world, related kinds of statistical manipulations go by a few names, the one I like the most being “researcher degrees of freedom”. The spirit of this idea refers to the problem of researchers selectively interpreting their data in a variety of ways until they find a result they want to publish, then omitting mention of all the ways their data did not work out or might otherwise be interpreted. On that note, here’s a scary statistic: 1-in-3 college men would rape a woman if they could get away with it. Fortunately (or unfortunately, depending on your perspective), the statistic is not at all what it seems.

“…But the researchers failed to adequately report their methods! Spooky!”

The paper in question (Edwards et al, 2014) seeks to try and understand the apparent mystery behind the following finding: when asked if they ever raped anyone, most men will say “no”; when asked instead whether they ever held someone down to coerce them into having sex, a greater percentage of men will indicate that they have. Women’s perceptions about the matter seem to follow suit. As I wrote when discussing the figure that 25% of college women will be raped:

The difference was so stark that roughly 75% of the participants that Koss had labeled as having experienced rape did not, themselves, consider the experience to be rape.

What strikes me as curious about these findings is not the discrepancy in responses; that much can likely be explained by positing that participants perceive these questions to be asking about categorically different behaviors. After all, if the questions were actually perceived to be asking about the same thing, you would see greater agreement between the responses of both men and women across questions, which we do not. Instead, the curious part is that authors – like Edwards et al (2014) – continue to insist that all those participants must be wrong, writing, “…some men who rape do not seem to classify their behavior as such” (Jesse Singal at NYmag.com expresses a similar view, writing: “At the end of the day, after all, the two groups are saying the exact same thing“). Rather than conclude there is something wrong with the questions being asked (such as, say, that they are capturing a portion of the population who would have rough, but consensual, sex), they instead conclude there is something wrong with everyone else (both men and women) answering them. This latter explanation strikes me as unlikely.

There’s already something of a bait-and-switch taking place, then, but this is far from the only methodological issue involved in deriving that scary-sounding 1-in-3 figure. Specifically, Edwards et al (2014) asked their 86 male participants to fill out part of the “attraction to sexual aggression” scale (Malamuth, 1989). On this scale, participants are asked to indicate, from 1 to 5, how likely they would be to engage in a variety of behaviors, with a “1” corresponding to “not likely at all” and a “5” corresponding to “very likely”. Included on this scale are two questions, one concerning whether the respondent would “rape” a woman, and another asking whether he would “force her to do something she did not want to do” in a sexual setting. The participants in question were asked about their likelihood of engaging in such behaviors “if nobody would ever know and there wouldn’t be any consequences”. Edwards et al (2014) report that, if such criteria were met, 31% of the men would force a woman to do something sexually, whereas only 13% would rape a woman.

If you’re perceptive, you might have noticed something strange already: that 1-in-3 figure cannot be straightforwardly derived from the sexual aggression scale, as the scale is a 5-point measure, whereas the 1-in-3 statistic is clearly dichotomous. This raises the question of how one translates the scale into a yes/no response format. Edwards et al (2014) do not explicitly mention how they managed such a feat, but I think the answer is clear from the labeling in one of their tables: “Any intention to rape a woman” (emphasis mine). What the researchers did, then, was code any response other than a “1” as an affirmative; the statistical equivalent of saying that 2 is closer to 5 than it is to 1. In other words, the question was, “Would you rape a woman if you could get away with it?”, and the answers were, effectively, “No, Yes, Yes, Yes, or Yes”. Making the matter even worse is that all participants were answering both questions. This means they saw one question asking about “rape” and another question asking about “forcing a woman to do something she didn’t want to”. As participants likely figured there was no reason the researchers would ask the same question twice, they would have very good reason for thinking these questions refer to categorically different things. For the authors to then conflate the two questions after the fact as being identical is stunningly disingenuous.

“The problem isn’t me; it’s everyone else”

To put these figures in better context, we could consider the results reported by Malamuth (1989). In response to the “Would you rape if you wouldn’t get caught” question, 74% of men indicated a “1” and 14% indicated a “2”, meaning a full 88% of them fell below the midpoint of the scale; by contrast, only 7% fell above the midpoint, with about 5% indicating a “4” and 2% indicating a “5”. Of course, reporting that “1-in-3 men would rape” if they could get away with it is much different than saying “less than 1-in-10 probably would”. The authors appear interested in deriving the most damning interpretation of their data possible, however, as evidenced by their unreported and, in my mind, unjustifiable grouping of the responses. That fact alone should raise alarm bells as to whether the statistics they provide would do a good job of predicting reality.
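Using Malamuth’s (1989) reported distribution, we can see exactly how much work that coding decision does. A short sketch (the 5% sitting at the scale midpoint of “3” is inferred as the remainder of the percentages quoted above):

```python
# How "any response above 1 counts as yes" inflates a 5-point scale.
# Distribution from Malamuth (1989); the 5% at "3" is inferred as the remainder.
distribution = {1: 0.74, 2: 0.14, 3: 0.05, 4: 0.05, 5: 0.02}

any_intention = sum(p for score, p in distribution.items() if score > 1)
above_midpoint = sum(p for score, p in distribution.items() if score > 3)

print(f'"Any intention" coding (score > 1): {any_intention:.0%}')     # 26%
print(f"Above the scale midpoint (score > 3): {above_midpoint:.0%}")  # 7%
```

The same set of answers yields “roughly 1-in-4” under one coding and “less than 1-in-10” under the other; the headline number is a product of the coding decision, not of the responses themselves.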

But let’s go ahead and take these responses at face value anyway, even if we shouldn’t: somewhere between 10-30% of men would rape a woman if there were no consequences for doing so. How alarming should that figure be? On the first front, the hypothetical world of “no consequence” doesn’t exist. Some proportion of men who would be interested in doing such things are indeed restrained from doing so by the probability of being punished. Even within that hypothetical world of freedom from consequences, however, there are likely other problems to worry about, in that you will always find some percentage of the population willing to engage in anti-social behavior that harms others when there are no costs for doing so (in fact, the truly strange part is that lots of people indicate they would avoid such behaviors).

Starting off small, for instance, about 70% of men and women indicate that they would cheat on their committed partner if they wouldn’t get caught (and slightly over 50% have cheated in spite of those possible consequences). What about other acts, like stealing or murder? How many people might kill someone else if there would be no consequences for it? One informal poll I found placed that number around 40%; another put it a little above 50% and, when broken up by sex, found that 32% of women and a full 68% of men would. Just let those numbers sink in for a moment: comparing the two numbers for rape and murder, the men in Edwards et al (2014) were between two and seven times less likely to say they would rape a woman than kill someone if they could, depending on how one interprets their answers. That’s a tremendous difference; one that might even suggest that rape is viewed as a less desirable activity than murder. Now that likely has quite a bit to do with some portion of that murder being viewed as defensive in nature, rather than exploitative, but it’s still some food for thought.

 There are proportionately fewer defensive rapes than defensive stabbings…

This returns us nicely to the politics of fear. The last post addressed people purposefully downplaying the risks posed by terrorist attacks; in this case, we see people purposefully inflating reported propensities to rape. The 1-in-3 statistic is clearly crafted in the hopes of making an issue seem particularly threatening and large, as larger issues tend to have more altruism directed towards them in the hopes of a solution. As there are social stakes in trying to make one’s problems seem especially threatening, however, such statistics should immediately make people skeptical, for the same reasons you shouldn’t let me tell you about how smart or nice I am. There is a very real risk in artificially puffing up one’s statistics: people might eventually stop trusting you by default, even on entirely different topics, and this should hold especially true if they belong to a group targeted by such misleading results. The outcome of such a process would then be, rather than increased altruism and sympathy devoted to a real problem, apathy and hostility. Lessons learned from fables like The Boy Who Cried Wolf are as timely as ever, it would seem.

References: Edwards, S., Bradshaw, K., & Hinsz, V. (2014). Denying rape but endorsing forceful intercourse: Exploring differences among responders. Violence & Gender, 1, 188-193.

Malamuth, N. (1989). The attraction to sexual aggression scale: Part 1. The Journal of Sex Research, 26, 26-49.

The Politics Of Fear

There’s an apparent order of operations frequently observed in human reasoning: politics first, facts second. People appear perfectly willing to accept flawed arguments or incorrect statistics they would otherwise immediately reject, just so long as they support the reasoner’s point of view; Greg Cochran documented a few such cases (in his simple and eloquent style) a few days ago on his blog. Such a bias in our reasoning ability is not only useful – inasmuch as persuading people to join your side of a dispute tends to carry benefits, regardless of whether you’re right or wrong – but it’s also common: we can see evidence of it in every group of people, from the uneducated to those with PhDs and decades of experience in their field. In my case, the most typical context in which I encounter examples of this facet of our psychology – like many of you, I would suspect – is through posts shared or liked by others on social media. Recently, these links have been cropping up concerning the topic of fear. More precisely, there are a number of writers who think that people (or at least those who disagree with them) are behaving irrationally regarding their fears of Islamic terrorism and the threat it poses to their lives. My goal here is not to say that people are being rational or irrational about such things – I happen to have a hard time finding substance in such terms – but rather to provide a different perspective than the ones offered by the authors; one that is likely in the minority among my professional and social peers.

You can’t make an omelette without alienating important social relations 

The first article on the chopping block was published on the New York Times website in June of last year. The article is entitled, “Homegrown extremists tied to deadlier toll than Jihadists in U.S. since 9/11,” and it attempts to persuade the reader that we, as a nation, are all too worried about the threat Islamic terrorism poses. In other words, American fears of terrorism are wildly out of proportion to the actual threat it presents. This article attempted to highlight the fact that, in terms of the number of bodies, right-wing, anti-government violence was twice as dangerous as Jihadist attacks in the US since 9/11 (48 deaths from non-Muslims; 26 by Jihadists). Since we seem to dedicate more psychological worry to Islam, something was wrong there. There are three important parts of that claim to be considered: first, a very important word in that last sentence is “was,” as the body count evened out by early December of that year (currently at 48 to 45). This updated statistic yields some interesting questions: were those people who feared both types of attacks equally (if they existed) being rational or not on December 1st? Were those who feared right-wing attacks more than Muslim ones suddenly being irrational on the 2nd? The idea these questions are targeting is whether or not fears can only be viewed as proportionate (or rational) with the aid of hindsight. If that’s the case, rather than saying that some fears are overblown or irrational, a more accurate statement would be that such fears “have not yet been founded.” Unless those fears have a specific cut-off date (e.g., the fear of being killed in a terrorist attack during a given time period), making claims about their validity is something that one cannot do particularly well.

The second important point of the article to consider is that the count begins one day after a Muslim attack that killed over 3,000 people (immediately; that doesn’t count those who were injured or later died as a consequence of the events). Accordingly, if that count is set back just slightly, the fear of being killed by a Muslim terrorist attack would be much more statistically founded, at least in a very general sense. This naturally raises the question of why the count starts when it does. The first explanation that comes to mind is that the people doing the counting (and reporting about the counting) are interested in presenting a rather selective and limited view of the facts that support their case. They want to denigrate the viewpoints of their political rivals first, and so they select the information that helps them do that while subtly brushing aside the information that does not. That seems like a fairly straightforward case of motivated reasoning, but I’m open to someone presenting a viable alternative point of view as to why the count needs to start when it does (such as, “their primary interest is actually in ignoring outliers across the board”).

Saving the largest for last, the final important point of the article to consider is that it appears to neglect the matter of base rates entirely. The attacks labeled as “right-wing” left a greater absolute number of bodies (at least at the time the article was written), but that does not mean right-wing attacks (or individuals) are more dangerous. To see why, we need to consider another question: how many bodies should we have expected? The answer to that question is by no means simple, but we can do a (very) rough calculation. In the US, approximately 42% of the population self-identifies as Republican (our right-wing population), while about 1% identifies as Muslim. If both groups were equally likely to kill others, then we should expect right-wing terrorists to leave 42 bodies for every 1 that Muslim terrorists do. That ratio would reflect genuine parity in threat. Given a count suggesting that this ratio was 2-to-1 at the time the article was written, and 1-to-1 later that same year, we might reasonably conclude that the Muslim population, per individual member, is actually quite a bit more prone to killing others in terrorist attacks; if we factor in the 9/11 numbers, that ratio becomes something closer to 0.01-to-1, which is a far cry from demographic expectations.
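That arithmetic is simple enough to lay out explicitly. A sketch using the population shares and body counts quoted above:

```python
# Expected vs observed body-count ratios given population shares:
# ~42% right-wing identifiers vs ~1% Muslim (figures quoted above).
right_wing_share, muslim_share = 0.42, 0.01
expected = right_wing_share / muslim_share  # 42:1 if per-capita risk were equal

for label, rw_deaths, muslim_deaths in [("at publication", 48, 26),
                                        ("by December", 48, 45),
                                        ("including 9/11", 48, 45 + 3000)]:
    observed = rw_deaths / muslim_deaths
    # A ratio below the expected 42:1 means the smaller group is deadlier
    # per individual member.
    print(f"{label}: observed {observed:.2f}:1 vs expected {expected:.0f}:1")
```

In every case the observed ratio falls far short of the 42:1 that demographic parity would predict, which is the base-rate point the article glosses over.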

Thankfully, you don’t have to report inconvenient numbers

Another example comes from The New Yorker, published just the other day (perhaps it is something about New York that makes people publish these pieces), entitled, “Thinking rationally about terror.” The insinuation, as before, is that people’s fears about these issues do not correspond well to reality. In order to make the case that people’s fears are wrongheaded, Lawrence Krauss leans on a few examples. One of these concerns the recent shootings in Paris. According to Lawrence, these attacks represented an effective doubling of the overall murder rate in Paris from the previous year (2.6 murders per 100,000 residents), but that’s really not too big of a deal because that just makes Paris as dangerous as New York City, and people aren’t that worried about being killed in NYC (or are they? No data on that point is mentioned). In fact, Lawrence goes on to say, the average Paris resident was about as likely to have been killed in a car accident during any given year as to have been killed during the mass shooting. This point is raised, presumably, to highlight an irrationality: people aren’t concerned about being killed by cars for the most part, so they should be just as unconcerned about being killed by a terrorist if they want to be rational.

This point about cars is yet another fine example of an author failing to account for base rates. Looking at the raw body count is not enough, as people in Paris likely interact with hundreds (or perhaps even thousands; I don’t have any real sense for that number) of cars every day for extended periods of time. By contrast, I would imagine Paris residents interact markedly less frequently with Muslim extremists. Per unit of time spent around them, then, cars likely pose a much, much lower threat of death than Muslim extremists do. Further, people do fear the harm caused by cars (we look both ways before crossing a street, we restrict licenses to individuals who demonstrate their competence to handle the equipment, we have speed limits, and so on), and it is likely that the harm cars inflict would be much greater if such fears were not present. In much the same way, it is also possible that the harms caused by terrorist groups would be much higher if people decided that such things were not worth getting worked up about and took no steps to assure their safety early on. Do considerations of these base rates and future risks fall under the umbrella of “rational” thinking? I would like to think so, and yet they seem so easily overlooked by someone chiding others for being irrational: Lawrence at least acknowledges that future terror risks might increase for places like Paris, but notes that that kind of life is pretty much normal for Israel; the base-rate problem is not even mentioned.

While there’s more I could say on these topics, the major point I hope to get across is this: if you want to know why people experience fear about certain topics, it’s probably best to not start your analysis with the assumption that these people are wrong to feel the way they do. Letting one’s politics do the thinking is not a reliable way to get at a solid understanding of anything, even if it might help further your social goals. If we were interested in understanding the “why” behind such fears, we might begin, for instance, with the prospect that many people likely fear historically-relevant, proximate cues of danger, including groups of young, violent males making threats to your life based on your group membership, and cases where those threats are followed through and made credible. Even if such individuals currently reside many miles away, and even if only a few such threats have been acted upon, and even if the dangerous ones represent a small minority of the population, fearing them for one’s own safety does not – by default – seem to be an unreasonable thing to do; neither does fearing them for the safety of one’s relatives, social relations, or wider group members.

“My odds of getting hurt were low, so this isn’t worth getting worked up over”

Now, as I mentioned, all of this is not to say that people ought to fear some particular group or not; my current interests do not reside in directing your fears or their scope. I have no desire to tell you that your fears are well founded or completely off base (in no small part because I earnestly don’t know if they are). My interests are much more general than that, as this kind of thinking is present in all kinds of different contexts. There’s a real problem in assuming the truth of your perspective at the outset and beginning your search for evidence only after the fact. The problem can run so deep that I actually find myself surprised to see someone take up the position that they were wrong after an earnest dig through the available evidence. Such an occurrence should be commonplace if rationality or truth were the goal in these debates, as people get things wrong (at least to some extent) all the time, especially when opinions are formed in advance of the relevant knowledge. Admitting to incorrect thinking does require, however, that one is willing to, at least occasionally, sacrifice a belief that used to be held quite dear; it requires looking like a fool publicly now and again; it even requires working against your own interests sometimes. These are things you will have to do; not just things that the opposition will. As such, I suspect these kinds of inadequate lines of reasoning will continue to pervade such discussions, which is a bit of a problem when the lives of others literally hang in the balance of the outcome.

Science By Funeral

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

As the above quote by Max Planck suggests, science is a very human affair. While, in an idealized form, the scientific process is a very useful tool for discovering truth, the reality of using the process in the world can be substantially messier. One of the primary culprits of this messiness is that being a good scientist per se – as defined by one who rigorously and consistently applies the scientific method – is not necessarily any indication that one is particularly bright or worthy of social esteem. It is perfectly possible to apply the scientific method to the testing of any number of inane or incorrect hypotheses. Instead, social status (and its associated rewards) tends to be provided to people who discover something that is novel, interesting, and true. Well, sort of; the discovery itself need not be exactly true as much as people need to perceive the idea as being true. So long as people perceive my ideas to be true, I can reap those social benefits; I can even do so if my big idea was actually quite wrong.

Sure; it looks plenty bright, but it’s mostly just full of hot air

Just as there are benefits to being known as the person with the big idea, there are also benefits to being friends with the person with the big idea, as access to those social (and material) resources tends to diffuse to the academic superstar’s close associates. Importantly, these benefits can still flow to those associates even if they lack the same skill set that made the superstar famous. To put this all into a simple example, getting a professor position at Harvard likely carries social and material benefits to the professor; those who study under the professor and get a degree from Harvard can also benefit by riding the coattails of the professor, even if they aren’t particularly smart or talented themselves. One possible result of this process is that certain ideas can become entrenched in a field, even if the ideas are not necessarily the best: as the originator of the idea has a vested interest in keeping it the order of the day in his field, and his academic progeny have a similar interest in upholding the originator’s status (as their status depends on his), new ideas may be – formally or informally – barred from entry and resisted, even if they more closely resemble the truth. As Planck quipped, then, science begins to move forward as the old guard die out and can no longer defend their status effectively; not because they relinquish their status in the face of new, contradictory evidence.

With this in mind, I wanted to discuss the findings of one of the most interesting papers I’ve seen in some time. The paper (Azoulay, Fons-Rosen, & Zivin, 2015) examined what happens to a field of research in the life sciences following the untimely death of one of its superstar members. Azoulay et al (2015) began by identifying their sample of approximately 13,000 superstars, 452 of whom died prematurely (which, in this case, corresponded to an average age at death of 61). The term “superstar” certainly described those who died well, at least in terms of their output: a median of 138 authored papers, 8,347 citations, and over $16 million in government funding received by the time of their death. These superstars were then linked to various subfields in which they published, their collaborators and non-collaborators within those subfields were identified, and a number of other variables that I won’t go into were also collected.

The question of interest, then, is what happens to these fields following the death of a superstar? In terms of the raw number of publications within a subfield, there was a very slight increase of about 2% following the death. That number does not give much of a sense of the interesting things that were happening, however. The first of these is that the superstar’s collaborators saw a rather steep decline in their research output: a decline of about 40% over time. This drop in the collaborators’ productivity was more than offset, though, by an 8% increase in output by non-collaborators. This effect remained (though somewhat reduced) even when the analysis excluded papers on which the superstar was an author (which makes sense: if one of your coauthors dies, of course you will produce fewer papers; there was just more to the decline than that). This decline in collaborator output would be consistent with a healthy degree of coattail-riding taking place prior to the death. Further, there were no hints of these trends prior to the death, suggesting that the death in question was doing the causing when it came to changes in research output.

Figure 2: How much better-off your death made other people

The possible “whys” behind these effects were examined in the rest of the paper, and a number of hints as to what is going on emerged. First, there is the effect of death on citation counts, with non-collaborators producing more high-impact – but not low-impact – papers after the superstar’s passing. Second, these non-collaborators were producing papers in the very same subfields the superstar had previously worked in. Third, this new work did not appear to be building on the work of the superstar; the non-collaborators tended to cite the superstar less and newer work more. Fourth, the new authors were largely not competitors of the superstar while the superstar was alive, opting instead to become active in the field following the death. The picture being painted by the data seems to be one in which the superstars initially dominate publishing within their subfields. While new faces might have some interest in researching these same topics, they fail to enter the field while the superstar is alive, providing their new ideas – not those already established – only after a hole has opened in the social fabric of the field. In other words, there might be barriers to entry keeping newcomers out, and those barriers relax somewhat following the death of a prominent member.

Accordingly, Azoulay et al (2015) turn their attention to what kinds of barriers might exist. The first barrier they posit is one they call “Goliath’s Shadow,” where newcomers are simply deterred by the prospect of having to challenge existing, high-status figures. Evidence consistent with this prospect was reported: the importance of the superstar – defined by the fraction of papers in the field they produced – had a noticeable effect, with more important figures creating a larger void to fill. By contrast, the involvement of the superstar – defined by the percentage of the superstar’s own papers published in a given field – did not seem to have an effect. The more of a field’s output (and grant money) a superstar accounted for, in short, the less room other people seemed to see for themselves.

Two other possible barriers to entry concern the intellectual and social closure of a field: the former refers to the degree to which most of the researchers within a field – not just the superstar – agree on what methods to use and what questions to ask; the latter refers to how tightly the researchers within a field work together, coauthoring papers and such. Evidence for both came up positive: fields in which the superstar had trained many of the researchers, and fields in which people worked very closely together, did not show the major effects of superstar death. Finally, a related possibility is that the associates of the superstar might indirectly control access to the field by denying resources to newcomers who might challenge the older set of ideas. On this point, the authors reported that the deaths of superstars who had more collaborators on editorial and funding boards tended to have less of an impact, which could be a sign of trouble.

The influence of these superstars on generating barriers to entry, then, was often quite indirect. It’s not that the superstars were preventing newcomers themselves; it is unlikely they had the power to do so, even if they were trying. Instead, these barriers were created indirectly, either through the superstar receiving a healthy portion of the existing funding and publication slots, or through the collaborators of the superstar forming a relatively tight-knit community that could more effectively wield influence over which ideas got to see the light of day.

“We have your ideas. We don’t know who you are, and now no one else will either”

While it’s easy (and sometimes fun) to conjure up a picture of some old professor and their intellectual clique keeping out plucky, young, and insightful prospects with the power of discrimination, it is important not to leap to that conclusion immediately. While the faces and ideas within a field might change following the deaths of important figures, that does not necessarily mean the new ideas are closer to that all-important, capital-T Truth that we (sometimes) value. The same social pressures, costs, and benefits that applied to the now-dead old guard apply in turn to the new researchers, and new status within a field will not be reaped by rehashing the ideas of the past, even if they’re correct. Old-but-true ideas might be cast aside for the sake of novelty, just as new-but-false ideas might be promulgated. Regardless of the truth value of these ideas, however, the present data does lend a good deal of credence to the notion that science tends to move one funeral at a time. While truth may eventually win out by a gradual process of erosion, it’s important to always bear in mind that the people doing science are still only human, subject to the same biases and social pressures we all are.

References: Azoulay, P., Fons-Rosen, C., & Zivin, J. (2015). Does science advance one funeral at a time? National Bureau of Economic Research Working Paper. DOI: 10.3386/w21788

 

When Intuitions Meet Reality

Let’s talk research ethics for a moment.

Would you rather have someone actually take $20 from your payment for taking part in a research project, or would you rather be told – incorrectly – that someone had taken $20, only to later (almost immediately, in fact) find out that your money is safely intact and that the other person who supposedly took it doesn’t actually exist? I have no data on that question, but I suspect most people would prefer the second option; after all, not losing money tends to be preferable to losing money, and the lie is relatively benign. To use a pop culture example, Jimmy Kimmel has aired a segment where parents lie to their children about having eaten all their Halloween candy. The children are naturally upset for a moment and their reactions are captured so people can laugh at them, only to later have their candy returned and the lie exposed (I would hope). Would it be more ethical, then, for parents to actually eat their children’s candy so as to avoid lying to their children? Would children prefer that outcome?

“I wasn’t actually going to eat your candy, but I wanted to be ethical”

I happen to think that answer is, “no; it’s better to lie about eating the candy than to actually do it” if you are primarily looking out for the children’s welfare (there is obviously the argument to be made that it’s neither OK to eat the candy nor to lie about it, but that’s a separate discussion). That sounds simple enough but, according to some arguments I have heard, it is unethical to design research that, basically, mimics the lying outcome. The costs being suffered by participants need to be real in order for research on suffering costs to be ethically acceptable. Well, sort of; more precisely, what I’ve been told is that it’s OK to lie to my subjects (deceive them) about little matters, but only in the context of using participants drawn from undergraduate research pools. By contrast, it’s wrong for me to deceive participants I’ve recruited from online crowd-sourcing sites like MTurk. Why is that the case? Because, as the logic continues, many researchers rely on MTurk for their participants, and my deception is bad for those researchers because it means participants may not take future research seriously. If I lied to them, perhaps other researchers would too, and I have poisoned the well, so to speak. In comparison, lying to undergraduates is acceptable because, once I’m done with them, they probably won’t be taking part in many future experiments, so their trust in future research is less relevant (at least they won’t take part in many research projects once they get out of the introductory courses that require them to do so. Forcing undergraduates to take part in research for the sake of their grade is, of course, perfectly ethical).

This scenario, it seems, creates a rather interesting ethical tension. What I think is happening here is that a conflict has been created between looking out for the welfare of research participants (in common research pools; not undergraduates) and looking out for the welfare of researchers. On the one hand, it’s probably better for participants’ welfare to briefly think they lost money, rather than to let them actually lose money; at least I’m fairly confident that is the option subjects would select if given the choice. On the other hand, it’s better for researchers if those participants actually lose money, rather than briefly hold the false belief that they did, so that participants continue to take their other projects seriously. An ethical dilemma indeed, balancing the interests of the participants against those of the researchers.

I am sympathetic to the concerns here; don’t get me wrong. I find it plausible to suggest that if, say, 80% of researchers outright deceived their participants about something important, people taking this kind of research over and over again would likely come to assume some parts of it were unlikely to be true. Would this affect the answers participants provide to these surveys in any consistent manner? Possibly, but I can’t say with any confidence if or how it would. There also seem to be workarounds for this poisoning-the-well problem; perhaps honest researchers could write in big, bold letters, “the following research does not contain the use of deception,” and research that did use deception would be prohibited from attaching that bit by the various institutional review boards that need to approve these projects. Barring the use of deception across the board would, of course, create its own set of problems too. For instance, many participants taking part in research are likely curious as to what the goals of the project are. If researchers were required to be honest and transparent about their purposes upfront so as to allow their participants to make informed decisions regarding their desire to participate (e.g., “I am studying X…”), this could lead to all sorts of interesting results being due to demand characteristics – where participants behave in unusual manners as a result of their knowledge about the purpose of the experiment – rather than to the natural responses of the subjects to the experimental materials. One could argue (and many have) that not telling participants about the real purpose of the study is fine, since it’s not a lie as much as an omission. Other consequences of barring explicit deception exist as well, though, including the loss of control over experimental stimuli during interactions between participants and the inability to feasibly test some hypotheses at all (such as whether people prefer the tastes of identical foods, contingent on whether they’re labeled in non-identical ways).

Something tells me this one might be a knock off

Now this debate is all well and good to have in the abstract sense, but it’s important to bring some evidence to the matter if you want to move the discussion forward. After all, it’s not terribly difficult for people to come up with plausible-sounding, but ultimately incorrect, lines of reasoning as for why some research practice is possibly (un)ethical. For example, some review boards have raised concerns about psychologists asking people to take surveys on “sensitive topics”, under the fear that answering questions about things like sexual histories might send students into an abyss of anxiety. As it turns out, such concerns were ultimately empirically unfounded, but that does not always prevent them from holding up otherwise interesting or valuable research. So let’s take a quick break from thinking about how deception might be harmful in the abstract to see what effects it has (or doesn’t have) empirically.

Drawn by the debate between economists (who tend to think deception is bad) and social scientists (who tend to think it’s fine), Barrera & Simpson (2012) conducted two experiments to examine how deceiving participants affected their future behavior. The first of these studies tested the direct effects of deception: did deceiving a participant make them behave differently in a subsequent experiment? In this study, participants were recruited as part of a two-phase experiment from introductory undergraduate courses (so as to minimize their previous exposure to research deception, the story goes; it just so happens they’re likely also the easiest sample to get). In the first phase of this experiment, 150 participants played a prisoner’s dilemma game which involved cooperating with or defecting on another player; a decision which would affect both players’ payments. Once the decisions had been made, half the participants were told (correctly) that they had been interacting with another real person in the other room; the other half were told they had been deceived, and that no other player was actually present. Everyone was paid and sent home.

Two to three weeks later, 140 of these participants returned for phase two. Here, they played four rounds of similar economic games: two rounds of dictator games and two rounds of trust games. In the dictator games, subjects could divide $20 between themselves and their partner; in the trust games, subjects could send some amount of their $10 to the other player, this amount would be tripled, and that player could then keep it all or send some of it back. The question of interest, then, is whether the previously-deceived subjects would behave any differently, contingent on their doubts as to whether they were being deceived again. The thinking here is that if you don’t believe you’re interacting with another real person, you might as well be more selfish than you otherwise would be. The results showed that while the previously-deceived participants were more likely to believe that social science researchers used deception somewhat regularly, their behavior was actually no different from that of the non-deceived participants. Not only were the amounts of money sent to others no different (participants gave $5.75 on average in the dictator games and trusted $3.29 when they had not previously been deceived, and gave $5.52 and trusted $3.92 when they had been), but the behavior was no more erratic either. The deceived participants behaved just like the non-deceived ones.
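For concreteness, here is a minimal sketch of the payoff arithmetic in these two games, assuming the standard structures described above (the function names and example values are my own illustration, not code or materials from Barrera & Simpson, 2012):

```python
# Payoff arithmetic for the dictator and trust games described above.
# Illustrative only; the amounts and the 3x multiplier follow the text.

def dictator_payoffs(endowment, amount_given):
    """The dictator keeps whatever they don't give away."""
    assert 0 <= amount_given <= endowment
    return endowment - amount_given, amount_given  # (dictator, recipient)

def trust_payoffs(endowment, amount_sent, amount_returned):
    """Money sent is tripled; the receiver may return any portion of it."""
    assert 0 <= amount_sent <= endowment
    tripled = 3 * amount_sent
    assert 0 <= amount_returned <= tripled
    sender = endowment - amount_sent + amount_returned
    receiver = tripled - amount_returned
    return sender, receiver

# Using the average amounts reported for the non-deceived participants:
print(dictator_payoffs(20, 5.75))  # -> (14.25, 5.75)
print(trust_payoffs(10, 3.29, 0))  # -> (6.71, 9.87) if nothing comes back
```

Nothing fancy; the point is just that both games put real money on the line, which is why doubts about whether the other player actually exists could plausibly change behavior.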

In the second study, the indirect effects of deception were examined. One hundred and six participants first completed the same dictator and trust games as above. They were then assigned to read about an experiment that either did or did not make use of deception; a deception which included the simulation of non-existent participants. They then played another round of dictator and trust games immediately afterwards to see if their behavior would differ, contingent on knowing about how researchers might deceive them. As in the first study, no behavioral differences emerged. Neither directly deceiving participants about the presence of others in the experiment nor providing them with information that deception does take place in such research seemed to have any noticeable effect on subsequent behavior.

“Fool me once, shame on me; Fool me twice? Sure, go ahead”

Now it is possible that the lack of any effect in the present research had to do with the fact that participants were only deceived once. It is certainly possible that repeated exposure to deception, if frequent enough, would begin to have a lasting effect, and one not limited to the researcher employing the deception; in essence, some spillover between experimenters might occur over time. However, this is something that needs to be demonstrated, not just assumed. Ironically, as Barrera & Simpson (2012) note, demonstrating such a spillover effect can be difficult in some instances, as designing non-deceptive control conditions to test against the deceptive ones is not always a straightforward task. In other words, as I mentioned before, some research is quite difficult – if not impossible – to conduct without the use of deception. Accordingly, some control conditions might require that you deceive participants about deceiving them, which is awfully meta. Barrera & Simpson (2012) also mention some research findings reporting that, even when no deception is used, participants who repeatedly take part in these kinds of economic experiments tend to get less cooperative over time. If that finding holds true, then the effects of repeated deception need to be filtered out from the effects of repeated participation in general. In any case, there does not appear to be any good evidence that minor deceptions are doing harm to participants or other researchers. They might still be doing harm, but I’d like to see it demonstrated before I accept that they do.

References: Barrera, D. & Simpson, B. (2012). Much ado about deception: Consequences of deceiving research participants in the social sciences. Sociological Methods & Research, 41, 383-413.

Preferences For Equality?

People are social creatures. This is a statement that surprises no one, seeming trivial to the same degree it is widely recognized (which is to say, “very”). That many people will recognize such a statement in the abstract and nod their head in agreement when they hear it does not mean they will always apply it to their thinking in particular cases, though. Let’s start with a context in which people will readily apply this idea to their thinking about the world: a video in which pairs of friends watch porn together while being filmed by others who intend to put the video online for viewing by (at the time of writing) about 5,700,000 people worldwide. The video is designed to get people’s reactions to an awkward situation, but what precisely is it about that situation which causes the awkward reactions? As many of you will no doubt agree, I suspect that answer has to do with the aforementioned point that people are social creatures. Because we are social creatures, others in our environment will be relatively inclined (or disinclined) to associate with us contingent on, among other things, our preferences. If some preferences make us seem like a bad associate to others – such as, say, our preferences concerning what kind of pornography arouses us, or our interest in pornography more generally – we might try to conceal those preferences from public view. Because the people in the video are trying to conceal their preferences, we likely observe a different pattern of reactions to – and searches for – pornography there, compared to what we might expect if those actors were in the comfort and privacy of their own home.

Or, in a pinch, in the privacy of an Apple store or Public Library 

Basically, we would be wrong to think we get a good sense for these people’s pornography preferences from their viewing habits in the video, as people’s behavior will not necessarily match their desires. With that in mind, we can turn to a rather social human behavior: punishment. Now, punishment might not be the first example of social behavior that pops into people’s heads when they think about social things, but make no mistake about it; punishment is quite social. A healthy degree of human gossip centers around what we believe ought and ought not to be punished; a fact which, much to my dismay, seems to take up a majority of my social media feeds at times. More gossip still concerns details of who was punished, how much they were punished, why they were punished, and, sometimes, this information will lead to other people joining in the punishment themselves or trying to defend someone else from it. From this analysis, we can conclude a few things, chief among which are that (a) some portion of our value as an associate to others (what I would call our association value) will be determined by the perception of our punishment preferences, and (b) punishment can be made more or less costly, contingent on the degree of social support our punishment receives from others.

This large social component of punishment means that observing the results of people’s punishment decisions does not necessarily inform you as to their preferences for punishment; sometimes people might punish others more or less than they would prefer to, were it not for these public variables being a factor. With that in mind, I wanted to review two pieces of research to see what we can learn about human punishment preferences from people’s behavior. The first piece claims that human punishment mechanisms have – to some extent – evolved to seek equal outcomes between the punisher and the target of their punishment. In short, if someone does some harm to you, you will only desire to punish them to the extent that it will make you two “even” again. An eye for an eye, as the saying goes; not an eye for a head. The second piece makes a much different claim: that human punishment mechanisms are not designed for fairness at all, seeking instead to inflict large costs on others who harm you, so as to deter future exploitation. Though neither of these papers assesses punishment in a social context, I think they have something to tell us about it all the same. Before getting to that point, though, let’s start by considering the research in question.

The first of these papers is from Bone & Raihani (2015). Without getting too bogged down in the details, the general methods of the paper go as follows: two players enter into a game together. Player A begins the game with $1.10, while player B begins with a payment ranging from $0.60 to $1.10. Player B is then given a chance to “steal” some of player A’s money for himself. The important part about this stealing is that it would leave player B either (a) still worse off than A, (b) with a payment equal to A’s, or (c) with a better payment than A. After the stealing phase, player A has the chance to respond by “punishing” player B. This punishment was either efficient – where for each cent player A spent, player B would lose three – or inefficient – where for each cent player A spent, player B would only lose one. The results of this study turned up the following findings of interest: first, player As who were stolen from tended to punish the player Bs more, relative to when the As were not stolen from. Second, player As who had access to the more efficient punishment option tended to spend more on punishment than those who had access to the less efficient option. Third, those player As who had access to the efficient punishment option also punished player Bs more in cases where B ended up better off than them. Finally, when participants in that last case were punishing the player Bs, the most common amount of punishment they enacted was the amount which would leave both player A and B with the same payment. From these findings, Bone & Raihani (2015) conclude that:

Although many of our results support the idea that punishment was motivated primarily by a desire for revenge, we report two findings that support the hypothesis that punishment is motivated by a desire for equality (with an associated fitness-leveling function…)

In other words, the authors believe they have observed the output of two distinct preferences: one for punishing those who harm you (revenge), and one for creating equality (fitness leveling). But were people really that concerned with “being even” with their agent of harm? I take issue with that claim, and I don’t believe we can conclude that from the data. 

We’re working on preventing exploitation; not building a frame.

To see why I take issue with that claim, I want to consider an earlier paper by Houser & Xiao (2010). This study involves a slightly different setup. Again, two players are involved in a game: player A begins the game by receiving $8. Player A could then transfer some amount of that money (either $0, $2, $4, $6, or $8) to player B, and then keep whatever remained for himself (another condition existed in which this transfer amount was randomly determined). Following that transfer, both players received $2. Finally, player B was given the following option: to pay $1 for the ability to reduce player A’s payment by as much as they wanted. The results showed the following pattern: first, when the allocations were random, player Bs rarely punished at all (under 20%) and, when they did punish, they tended to punish the other player irrespective of inequality. That is, they were equally likely to deduct at all no matter the monetary difference, and the amount they deducted did not appear to be aimed at achieving equality. By contrast, of the player Bs that received $0 or $2 intentionally, 54% opted to punish player A and, when they did punish, were most likely to deduct so much from player A that they ended up better off than him (that outcome obtained 66-73% of the time). When given free rein over the desired punishment amount, then, punishers did not appear to be seeking equality as an outcome. This finding, the authors conclude, is inconsistent with the idea that people are motivated to achieve equality per se.

What both of these studies do, then, is vary the cost of punishment. In the first, punishment is either inefficient (a 1-to-1 ratio) or quite efficient (a 3-to-1 ratio); in the second, punishment is effectively unrestricted in its efficiency (an X-to-1 ratio). In all cases, as punishment becomes more efficient and less costly, we observe people engaging in more of it. What we learn about people’s preferences for punishment, then, is that they seem to be based, in some part, on how costly punishment is to enact. With those results, I can now turn to the matter of what they tell us about punishment in a social context. As I mentioned before, the costs of engaging in punishment can be augmented or reduced to the extent that other people join in your disputes. If your course of punishment is widely supported by others, it’s easier to enact; if your punishment is opposed by others, not only is it costlier to enact, but you might in turn get punished for engaging in your excessive punishment. This idea is fairly easy to wrap one’s mind around: stealing a piece of candy from a corner store does not usually warrant the death penalty, and people would likely oppose (or attack) the store owner or some government agency if they attempted to hand down such a draconian punishment for the offense.
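To make that cost manipulation concrete, here is a small sketch (my own arithmetic, not the authors’ analysis code) of what it costs a punisher to close a payoff gap under the fee-to-fine ratios in these designs:

```python
# If player A spends c to punish, A's own payoff falls by c while B's falls
# by c * ratio, so the gap (B - A) shrinks by c * (ratio - 1). Illustrative
# only; the ratios follow the two study descriptions above.

def cost_to_equalize(gap, ratio):
    """Spend required to bring an initially better-off target down to parity."""
    if ratio <= 1:
        return float("inf")  # at 1-to-1, both payoffs fall equally; the gap never closes
    return gap / (ratio - 1)

gap = 0.30  # e.g., player B ends the stealing phase $0.30 ahead of player A
for ratio in (1, 3):
    print(f"{ratio}-to-1 punishment: spend ${cost_to_equalize(gap, ratio):.2f}")
# 1-to-1: $inf  -- equality is literally unreachable in the inefficient condition
# 3-to-1: $0.15 -- half the gap buys parity, which is where the modal punisher stopped
```

Houser & Xiao’s (2010) flat $1 fee for unlimited deductions pushes that ratio toward infinity, which fits with their punishers so often blowing straight past equality.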

Now many of you might be thinking that third parties were not present in the studies I mentioned, so it would make no sense for people to be thinking about how these non-existent third parties might feel about their punishment decisions. Such an intuition, I feel, would be a mistake. This brings me back to the matter of pornography briefly. As I’ve written before, people’s minds tend to generate physiological arousal to pornography despite there being no current adaptive reason for that arousal. Instead, our minds – or, more precisely, specific cognitive modules – attend to particular proximate cues when generating arousal that historically correlated with opportunities to increase our genetic fitness. In modern environments, where that link between cue and fitness benefit is broken by digital media providing similar proximate cues, the result is maladaptive outputs: people get aroused by an image, which makes about as much adaptive sense as getting aroused by one’s chair.

The same logic can likely be applied to punishment here as well, I feel: the cognitive modules in our mind responsible for punishment decisions evolved in a world of social punishment. Not only would your punishment decisions become known to others, but those others might join in the conflict on your side or in opposition to you. As such, proximate cues that historically correlated with the degree of third-party support are likely still being utilized by our brains in these modern experimental contexts where that link is being intentionally broken and interactions are anonymous and dyadic. What is likely being observed in these studies, then, is not an aversion to inequality as much as an aversion to the costs of punishment or, more specifically, the estimated social and personal costs of engaging in punishment in a world in which other people exist.

“We’re here about our concerns with your harsh punishment lately”

When punishment is rather cheap to enact for the individual in question – as it was in Houser & Xiao (2010) – the social factor probably plays less of a role in determining the amount of punishment enacted. You can think of that condition as one in which a king is punishing a subject who stole from him: while the king is still sensitive to the social costs of punishment (punish too harshly and the rabble will rise up and crush you…probably), he is free to punish someone who wronged him to a much greater degree than your average peasant on the street. By contrast, in Bone & Raihani (2015), the punisher is substantially less powerful and, accordingly, more interested in the (estimated) social support factors. You can think of those conditions as ones in which a knight or a peasant is trying to punish another peasant. This could well yield inequality-seeking punishment in the former study and equality-seeking punishment in the latter, as different groups require different levels of social support, and so scale their punishment accordingly. Now the matter of why third parties might be interested in inequality between the disputants is a different matter entirely, but recognition of the existence of that factor is important for understanding why inequality matters to second parties at all.

References: Bone, J. & Raihani, N. (2015). Human punishment is motivated both by a desire for revenge and a desire for equality. Evolution & Human Behavior, 36, 323-330.

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters, 109, 20-23.

Benefits To Bullying

When it comes to assessing hypotheses of evolutionary function, there is a troublesome pair of intuitions which frequently trips people up. The first of these is commonly called the naturalistic fallacy, though it also goes by the name of an appeal to nature: the idea that because something is natural, it ought to be good. As a typical argument using this line might go, because having sex is natural, we ought to – morally and socially – approve of it. The corresponding intuition is known as the moralistic fallacy: if something is wrong, then it’s not natural (or, alternatively, if something is good, it is natural). An argument using this type of reasoning might (and has, more or less) gone: because rape is morally wrong, it cannot be a natural behavior. In both cases, ‘natural’ is a bit of a wiggle word but, in general, it seems to refer to whether or not a species possesses some biological tendency to engage in the behavior in question. Put another way, ‘natural’ refers to whether a species possesses an adaptation(s) that functions so as to bring about a particular outcome. Extending these examples a little further, we might come up with the arguments that, because humans possess cognitive mechanisms which motivate sexual behavior, sex must be a moral good; however, because rape is a moral wrong, the human mind must not contain any adaptations that were selected for because they promoted such behavior.

An argument with which many people appear to disagree, apparently

This type of thinking is, of course, fallacious, as per the namesakes of the two fallacies. It’s quite easy to think of many moral wrongs which might increase one’s reproductive fitness (and thus select for adaptations that produce them), just as it is easy to think of morally-virtuous behaviors that could lower one’s fitness: infanticide is certainly among the things people would consider morally wrong, and yet there is often an adaptive logic to be found in the behavior; conversely, while the ideal of universal altruism is praised by many as morally virtuous, altruistic behavior is often limited to contexts in which it will later be reciprocated or channeled towards close kin. As such, it’s probably for the best to avoid tethering one’s system of moral approval to natural-ness, or vice versa; you end up in some weird places philosophically if you do. Now this type of thinking is not limited to any particular group of people: scientists and laypeople alike can make use of these naturalistic and moralistic intuitions (intentionally or not), leading to cases where hypotheses of function are violently rejected for even considering that certain condemned behaviors might be the result of an adaptation for generating them, or other cases where weak adaptive arguments are made in the service of making behaviors of which the arguer approves seem more natural and, accordingly, more morally acceptable.

With that in mind, we can turn to the matter of bullying: aggression enacted by more powerful individuals against weaker ones, typically peaking in frequency during adolescence. Bullying is a candidate behavior that might fall prey to these fallacies because, well, it tends to generate many consequences people find unpleasant: having their lunch money taken, being hit, being verbally mocked, having slanderous rumors about them spread, or other such nastiness. As bullying generates such proximately negative consequences for its victims, I suspect that many people would balk at the prospect that bullying might reflect a class of natural, adaptive behaviors, resulting in the bully gaining greater access to resources and reputation; in other words, doing evolutionarily useful things. Now that’s not to say that if you were to start bullying people you would suddenly find your lot in life improving, largely because bullying others tends to carry consequences: many people will not sit idly by and suffer the costs of your bullying; they will defend themselves. In order for bullying to be effective, then, the bully needs to possess certain traits that minimize, withstand, or remove the consequences of this retaliation, such as greater physical formidability than their victim, a stronger social circle willing to protect them, or other means of backing up their aggression.

Accordingly, only those in certain conditions and possessing particular traits are capable of effectively bullying others (inflicting costs without suffering them in turn). Provided that is the case, those who engaged in bullying behaviors more often might be expected to achieve correspondingly greater reproductive success, as the same traits that make bullying an effective strategy also make the bully an attractive mating prospect. It’s probably worse to select a mate unable to defend themselves from aggression, relative to one able and willing to do so; not only would your mate (and perhaps you) be exploited more regularly, but such traits may well be passed on to your children, leaving them open for exploitation as well. Conversely, the bully able to exploit others can likely access more plentiful resources, protect you from exploitation, and pass such useful traits along to their children. That bullying might have an adaptive basis was the hypothesis examined in a recent paper by Volk et al (2015). As noted in their introduction, previous data on the subject is consistent with the possibility that bullies are actually in relatively better condition than their victims, with bullies displaying comparable or better mental and physical health, as well as improved social and leadership skills, setting the stage for the prospect of greater mating success (as all of those traits are valuable in the mating arena). Findings like those run counter to other suggestions floating around the wider culture that people bully others precisely because they lack social skills, intelligence, or are unhappy with themselves. While I understand that no one is particularly keen to paint a flattering picture of people they don’t like and their motives for engaging in behavior they seek to condemn, it’s important to not lose sight of reality while you try to reduce the behavior and condemn its perpetrators.

“Sure, he does hit me regularly, but he’s a really great guy otherwise”

Volk et al (2015) examined the mating success of bullies by correlating people’s self-reports of their bullying behavior with their reports of dating and sexual behavior across two samples: 334 younger adolescents (11-18 years old) and 143 college freshmen, all drawn from Canada. Both groups answered questions concerning how often they engaged in, and were a victim of, bullying behaviors; whether they had had sex and, if so, how many partners they’d had; whether they had dated and, if so, how many people they’d dated; as well as how likable and attractive they found themselves to be. Self-reports are obviously not the ideal measures of such things, but at times they can be the best available option.

Focusing on the bullying results, Volk et al (2015) reported a positive relationship between bullying and engaging in dating and sexual relationships in both samples: controlling for age, sex, reported victimization, attractiveness, and likability, bullying not only emerged as a positive predictor of whether the adolescent had dated or had sex at all (about 1.3 to 2 times more likely), but also correlated with the number of sexual and, sometimes, dating partners; those who bullied people more frequently tended to have a greater number of sexual partners, though this effect was modest (bs ranging from 0.2 to 0.26). By contrast, being a victim of bullying did not consistently or appreciably affect the number of sexual partners one had (while victimization was positively correlated with participants’ number of dating partners, it was not correlated with their number of sexual partners. This might reflect the possibility that those who seek to date frequently are viewed as competitors by other same-sex individuals and bullied in order to prevent such behavior from taking place, though that much is only speculation).

While this data is by no means conclusive, it does present the possibility that bullying is not indicative of someone who is in poor shape physically, mentally, or socially; quite the opposite, in fact. Indeed, that is probably why bullying often appears to be so one-sided: those being victimized are not doing more to fight back because they are aware of how well that would turn out for them. Understanding this relationship between bullying and sexual success might prove rather important for anyone looking to reduce the prevalence of bullying. After all, if bullying is providing access to desirable social resources – including sexual partners – it will be hard to shift the cost/benefit analysis away from bullying being the more attractive option, barring the introduction of more attractive alternatives for achieving the same goals. If, for instance, bullying serves as a cue that potential mates use for assessing underlying characteristics that make the bully more attractive to others, finding new, less harmful ways of signaling those traits (and getting bullies to use those instead) could represent a viable anti-bullying technique.

But, until then, this kid is going to get so laid

As these relationships are merely correlational, however, there are other ways of interpreting them. It could be possible, for example, that the relationship between bullying and sexual success is accounted for by those who bully being more coercive towards their sexual partners as well as their victims, achieving a greater number of sexual partners, but not in the healthiest fashion. This interpretation would be somewhat complicated by the lack of sex differences in the current data, however, as it seems unlikely that women who bully are also more likely to coerce their male partners into sex they don’t really want. The only sex difference reported involved the relationship between bullying and dating, with women in the older sample who bullied people more often having a greater number of dating relationships (r = 0.5), relative to men (r = 0.13), as well as a difference in the younger sample with respect to the desire for dating relationships (female r = 0.28, male r = 0.03). It is possible, then, that men and women might bully others, at least at times, to obtain different goals, which ought to be expected when the interests of each sex diverge. Understanding those adaptive goals should prove key for effectively reducing bullying; at least I feel that understanding would be more profitable than positing that bullies are mean because they wish to make others as miserable as they are, crave attention, or other such implausible evolutionary functions.

References: Volk, A., Dane, A., Marini, Z., & Vaillancourt, T. (2015). Adolescent bullying, dating, and mating: Testing an evolutionary hypothesis. Evolutionary Psychology, DOI: 10.1177/1474704915613909

Inequality Aversion, Evolution, And Reproduction

Here’s a scenario that’s not too difficult to imagine: a drug company has recently released a paper claiming that a product it produces is both safe and effective. It would be foolish of any company with such a product to release a report saying their drugs were in any way harmful or defective, as it would likely lead to a reduction in sales and, potentially, a banning or withdrawal of the drugs from the wider market. Now, one day, an outside researcher claims to have some data suggesting that the drug company’s interpretation of their data isn’t quite right; once a few other data points are considered, it becomes clear that the drug is only contextually effective and, in other cases, not really effective at all. Naturally, were some representatives of the drug company asked about the quality of this new data, one might expect them to engage in a bit of motivated reasoning: some concerns might be raised about the quality of the new research that otherwise would not be, were its conclusions different. In fact, the drug company would likely wish to see the new research written up to be more supportive of their initial conclusion that the drug works. Because of their conflict of interests, however, expecting those representatives to give an unbiased appraisal of research suggesting the drug is much less effective than previously stated would be unrealistic. For this reason, you probably shouldn’t ask representatives from the drug company to serve as reviewers for the new research, as they’d be assessing both the quality of their own work and the quality of the work of others with factors like ‘money’ and ‘prestige’ on the table.

“It must work, as it’s been successfully making us money; a lot of money”

On an entirely unrelated note, I was the lucky recipient of a few comments about some work of mine concerning inequality aversion: the idea that people dislike inequality per se (or at least when they get the short end of the inequality stick) and are willing to actually punish it. Specifically, I happen to have some data suggesting that people do not punish inequality per se: they are much more interested in punishing losses, with inequality only playing a secondary role in – occasionally – increasing the frequency of that punishment. To place this in an easy example, let’s consider TVs. If someone broke into your house and destroyed your TV, you would likely want to see the perpetrator punished, regardless of whether they were richer or poorer than you. Similarly, if someone went out and bought themselves a TV (without having any effect on yours), you wouldn’t really have any urge to punish them at all, whether they were poorer or richer than you. If, however, someone broke into your house and took your TV for themselves, you would likely want to see them punished for their actions; if the thief happened to be poorer than you, though, this might incline you to go after them a bit less. This example isn’t perfect, but it basically describes what I found.

Inequality aversion would posit that people show a different pattern of punitive sentiments: that you would want to punish people who end up better off than you, regardless of how they got that way. This means that you’d want to punish the guy who bought the TV for himself if it meant he ended up better off than you, even though he had no effect on your well-being. Alternatively, you wouldn’t be particularly inclined to punish the person who stole/broke your TV either unless they subsequently ended up better off than you. If they were poorer than you to begin with and were still poorer than you after stealing/destroying the TV, you ought not to be particularly interested in seeing them punished.

In case that wasn’t clear, the argument being put forth is that how well you are doing relative to others ought to be used as an input for punishment decisions to a greater extent – a far greater one – than absolute losses or gains are.
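The contrast between the two models can be put more starkly still. Here is a toy illustration (mine, not from any of the papers discussed; the true/false flags are just one illustrative configuration of the scenario features above):

```python
# Two candidate punishment rules applied to the TV scenarios above.
# `lost` = you suffered a loss; `they_end_ahead` = they wind up better off
# than you. The flags for each scenario are illustrative, not data.

scenarios = [
    ("they broke your TV",       True,  False),
    ("they bought their own TV", False, True),
    ("they stole your TV",       True,  True),
]

def loss_based(lost, they_end_ahead):
    return lost             # punish harms, whatever the relative standing

def inequality_based(lost, they_end_ahead):
    return they_end_ahead   # punish disadvantageous inequality, however it arose

for name, lost, ahead in scenarios:
    print(name, "| loss model:", loss_based(lost, ahead),
          "| inequality model:", inequality_based(lost, ahead))
```

The two rules disagree on the TV-buyer and the TV-breaker, which is exactly where the data can tell them apart.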

Now there’s a lot to say about that argument. The first thing to say is that, empirically, it is not supported by the data I just mentioned: if people were interested in punishing inequality itself, they ought to be willing to punish that inequality regardless of how it came about: stealing a TV, buying a TV, or breaking a TV should be expected to prompt very similar punishment responses; it’s just that they don’t: punishment is almost entirely absent when people create inequality by benefiting themselves at no cost to others. By contrast, punishment is rather common when costs are inflicted on someone, whether those costs involve taking (where one party benefits while the other suffers) or destruction (where one party suffers a loss at no benefit to anyone else). On those grounds alone we can conclude that something is off about the inequality aversion argument: the theory does not match the data. Thankfully – for me, anyway – there are also many good theoretical justifications for rejecting inequality aversion.

“It’s a great home in a good neighborhood; pay no mind to the foundation”

The next thing to say about the inequality argument is that, in one regard, it is true: relative reproduction rates determine how quickly the genes underlying an adaptation spread – or fail to spread – throughout a population. As resources are not unlimited, a gene that reproduces itself 1.1 times for each time an alternative variant reproduces itself once will eventually replace the other in the population entirely, assuming the reproductive rates stay constant. It’s not enough for genes to reproduce themselves, then; they need to reproduce themselves more frequently than competitors if they metaphorically hope to stick around in the population over time. That this much is true might lure people into accepting the rest of the line of reasoning, though to do so would be a mistake for a few reasons.

Notable among these reasons is that “relative reproductive advantage” does not have three modes of “equal,” “better,” or “worse.” Instead, relative advantage is a matter of degree: a gene that reproduces itself twice as frequently as other variants is doing better than a gene that does so with 1.5 times the frequency; a gene that reproduces itself three times as frequently will do better still, and so on. As relative reproductive advantages can be large or small, we ought to expect mechanisms that generate larger relative reproductive advantages to be favored over those which generate smaller ones. On that point, it’s worth bearing in mind that the degree of relative reproductive advantage is an abstract quantity composed of absolute differences between variants. This is the same point as noting that, even if the average woman in the US has 2.2 children, no woman actually has two-tenths of a child lying around; they only come in whole numbers. That means, of course, that evolution (metaphorically) must care about absolute advantages to precisely the same degree it cares about relative ones, as maximizing a relative reproductive rate is the same thing as maximizing an absolute reproductive rate.
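As a quick sketch of that replacement arithmetic (my own toy numbers, using the 1.1-to-1 advantage mentioned above):

```python
# Frequency of a variant with relative fitness 1.1 (vs. 1.0) under simple
# proportional selection; a small per-generation edge compounds quickly.

def next_freq(p, w_a=1.1, w_b=1.0):
    """Frequency of variant A after one generation of selection."""
    return (p * w_a) / (p * w_a + (1 - p) * w_b)

p, generations = 0.01, 0
while p < 0.99:
    p = next_freq(p)
    generations += 1
print(generations)  # ~97 generations to go from 1% of the population to 99%
```

The same loop makes the other point too: each generation’s change in frequency is built entirely out of absolute differences in reproduction; there is no separate “relative” quantity for selection to see.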

The question remains, however, as to what kind of cognitive adaptations would arise from that state of affairs. On the one hand, we might expect adaptations that primarily monitor one’s own state of affairs and make decisions based on those calculations. For instance, if a male with two mates has an option to pursue a third, and the expected fitness benefits of doing so outweigh the expected costs, then the male in question would likely pursue the opportunity. On the other hand, we might follow the inequality aversion line of thought and say that the primary driver of the decision to pursue this additional mate should be how well the male in question is doing relative to his competitors. If most (or should it be all?) of his competitors currently have fewer than two mates, then the cognitive mechanisms underlying his decision should generate a “don’t pursue” output, even if the expected fitness costs are smaller than the benefits. It’s hard to imagine how this latter strategy is expected to do better (much less far better) than the former, especially in light of the fact that calculating how everyone else is doing is more costly and error-prone than calculating how you are doing. It’s similarly hard to imagine how the latter strategy would do better if the state of the world changes: after all, just because someone is not currently doing as well as you, it does not mean they won’t eventually be. If you miss an opportunity to be doing better today, you may end up relatively disadvantaged in the long run.
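To see the intuition in numbers, consider a rough simulation (the distributions and parameters are entirely my own assumptions, not a model from any of the papers discussed) pitting the two decision rules against the same stream of opportunities:

```python
import random
random.seed(1)

# Each opportunity carries an expected fitness benefit and cost (assumed
# uniform draws). The absolute rule pursues whenever benefit exceeds cost;
# the relative rule additionally requires that rivals currently be ahead.

def lifetime_fitness(rule, trials=10_000):
    total = 0.0
    for _ in range(trials):
        benefit = random.uniform(0, 2)
        cost = random.uniform(0, 1)
        rivals_ahead = random.random() < 0.5  # rivals doing better half the time
        if rule(benefit, cost, rivals_ahead):
            total += benefit - cost
    return total

absolute = lambda b, c, ahead: b > c
relative = lambda b, c, ahead: b > c and ahead

print(lifetime_fitness(absolute))  # roughly twice the accumulated payoff of...
print(lifetime_fitness(relative))  # ...the rule that waits until it is behind
```

Under these assumptions, the relative-standing rule forgoes about half of the profitable opportunities while also paying the extra cognitive cost of tracking everyone else; it is hard to see how such a mechanism outcompetes the simpler one.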

“I do see her more than the guy she’s cheating on me with, so I’ll let it slide…”

I’m having a hard time seeing how a mechanism that operates on an expected fitness cost/benefit analysis would get out-competed by a more cognitively-demanding strategy that either ignores such a cost/benefit analysis or takes it and adds something irrelevant into the calculations (e.g., “get that extra benefit, but only so long as other people are currently doing better than you”). As I mentioned initially, the data shows the absolute cost/benefit pattern predominates: people do not punish others primarily on the basis of whether those others are doing better than them; they primarily punish on the basis of whether they themselves experienced losses. Nevertheless, inequality does play a secondary role – sometimes – in the decision regarding whether to punish someone for taking from you. I happen to think I have an explanation as to why that’s the case but, as I’ve also been informed by another helpful comment (which might or might not be related to the first one), speculating about such things is a bit on the taboo side and should be avoided. Unless one is speculating that inequality, and not losses, primarily drives punishment, that is.