Socially-Strategic Welfare

Continuing with the trend from my last post, I wanted to talk a bit more about altruism today. Having already discussed that people do, in fact, appear to engage in altruistic behaviors and possess some cognitive mechanisms that have been selected for that end, I want to move on to discussing the variance in altruistic inclinations. That is to say that people – both within and between populations – are differentially inclined towards altruistic behavior, with some people appearing rather uninterested in altruism, while others appear quite interested in it. The question of interest for many is how those differences are to be explained. One explanatory route would be to suggest that the people in question have, in some sense, fundamentally different psychologies. A possible hypothesis to accompany that explanation might go roughly as follows: if people have spent their entire lives being exposed to social messages about how helping others is their duty, their cognitive mechanisms related to altruism might have developed differently than someone who instead spent their life being exposed to the opposite message (or, at least, less of that previous one). On that note, let’s consider the topic of welfare.

In a more academic fashion, if you don’t mind…

The official website of Denmark suggests that such a message – that helping is a duty – might indeed be sent in that country, stating that:

The basic principle of the Danish welfare system, often referred to as the Scandinavian welfare model, is that all citizens have equal rights to social security. Within the Danish welfare system, a number of services are available to citizens, free of charge.

Provided that this statement accurately characterizes what we would consider the typical Danish stance on welfare, one might imagine that growing up in such a country could lead individuals to develop substantially different views about welfare than, say, someone who grew up in the US, where opinions are quite varied. In my non-scientific and anecdotal experience, while some in the US might consider the country a welfare state, those same people frequently seem to be the ones who think that is a bad thing; those who think it’s a good thing often seem to believe the US is not nearly enough of a welfare state. At the very least, the US doesn’t advertise a unified belief about welfare on its official site.

On the other hand, we might consider another hypothesis: that Danes and Americans don’t necessarily possess any different cognitive mechanisms in terms of their being designed for regulating altruistic behavior. Instead, members of both countries might possess very similar underlying cognitive mechanisms which are being fed different inputs, resulting in the different national beliefs about welfare. This is the hypothesis that was tested by Aaroe & Petersen (2014). The pair make the argument that part of our underlying altruistic psychology is a mechanism that functions to determine deservingness. This hypothetical mechanism is said to use cues of laziness as inputs: in the presence of a perceived needy but lazy target, altruistic inclinations towards that individual should be reduced; in the presence of a needy, hard-working, but unlucky individual, these inclinations should be augmented. Thus, cross-national differences, as well as within-group differences, concerning support for welfare programs should be explained, at least in part, by perceptions of deservingness (I will get to the why part of this explanation later).

Putting those ideas together, two countries that differ in their willingness to provide welfare should also differ in their perceptions of the recipients in general. However, there are exceptions to every rule: even if you believe (correctly or incorrectly) that group X happens to be lazy and undeserving of welfare, you might believe that a particular member of group X bucks that trend and does deserve assistance. This is the same thing as saying that while men are generally taller than women, you can find exceptions where a particular woman is quite tall or a man quite short. This leads to a corollary prediction that Aaroe & Petersen examine: despite decades of exposure to different social messages about welfare, participants from the US and Denmark should come to agree on whether or not a particular individual deserves welfare assistance.

Never have I encountered a more deserving cause

The authors sampled approximately 1,000 participants from each of the US and Denmark, with each sample designed to be representative of its home country’s demographics. Participants were then surveyed on their views about people who receive social welfare via a free-association task in which they were asked to write descriptors of those recipients. Words that referred to the recipients’ laziness or poor luck were coded to determine which belief was the more dominant one (as defined by the number of lazy words minus the number of unlucky ones). As predicted, the lazy stereotype was more dominant in the US than in Denmark, with Americans listing an average of 0.3 more words referring to laziness than to luck; approximately four times the difference observed in Denmark, where the two beliefs were more balanced.
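
To make that scoring concrete, here is a minimal sketch of how such a dominance score could be computed. The word lists and responses below are invented for illustration, not taken from the paper; treat this purely as a toy version of the arithmetic.

```python
# Toy version of the stereotype dominance score: for each participant,
# count free-association words coded as "lazy" vs. "unlucky" and take
# the difference. A positive sample mean indicates the lazy stereotype
# dominates. Word lists and responses here are hypothetical.

LAZY_WORDS = {"lazy", "freeloader", "unmotivated"}
UNLUCKY_WORDS = {"unlucky", "unfortunate", "victim"}

def dominance_score(response):
    lazy = sum(word in LAZY_WORDS for word in response)
    unlucky = sum(word in UNLUCKY_WORDS for word in response)
    return lazy - unlucky

# Three hypothetical participants' free associations:
responses = [
    ["lazy", "poor"],           # score: +1
    ["unlucky", "struggling"],  # score: -1
    ["freeloader", "lazy"],     # score: +2
]

scores = [dominance_score(r) for r in responses]
print(sum(scores) / len(scores))  # mean dominance for this toy sample: ~0.67
```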

In line with that previous finding was the fact that Americans were also more likely to support the tightening of welfare restrictions (M = 0.57) than the Danes (M = 0.49, on a 0-1 scale). However, this difference between the two samples only existed under the condition of informational uncertainty (i.e., when participants were thinking about welfare recipients in general). When presented with a welfare recipient who was described as the victim of a work-related accident and motivated to return to work, the US and Danish citizens both agreed that welfare restrictions for people like that person should not be tightened (M = 0.36 and 0.35, respectively); when this recipient was instead described as able-bodied but unmotivated to work, the Americans and Danes once again agreed, suggesting that welfare restrictions should be tightened for people like him (M = 0.76 and 0.79). In the presence of more individualizing information, then, the national stereotypes built over a lifetime of socialization appear to get crowded out, as predicted. All it took was about two sentences’ worth of information to get the US and Danish citizens to agree. This pattern of data would seem to support the hypothesis that the same universal psychological mechanisms reside in both populations, and that their differing views tend to be the result of their being fed different information.

This brings us to the matter of why people are using cues of laziness to determine who should receive assistance, a question which is not explicitly addressed in the body of the paper itself. If the psychological mechanisms in question functioned to reduce the need of others per se, laziness cues should not be relevant. Returning to the example from my last post, for instance, mothers do not tend to withhold breastfeeding from infants on the basis of whether those infants are lazy. Instead, breastfeeding seems better designed to reduce need per se in the infants. It’s more likely that the mechanisms responsible for determining these welfare attitudes are instead designed to build lasting friendships (Tooby & Cosmides, 1996): by assisting an individual today, you increase the odds that they will be inclined to assist you in the future. This altruism might be especially relevant when the individual is in more severe need, as the marginal value of altruism in such situations is larger, relative to when they’re less needy (in the same way that a very hungry individual values the same amount of food more than a slightly hungry one; the same food is simply a better return on the same investment when given to the hungrier party). However, lazy individuals are unlikely to be able to provide such reciprocal assistance – even if they wanted to – as the factors determining their need are chronic, rather than temporary. Thus, while both the lazy and the motivated individual are needy, the lazy individual is the worse social investment; the unlucky one is much better.

Investing in toilet futures might not have been the wisest retirement move

In this case, then, perceptions of deservingness appear to be connected to adaptations that function to build alliances. Might perceptions of deservingness in other domains serve a similar function? I think it’s probable. One such domain is the realm of moral punishment, where transgressors are seen as being deserving of punishment. In this case, if victimized individuals make better targets of social investment than non-victimized ones (all else being equal), then we should expect people to direct altruism towards the former group; when it comes to moral condemnation, the altruism takes the form of assisting the victimized individual in punishing the transgressor. Despite that relatively minor difference, the logic here is precisely the same as my explanation for welfare attitudes. The moral explanation would require that moral punishment contain an alliance-building function. When most people think about morality, they don’t tend to think about building friendships, largely owing to the impartial components of moral cognition (since impartiality opposes partial friendships). I think that problem is easy enough to overcome; in fact, I deal with it in an upcoming paper (Marczyk, in press). Then again, it’s not as if welfare is an amoral topic, so there’s overlap to consider as well.

References: Aaroe, L., & Petersen, M. (2014). Crowding out culture: Scandinavians and Americans agree on social welfare in the face of deservingness cues. The Journal of Politics, 76, 684-697.

Marczyk, J. (in press). Moral alliance strategies theory. Evolutionary Psychological Science.

Tooby, J. & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

Phrasing The Question: Does Altruism Even Exist?

There’s great value in being precise when it comes to communication (if you want your message to be understood as you intended it, anyway; when clarity isn’t the goal, by all means, be imprecise). While that may seem trivial enough, it is my general experience that many communicative conflicts in psychology arise because people are often unaware of, or at least less than explicit about, the level of analysis at which they’re speaking. As an example of these different levels of analysis, today I will consider a question that many people wonder about: does altruism really exist? While definitions do vary, perhaps the most common definition of altruism involves the benefiting of another individual at the expense of the actor. So, to rephrase the question a little, “Do people really benefit others at an expense to themselves, or are ostensibly altruistic acts merely self-interest in disguise?”

“I would have saved his life if you wouldn’t have thought me selfish for doing so”

There are three cases I’m going to consider to help demonstrate these different levels of analysis. The first two examples are human-centric, as they have a greater bearing on the initial question: reciprocal exchanges and breastfeeding. In the former case – reciprocal altruism – two individuals will provide benefits to each other in the hopes of receiving similar assistance in turn. This type of behavior is often summarized with the “you scratch my back and I’ll scratch yours” line. In the case of breastfeeding, the bodies of mammalian mothers will produce a calorically-valuable milk, which they allow their dependent offspring to feed on. This latter type of altruism is generally not reciprocal in nature, as most mothers do not breastfeed their infants in the hope of, say, their infant one day breastfeeding them.

But are these acts really altruistic? After all, if I’m doing a favor for you in the hopes that you’ll do one for me later, it seems that I’m not enduring a cost to provide you with a benefit; I’m enduring a cost to try and provide me with a benefit. As for breastfeeding, offspring share some of their mother’s genes, so allowing an infant to breastfeed is, in the genetic sense, beneficial for the mother – at least for a time, after which weaning conflicts tend to kick in. If the previous thoughts ran through your head, chances are that you’re thinking about some of the right things but in the wrong way, blurring the lines between different levels of analysis. Allow me to explain.

In a post last year, I discussed what are commonly known as the “big four” questions one might ask about a biological mechanism, like the psychological ones that generate altruistic behavior: how does it proximately (immediately) function; how does it develop over one’s life; what is its evolutionary history with respect to other species; and what is its evolved function? These questions all require different considerations and evidence to answer and, in many cases, can be informative as to answers at other levels. Despite their mutually informative nature, they are nonetheless distinct.

The first and fourth questions (proximate and evolutionary functioning) are the most relevant for the current matter. Let’s start by considering reciprocal altruism. The first question we might ask concerns the proximate functioning of the behavior: do people behave in ways that deliver benefits to other individuals and carry costs for the actors? Well, we certainly seem to. Some of these acts of reciprocal altruism might be based on relatively short-lived and explicit exchanges: I give you my money, you give me your goods and/or services. In other cases, the exchanges might be longer-lived and more implicit: I give you help today, and you should be inclined to give me help down the road when I need it. Demonstrating that these acts are, in fact, altruistic is relatively straightforward: in the first example, for instance, I would be better off getting my goods/services and keeping my money. The act of giving does not provide me with a direct benefit. Even though we might both benefit from the exchange (gains from trade, and all that), that doesn’t mean each portion of the exchange isn’t altruistic. Proximately, then, we can say that people are altruistic or, more conservatively, that people engage in altruistic behaviors from time to time.

Score one for team good

But what about the mechanisms generating these reciprocally-altruistic behaviors: do they function to deliver benefits to others? That is to ask whether these mechanisms were selected for delivering benefits to others. The answer to this question depends on which part of the system you’re looking at. In the broader sense, that answer is “no,” inasmuch as the cognitive systems for reciprocal exchanges appear to owe their existence to receiving benefits from others, rather than providing them; the providing is just instrumental to another goal. This would mean we have altruistic behavior that is the product of a non-altruistic system, which is a perfectly possible outcome. However, in a narrower sense, the answer to that question can be “yes,” inasmuch as the broader cognitive system engaged in reciprocal exchanges is made up of a number of subsystems: one such subsystem needs to monitor the needs of others and generate behavior to deliver benefits to them in order for the system to work, making that bit seem to be adapted for providing altruism; another piece needs to monitor the return on those investments, down-regulating altruistic behavior in the absence of reciprocity (or other relevant cues). This is where being precise really begins to count: some parts of a system might be considered altruistic, while others are more selfish.
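
To make that division of labor concrete, here’s a minimal toy model of those two subsystems. Nothing below comes from the literature; the update rule and all the parameters are invented purely to illustrate the logic of an altruistic piece feeding a selfish regulator:

```python
# Toy model of a reciprocal-exchange system built from two subsystems:
# an "altruistic" piece that delivers benefits in proportion to a
# partner's need, and a "selfish" piece that monitors the return on
# those investments and down-regulates giving when reciprocity is
# absent. The update rule and parameters are invented for illustration.

class ReciprocityRegulator:
    def __init__(self, willingness=1.0):
        self.willingness = willingness  # 0 = never help, 1 = help fully

    def help_amount(self, partner_need):
        # Altruistic subsystem: benefits delivered scale with need.
        return self.willingness * partner_need

    def observe_return(self, given, received):
        # Selfish subsystem: nudge willingness up or down based on the
        # return on investment (crudely, a multiplicative update).
        if given > 0:
            ratio = min(received / given, 2.0)
            self.willingness = max(0.0, min(1.0, self.willingness * (0.5 + 0.5 * ratio)))

ally = ReciprocityRegulator()
for received in [1.0, 0.5, 0.0, 0.0]:  # a partner who reciprocates less and less
    given = ally.help_amount(partner_need=1.0)
    ally.observe_return(given, received)
    print(round(ally.willingness, 2))  # 1.0, 0.75, 0.38, 0.19 -> giving decays
```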

Now let’s turn to the breastfeeding example. Beginning again with the proximate question, breastfeeding certainly seems like an altruistic behavior: the mother’s body pays a metabolic cost to create the calorically-rich milk which is then consumed by the infant. So the mother is paying a biological cost to deliver a benefit to another individual, making the behavior altruistic. In the functional sense of the word, this behavior appears to be the result of adaptations for altruism: mothers of a number of mammalian species are found to breastfeed their infants with little apparent need for reciprocity. The reason they can do so, as I previously mentioned, is that the infants share some portion of their mother’s genes, so the mother is, by proxy, improving her reproductive success by helping her offspring survive and thrive. Importantly, one needs to bear in mind that explaining the persistence of these altruistic mechanisms over time with kin selection does not make them any less altruistic. In much the same way, while the manufacturing of cars owes its existence to the process being profitable, that doesn’t mean that I’m inclined to think of cars as really being devices designed to make money.

The third example of altruism I wanted to mention is an interesting one, involving a certain parasite – the lancet liver fluke – that infects ants (among other things). In brief, this pathogen will alter an ant’s behavior such that the ant will dangle from the tip of a blade of grass, making it more likely to get eaten by a passing grazing animal (the parasite then travels from the grazer to snails to ants and then back into the grazers; it’s a rather involved reproductive cycle). In the proximate sense, this behavior of the ant is altruistic inasmuch as the ant is suffering a cost – death – to deliver a benefit to the parasite. However, the ant possesses no cognitive mechanisms designed for this function; the adaptations for making the ant behave as it does are found within the parasite. In this case, while the proximate behavior of the ant might appear to be altruistic, it is not because of any altruistic adaptation on the part of the ant.

The new poster child for increasing altruism

Depending, then, on what one means by “really” when asking if something is “really” altruistic, one can get vastly different answers to the question. Some behavior may or may not be proximately altruistic, the system under consideration may contain both altruistic and non-altruistic mechanisms, and the extent of that altruism can also vary. These examples should highlight the considerable subtlety that underlies such analyses, hopefully impressing upon you the point that one can easily stumble, instead of progress, if ideas are not carefully selected and understood. There are, of course, other realms we could consider – like altruism that functions to signal traits about the actor, to gain social status, or whether the immediate motives of an actor are altruistic – but the general analyses, rather than their specific details, are what is important here. Thinking about what benefits organisms might reap through their altruistic behavior is a very valuable line of thought; it just shouldn’t be confused with other meaningful levels of thought.

The Implicit Assumptions Test

Let’s say you have a pet cause to which you want to draw attention and support. There are a number of ways you might go about trying to do so, honesty being perhaps the most common initial policy. While your initial campaign is met with a modest level of success, you’d like to grow your brand, so to speak. As you start researching how other causes draw attention to themselves, you notice an obvious trend: big problems tend to get more support than smaller ones: that medical condition affecting 1-in-4 people is much different than one affecting 1-in-10,000. Though you realize it sounds a bit perverse, if you could somehow make your pet problem a much bigger one than it actually is – or at least seem like it is – you would likely attract more attention and funding. There’s only one problem standing in your way: reality. When most people tell you that your problem isn’t much of one, you’re kind of out of luck. Or are you? What if you could convince others that what people are telling you isn’t quite right? Maybe they think your problem isn’t much of one but, if their reports can’t be trusted, now you have more leeway to make claims about the scope of your issue.

You finally get that big fish you always knew you actually caught

This brings us once again to the matter of the implicit association test, or IAT. According to its creators, the IAT “…measures attitudes and beliefs that people may be unwilling or unable to report,” making that jump from “association” to “attitudes” in a timely fashion. This kind of test could serve a valuable end for the fundraiser in the above example, as it could potentially increase the perceived scope of your problem. Not finding enough people who are explicitly racist to make your case that the topic should be getting more attention than it currently is? Well, that could be because racism is, by and large, a socially-undesirable trait to display and, accordingly, many people don’t want to openly say they’re a racist even if they hold some racial biases. If you had a test that could plausibly be interpreted as saying that people hold attitudes they explicitly deny, you could talk about how racism is much more common than it seems to be.

This depends on how one interprets the test, though: all the IAT measures is very fast, immediate reaction times when it comes to pushing buttons. I’ve discussed the IAT on a few occasions: first with regard to what precisely the IAT is (and might not be) measuring and, more recently, with respect to whether IAT-like tests that use response times as measures of racial bias actually predict anything when it comes to actual behaviors. The quick version of both of those posts is that we ought to be careful about drawing a connection between measures of reaction time in a lab and racial biases in the real world that cause widespread discrimination. In the case of shooting decisions, for instance, a more realistic task in which participants used a simulation with a gun, instead of just pressing buttons at a computer, resulted in the opposite pattern of results from what many IAT tests would predict: participants were actually slower to shoot black suspects and more likely to shoot unarmed white suspects. It’s not enough to just assume that, “of course these different reaction times translate into real-world discrimination”; you need to demonstrate it first.

This brings us to a recent meta-analysis of IAT experiments by Oswald et al. (2013) examining how well the IAT did at predicting behaviors, and whether it was substantially better than the explicit measures being used in those experiments. There was, apparently, a previous meta-analysis of IAT research that did find such things – at least for certain, socially-sensitive topics – and this new meta-analysis seems to be a response to the former one. Oswald et al. (2013) begin by noting that the results of IAT research have been brought out of the lab into practical applications in law and politics; a matter that would be more than a little concerning if the IAT actually wasn’t measuring what it’s interpreted by many to be measuring, such as evidence of discrimination in the real world. They go on to suggest that the previous meta-analysis of IAT effects lacked a degree of analytic and methodological validity that they hoped their new analysis would address.

Which is about as close as academic publications come to outright shit-talking

For example, the authors were interested in examining whether various experimental definitions of discrimination were differentially predicted by the IAT and explicit measures, whereas they had previously all been lumped into the same category by the last analysis. Oswald et al. (2013) grouped these operationalizations of discrimination into six categories: (1) measured brain activity, which is a rather vague and open-to-interpretation category; (2) response times in other tasks; (3) microbehavior, like posture or expression of emotions; (4) interpersonal behavior, like whether one cooperates in a prisoner’s dilemma; (5) person perception (i.e., explicit judgments of others); and (6) political preferences, such as whether one supports policies that benefit certain racial groups or not. Oswald et al. (2013) also added in some additional, more recent studies that the previous meta-analysis did not include.

While there is a lot to this paper, I wanted to skip ahead to discussing a certain set of results. The first of these results is that, in most cases, IAT scores correlated very weakly with the discrimination criterion being assessed, averaging a meager correlation of 0.14. To the extent that the IAT is actually measuring implicit attitudes, those attitudes don’t seem to have much of a predictable effect on behavior. The exception to this pattern was in regard to the brain activity studies: that correlation was substantially higher (around 0.4). However, as brain activity per se is not a terribly meaningful variable when it comes to its interpretation, whether that tells us anything of interest about discrimination is an open question. Indeed, in the previous post I mentioned, the authors also observed an effect for brain activity, but it did not mean people were biased toward shooting black people; quite the opposite, in fact.

The second finding I would like to mention is that, in most cases, the explicit measures of attitudes toward other races being used by researchers were also very weakly correlated with the discrimination criterion being assessed, with an average correlation of 0.12 – about the same size as the implicit measures. Further, this value is apparently substantially below the value achieved by other measures of explicit attitudes, leading the authors to suggest that researchers really ought to think more deeply about which explicit measures they’re using. Indeed, when you’re asking questions about “symbolic racism” or “modern racism”, one might wonder why you’re not just asking about “racism”. The answer, as far as I can tell, is that, proportionately, very few people – and perhaps even fewer undergraduates, the population most often being assessed – actually express openly racist views. If you want to find much racism as a researcher, then, you have to dig deeper and kind of squint a little.

The third finding is that the above two measures – implicit and explicit – really didn’t correlate with each other very well either, averaging only a correlation of 0.14. As Oswald et al. (2013) put it:

“These findings collectively indicate, at least for the race domain…that implicit and explicit measures tap into different psychological constructs—none of which may have much influence on behavior…”

In fact, the authors estimate that the implicit and explicit measures collectively accounted for about 2.5% of the variance in discriminatory criterion behaviors concerning race, with each adding about a percentage point over and beyond the other measure. In other words, these effects are small – very small – and do a rather poor job of predicting much of anything.
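
Where do figures like that come from? Squaring a correlation gives the proportion of variance a predictor accounts for on its own, so the reported numbers are easy to sanity-check. The sketch below uses the average correlations mentioned above; the exact combined figure depends on the model specification, so treat this as rough back-of-the-envelope arithmetic rather than the paper’s actual computation:

```python
# Back-of-the-envelope check on "small correlations explain little
# variance". r squared is the proportion of criterion variance a
# predictor accounts for alone; the two-predictor R^2 formula combines
# them, discounting for how much the predictors overlap. These are
# rough figures, not the paper's exact model.

r_implicit = 0.14  # average IAT-criterion correlation
r_explicit = 0.12  # average explicit-measure-criterion correlation
r_between = 0.14   # average implicit-explicit correlation

print(r_implicit ** 2)  # ~0.020 -> about 2% of variance on its own
print(r_explicit ** 2)  # ~0.014 -> about 1.4% of variance on its own

# Standard R^2 for two correlated predictors:
r2 = (r_implicit**2 + r_explicit**2
      - 2 * r_implicit * r_explicit * r_between) / (1 - r_between**2)
print(r2)  # ~0.030 -> the same ballpark as the reported ~2.5%
```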

“Results: Answer was unclear, so we shook the magic ball again”

We’re left with a rather unflattering picture of research in this domain. The explicit measures of racial attitudes don’t seem to do very well at predicting behaviors, perhaps owing to the nature of the questions being asked. For instance, in the symbolic racism scale, the answer one provides to questions like, “How much discrimination against blacks do you feel there is in the United States today, limiting their chances to get ahead?” could have quite a bit to do with matters that have little, if anything, to do with racial prejudice. Sure, certain answers might sound racist if you believe there is an easy answer to that question and anyone who disagrees must be evil and biased, but for those who haven’t already drunk that particular batch of kool-aid, some reservations might remain. Using the implicit reaction times also seems to blur the line between actually measuring racist attitudes and measuring many other things, such as whether one holds a stereotype or whether one is merely aware of a stereotype (foregoing the matter of its accuracy for the moment). These reservations appear to be reflected in how poorly both methods predict much of anything.

So why do (some) people like the IAT so much even if it predicts so little? My guess, again, is that a lot of its appeal flows from its ability to provide researchers and laypeople alike with a plausible-sounding story to tell others about how bad a problem is in order to draw more support to their cause. It provides cover for your inability to explicitly find what you’re looking for – such as many people voicing opinions of racial superiority – and allows a much vaguer measure to stand in for it instead. Since more people fit that vaguer definition, the result is a more intimidating-sounding problem; whether it corresponds to reality can be beside the point if it’s useful.

References: Oswald, F. L., Mitchell, G., Blanton, H., Jaccard, J., & Tetlock, P. E. (2013). Predicting ethnic and racial discrimination: A meta-analysis of IAT criterion studies. Journal of Personality & Social Psychology, 105, 171-192.

Quid Pro Quo

Managing relationships is a task that most people perform fairly adeptly. That’s not to say that we do so flawlessly – we certainly don’t – but we manage to avoid most major faux pas with regularity. Despite our ability to do so, many of us would not be able to provide compelling answers that help others understand why we do what we do. Here’s a frequently referenced example: if you invited your friend over for dinner, many of you would likely find it rather strange – perhaps even insulting – if, after the meal, your friend pulled out his wallet and asked how much he owed you for the food. Though we would find such behavior strange or rude, when asked to explain what is rude about it, most people would verbally stumble. It’s not that the exchange of money for food is strange; that part is really quite normal. We don’t expect to go into a restaurant, be served, eat, and then leave without paying. There are also other kinds of strange goods and services – such as sex and organs – that people often do see something wrong with exchanging resources for, at least so long as the exchange is explicit; despite that, we often have less of a problem with people giving such resources away.

Alright; not quite implicit enough, but good try

This raises all sorts of interesting questions, such as why it is acceptable for people to give away things but not to accept money for them. Why would it be unacceptable for a host to expect his guests to pay, or for the guests to offer? The most straightforward answer is that the nature of these relationships is different: two friends have different expectations of each other than two strangers, for instance. While such an answer is true enough, it doesn’t really deepen our understanding of the matter; it just seems to note the difference. One might go a bit further and begin to document some of the ways in which these relationships differ, but without a guiding functional analysis of why they differ we would be stuck at the level of just noting differences. We could learn not only that business associates treat each other differently than friends (which we knew already), but also some of the ways they do. While documenting such things does have value, it would be nice to place such facts in a broader framework. On that note, I’d like to briefly consider one such descriptive answer to the matter of why these relationships differ before moving on to the latter point: the distinction between what have been labeled exchange relationships and communal relationships.

Exchange relationships are said to be those in which one party provides a good or service to the other in the hopes of receiving a comparable benefit in return; the giving thus creates the obligation for reciprocity. This is the typical consumer relationship that we have with businesses as customers: I give you money, you give me groceries. Communal relationships, by contrast, do not carry similar expectations; instead, these are relationships in which each party cares about the welfare of the other, for lack of a better word, intrinsically. This is more typical of, say, mother-daughter relationships, where the mother provisions her daughter not in the hopes of her daughter one day provisioning her, but rather because she earnestly wishes to deliver those benefits to her daughter. On the descriptive level, then, this difference in expectations of quid pro quo is supposed to differentiate the two types of relationships. Friends offering to pay for dinner are viewed as odd because they’re treating a communal relationship as an exchange one.

Many other social disasters might arise from treating one type of social relationship as if it were another. One of the most notable examples in this regard is the ongoing disputes over “nice guys”, nice guys, and the women they seek to become intimate with. To oversimplify the details substantially, many men will lament that women do not seem to be interested in guys who care about their well-being, but rather seek men who offer resources or treat them as less valuable. The men feel they are offering a communal relationship, but the women opt for the exchange kind. Many women return the volley, suggesting instead that many of the “nice guys” are actually entitled creeps who think women are machines you put niceness coins into in order to get them to dispense sex. Now it’s the men seeking the exchange relationships (i.e., “I give you dinner dates and you give me affection”), whereas the women are looking for the communal ones. But are these two types of relationships – exchange and communal – really that different? Are communal relationships, especially those between friends and couples, free of the quid-pro-quo style of reciprocity? There are good reasons to think that they are not quite different in kind, but rather different with respect to the details of the quids and quos.

A subject our good friend Dr. Lecter is quite familiar with

To demonstrate this point, I would invite you to engage in a little thought experiment: imagine that your friend or your partner decided one day to behave as if you didn’t exist: they stopped returning your messages, they stopped caring about whether they saw you, they stopped coming to your aid when you needed them, and so on. Further, suppose this new-found cold and callous attitude wouldn’t change in the future. About how long would it take you to break off your relationship with them and move on to greener pastures? If your answer to that question was any amount of time whatsoever, then I think we have demonstrated that the quid-pro-quo style of exchange still holds in such relationships (and if you believe that no amount of that behavior on another’s part would ever change how much you care about that person, I congratulate you on the depths of your sunny optimism and view of yourself as an altruist; it would also be great if you could prove it by buying me things I want for as long as you live while I ignore you). The difference, then, is not so much whether there are expectations of exchanges in these relationships, but rather concerns the details of precisely what is being exchanged for what, the time frame in which those exchanges take place, and the explicitness of those exchanges.

(As an aside, kin relationships can be free of expectations of reciprocity. This is because, owing to the genetic relatedness between the parties, helping them can be viewed – in the ultimate, fitness sense of the word – as helping yourself to some degree. The question is whether this distinction also holds for non-relatives.)

Taking those matters in order, what gets exchanged in communal relationships is, I think, something that many people would explicitly deny is getting exchanged: altruism for friendship. That is to say that people are using behavior typical of communal relationships as an ingratiation device (Batson, 1993): if I am kind to you today, you will repay me with [friendship/altruism/sex/etc.] at some point in the future; not necessarily immediately or at some dedicated point. These types of exchange, as one can imagine, might get a little messy to the extent that the parties are interested in exchanging different resources. Returning to our initial dinner example, if your guest offers to compensate you for dinner explicitly, it could mean that he considers the debt between you paid in full and, accordingly, is not interested in exchanging the resource you would prefer to receive (perhaps gratitude, complete with the possibility that he will be inclined to benefit you later if need be). In terms of the men and women example from before, men often attempt to exchange kindness for sex, but instead receive non-sexual friendship, which was not the intended goal. Many women, by contrast, feel that men should value the friendship…unless of course it’s their partner building friendship with another woman, in which case it’s clearly not just about friendship between them.

But why aren’t these exchanges explicit? It seems that one could, at least in principle, tell other people that you will invite them over for dinner if they will be your friend, in much the same way that a bank might extend a loan to a person and ask that it be repaid over time. If the implicit nature of these exchanges were removed, it seems that lots of people could be saved a lot of headache. The reason such exchanges cannot be made explicit, I think, has to do with the signal value of the exchange. Consider two possible friends: one of those friends tells you they will be your friend and support you so long as you don’t need too much help; the other tells you they will support you no matter what. Assuming both are telling the truth, the latter individual would make the better friend for you because they have a greater vested interest in your well-being: they will be less likely to abandon you in times of need, less likely to take better social deals elsewhere, less likely to betray you, and the like. In turn, that fact should incline you to help the latter more than the former individual. After all, it’s better for you to have your very-valuable allies alive and well-provisioned if you want them to be able to continue to help you to their fullest when you need it. The mere fact that you are valuable to them makes them valuable to you.

“Also, your leaving would literally kill me, so…motivation?”

This leaves people trying to walk a fine line between making friendships valuable in the exchange sense of the word (friendships need to return more than they cost, else they could not have been selected for), while publicly maintaining the representation that they are not grounded in explicit exchanges, so as to make themselves appear to be better partners. In turn, this would create the need for people to distinguish between what we might call “true friends” – those who have your interests in mind – and “fair-weather friends” – those who will only behave as your friend so long as it’s convenient for them. In that last example we assumed both parties were telling the truth about how much they value you; in reality we can’t ever be so sure. This strategic analysis of the problem leaves us with a better sense of why friendship relationships are different from exchange ones: while both involve exchanges, the exchanges do not serve the same signaling function, and so their form ends up looking different. People will need to engage in proximately altruistic behaviors for which they don’t expect immediate or specific reciprocity in order to credibly signal their value as an ally. Without such credible signaling, I’d be left taking you at your word that you really have my interests at heart, and that system is way too open to manipulation.

Such considerations could help explain, in part, why people are opposed to things like selling organs or sex for money but have little problem with such things being given away for free. In the case of organ sales, for instance, there are a number of concerns which might crop up in people’s minds, one of the most prominent being that it puts an explicit dollar sign on human life. While we clearly need to do so implicitly (else we could, in principle, be willing to exhaust all worldly resources trying to prevent just one person from dying today), to make such an exchange explicit turns the relationship into an exchange one, sending a message along the lines of, “your life is not worth all that much to me”. Conversely, selling an organ could send a similar message: “my own life isn’t worth that much to me”. Both statements could have the effect of making one look like a worse social asset even if, practically, all such relationships are fundamentally based in exchanges, and even if such a policy would have an overall positive effect on a group’s welfare.

References: Batson, C. (1993). Communal and exchange relationships: What is the difference? Personality & Social Psychology Bulletin, 19, 677-683.

DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE, 4(6): e5802. doi:10.1371/journal.pone.0005802

Some Thoughts On Side-Taking

Humans have a habit of inserting themselves into the disputes of other people. We often care deeply about matters concerning what other people do to each other and, occasionally, will even involve ourselves in disputes that previously had nothing to do with us; at least not directly. Though there are many examples of this kind of behavior, one of the most recent concerned the fatal shooting of a teen in Ferguson, Missouri, by a police officer. People from all over the country and, in some cases, other countries were quick to weigh in on the issue, noting who they thought was wrong, what they think happened, and what punishment, if any, should be doled out. Phenomena like that one are so commonplace in human interactions that the strangeness of the behavior likely goes almost entirely unappreciated. What makes the behavior strange? Well, intervening in other people’s affairs – attempting to control their behavior or inflict costs on them for what they did – tends to be costly. As it turns out, people aren’t exactly keen on having their behavior controlled by others and will, in many cases, aggressively resist those attempts.

Not unlike the free-spirited house cat

Let’s say, for instance, that you have a keen interest in killing someone. One day, you decide to translate that interest into action, attacking your target with a knife. If I were to attempt to intervene in that little dispute to try and help your target, there’s a very real possibility that some portion of your aggression might become directed at me instead. It seems as if I would be altogether safer if I minded my own business and let you get on with yours. In order for there to be selection for any psychological mechanisms that predispose me to become involved in other people’s disputes, then, there need to be some fitness benefits that outweigh the potential costs I might suffer. Alternatively, there might also be costs to me for not becoming involved. If the costs of non-involvement are greater than the costs of involvement, then there can also be selection for my side-taking mechanisms even if they are costly. So what might some of those benefits or costs be?

One obvious candidate is mutual self-interest. Though that term could cover a broad swath of meanings, I intend it in the proximate sense of the word at the moment. If you and I both desire that outcome X occurs, and someone else is going to prevent that outcome if either of us attempt to achieve it, then it would be in our interests to join forces – at least temporarily – to remove the obstacle in both of our paths. Translating this into a concrete example, you and I might be faced by an enemy who wishes to kill both of us, so by working together to kill him first, we can both achieve an end we desire. In another, less direct case, if my friend became involved in a bar fight, it would be in my best interests to avoid seeing my friend harmed, as an injured (or dead) friend is less effective at providing me benefits than a healthy one. In such cases, I might preferentially side with my friend so as to avoid seeing costs inflicted on him. In both cases, both the other party and I share a vested interest in the same outcome obtaining (in this case, the removal of a mutual threat).

Related to that last example is another candidate explanation: kin selection. As it is adaptive for copies of my genes to reproduce themselves regardless of which bodies they happen to be located in, assisting genetic relatives in disputes could similarly prove to be useful. A partially-overlapping set of genetic interests, then, could (and likely does) account for a certain degree of side-taking behavior, just as overlapping proximate interests might. By helping my kin, we are achieving a mutually-beneficial (ultimate-level) goal: the propagation of common genes.

A third possible explanation could also be grounded in reciprocal altruism, or long-term alliances. If I take your side today to help you achieve your goals, this might prove beneficial in the long term to the extent that it encourages you to take my side in the future. This explanation would work even in the absence of overlapping proximate or genetic interests: maybe I want to build my house where others would prefer I did not, and maybe you want to get warning labels attached to ketchup bottles. You don’t really care about my problem and I don’t really care about yours, but so long as you’re willing to scratch my back on my problem, I might also be willing to scratch yours.

Also not unlike the free-spirited house cat

There is, however, another prominent reason we might take the side of another individual in a dispute: moral concerns. That is, people could take sides on the basis of whether they perceive someone did something “wrong”. This strategy, then, relies on using people’s behavior to take sides. In that domain, locating the benefits to involvement or the costs to non-involvement becomes a little trickier. Using behavior to pick sides can carry some costs: you will occasionally side against your interests, friends, and family by doing so (to the extent that those groups behave in immoral ways towards others). Nevertheless, the relative upsides to involvement in disputes on the basis of morality need to exist in some form for the mechanisms generating that behavior to have been selected for. As moral psychology likely serves the function of picking sides in disputes, we could consider how well the previous explanations for side-taking fare at explaining moral side-taking.

We can rule out the kin selection hypothesis immediately as explaining the relative benefits to moral side taking, as taking someone’s side in a dispute will not increase your genetic relatedness to them. Further, a mechanism that took sides on the basis of kinship should be primarily using genetic relatedness as an input for side-taking behavior; a mechanism that uses moral perceptions should be relatively insensitive to kinship cues. Relatedness is out.

A mutualistic account of morality could certainly explain some of the variance we see in moral side-taking. If both you and I want to see a cost inflicted on an individual or group of people because their existence presents us with costs, then we might side against people who engage in behaviors that benefit them, representing such behavior as immoral. This type of argument has been leveraged to understand why people often oppose recreational drug use: the opposition might help people with long-term sexual strategies inflict costs on the more promiscuous members of a population. The complication that mutualism runs into, though, is that certain behaviors might be evaluated inconsistently in that respect. As an example, murder might be in my interests when in the service of removing my enemies or the enemies of my allies; however, murder is not in my interests when used against me or my allies. If you side against those who murder people, you might also end up siding against people who share your interests and murder people (who might, in fact, further your interests by murdering others who oppose them).

While one could make the argument that we also don’t want to be murdered ourselves – accounting for some or all of that moral representation of murder as wrong – something about that line doesn’t sit right with me: it seems to conceive of the mutual interest in an overly broad manner. Here’s an example of what I mean: let’s say that I don’t want to be murdered and you don’t want to be murdered. In some sense, we share an interest in common when it comes to preventing murder; it’s an outcome we both want to avoid. So let’s say one day I see you being attacked by someone who intends to murder you. If I were to come to your aid and prevent you from being killed, I have not necessarily achieved my goal (“I don’t want to be murdered”); I’ve just helped you achieve yours (“You don’t want to be murdered”). To use an even simpler example, if both you and I are hungry, we both share an interest in obtaining food; that doesn’t mean that my helping you get food is filling my interests or my stomach. Thus, the interest in the above example is not necessarily a mutual one. As I noted previously, in the case of friends or kin it can be a mutual interest; it just doesn’t seem to be the case when thinking about the behavior per se. My preventing your murder is only useful (in the fitness sense of the word) to the extent that doing so helps me in some way in the future.

Another account of morality, which differs from the above positions, posits that side-taking on the basis of behavior could help reduce the costs of becoming involved in the disputes of others. Specifically, if all (or at least a sizable majority of) third parties took the same side in a dispute, one side would back down without the need for fights to be escalated to determine the winner (as more evenly-matched fights might require increased fighting costs to determine a winner, whereas lopsided ones often do not). This is something of a cost-reduction model. While the idea that morality functions as a coordination device – the same way, say, a traffic light does – raises an interesting possibility, it too comes with a number of complications. Chief among those complications is that coordination need not require a focus on the behavior of the disputants. In much the same way that the color of a traffic light bears no intrinsic relationship to driving behavior but is publicly observable, coordination in the moral domain need not bear any resemblance to the behavior of the disputants. Third parties could, for instance, coordinate around the flip of a coin, rather than the behavior of the disputants. If anything, coin flips might be better tools than disputants’ behavior as, unlike behavior, the outcome of a coin flip is easily observable. Most immoral behavior is notably not publicly observable, making coordination around it something of a hassle.

And also making trials a thing…

What about the alliance-building idea? At first blush, taking sides on the basis of behavior seems like a much different type of strategy than siding on the basis of existing friendships. With some deeper consideration, though, I think there’s a lot of merit to the idea. Might behavior work as a cue for who would make a good alliance partner for you? After all, friendships have to start somewhere, and someone who was just stolen from might have a sudden need for partial partners that you might fill by punishing the perpetrator. Need provides a catalyst for new relationships to form. On the reverse end, that friend of yours who happens to be killing other people is probably going to end up racking up more than a few enemies: both the ones he directly impacted and the new ones who are trying to help his victims. If these enemies take a keen interest in harming him, he’s a riskier investment as costs are likely coming his way. The friendship itself might even become a liability to the extent that the people he put off are interested in harming you because you’re helping him, even if your help is unrelated to his acts. At such a point, his behavior might be a good indication that his value as a friend has gone down and, accordingly, it might be time to dump your friend from your life to avoid those association costs; it might even pay to jump on the punishing bandwagon. Even though you’re seeking partial relationships, you need impartial moral mechanisms to manage that task effectively.

This could explain why strangers become involved in disputes (they’re trying to build friendships and taking advantage of a temporary state of need to do so) and why side-taking on the basis of behavior rather than identity is useful at times (your friends might generate more hassle than they’re worth due to their behavior, especially since all the people they’re harming look like good social investments to others). It’s certainly an idea that deserves more thought.

Moral Stupefaction

I’m going to paint a picture of loss. Here’s a spoiler alert for you: this story will be a sad one.

Mark is sitting in a room with his cat, Tigger. Mark is a 23-year-old man who has lived most of his life as a social outcast. He never really fit in at school and he didn’t have any major accomplishments to his name. What Mark did have was Tigger. While Mark had lived a lonely life in his younger years, that loneliness had been kept at bay when, at the age of 12, he adopted Tigger. The two had been inseparable ever since, with Mark taking care of the cat with all of his heart. This night, as the two lay together, Tigger’s breathing was labored. Having recently become infected with a deadly parasite, Tigger was dying. Mark was set on keeping his beloved pet company in its last moments, hoping to chase away any fear or pain that Tigger might be feeling. Mark held Tigger close, petting him as he felt each breath grow shallower. Then they stopped coming altogether. The cat’s body went limp, and Mark watched the life of the only thing he had loved, and that had loved him, fade away.

As the cat was now dead and beyond experiencing any sensations of harm, Mark promptly got up to toss the cat’s body into the dumpster behind his apartment. On his way, Mark passed a homeless man who seemed hungry. Mark handed the man Tigger’s body, suggesting he eat it (the parasite which had killed Tigger was not transmittable to humans). After all, it seemed like a perfectly good meal shouldn’t go to waste. Mark even offered to cook the cat’s body thoroughly.

Now, the psychologist in me wants to know: Do you think what Mark did was wrong? Why do you think that? 

Also, I think we figured out the reason no one else liked Mark

If you answered “yes” to that question, chances are that at least some psychologists would call you morally dumbfounded. That is to say you are holding moral positions that you do not have good reasons for holding; you are struck dumb with confusion as to why you feel the way you do. Why might they call you this, you ask? Well, chances are because they would find your reasons for the wrongness of Mark’s behavior unpersuasive. You see, the above story has been carefully crafted to try and nullify any objections about proximate harms you might have. As the cat is dead, Mark isn’t hurting it by carelessly disposing of the body or even by suggesting that others eat it. As the parasite is not transmittable to humans, no harm would come of consuming the cat’s body. Maybe you find Mark’s behavior at the end disgusting or offensive for some reason, but your disgust and offense don’t make something morally wrong, the psychologists would tell you. After hearing these counterarguments, are you suddenly persuaded that Mark didn’t do something wrong? If you still feel he did, well, consider yourself morally dumbfounded as, chances are, you don’t have any more arguments to fall back on. You might even end up saying, “It’s wrong, but I don’t know why.”

The above scenario is quite similar to the ones presented to 31 undergraduate subjects in the now-classic paper on moral dumbfounding by Haidt, Bjorklund, & Murphy (2000). In the paper, subjects are presented with one reasoning task (the Heinz dilemma, asking whether a man should steal to help his dying wife) that involves trading off the welfare of one individual for another, and four other scenarios, each designed to be “harmless, yet disgusting”: a case of mutually-consensual incest between a brother and sister, where pregnancy was precluded (due to birth control and condom use); a case where a medical student cuts a piece of flesh from a cadaver to eat (the cadaver is about to be cremated and had been donated for medical research); a chance to drink juice that had a dead, sterilized cockroach stirred in for a few seconds and then removed; and a case where participants would be paid a small sum to sign and then destroy a non-binding contract that gave their soul to the experimenter. In the former two cases – incest and cannibalism – participants were asked whether they thought the act was wrong and, if they did, to try and provide reasons why; in the latter two cases – roach and soul – participants were asked if they would perform the task and, if they would not, why. After the participants stated their reasons, the experimenter would challenge their arguments in a devil’s-advocate type of way to try and get them to change their minds.

As a brief summary of the results: the large majority of participants reported that having consensual incest and removing flesh from a human cadaver to eat were wrong (in the latter case, I imagine they would similarly rate the removal of flesh as wrong even if it were not eaten, but that's beside the point), and a similarly-large majority were also unwilling to drink the roach juice or to sign the soul contract. On average, the experimenter was able to change about 16% of the participants' initial stances by countering their stated arguments. The finding of note that got this paper its recognition, however, is that, in many cases, participants would state reasons for their decisions that contradicted the story (e.g., that a child born of incest might have birth defects, though no child was born due to the contraceptives) and, when those concerns had been answered by the experimenter, that they still believed these acts to be wrong even if they could no longer think of any reasons for that judgment. In other words, participants appeared to generate their judgments of an act first (their intuitions), with the explicit verbal reasoning for their judgments being generated after the fact and, in some cases, seemingly disconnected from the scenarios themselves. Indeed, in all cases except the Heinz dilemma, participants rated their judgments as arising more from "gut feelings" than reasoning.

“fMRI scans revealed activation of the ascending colon for moral judgments…”

A number of facets of this work on moral dumbfounding are curious to me, though. One thing that has always struck me as dissatisfying is that the moral dumbfounding claims being made here are not what I would call positive claims (i.e., "people are using variable X as an input for determining moral perceptions"), but rather negative ones ("people aren't using conscious reasoning, or at least the parts of the brain doing the talking aren't able to adequately articulate the reasoning"). While there's nothing wrong with negative claims per se, I just happen to find them less satisfying than positive ones. I feel that this dissatisfaction owes its existence to the notion that positive claims help guide and frame future research to a greater extent than negative ones (but that could just be some part of my brain confabulating my intuitions).

My main issue with the paper, however, hinges on the notion that the acts in question were “harmless.” A lot is going to turn on what is meant by that term. An excellent analysis of this matter is put forth in a paper by Jacobson (2012), in which he notes that there are perfectly good, harm-based reasons as to why one might oppose, say, consensual incest. Specifically, what participants might be responding to was not the harm generated by the act in a particular instance so much as the expected value of the act. One example offered to help make that point concerns gambling:

Compare a scenario I’ll call Gamble, in which Mike and Judy—who have no creditors or dependents, but have been diligently saving for their retirement—take their nest egg, head to Vegas, and put it all on one spin of the roulette wheel. And they win! Suddenly their retirement becomes about 40 times more comfortable. Having gotten lucky once, they decide that they will never do anything like that again. Was what Mike and Judy did prudent?

The answer, of course, is a resounding "no." While the winning game of roulette might have been "harmless" in the proximate sense of the word, such an analysis would ignore risk. The expected value of the act was, on the whole, rather negative. Jacobson (2012) goes on to expand the example, asking now whether it would have been OK for the gambling couple to have used their child's college savings instead. The point here is that consensual incest can be considered similarly dangerous. Just because things turned out well in that instance, it doesn't mean that harm-based justifications for the condemnation are discountable; it could instead suggest that there exists a distinction between harm and risk that 30 undergraduate subjects are not able to articulate well when being challenged by a researcher. Like Jacobson (2012), I would condemn drunk driving as well, even if it didn't result in an accident.
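
To make the risk-versus-outcome distinction concrete, here's a minimal sketch in Python. It assumes Mike and Judy placed a single-number bet on an American roulette wheel (38 pockets, 35-to-1 payout) – roughly consistent with the "40 times more comfortable" figure – since Jacobson doesn't specify the bet:

```python
# Expected value vs. realized outcome for the Gamble scenario.
# Assumption: a single-number bet on an American roulette wheel,
# i.e., 1 winning pocket in 38 and a win that returns 36x the stake.

def expected_value(stake):
    p_win = 1 / 38
    win_return = 36 * stake  # stake back plus 35-to-1 winnings
    return p_win * win_return - stake

nest_egg = 100_000  # a hypothetical retirement fund
print(expected_value(nest_egg))  # about -5,263: a losing bet on average

# Winning doesn't change this number; the act was imprudent because
# its expected value was negative, not because of how it turned out.
```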

To bolster that case, I would also like to draw attention to one of the findings of the moral dumbfounding paper I mentioned before: about 16% of participants reversed their moral judgments when their harm-based reasoning was challenged. Though this finding is not often the one people focus on when considering the moral dumbfounding paper, I think it helps demonstrate the importance of this harm dimension. If participants were not using harm (or risk of harm) as an input for their moral perceptions, but rather only as a post-hoc justification, these reversals of opinion in the wake of reduced welfare concerns would seem rather strange. Granted, not every participant changed their mind – in fact, many did not – but that any of them did requires an explanation. If judgments of harm (or risk) are coming after the fact and not being used as inputs, why would they subsequently have any impact whatsoever?

“I have revised my nonconsequentialist position in light of those consequences”

Jacobson (2012) makes the point that perhaps the subjects were not so much morally dumbfounded as the researchers looking at the data were morally stupefied. That is to say, it's not that the participants didn't have reasons for their judgments (whether or not they were able to articulate them well) so much as the researchers didn't accept their viability, or weren't able to see their validity owing to their own theoretical blinders. If participants did not want to drink juice that had a sterilized cockroach dunked in it because they found it disgusting, they are not dumbfounded as to why they don't want to drink it; the researchers just aren't accepting the subjects' reason (it's disgusting) as valid. If, returning to the initial story in this post, people oppose treating a beloved (but dead) pet in ways that look more like indifference or contempt because they find such treatment offensive, that seems like a fine reason for their judgment. Whether or not offense gets classified as a harm by a stupefied researcher is another matter entirely.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

Jacobson, D. (2012). Moral dumbfounding and moral stupefaction. Oxford Studies in Normative Ethics, 2. DOI: 10.1093/acprof:oso/9780199662951.003.0012

Charitable Interpretations Were Never My Strong Suit

Never attribute to malice what is adequately explained by stupidity – Hanlon's Razor

Disagreement and dispute are pervasive parts of human life, arising for a number of reasons. As Hanlon's razor suggests, the charitable response to disagreement would be to just call someone stupid for disagreeing, rather than evil. Thankfully, these are not either/or types of aspersion we can cast, and we're free to consider those who disagree with us both stupid and evil if we so desire. Being the occasional participant in discussions – both in the academic and online worlds – I'm no stranger to either of those labels. The question of the accuracy of the aspersions remains, however: calling someone ignorant or evil could serve the function of spreading accurate information; then again, it could also serve the function of persuading others not to listen to what the target has to say.

“The other side doesn’t have the best interests of the Empire in mind like I do”

When persuasion gets involved, we are entering the realm where perceptions can be inaccurate, yet still be adaptive. Usually being wrong about the world carries costs, as incorrect information yields worse decision making. Believing inaccurately that cigarettes don't increase the probability of developing lung cancer will not alter the probability of developing a tumor after picking up a pack-a-day habit. If, however, my beliefs can cause other people to behave differently, then being wrong isn't quite as bad, as I might still do myself some good. For instance, even if my motives in a debate are purely and ruthlessly selfish, I might be able to persuade other people to support my side anyway by (1) suggesting that my point of view is not being driven by my underlying biases – but rather by the facts of the matter and my altruistic tendencies – and (2) suggesting that my opponent's perspective is not to be trusted (usually for the opposite set of reasons). The explanation for why people in debates frequently accuse others of not understanding their perspective, or of sporting particular sets of biases, might then have little to do with accuracy and more to do with convincing other people not to listen; the extent to which such accusations happen to be accurate might be more accidental than anything.

One example I discussed last year concerned the curious case of Kim Kardashian. Kim had donated 10% of some eBay sales to disaster relief, prompting many people to deride Kim’s behavior as selfishly motivated (even evil) and, in turn, also suggest that her donation be refused by aid organizations or the people in need themselves. It seemed to me that people were more interested in condemning Kim because they had something against her in particular, rather than because any of what she did was traditionally wrong or otherwise evil. It also seemed to me that, putting it lightly, Kim’s detractors might have been exaggerating her predilection towards evil by just a little bit. Maybe they were completely accurate – it’s possible, I suppose – it just didn’t seem particularly likely, especially given that many of the people condemning her probably knew very little about Kim on a personal level. If you want to watch other people make uncharitable interpretations of other people’s motives, I would encourage you to go observe a debate between people passionately arguing over an issue you couldn’t care less about. If you do, I suspect you will be struck by a sense that both sides of the dispute are, at least occasionally, being a little less than accurate when it comes to pinning motives and views on the other.

Alternatively, you could just observe the opposite side of a dispute you actually are invested in; chances are you will see your detractors as being dishonest and malicious, at least if the results obtained by Reeder et al (2005) are generalizable. In their paper, the researchers sought to examine whether one's own stance on an issue tended to color one's perceptions of the opposition's motives. In their first study, Reeder et al (2005) presented about 100 American undergraduates with a survey asking them both about their perceptions of the US war in Iraq (concerning matters such as what motivated Bush to undertake the conflict and how likely particular motives were to be part of that reason), as well as whether they supported the war personally and what their political affiliation was. How charitable were the undergraduates when it came to assessing the motives for other people's behavior?

“Don’t spend it all in one place”

The answer, predictably, tended to hinge on whether or not the participant favored the war themselves. In the open-ended responses, the two most common motives listed for going to war were self-defense and bringing benefits to the Iraqi people, freeing them from a dictatorship; the next two most common reasons were proactive aggression and hidden motives (like trying to take US citizens' minds off other issues, such as the economy). Among those who favored the war, 73% listed self-defense as a motive for the war, compared to just 39% of those who opposed it; conversely, proactive aggression was listed by 30% of those who supported the war, relative to 73% of those who opposed it. The findings were similar for ratings of self-serving motives: on a 1-7 scale (from being motivated by ethical principles to selfishness), those in favor of the war gave Bush a mean of 2.81; those opposed to the war gave him a 6.07. It's worth noting at this point that (assuming the scale is, in fact, measuring two opposite ends of a spectrum) both groups cannot be right about Bush's motives. Given that those neither opposed to nor supportive of the war tended to fall in between those two groups in their attributions of motives, it is also possible that both sides could well be wrong.

Interestingly – though not surprisingly – political affiliation per se did not have much predictive value for determining what people thought of Bush's motives for the war when one's own support for the war was entered into a regression model with it. What predicted people's motive attributions was largely their own view of the war. In other words, Republicans who opposed the war tended to view Bush largely the same as Democrats opposed to the war, just as Democrats supportive of the war viewed Bush the same as Republicans in favor of it. Reeder et al (2005) subsequently replicated these findings in a sample of Canadian undergraduates who, at the time, were far less supportive of the war on the whole than the American sample. Additionally, this pattern of results was also replicated when asking about the motives of other people who supported or opposed the war, rather than asking about Bush specifically. Further, when issues other than the war (in this case, abortion and gay marriage) were used, the same pattern of results emerged. In general, opposing an issue made those who supported it look more self-serving and biased, and vice versa.
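
For readers unfamiliar with what "entered into a regression model with it" means here, the following is a minimal sketch of that logic in Python, using synthetic data and variable names of my own invention (not Reeder et al's actual data or coding):

```python
# Sketch: does party affiliation predict attributed selfishness (the 1-7
# scale) once one's own support for the war is in the model? All data and
# names below are synthetic, for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
supports_war = rng.integers(0, 2, size=n)  # 1 = personally favors the war
# party is correlated with support, but has no independent effect:
republican = ((supports_war + rng.integers(0, 2, size=n)) > 1).astype(int)
# attributed selfishness is driven entirely by one's own support:
selfishness = 6 - 3 * supports_war + rng.normal(0, 0.8, size=n)

X = sm.add_constant(np.column_stack([supports_war, republican]))
fit = sm.OLS(selfishness, X).fit()
print(fit.params)  # support coefficient near -3; party coefficient near 0
```

With data generated this way, the regression recovers what the authors describe: once one's own support is in the model, party membership adds essentially nothing.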

The last set of findings – concerning abortion and gay marriage – was particularly noteworthy because of an addition to the survey: a measure of personal involvement in the issue. Rather than just being asked whether they supported or opposed one side of the issue, participants were also asked how important the issue was to them and how likely they were to change their mind about their stance. As one might expect, this tendency to see your opposition as selfish, biased, close-minded, and ignorant was magnified by the extent to which one found the issue personally important. Though I can't say for certain, I would venture a guess that, in general, the importance of an issue to me is fairly uncorrelated with how much other people know about it. In fact, if these judgments of other people's motives and knowledge were driven by the facts of the matter, then the authors should not have observed this effect of issue importance. That line of reasoning, again, suggests that these perceptions are probably aimed more at persuasion than accuracy. The extent to which they're accurate is likely beside the point.

“Damn it all; I was aiming for the man”

While I find this research interesting, I do wish that it had been grounded in the theory I initially mentioned, concerning persuasion and accuracy. Instead, Reeder et al (2005) ground their account in naive realism, the tenets of which seem to be (roughly) that (a) people believe they are objective observers and (b) that other objective observers will see the world as they do, so (c) anyone who doesn't agree must be ignorant or biased. Naive realism looks more like a description of the results they found than an explanation for them. In the interests of completeness, the authors also ground their research in self-categorization theory, which states that people seek to differentiate their group from other groups in terms of values, with the goal of making their own group look better. Again, this sounds like a description of behavior, rather than an explanation for it. As the authors don't seem to share my taste for a particular type of theoretical explanation grounded in considerations of evolutionary function here (at least in terms of what they wrote), I am forced to conclude that they're at least ignorant, if not downright evil.*

References: Reeder, G., Pryor, J., Wohl, M., & Griswell, M. (2005). On attributing negative motives to others who disagree with our opinions. Personality & Social Psychology Bulletin, 31, 1498-1510.

*Not really


An Eye For Talent

Rejection can be a painful process for almost anyone (unless you're English). For many, rejection is what happens when a (perhaps overly-bloated) ego ends up facing the reality that it really isn't as good as it likes to tell people it is. For others, rejection is what happens when the person in charge of making the decision doesn't possess the accuracy of assessment that they think they do (or wish they did), and failed to recognize your genius. One of the most notable examples of the latter is The Beatles' Decca audition in 1962, during which the band was told they had no future in show business. Well over 250 million certified sales later, "oops" kind of fails to cut it with respect to how large of a blunder that decision was. This is by no means a phenomenon unique to The Beatles, either: plenty of notable celebrities had previously been discouraged or rejected from their eventual profession by others. So we have a bit of error management going on here: record labels want to (a) avoid signing artists who are unlikely to go anywhere while (b) avoiding the failure to sign the best-selling band of all time. As they can't do either of those things with perfect accuracy, they're bound to make some mistakes.

“Yet again, our talents have gone unnoticed despite our sick riffs”

Part of the problem facing companies that put out products such as albums, books, movies, and the rest, is that popularity can be a terribly finicky thing, since it can often snowball on itself. It's not necessarily the objective properties of a song or book that make it popular; a healthy portion of popularity depends on who else likes it (which might sound circular, but it's not). This tends to make the former problem of weeding out the bad artists easier than finding the superstars: in most cases, people who can't sing well won't sell, but just because one can sing well, it doesn't mean they're going to be a hit. As we're about to see, these problems are shared not only by people who put out products like music or movies; they're also shared by people who publish (or fail to publish) scientific research. A recent paper by Siler, Lee, & Bero (2014) sought to examine how good the peer review process – the process through which journal editors and reviewers decide what gets published and what does not – is at catching good papers and filtering out bad ones.

The data examined by the authors focused on approximately 1,000 papers that had been submitted to three of the top medical journals between 2003 and 2004: Annals of Internal Medicine, British Medical Journal, and The Lancet. Of the 1,008 manuscripts, 946 – or about 94% of them – were rejected. The vast majority of those rejections – about 80% – were desk rejections, which is when an article is not sent out for review before the journal decides to not publish it. From that statistic alone, we can already see that these journals are getting way more submissions than they could conceivably publish or review and, accordingly, lots of people are going to be unhappy with their decision letters. Thankfully, publication isn’t a one-time effort; authors can, and frequently do, resubmit their papers to other journals for publication. In fact, 757 of the rejected papers were found to have been subsequently published in other journals (more might have been published after being modified substantially, which would make them more difficult to track). This allowed Siler, Lee, & Bero (2014) the opportunity to compare the articles that were accepted to those which were rejected in terms of their quality and importance.

Now, determining an article's importance is a rather subjective task, so the authors decided to focus instead on each paper's citation count – how often other papers had referenced it – as of April 2014. While by no means a perfect metric, it's certainly a reasonable one, as most citations tend to be positive in nature. First, let's consider the rejected articles. Of the articles that had been desk rejected by one of the three major journals but eventually published in other outlets, the average citation count was 69.8 per article, somewhat lower than that of the articles which had been sent out for review before they had been rejected (M = 94.65). This overstates the "average" difference by a bit, however, as citation count is not distributed normally. In the academic world, some superstar papers receive hundreds or thousands of citations, whereas many others hardly receive any. To help account for this, the authors also examined the log-transformed number of citations. When they did so, the mean citation count for the desk rejected papers was 3.44, and 3.92 for the reviewed-then-rejected ones. So that is some evidence that those who decide whether or not to send papers out for review work as advertised: the papers that would go on to be cited less (citations being our proxy for quality) were, on average, rejected more readily.
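
As an aside, the reason for reporting log-transformed means can be seen in a couple of lines of Python; the numbers below are synthetic, but the right-skewed shape mirrors real citation distributions:

```python
# Why log-transform citation counts: with a heavy right tail, a handful
# of superstar papers drag the arithmetic mean well above the typical
# paper. Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
citations = rng.lognormal(mean=3.5, sigma=1.0, size=1000)

print(citations.mean())          # arithmetic mean, inflated by the tail
print(np.median(citations))      # the "typical" paper sits far lower
print(np.log(citations).mean())  # the log-mean the authors also report
```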

"I just don't think there's room for you on the team this season…"

There's also evidence that, if the paper gets sent out to reviewers, the peer reviewers are able to assess a paper's quality with some accuracy. When reviewers send their reviews back to the journal, they suggest that the paper be published as is, with minor/major revisions, or rejected. If those suggestions are coded as numerical values, each paper's mean reviewer score can be calculated (e.g., fewer recommendations to reject = better paper). As it turns out, these scores correlated weakly – but positively – with an article's subsequent citation count (r = 0.28 and 0.21 with citation and logged citation counts, respectively), so it seems the reviewers have at least some grasp on a paper's importance and quality as well. That said, the number of times an article was revised prior to acceptance had no noticeable effect on its citation count. While reviewers might be able to discern the good papers from the bad at better-than-chance rates, the revisions they suggested did not appear to have a noticeable impact on later popularity.
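
To make that coding concrete, something like the following is presumably what's being described; the numerical scheme and the toy data are my assumptions, since the paper only says that recommendations were converted to numbers and averaged per paper:

```python
# Mean reviewer scores: code each recommendation numerically (here,
# higher = harsher) and average across a paper's reviewers. The scheme
# and data are assumptions for illustration.
import numpy as np

SCORE = {"accept": 0, "minor revisions": 1, "major revisions": 2, "reject": 3}

def mean_reviewer_score(recommendations):
    return sum(SCORE[r] for r in recommendations) / len(recommendations)

# hypothetical papers: (reviewer recommendations, later citation count)
papers = [
    (["accept", "minor revisions"], 180),
    (["minor revisions", "major revisions"], 95),
    (["major revisions", "reject"], 40),
]
scores = [mean_reviewer_score(recs) for recs, _ in papers]
log_cites = [np.log(c) for _, c in papers]
# negative here because harsher = higher score; the paper's positive r
# simply reflects the opposite coding direction
print(np.corrcoef(scores, log_cites)[0, 1])
```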

What about the lucky papers that managed to get accepted by these prestigious journals? As they had all gone out for peer review, the reviewers' scores were again compared against citation counts, revealing a similarly small but positive correlation (0.21 and 0.26 with citation and logged citation counts). Additionally, the published articles that did not receive any recommendations to reject from the reviewers received higher citation counts on average (162.8 and 4.72) relative to those with at least one recommendation to reject (115.24 and 4.33). Comparing these numbers to the citation counts of the rejected articles, we can see a rather larger difference: articles accepted by the high-end journals tended to garner substantially more citations than the ones that were rejected, whether before or after peer review.

That said, there's a complication present in all this: papers rejected from the most prestigious journals tend to subsequently get published in less-prestigious outlets, which fewer people tend to read. With fewer eyes on them, even good articles published in those worse journals may receive less attention. Indeed, the impact factor of the journal (the average citation count of the recent articles published in it) in which an article was published correlated 0.54 with citation and 0.42 with logged citation counts. To help get around that issue, the authors compared the accepted papers to the rejected-then-published papers within journals with an impact factor of 8 or greater. When they did so, the authors found, interestingly, that the rejected articles were actually cited more than the accepted ones (212.77 vs. 143.22 citations; 4.77 vs. 4.53 logged citations). While such an analysis might bias the number of "mistaken" rejections upwards (as it doesn't count the papers that were "correctly" bumped down into lower journals), it's a worthwhile point to bear in mind. It suggests that, above a certain threshold of quality, the acceptance or rejection of a paper by a journal might reflect chance differences more than meaningful ones.

But what about the superstar papers? Of the 15 most cited papers, 12 of them (80%) had been desk rejected. As the authors put it, "This finding suggests that in our case study, articles that would eventually become highly cited were roughly equally likely to be desk-rejected as a random submission." Of the remaining three papers, two had been rejected after review (one of which had been rejected by two of the top three journals in question). While it was generally the case, then, that peer review appears to help weed out the "worst" papers, the process does not seem to be particularly good at recognizing the "best" work. Much like The Beatles' Decca audition, then, rockstar papers are not often recognized as such immediately. Towards the end of the paper, the authors make reference to some other notable cases of important papers being rejected (one of which was rejected twice for being trivial and then a third time for being too novel).

“Your blindingly-obvious finding is just too novel”

It is worth bearing in mind that academic journals are looking to do more than just publish papers that will have the highest citation count down the line: sometimes good articles are rejected because they don't fit the scope of the journal; others are rejected simply because the journals don't have the space to publish them. When that happens, they thankfully tend to get published elsewhere relatively soon after; though "soon" can be a relative term for academics, it's often within about half a year.

There are also cases where papers will be rejected because of some personal biases on the part of the reviewers, though, and those are the cases most people agree we want to avoid. It is then that the gatekeepers of scientific thought can do the most damage in hindering new and useful ideas because they find them personally unpalatable. If a particularly good idea ends up published in a particularly bad journal, so much the worse for the scientific community. Unfortunately, most of those biases remain hidden and hard to definitively demonstrate in any given instance, so I don’t know how much there is to do about reducing them. It’s a matter worth thinking about.

References: Siler, K., Lee, K., & Bero, L. (2014). Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences (US). DOI: 10.1073/pnas.1418218112

Why Do People Care About Race?

As I have discussed before, claims about a species' evolutionary history – while they don't directly test functional explanations – can be used to inform hypotheses about adaptive function. A good example of this concerns the topic of race, which happens to have been on many people's minds lately. Along with sex and age, race tends to be encoded by our minds relatively automatically: these are the three primary factors people tend to notice and remember about others immediately. What makes the automatic encoding of race curious is that, prior to the advent of technologies for rapid transportation, our ancestors were unlikely to have consistently traveled far enough in the world to encounter people of other races. If that is the case, then our minds could not possess any adaptations that were selected for attending to race specifically. That doesn't mean we don't attend to race (we clearly do), but rather that the attention we pay to it is likely the byproduct of cognitive mechanisms designed to do other things. If, through some functional analysis, we were to uncover what those other things were, this could have some important implications for removing, or at least minimizing, all sorts of nasty racial prejudices.

…in turn eliminating the need to murder others for that skin-suit…

This, of course, raises the question of what the cognitive mechanisms that end up attending to race have been selected to do; what their function is. One plausible candidate explanation, put forth by Kurzban, Tooby, & Cosmides (2001), is that the mechanisms currently attending to race might actually have been designed to attend to social coalitions. Though our ancestors might not have traveled far enough to encounter people of different races, they certainly did travel far enough to encounter members of other groups. Our ancestors also had to successfully manage within-group coalitions; questions concerning who happens to be friends and enemies with whom. Knowing the group membership of an individual is a rather important piece of information: it can inform you as to their probability of providing you with benefits or, say, a spear to the chest, among other things. Accordingly, traits that allowed individuals to determine others' probable group membership, even incidentally, should be attended to, and it just so happens that race gets caught up in that mix in the modern day. That is likely due to shared appearance reflecting probable group membership; just ask any clique of high school children who dress, talk, and act quite similarly to their close friends.

Unlike sex, however, people's relevant coalitional membership is substantially more dynamic over time. This means that shared physical appearance will not always be a valid cue for determining who is likely to be siding with whom. In such instances, then, we should predict that race-based cues will be disregarded in favor of more predictive ones. In simple terms, the hypothesis on the table is that (a) race tends to be used by our minds as a proxy for group membership, so (b) when more valid cues to group membership are present, people should pay much less attention to race.

So how does one go about testing such an idea? Kurzban, Tooby, & Cosmides (2001) did so by using a memory confusion protocol. In such a design, participants are presented with a number of photos of people, along with sentences the pictured individuals are said to have spoken to one another during a conversation about a sporting dispute they had the previous year. Following that, participants are given a surprise recall task, during which they are asked to match the sentences to the pictures of the people who said them. The underlying logic is that participants will tend to make a certain pattern of mistakes in their matching: they will confuse individuals with each other more readily to the extent that their mind has placed them in the same group (or, perhaps more accurately, to the extent that their mind has failed to encode differentiating features of the individuals). Framed in terms of race, we might expect that people will mistake a quote attributed to one black person with another, as they had been mentally grouped together, but will be less likely to mistake that quote for one attributed to a white person. Again, the question of interest here is how our minds might be grouping people: is it done on the basis of race per se, or on the basis of coalitions?
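
The scoring logic of the protocol is simple enough to sketch in a few lines; the toy data below are mine, and real analyses also correct for the differing chance rates of the two error types:

```python
# The memory confusion protocol's signature: if the mind encodes a
# category, misattributions should land within that category more often
# than between categories. Toy data; note that with 4 targets per
# category, a random error has 3 within- and 4 between-category
# candidates to land on, so real analyses adjust for those base rates.
from collections import Counter

# each error: (category of the actual speaker, category of the person chosen)
errors = [
    ("black", "black"), ("black", "black"), ("black", "white"),
    ("white", "white"), ("white", "black"), ("white", "white"),
]

tally = Counter("within" if actual == chosen else "between"
                for actual, chosen in errors)
print(tally)  # more "within" than "between" => the category was encoded
```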

“Yes; it’s Photoshopped. And yes; you’re racist for asking”

In the first experiment, 8 pictures were presented, split evenly between young white and black males. From the verbal statements that accompanied each picture, the men could be classified into one of two coalitions, though participants were not explicitly instructed to attend to that variable. All the men were dressed identically. In this condition, while subjects did appear to pick up on the coalition factor – evidenced by their being somewhat more likely to mistake people who belonged to the same coalition for one another – the size of the race effect was twice as large. In other words, when the only cue to group membership was the statement accompanying each picture, people mistook one white man for another more often than they mistook one member of a coalition for another.

In the second experiment, however, participants were given the same pictures, but now there was an additional visual cue to group membership: half of the men were wearing yellow jerseys while the other half wore gray. In this case, the color of the shirt predicted which coalition each man was in, but participants were again not told to pay attention to that explicitly. In this condition, the previous effect reversed: the size of the race effect was only half that of the effect for coalition membership. It seemed that giving people an alternative visual cue for group membership dramatically cut the race effect. In fact, in a follow-up study reported in the paper (using pictures of different men), the race effect disappeared. When provided with alternate visual cues to coalition membership, people seemed to be largely (though not necessarily entirely) disregarding race. This finding demonstrates that racial categorization is not as automatic and strong as it had previously been thought to be.

Importantly, when this experiment was run using sex instead of race (i.e., 4 women and 4 men), the above effects did not replicate. Whether the cues to group membership were only verbal or whether they were verbal and visual, people continued to encode sex automatically and did so robustly, as evidenced again by their pattern of mistakes. Though white women and black men are both visually distinct from white men, additional visual cues to coalition membership only had an appreciable effect on the latter group, consistent with the notion that the tendency people have to encode race is a byproduct of our coalitional psychology.

“With a little teamwork – black or white – we can all crush our enemies!”

The good news, then, is that people aren't inherently racist; our evolutionary history wouldn't allow it, given how far our ancestors likely traveled. We're certainly interested in coalitions, and these coalitions are frequently used to benefit our allies at the expense of non-members – that part probably isn't going away anytime soon – but it has a less morally-sinister tone to it for some reason. It is worth noting that, in the reality outside the lab, coalitions may well (and frequently seem to) form along racial or ethnic lines. Thankfully, as I mentioned initially, coalitions are also fluid things, and it (sometimes) only seems to take a small exposure to other visual indicators of membership to change the way people are viewed by others in that respect. Certainly useful information for anyone looking to reduce the impact of race-based categorization.

References: Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. PNAS, 98, 15387-15392.

#HandsUp (Don’t Press The Button)

In general, people tend to think of themselves as not possessing biases or, at the very least, as less susceptible to them than the average person. Roughly paraphrasing from Jason Weeden and Robert Kurzban's latest book: when it comes to debates, people from both sides tend to agree with the premise that one side of the debate is full of reasonable, dispassionate, objective folk and the other side is full of biased, evil, ignorant ones; the only problem is that people seem to disagree as to which side is which. To quote directly from Mercier & Sperber (2011): "[people in debates] are not trying to form an opinion: They already have one. Their goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness" (p.67). This is a long-winded way of saying that people – you and I included – are biased, and we typically end up seeking to support views we already hold. Now, recently, owing to the events that took place in Ferguson, a case has been made that police officers (as well as people in general) are biased against the black population when it comes to criminal justice. This claim is by no means novel; NWA, for instance, voiced it in 1988 in their hit song "Fuck tha Police".

They also have songs about killing people, so there’s that too…

Are the justice system and its representatives, at least here in the US, biased against the black population? I suspect that most of you reading this already have an answer to that question which, to you, likely sounds pretty obvious. Many people have answered that question in the affirmative, as evidenced by such trending Twitter hashtags as #BlackLivesMatter and #CrimingWhileWhite (the former implying that people devalue black lives and the latter implying that people get away with crimes because they're white, but wouldn't if they were black). Though I can't speak to the existence or extent of such biases – or the contexts in which they occur – I did come across some interesting research recently that deals with a related, but narrower question; one that many people feel they already have the answer to: are police officers (or people in general) quicker to deploy deadly force against black targets, relative to white targets? I suspect many of you anticipate – correctly – that I'm about to tell you that some new research shows people aren't biased against the black population in that respect. I further suspect that upon hearing that, one of your immediate thoughts will be to figure out why the conclusion must be incorrect.

The first of these papers (James, Vila, & Daratha, 2013) begins by noting that some previous research on the topic (though by no means all) has concluded that a racial bias against blacks exists when it comes to the deployment of deadly force. How did that research come to this conclusion? Experimentally, it would seem it used a method similar to the Implicit Association Task (or IAT): participants come into a lab, sit in front of a computer, and are asked to press a "shoot" button when they see armed targets pop up on screen and a "don't shoot" button when the target isn't armed. James, Vila, & Daratha (2013) argue that such a task is, well, fairly artificial and, as I have discussed before, artificial tasks can lead to artificial results. Part of that artificiality is that there is no difference between the two responses in such an experiment: both just involve pushing one button or another. By contrast, actually shooting someone involves unholstering a weapon and pulling a trigger, while not shooting at least does not involve that last step. So shooting is an action and not shooting is an inaction; pressing a button, however, is an action in both cases, and a simple one. Further, sitting at a computer and seeing static images pop up on the screen is just a bit less interactive than most police encounters that lead to the use of deadly force. So whether these results concerning people's biases against blacks translate to anywhere outside the lab is an open question.

Accordingly, what the authors of the current paper did involved what must have been quite the luxurious lab setup. The researchers collected data from around 60 civilians and 40 police and military subjects. During each trial, the subjects were standing in an enclosed shooting range facing a large screen that would display simulations in which they might or might not have to shoot. Each subject was provided with a modified Glock pistol (that shot lasers instead of bullets), a holster, and instructions on how to use them. The subjects each went through between 10 and 30 simulations that recreated instances where officers had been assaulted or killed; simulations which included naturalistic filming with paid actors (as opposed to the typical static images). The subjects were supposed to shoot the armed targets in the simulations and avoid shooting the unarmed ones. As usual, the race of the targets was varied to be white, black, or Hispanic, as was whether or not the targets were armed.

Across three studies, a clear pattern emerged: the participants were actually slower to shoot the armed black targets, by between 0.7 and 1.35 seconds on average; no difference was found between the white and Hispanic targets. This result held for both the civilians and the police. The pattern of mistakes people made was even more interesting: when they shot unarmed targets, they tended to shoot the unarmed black targets less than the unarmed white or Hispanic targets; often substantially less. Similarly, subjects were also more likely to fail to shoot an armed black target. To the extent that people were making errors or slowing down, they were doing so in favor of black targets, contrary to what many people shouting things right now would predict.

“That result is threatening my worldview; shoot it!”

As these studies appear to use a more realistic context when it comes to shooting – relative to sitting at a computer and pressing buttons – they cast some doubt as to whether the previous findings uncovered when subjects were sitting at computer screens can be generalized to the wider world. Casting further doubt on the validity of the computer-derived results, a second paper by James, Klinger, & Vila (2014) examined the relationship between these subconscious race-based biases and the actual decision to shoot. They did so by reanalyzing some of the data (n = 48) from the previous experiment in which participants had been hooked up to EEGs at the time. The EEG equipment was measuring what the authors call "alpha suppression". According to their explanation (I'm not a neuroscience expert, so I'm only reporting what they wrote), the alpha waves being measured by the EEG tend to occur when individuals are relaxed, and reductions of alpha waves are associated with the presence of arousing external stimuli; in this case, the perception of threat. The short version of this study, then, seems to be that reductions in alpha waves equate, in some way, to greater perceived threat.

The more difficult shooting scenarios resulted in greater alpha suppression than the simpler ones, consistent with a relation to threat level; regardless of scenario difficulty, though, the race effect held. The EEG results showed that, when faced with a black target, subjects evidenced greater alpha suppression relative to when they were confronting a white or Hispanic target; this result obtained regardless of whether the target ended up being armed or not. To the extent that these alpha waves measure threat response on a physiological level, people found the black targets more threatening, but this did not translate into an increased likelihood of shooting them; in fact, it seemed to do the opposite. The authors suggest that this might have something to do with the perception of possible social and legal consequences for harming a member of a historically oppressed racial group.

In other words, people might not be shooting because they're afraid that others will claim the shooting was racially motivated (indeed, if the results had turned out the opposite way, I suspect many people would be making that precise claim, so they wouldn't be wrong). The authors provide some reasons to think the social concerns of shooting might be driving the hesitation, one of which involves this passage from an interview of a police chief in 1992:

“Bouza…. added that in most urban centers in the United States, when a police chief is called “at three in the morning and told, ‘Chief, one of our cops just shot a kid,’ the chief’s first questions are: ‘What color is the cop? What color is the kid?’” “And,” the reporter asked, “if the answer is, ‘The cop is white, the kid is black’?” “He gets dressed,”

“I’m not letting a white on white killing ruin this nap”

Just for some perspective, the subjects in this second study had responded to about 830 scenarios in total. Of those, there were 240 that did not require the use of force. Of those 240, participants accidentally shot a total of 47 times; 46 of those 47 unarmed targets were white (even though around a third of the targets were black). If there was some itchy trigger finger concerning black threats, it wasn't seen in this study. Another article I came across (but have not fact-checked, so, you know, caveat there) suggests something similar: that biases against blacks in the criminal justice system don't appear to exist.
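
Just to quantify how lopsided that error pattern is, we can ask how likely "at most 1 black target out of 47 mistaken shootings" would be if the errors were race-blind, taking the roughly-one-third figure at face value (a back-of-the-envelope check of mine, not the authors' analysis):

```python
# Probability of seeing at most 1 black target among 47 accidental
# shootings if errors were race-blind and about a third of unarmed
# targets were black. The one-third proportion is approximate.
from scipy.stats import binom

print(binom.cdf(1, 47, 1/3))  # on the order of 1e-7
```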

Now, the findings I have presented here may, for some reason, be faulty. Perhaps better experiments in the future will provide more concrete evidence concerning racial biases, or the lack thereof. However, if your first reaction to these findings is to assume that something is wrong with them because you know that police target black suspects disproportionately, then I would urge you to consider that, well, maybe some biases are driving your reaction. That's not to say that others aren't biased, mind you, or that you're necessarily wrong, just that you might be more biased than you like to imagine.

References: James, L., Vila, B., & Daratha, K. (2013). Influence of suspect race and ethnicity on decisions to shoot in high fidelity deadly force judgment and decision-making simulations. Journal of Experimental Criminology, 9, 189-212.

James, L., Klinger, D., & Vila, B. (2014). Racial and ethnic bias in decisions to shoot seen through a stronger lens: Experimental results from high-fidelity laboratory simulations. Journal of Experimental Criminology, 10, 323-340.