Intergenerational Epigenetics And You

Today I wanted to cover a theoretical matter I've discussed before but apparently not on this site: the idea of epigenetic intergenerational transmission. In brief, epigenetics refers to chemical markers attached to your DNA that regulate how it is expressed without changing the DNA itself. You could imagine your DNA as a book full of information, with each cell in your body containing the same book. However, not every cell expresses the full genome; each cell only expresses part of it (which is why skin cells are different from muscle cells, for instance). The epigenetic portion, then, could be thought of as black tape placed over certain passages in the book so they are not read. As this tape is added or removed by environmental influences, different portions of the DNA will become active. From what I understand about how this works (which is admittedly very little at this juncture), these markers are usually not passed on to offspring from parents. The life experiences of your parents, in other words, will not be passed on to you via epigenetics. However, there has been some talk lately of people hypothesizing that not only are these changes occasionally (perhaps regularly?) passed on from parents to offspring, but that they might also be passed on in an adaptive fashion. In short, organisms might adapt to their environment not just through genetic factors, but also through epigenetic ones.

Who would have guessed Lamarckian evolution was still alive?

One of the examples given in the target article on the subject concerns periods of feast and famine. While rare in most first-world nations these days, these events were probably more recurrent features of our evolutionary history. The example there involves the following context: during some years in early-1900s Sweden food was abundant, while during other years it was scarce. Boys who were hitting puberty just at the time of a feast season tended to have grandchildren who died six years earlier than the grandchildren of boys who had experienced a famine season during the same developmental window; the causes of death, we are told, often involved diabetes. Another case involves the children of smokers: men who smoked right before puberty tended to have children who were fatter, on average, than the children of fathers who smoked habitually but didn't start until after puberty. The speculation, in this case, is that development was in some way permanently affected by food availability (or smoking) during a critical window, and that those developmental changes were passed on to their sons and the sons of their sons.

As I read about these examples, a few things stuck out to me as rather strange. First, it seems odd that no mention was made of daughters or granddaughters in the smoking case, whereas in the food example there wasn't any mention of the in-between male generation (they only mentioned grandfathers and grandsons there; not fathers). Perhaps there's more to the data than is let on there but – in the event that no effects were found for fathers or daughters of any kind – it is also possible that a single data set was sliced up into a number of different pieces until the researchers found something worth talking about (e.g., didn't find an effect in general? Try breaking the data down by gender and testing again). That might or might not be the case here, but as we've learned from the replication troubles in psychology, one way of increasing your false-positive rate is to divide your sample into a number of different subgroups. For the sake of this post, I'm going to assume that is not the case and treat the data as representing something real, rather than a statistical fluke.
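(As an aside for the statistically inclined: here's a minimal simulation of that point, entirely my own and not anything from the article. Even when there is no real effect anywhere in the data, testing several subgroups separately gives you several chances to stumble onto a "significant" result, so the chance of at least one false positive climbs well above the nominal 5%.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(n_sims=2000, n=200, n_subgroups=4, alpha=0.05):
    """Simulate pure-noise data (no true effect anywhere) and count how often
    at least one subgroup comparison comes out 'significant' anyway."""
    hits = 0
    for _ in range(n_sims):
        outcome = rng.normal(size=n)                      # e.g., longevity; no real effect built in
        condition = rng.integers(0, 2, size=n)            # e.g., feast vs. famine cohort
        subgroup = rng.integers(0, n_subgroups, size=n)   # e.g., slices by sex or generation
        for s in range(n_subgroups):
            mask = subgroup == s
            a = outcome[mask & (condition == 0)]
            b = outcome[mask & (condition == 1)]
            if len(a) > 1 and len(b) > 1 and stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
                break                                     # one 'significant' slice is enough
    return hits / n_sims

print(false_positive_rate(n_subgroups=1))  # ~0.05: a single test holds the nominal rate
print(false_positive_rate(n_subgroups=4))  # ~0.18: four slices, four chances to get lucky
```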

Assuming this isn't just a false positive, there are two issues with the examples as I see them. I'm going to focus predominantly on the food example to highlight these issues: first, passing on such epigenetic changes seems maladaptive and, second, the story behind it seems implausible. Let's take the issues in turn.

To understand why this kind of intergenerational epigenetic transmission seems maladaptive, consider two hypothetical children born one year apart (in, say, the years 1900 and 1901). At the time the first child's father was hitting puberty, there was a temporary famine taking place and food was scarce; by the time the second child's father was hitting puberty, the famine had passed and food was abundant. According to the logic laid out, we should expect that (a) both children will have their genetic expression altered by the epigenetic markers passed down by their parents, affecting their long-term development, and (b) the children will, in turn, pass those markers on to their own children, and their children's children (and so on).

The big Thanksgiving dinner that gave your grandson diabetes

The problems here should become apparent quickly enough. First, let’s begin by assuming these epigenetic changes are adaptive: they are passed on because they are reproductively useful at helping a child develop appropriately. Specifically, a famine or feast at or around the time of puberty would need to be a reliable cue as to the type of environments their children could expect to encounter. If a child is going to face shortages of food, they might want to develop in a different manner than if they’re expecting food to be abundant.

Now that sounds well and good, but in our example these two children were born just a year apart and, as such, should be expected to face (broadly) the same environment, at least with respect to food availability (since feasts and famines tend to be more global). Clearly, if the children were adopting different developmental plans in response to that feast or famine, both plans (plan A affected by the famine and plan B not so affected) cannot be adaptive. Specifically, if this epigenetic inheritance is trying to anticipate children's future conditions from those present around the time of their father's puberty, at least one of the children's developmental plans will be anticipating the wrong set of conditions. That said, both developmental plans could be wrong, and conditions could look different than either anticipated. Trying to anticipate the future conditions one will encounter over one's lifespan (and over one's children's and grandchildren's lifespans) using only information from the brief window of time around puberty seems like a plan doomed to failure, or at least to suboptimal results.

A second problem arises because these changes are hypothesized to be intergenerational: capable of transmission across multiple generations. If that is the case, why on Earth would the researchers in this study pay any mind to the conditions the grandparents were facing around the time of puberty per se? Shouldn't we be more concerned with the conditions being faced a number of generations back, rather than the more immediate ones? To phrase this in terms of a chicken-and-egg problem, shouldn't the grandparents in question have inherited epigenetic markers of their own from their grandparents, and so on down the line? If that were the case, the conditions they were facing around their puberty would either be irrelevant (because they had already inherited such markers from their own parents) or would have altered those epigenetic markers as well.

If we opt for the former possibility, then studying the grandparents' puberty conditions shouldn't be very informative. However, if we opt for the latter possibility, we are again left in a bit of a theoretical bind: if the conditions faced by the grandparents altered their epigenetic markers, shouldn't those same markers also have been altered by the parents' experiences, and their grandsons' experiences as well? If they are being altered by the environment each generation, then they are poor candidates for intergenerational transmission (just as DNA that was constantly mutating would be). There is our dilemma, then: if epigenetic markers change across one's lifespan, they are unlikely candidates for transmission between generations; if epigenetic changes can be passed down across generations stably, why look at the specific period pre-puberty for the grandparents? Shouldn't we be concerned with their grandparents, and so on down the line?

“Oh no you don’t; you’re not pinning this one all on me”

Now, to be clear, a famine around the time of conception could affect development in other, more mundane ways. If a child isn’t receiving adequate nutrition at the time they are growing, then it is likely certain parts of their developing body will not grow as they otherwise would. When you don’t have enough calories to support your full development, trade-offs need to be made, just like if you don’t have enough money to buy everything you want at the store you have to pass up on some items to afford others. Those kinds of developmental outcomes can certainly have downstream effects on future generations through behavior, but they don’t seem like the kind of changes that could be passed on the way genetic material can. The same can be said about the smoking example provided as well: people who smoked during critical developmental windows could do damage to their own development, which in turn impacts the quality of the offspring they produce, but that’s not like genetic transmission at all. It would be no more surprising than finding out that parents exposed to radioactive waste tend to have children of a different quality than those not so exposed.

To the extent that these intergenerational changes are real and not just statistical oddities, it doesn't seem likely that they could be adaptive; they would instead likely reflect developmental errors. Basically, the matter comes down to the following question: are the environmental conditions surrounding a particular developmental window good enough indicators of future conditions that you'd want to focus not only your own development around them, but also the development of your children and their children in turn? To me, the answer seems like a resounding "No", and that seems like a prime example of developmental rigidity, rather than plasticity. Such a plan would not allow offspring to meet the demands of their unique environments particularly well. I'm not hopeful that this kind of thinking will lead to any revolutions in evolutionary theory, but I'm always willing to be proven wrong if the right data comes up.

Mistreated Children Misbehaving

None of us are conceived or born as full adults; we all need to grow and develop from single cells into fully-formed adults. Unfortunately – for the sake of development, anyway – the future world you will find yourself in is not always predictable, which makes development a tricky matter at times. While there are often regularities in the broader environment (such as the presence or absence of sunlight, for instance), not every individual will inhabit the same environment or, more precisely, the same place in their environment. Consider two adult males, one of whom is six feet tall and 230 pounds of muscle, the other five feet tall and 110 pounds. While the dichotomy here is stark, it serves to make a simple point: if both of these males developed in a psychological manner that led them to pursue precisely the same strategies in life – in this case, say, one involving aggressive contests for access to females – it is quite likely that the weaker male will lose out to the stronger one most (if not all) of the time. As such, in order to be more consistently adaptive, development must be something of a fluid process that helps tailor an individual's psychology to the unique position they find themselves in within a particular environment. Thus, if an organism is able to use some cues within their environment to predict their likely place in it in the future (in this case, whether they would grow large or small), their development could be altered to encourage the pursuit of alternate routes to eventual reproductive success.

Because pretending you’re cut out for that kind of life will only make it worse

Let's take that initial example and adapt it to a new context: rather than trying to predict whether one will grow up weak or strong, a child is trying to predict the probability of receiving parental investment in the future. If parental investment is unlikely to be forthcoming, children may need to take a different approach to their development to help secure the needed resources on their own, sometimes requiring that they undertake risky behaviors; by contrast, children who are likely to receive consistent investment might be relatively less inclined to take such risky and costly matters into their own hands, as the risk-versus-reward calculations don't favor such behavior. To put this in a more familiar analogy, a child who estimates they won't be receiving much investment from their parents might forgo a college education (and, indeed, even much of a high-school one) because they need to work to make ends meet. When you're worried about where your next meal is coming from, there's less room in your schedule for studying, and less appeal in taking out loans so you can spend four years not working. By contrast, the child from a richer family has the luxury of pursuing an education likely to produce greater future rewards because certain obstacles have been removed from their path.

Now obviously going to college is not something that humans have psychological adaptations for – it wasn't a recurrent feature of our evolutionary history as a species – but there are cognitive systems we might expect to follow different developmental trajectories contingent on such estimations of one's likely place in the environment; these could include systems judging the relative attractiveness of short- vs long-term rewards, willingness to take risks, pursuit of aggressive resolutions to conflicts, and so on. If the future is uncertain, saving for it makes less sense than taking a smaller reward in the present; if you lack social or financial support, being willing to fight to defend what little you do have might sound more appealing (as losing that little bit is more impactful when you won't have anything left). The question of interest thus becomes, "what cues in the environment might a developing child use to determine what their future will look like?" This brings us to the current paper by Abajobir et al (2016).

One potential cue might be your experiences with maltreatment while growing up, specifically at the hands of your caregivers. Though Abajobir et al (2016) don't make the argument I've been sketching out explicitly, that seems to be the direction their research takes. They seem to reason (implicitly) that parental mistreatment should be a reliable cue to the future conditions you're liable to encounter and, accordingly, one that children could use to alter their development. For instance, abusive or neglectful parents might lead to children adopting faster life history strategies involving risk-taking, delinquency, and violence themselves (or, if they're going the maladaptive explanatory route, the failure of parents to provide supportive environments could in some way hinder development from proceeding as it usually would, in a similar fashion to how not having enough food growing up might lead to one being shorter as an adult. I don't know which line the authors would favor from their paper). That said, there is a healthy (and convincing) literature consistent with the hypothesis that parental behavior per se is not the cause of these developmental outcomes (Harris, 2009), but rather that it simply co-occurs with them. Specifically, abusive parents might be genetically different from non-abusive ones, and those tendencies could get passed on to the children, accounting for the correlation. Alternatively, parents who maltreat their children might just happen to raise those children among peer groups more prone to violence and delinquency themselves. In either case, both the maltreatment and the later outcomes would be caused by third variables.

Your personality usually can’t be blamed on them; you’re you all on your own

Whatever the nature of that correlation, Abajobir et al (2016) sought to use parental maltreatment from ages 0 to 14 as a predictor of later delinquent behaviors in the children by age 21. To do so, they used a prospective cohort of children and their mothers visiting a hospital between 1981 and 1983. The cohort was then tracked for substantiated cases of child maltreatment reported to government agencies up to age 14, and at age 21 the children themselves were surveyed (the mothers being surveyed at several points throughout that time). Out of the 7,200 initial participants, 3,800 completed the 21-year follow-up. At that follow-up point, the children were asked questions concerning how often they did things like get excessively drunk, use recreational drugs, break the law, lie, cheat, steal, destroy the property of others, or fail to pay their debts. The mothers were also surveyed on matters concerning their age when they got pregnant, their arrest records, marital stability, and the amount of supervision they gave their children (all of these factors, unsurprisingly, predicting whether or not people continued on in the study for its full duration).

In total, of the 512 eventual cases of reported child maltreatment, only 172 remained in the sample at the 21-year follow-up. As one might expect, maternal factors like education status, arrest record, economic status, and unstable marriages all predicted an increased likelihood of eventual child maltreatment. Further, of the 3,800 participants, only 161 met the criteria for delinquency at 21 years. All of the previous maternal factors predicted delinquency as well: mothers who were arrested, got pregnant earlier, had unstable marriages, less education, and less money tended to produce more delinquent offspring. Adjusting for the maternal factors, however, it was reported that childhood maltreatment still predicted delinquency, but only for the male children. Specifically, maltreated males showed approximately 2-to-3.5 times as much delinquency as non-maltreated males. For female offspring, there didn't seem to be any notable correlation.
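For readers curious what "adjusting for the maternal factors" looks like in practice, here's a minimal sketch of that general style of analysis. The data and variable names below are simulated placeholders of my own (not the authors' data, variables, or code); the point is just that putting the maternal covariates into a logistic regression alongside maltreatment and exponentiating the coefficients yields adjusted odds ratios of the kind being reported.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data standing in for the cohort; variable names are mine, not the authors'.
rng = np.random.default_rng(1)
n = 3800
df = pd.DataFrame({
    "maltreated":        rng.binomial(1, 0.05, n),   # substantiated maltreatment by age 14
    "maternal_arrest":   rng.binomial(1, 0.10, n),
    "low_education":     rng.binomial(1, 0.30, n),
    "unstable_marriage": rng.binomial(1, 0.20, n),
})
# Simulate delinquency with an effect of maltreatment plus the maternal confounds.
logit_p = (-3.5 + 1.0 * df.maltreated + 0.6 * df.maternal_arrest
           + 0.4 * df.low_education + 0.3 * df.unstable_marriage)
df["delinquent"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# 'Adjusting for maternal factors' = including them as covariates alongside maltreatment.
result = smf.logit(
    "delinquent ~ maltreated + maternal_arrest + low_education + unstable_marriage",
    data=df,
).fit(disp=False)
print(np.exp(result.params))  # exponentiated coefficients are adjusted odds ratios
```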

Now, as I mentioned, there are some genetic confounds here. It seems probable that parents who maltreat their children are, in some very real sense, different from parents who do not, and those tendencies can be inherited. This also doesn't necessarily point a causal finger directly at parents, as it is also likely that maltreatment correlates with other social factors, like the peer group a child is liable to have or the neighborhood they grow up in. The authors also mention that it is possible their measures of delinquency might not capture whatever effects childhood maltreatment (or its correlates) have on females, and that's the point I wanted to wrap up discussing. To really put these findings in context, we would need to understand what adaptive role these delinquent behaviors – or rather the psychological mechanisms underlying them – serve. For instance, frequent recreational drug use and problems fulfilling financial obligations might both signal that the person in question favors short-term rewards over long-term ones; frequent trouble with the law or destroying other people's property could signal something about how the individual in question competes for social status. Maltreatment does seem to predict (even if it might not cause) different developmental courses, perhaps reflecting an active adjustment of development to deal with local environmental demands.

 The kids at school will all think you’re such a badass for this one

As we reviewed in the initial example, however, the same strategies will not always work equally well for every person. Those who are physically weaker are less likely to successfully enact aggressive strategies, all else being equal, for reasons that should be clear. Accordingly, we might expect men and women to show different patterns of delinquency to the extent they face unique adaptive problems. For instance, we might expect that females who find themselves in particularly hostile environments preferentially seek out male partners capable of enacting and defending against such aggression, as males tend to be more physically formidable (which is not to say that the women themselves might not become more physically aggressive as well). Any hypothetical shifts in mating preferences like these would not be captured particularly well by the present research, but it is nice to see the authors at least thinking about what sex differences in patterns of delinquency might exist. It would be preferable if they had been asking about those differences within this kind of functional framework from the beginning, as that's likely to yield more profitable insights and refine what questions get asked, but it's good to see this kind of work all the same.

References: Abajobir, A., Kisely, S., Williams, G., Strathearn, L., Clavarino, A., & Najman, J. (2016). Gender differences in delinquency at 21 years following childhood maltreatment: A birth cohort study. Personality & Individual Differences, 106, 95-103.

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.

If No One Else Is Around, How Attractive Are You?

There's an anecdote I've heard a few times about a man who goes to a diner for a meal. After finishing his dinner, the waitress asks him if he'd like some dessert. When he inquires as to what flavors of pie they have, the waitress tells him they have apple and cherry. The man says cherry and the waitress leaves to get it. She returns shortly afterwards and tells him she had forgotten they actually also had a blueberry pie. "In that case," the man replies, "I'll have apple." Breaking this story down into a more abstract form, the man was presented with two options: A and B. Since he prefers A to B, he naturally selected A. However, when presented with A, B, and C, he now appears to reverse his initial preference, favoring B over A. Since he appears to prefer both A and B over C, it seems strange that C would affect his judgment at all, yet here it does. Now that's just a funny little story, but there does appear to be some psychological literature suggesting that people's preferences can be modified in similar ways.

“If only I had some more pointless options to help make my choice clear”

The general phenomenon might not be as strange as it initially sounds for two reasons. First, when choosing between A and B, the two items might be rather difficult to compare directly. Both A and B could have some upsides and downsides, but since these don't necessarily all fall in the same domains, weighing one against the other isn't always simple. For instance, if you're looking to buy a new car, one option might have good gas mileage and great interior features (option A) while the other looks more visually appealing and comes with a lower price tag (option B). Pitting A against B here doesn't always yield a straightforward choice, but if an option C rolls around that gets good gas mileage, looks visually appealing, and comes with a lower price tag, this car can look better than either of the previous options by comparison. This third option need not even be more appealing than both alternatives, however; simply being preferable to one of them is usually enough (Mercier & Sperber, 2011).

Related to this point, people might want to maintain some degree of justifiability in their choices as well. After all, we don't just make choices in a vacuum; the decisions we make often have wider social ramifications, so making a choice that can be easily justified to others can make them accept your decisions more readily (even if the choice you make is overall worse for you). Sticking with our car example, if you were to select option A, you might be praised by your environmentally-conscious friends while mocked by your friends more concerned with the look of the car; if you choose option B a similar outcome might obtain, but the friends doing the praising and mocking would switch. However, option C might be a crowd-pleaser for both groups, yielding a decision with greater approval (you miss out on the interior features you want, but that's the price you pay for social acceptance). The general logic of this example should extend to a number of different domains, both in terms of the things you might select and the features you might use as the basis for selecting them. So long as your decisions need to be justified to others, the individual appeal of certain features can be trumped.

Whether these kinds of comparison effects exist across all domains is an open question, however. The adaptive problems species need to solve often require specific sets of cognitive mechanisms, so the mental algorithms leveraged to solve problems relating to selecting a car (a rather novel issue at that) might not be the same ones that help solve other problems. Given that different learning mechanisms appear to underlie seemingly similar problems – like learning the location of food and water – there are good theoretical reasons to suspect that these kinds of comparison effects might not exist in domains where decisions require less justification, such as selecting a mate. This brings us to the present research by Tovee et al (2016), who examined the matter of how attractive people perceive the bodies of others (in this case, women) to be.

“Well, that’s not exactly how the other participants posed, but we can make an exception”

Tovee et al (2016) were interested in finding out whether judging bodies among a large array of other bodies might influence the judgments of any individual body's attractiveness. The goal here was to find out whether people's bodies have an attractiveness value independent of the range of bodies they happen to be around, or whether attractiveness judgments are made relative to immediate circumstances. To put that another way, if you're a "5-out-of-10" on your own, might standing next to a three (or several threes) make you look more like a six? This is a matter of clear empirical importance as, when studies of this nature are conducted, it is fairly common for participants to rate a large number of targets for attractiveness one after the other. If attractiveness judgments are, in some sense, contaminated by previous images, there are implications for both past and future research that makes use of such methods.

So, to get at the matter, the researchers employed a straightforward strategy: first, they asked one group of 20 participants (10 males and 10 females) to judge 20 images of female bodies for attractiveness (these bodies varied in their BMI and waist-to-hip ratio; all clothing was standardized and all faces blurred out). Following that, a group of 400 participants rated the same images, but this time each rating only a single image rather than all 20 of them, again providing 10 male and 10 female ratings per picture. The logic of this method is simple: if ratings of attractiveness tend to change contingent on the array of bodies available, then the between-subjects group ratings should be expected to differ in some noticeable way from those of the within-subjects group.

Turning to the results, there was a very strong correspondence between male and female judgments of attractiveness (r = .95), as well as strong within-sex agreement (Cronbach's alphas of 0.89 and 0.95). People tended to agree that as BMI and WHR increased, the women's bodies became less attractive (at least within the range of values examined; the results might look different if women with very low BMIs were included). As it turns out, however, there were no appreciable differences between the within- and between-groups attractiveness ratings. When people were making judgments of just a single picture, they delivered similar judgments to those presented with many bodies. The authors conclude that perceptions of attractiveness appear to be generated by (metaphorically) consulting an internal reference template, rather than such judgments being influenced by the range of available bodies.
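For the methodologically curious, here's a rough sketch of how numbers like those can be computed from a raters-by-images matrix. The ratings below are simulated with made-up values of my own, not the authors' data or exact procedure: Cronbach's alpha summarizes rater agreement, and correlating per-image mean ratings across the within- and between-subjects conditions is one simple way to check whether the surrounding array of bodies changes the judgments.

```python
import numpy as np

rng = np.random.default_rng(2)

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (images x raters) matrix, treating raters as 'items'."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of raters
    item_var = ratings.var(axis=0, ddof=1).sum()  # variance of each rater's scores across images
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of per-image rating totals
    return k / (k - 1) * (1 - item_var / total_var)

n_images, n_raters = 20, 10
true_value = rng.normal(5, 1.5, n_images)  # hypothetical underlying attractiveness of each image

# Within-subjects condition: every rater scores every image (true value plus rater noise).
within = true_value[:, None] + rng.normal(0, 0.8, (n_images, n_raters))
print("Cronbach's alpha (within condition):", round(cronbach_alpha(within), 2))

# Between-subjects condition: each image gets its own fresh set of raters.
between = true_value[:, None] + rng.normal(0, 0.8, (n_images, n_raters))

# If the array of other bodies doesn't matter, per-image mean ratings should line up closely.
r = np.corrcoef(within.mean(axis=1), between.mean(axis=1))[0, 1]
print("Within vs. between mean-rating correlation:", round(r, 2))
```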

Which is not to say that being the best looking member of a group will hurt

These findings make quite a bit of sense in light of the job that judgments of physical attractiveness are supposed to accomplish; namely, assessing traits like physical health, fertility, strength, and so on. If one is interested in assessing the probable fertility of a given female, that value should not be expected to change as a function of whom she happens to be standing next to. In a simple example, a male copulating with a post-menopausal female should not be expected to achieve anything useful (in the reproductive sense of the word), and the fact that she happened to be around women who were even older or less attractive shouldn't be expected to change that. Indeed, on a theoretical level we shouldn't expect the independent attractiveness value of a body to change based on the other bodies around; at least there doesn't seem to be any obvious adaptive advantage to (incorrectly) perceiving a five as a six because she's around a bunch of threes, rather than just (accurately) perceiving that five as a five and nevertheless concluding she's the most attractive of the current options. However, if you were to incorrectly perceive that five as a six, it might have some downstream consequences when future options present themselves (such as not pursuing a more attractive alternative because the risk-versus-reward calculations are being made with inaccurate information). As usual, acting on accurate information tends to have more benefits than changing your perceptions of the world.

References: Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.

Tovee, M., Taylor, J., & Cornelissen, P. (2016). Can we believe judgments of human physical attractiveness? Evolution & Human Behavior, doi: 10.1016/j.evolhumbehav.2016.10.005

More About Race And Police Violence

A couple of months back, I offered some thoughts on police violence. The most important take-home message from that piece was that you need to be clear about what your expectations about the world are – as well as why they are that way – before you make claims of discrimination about population-level data. If, for instance, you believe that men and women should be approximately equally likely to be killed by police – as both groups are approximately equal in the US population – then the information that approximately 95% or so of civilians killed by police are male might look odd to you. It means that some factors beyond simple representation in the population are responsible for determining who is likely to get shot and killed. Crucially, that gap cannot be automatically chalked up to any other particular factor by default. Just because men are overwhelmingly more likely to be killed by police, that assuredly does not mean police are biased against men and have an interest in killing them simply because of their sex.

“You can tell they just hate men; it’s so obvious”

Today, I wanted to continue on the theme from my last post and ask what patterns of data we ought to expect with respect to police killings of civilians and race. If we wanted to test the hypothesis that police killings tend to be racially motivated (i.e., driven by anti-black prejudice), I would think we should expect a different pattern of data than under the hypothesis that such killings are driven by race-neutral practices (e.g., cases in which the police are defending against perceived lethal threats, regardless of race). In this case, if police killings are driven by anti-black prejudice, we might propose the following hypothesis: all else being equal, we ought to expect white officers to kill black civilians in greater numbers than black officers do. This expectation could reasonably be driven by the prospect that members of a group are, on average, less likely to be biased against their in-group than out-group members are (in other words, the non-fictional Clayton Bigsbys and Uncle Ruckuses of the world ought to be rare).

If there were good evidence in favor of the racially-motivated hypothesis for police killings, there would be real implications for the trust people – especially minority groups – should put in the police, as well as for particular social reforms. By contrast, if the evidence is more consistent with the race-neutrality hypothesis, then a continued emphasis on the importance of race could prove a red herring, distracting people from the other causes of police violence and preventing more effective interventions from being discussed. The issue is basically analogous to a doctor trying to treat an infection with a correct or incorrect diagnosis. It is unfortunate (and rather strange, frankly), then, that good data on police killings is apparently difficult to come by. One would think this is the kind of thing people would have collected more information on, but apparently that's not exactly the case. Thankfully, we now have some fresh data on the topic that was just published by Lott & Moody (2016).

The authors compiled their own data set of police killings from 2013 to 2015 by digging through Lexis/Nexis, Google, Google Alerts, and a number of other online databases, as well as directly contacting police departments. In total, they were able to compile information on 2,700 police killings: about 1,300 more than the FBI's records contained, 741 more than the CDC's, and 18 more than the Washington Post's. Importantly, the authors were also able to collect a number of other pieces of information not consistently included in the other sources, including the number of officers on the scene and their age, sex, and race, among a number of other factors. In demonstrating the importance of having good data, whereas the FBI had been reporting a 6% decrease in police killings over that period, the current data actually found a 29% increase. For those curious – and this is a preview of what's to come – the largest increase was attributed to white citizens being killed (312 in 2013 up to 509 in 2015; the comparable numbers for black citizens were 198 and 257).

“Good data is important, you say?”

In general, black civilians represented 25% of those killed by police, but only 12% of the overall population. Many people take this fact to reflect racial bias, but there are other things to consider, perhaps chief among them being that crime rates were substantially higher in black neighborhoods. The reported violent crime rate was 758 per 100,000 in cities where black citizens were killed, compared with 480 in cities where white citizens were killed (the murder rates were 11.2 and 4.6, respectively). Thus, to the extent that police are only responding to criminal activity and not race, we should expect a greater representation of the black population relative to the overall population (just as we should expect more males than females to be shot, and more young people than older ones).
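To make that base-rate logic concrete, here's a toy back-of-the-envelope calculation using deliberately made-up numbers (not figures from the paper): under a purely race-neutral model in which lethal encounters scale with each group's per-capita rate of violent encounters, a group's expected share of police killings can exceed its share of the population without any bias entering the picture.

```python
# Toy base-rate calculation with hypothetical numbers, purely illustrative.
# Under a race-neutral model, each group's expected share of lethal encounters is
# proportional to (population share) x (per-capita rate of violent encounters).
population_share = {"group_a": 0.12, "group_b": 0.88}
encounter_rate   = {"group_a": 8.0,  "group_b": 4.0}   # hypothetical per-capita rates

weights = {g: population_share[g] * encounter_rate[g] for g in population_share}
total = sum(weights.values())
expected_share = {g: round(w / total, 3) for g, w in weights.items()}

print(expected_share)
# group_a comes out around 21% of expected encounters despite being 12% of the
# population, even though no group-specific bias is built into the model.
```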

Turning to the matter of whether the race of the officer mattered, data was available for 904 cases (whereas the race of all those who were killed was known). When that information was entered into a number of regressions predicting the odds of the officer killing a black suspect, it was actually the case that black officers were quite a bit more likely than white officers to have killed a black suspect across the models (consistent with other data I've talked about before). It should be noted at this point, however, that for 67% of the cases the race of the officers was unknown, whereas only 2% of the shootings for which race is known involved a black officer. As the comparison with the FBI and CDC counts mentioned earlier highlighted, missing data can be a big deal; perhaps black officers are actually less likely to have shot black suspects and we just can't see it here. Since the killings of black citizens by officers of unknown race did not differ from those by white officers, however, it seems unlikely that white officers would end up being unusually likely to shoot black suspects. Moreover, the racial composition of the police force was unrelated to those killings.

A number of other interesting findings cropped up as well. First, there was no effect of body cameras on police killings. This might suggest that when officers do kill someone – given the extremity and possible consequences of the action – it is something they tend to undertake earnestly out of fear for their lives. Consistent with that idea, the greater the number of officers on the scene, the less likely it was that police killed anyone (about a 14-18% decline per additional officer present). Further, white female officers (though their numbers were low in the data) were also quite a bit more likely to shoot unarmed citizens (79% more), likely as a byproduct of their reduced capability to prevail in a physical conflict during which their weapon might be taken or they could be killed. To the extent these shootings are being driven by legitimate fears on the part of the officers, all this data would appear to fit together consistently.

“Unarmed” does not always equal “Not Dangerous”

In sum, there doesn't appear to be particularly strong empirical evidence that white officers are killing black citizens at higher rates than black officers; quite the opposite, in fact. While some might view such information as a welcome relief, those who have wed themselves to the idea that black populations are being targeted for lethal violence by police will likely shrug this data off. It will almost always be possible for someone seeking to find racism to push their expectations into the realm of empirical unfalsifiability. For example, given the current data showing a lack of bias against black civilians by white officers, the racism hypothesis could be pushed one step back to some population-level bias whereby all officers, even black ones, are impacted by anti-black prejudice in their judgments (regardless of the department's racial makeup, the presence of cameras, or any other such factor). It is also entirely possible that any racial biases don't show up in the patterns of police killings, but might well show up in other patterns of less-lethal aggression or harassment. After all, there are very real consequences for killing a person – even when the killings are deemed justified and lawful – and many people would rather not subject themselves to such complications. Whatever the case, white officers do not appear unusually likely to shoot black suspects.

References: Lott, J. & Moody, C. (2016). Do white officers unfairly target black suspects? (November 15, 2016). Available at SSRN: https://ssrn.com/abstract=2870189