Learning About Privilege Makes Liberals Look More Conservative

 

Not a good representation of poverty when people usually don’t use cash anymore

Why are poor people poor? Your answer to that question determines a lot about your feelings towards them and how you respond to them. If you think people are poor because they’re good social investments who happen to be experiencing a patch of bad luck outside of their control – in other words, that their poverty isn’t really their fault – your interest in seeing that they receive assistance increases (http://popsych.org/who-deserves-healthcare-and-unemployment-benefits/). On the other hand, if people are perceived to be poor because of undesirable personality traits – like laziness – and their poverty is their own fault, then people are less interested in providing them with assistance (http://popsych.org/socially-strategic-welfare/). This makes sense in light of the prospect that people don’t help others simply because those other people need help. A psychological mechanism that encouraged its bearer to aid others at a personal cost wouldn’t do much to help its bearer succeed on the evolutionary stage unless those personal costs were later recouped: you help them at time A so that you get something in return at time B that outweighs the initial helping costs. If you’re helping someone who needs help because they’re lazy, it’s less likely they’re going to suddenly find the motivation to help you later than if you had helped someone who’s just unlucky.

God helps those who can help him later

The extent to which people differ in their desire to help the poor, then, likely varies with the attributions they make for poverty: if people largely believe poverty isn’t the fault of the poor, they will favor helping the poor more broadly, while those who believe poverty is the fault of the poor will disfavor helping them in general. This divide should go a long way towards explaining why, in the US, liberals tend to favor social programs for helping the poor more than conservatives do. Indeed, that precise pattern popped up in a recent paper by Cooley et al (2019) when participants read the following description of a made-up poor person:

Kevin, a[n]…American living in New York City, would say his life has been defined by poverty. As a child, Kevin was raised by a single mom who struggled to balance several part-time jobs simply to pay the bills. Most winters, they had no heat; and, it was a daily question whether they would have enough to eat. In late 2016, Kevin began to receive welfare assistance. Since then, he has not applied for any jobs and instead has cycled between jail cells, shelters, emergency rooms and the streets. Although Kevin would like to be financially independent, he doesn’t feel he has the skills or ability to obtain a well-paying job.

The results showed that as political liberalism increased, participants tended both to report more sympathy for Kevin and to make more external attributions for the causes of his poverty. Liberals were more interested in helping because they blamed Kevin less for his circumstances.

If you fancy yourself a liberal, take this time to pat yourself on the back for caring about Kevin’s plight. Good for you. If you fancy yourself a conservative, you can also take this time to pat yourself on the back for your realism about why Kevin is poor.

Now if that were all there was to this study, there might not be too much to talk about. However, the focus of this paper was more specific than general attitudes about poverty and political affiliation. Instead, the authors also looked at Kevin’s race: what happens when Kevin is described as White or Black in that opening sentence? As it turns out…nothing. While both liberals and conservatives were modestly more sympathetic towards a Black Kevin’s plight, these differences weren’t significant. Race didn’t seem to enter the equation when people were looking at this specific example of a poor person. That should be a good thing, I would think; people were judging Kevin as Kevin, rather than as a proxy for his entire race.

Again, if that were all there was to this study, there might still not be much to talk about. It’s the final twist of the experiment that brings it all home: how do people respond to a White/Black Kevin after reading a bit about white privilege?

See how everyone’s angry here? That’s called foreshadowing

The experiment (number 2 in the paper) went as follows: 650 participants began by reading a story. This story was either about the importance of a daily routine (the neutral control condition) or about white privilege (the experimental condition). Specifically, the privilege story read:

In America, there is a long history of White people having more power than other racial groups (e.g., Black people). Although many people think of racial inequality as decreasing, there are still privileges that are experienced by White Americans that are not true for other racial groups. For example, in her essay “White Privilege: Unpacking the Invisible Knapsack” Peggy McIntosh, PhD, lists different privileges that she experiences as a White person living in America.

Four specific examples were provided, including being able to be in the company of people of your own race most of the time, seeing your race widely and positively represented in the media, not being asked to speak on behalf of your racial group, and not having your race work against you if you need legal or medical help.

Once participants had read that story, they were then presented with the Kevin story from above and asked how much sympathy they felt for him and how much they blamed him for his situation, before finally completing some demographic measures. This allowed the authors to probe what effect this brief discussion of white privilege had on people’s responses.

As it turned out, the conservatives didn’t seem to take much away from that brief lesson on privilege: on a scale of 0 (strongly disagree) to 100 (strongly agree), conservatives reported an equal amount of sympathy for Kevin whether he was White (M = 59) or Black (M = 61). As these numbers closely mirrored the values reported for conservatives in the control condition, we can conclude that conservatives didn’t seem to care about the privilege talk.

The liberals were listening, on the other hand. In the experimental condition, they reported more sympathy for the Black Kevin (M = 76) than the White one (M = 60). So liberals and conservatives seemed to “agree” about how much sympathy White Kevin deserved, while liberals cared more about Black Kevin. Does that mean the privilege lesson made liberals care about Black Kevin more? Not at all. It’s the control condition that makes the most interesting finding clear: when they were simply reading about routines, liberals cared about as much about White Kevin (M = 71) as Black Kevin (M = 74). Comparing the numbers from the control and experimental groups, the following pattern of results emerges: when not thinking about white privilege, liberals cared more about poor people than conservatives did, and neither group seemed to care about race. When white privilege was added to the equation, the only difference that emerged was that liberals started to care less about White Kevin and blamed him more for his problems, without showing any increase in care for Black Kevin.
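To make that pattern of means a little easier to see at a glance, here is a minimal sketch in Python that lays out the liberal participants’ sympathy means quoted above and computes the shift the privilege passage produced for each version of Kevin. The numbers are just the values reported in the text; the variable names are mine.

```python
# Liberals' mean sympathy ratings (0-100 scale) reported above, by condition
# and by the race Kevin was described as having.
liberal_means = {
    ("control", "white"): 71,
    ("control", "black"): 74,
    ("privilege", "white"): 60,
    ("privilege", "black"): 76,
}

# How much did reading about white privilege shift sympathy for each target?
for race in ("white", "black"):
    shift = liberal_means[("privilege", race)] - liberal_means[("control", race)]
    print(f"Shift in liberals' sympathy for {race} Kevin: {shift:+d}")

# Output: white Kevin drops by 11 points, black Kevin barely moves (+2).
# The effect of the passage is a loss of sympathy for the white target,
# not a gain in sympathy for the black one.
```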

“At least I didn’t help that poor white guy, which makes me a good person”

In sum, it looked like briefly reading about white privilege made liberals more conservative in their responses towards poor white people. It was a purely negative effect, with no apparent benefit for poor black people. Conservatives, on the other hand, remained consistent, suggesting the privilege talk wasn’t doing any good there either. While it is only speculation, it is not hard to imagine how these effects might carry into other domains – like gender – or how they might become more extreme when the discussion of white privilege isn’t limited to a short passage but instead begins to take up increasingly larger portions of social discourse. If this results in less care for certain groups without a corresponding increase in care for others, it should be a cause for concern to anyone interested in seeing poverty addressed effectively. It might also be a concern if your interest is in treating people as individuals, instead of as proxies for an entire group of people.

References: Cooley, E., Brown-Iannuzzi, J., Lei, R., & Cipolli, W. (2019). Complex Intersections of Race and Class: Among Social Liberals, Learning About White Privilege Reduces Sympathy, Increases Blame, and Decreases External Attributions for White People Struggling With Poverty. Journal of Experimental Psychology: General. http://dx.doi.org/10.1037/xge0000605

 

More About Race And Police Violence

A couple of months back, I offered some thoughts on police violence. The most important take-home message from that piece was that you need to be clear about what your expectations about the world are – as well as why they are that way – before you make claims of discrimination about population-level data. If, for instance, you believe that men and women should be approximately equally likely to be killed by police – as both groups are approximately equal in the US population – then the information that approximately 95% of civilians killed by police are male might look odd to you. It means that some factors beyond simple representation in the population are responsible for determining who is likely to get shot and killed. Crucially, that gap cannot be automatically chalked up to any other particular factor by default. Just because men are overwhelmingly more likely to be killed by police, that assuredly does not mean police are biased against men and have an interest in killing them simply because of their sex.

“You can tell they just hate men; it’s so obvious”

Today, I wanted to continue on the theme from my last post and ask what patterns of data we ought to expect with respect to race and police killings of civilians. If we wanted to test the hypothesis that police killings tend to be racially motivated (i.e., driven by anti-black prejudice), I would think we should expect a different pattern of data than under the hypothesis that such killings are driven by race-neutral practices (e.g., cases in which the police are defending against perceived lethal threats, regardless of race). If police killings are driven by anti-black prejudice, we might propose the following hypothesis: all else being equal, we ought to expect white officers to kill black civilians in greater numbers than black officers do. This expectation could be reasonably driven by the prospect that members of a group are less likely to be biased against their in-group than out-group members are, on average (in other words, the non-fictional Clayton Bigsbys and Uncle Ruckuses of the world ought to be rare).

If there were good evidence in favor of the racially-motivated hypothesis for police killings, there would be real implications for the trust people – especially minority groups – should put in the police, as well as for particular social reforms. By contrast, if the evidence is more consistent with the race-neutrality hypothesis, then a continued emphasis on the importance of race could prove a red herring, distracting people from the other causes of police violence and preventing more effective interventions from being discussed. The issue is basically analogous to a doctor trying to treat an infection with a correct or incorrect diagnosis. It is unfortunate (and rather strange, frankly), then, that good data on police killings is apparently difficult to come by. One would think this is the kind of thing people would have collected more information on, but apparently that’s not exactly the case. Thankfully, we now have some fresh data on the topic that was just published by Lott & Moody (2016).

The authors compiled their own data set of police killings from 2013 to 2015 by digging through Lexis/Nexis, Google, Google Alerts, and a number of other online databases, as well as directly contacting police departments. In total, they were able to compile information on 2,700 police killings. Compared with the FBI’s information, the authors found about 1,300 more; they found 741 more than the CDC and 18 more than the Washington Post. Importantly, the authors were also able to collect a number of other pieces of information not consistently included in the other sources, including the number of officers on the scene and their age, sex, and race, among a number of other factors. Demonstrating the importance of having good data: whereas the FBI had been reporting a 6% decrease in police killings over that period, the current data actually found a 29% increase. For those curious – and this is a preview of what’s to come – the largest increase was attributed to white citizens being killed (312 in 2013 up to 509 in 2015; the comparable numbers for black citizens were 198 and 257).

“Good data is important, you say?”

In general, black civilians represented 25% of those killed by police, but only 12% of the overall population. Many people take this fact to reflect racial bias, but there are other things to consider, perhaps chief among them that crime rates were substantially higher in black neighborhoods. The reported violent crime rate was 758 per 100,000 in cities where black citizens were killed, compared with 480 in cities where white citizens were killed (the murder rates were 11.2 and 4.6, respectively). Thus, to the extent that police are only responding to criminal activity and not race, we should expect a greater representation of the black population relative to the overall population (just as we should expect more males than females to be shot, and more young people than older ones).

Turning to the matter of whether the race of the officer mattered, that data was available for 904 cases (whereas the race of all those who were killed was known). When that information was entered into a number of regressions predicting the odds of the officer killing a black suspect, black officers were actually quite a bit more likely than white officers to have killed a black suspect in all cases (consistent with other data I’ve talked about before). It should be noted at this point, however, that for 67% of the cases the race of the officers was unknown, whereas only 2% of the shootings for which race is known involved a black officer. As the gaps between the FBI, CDC, and the authors’ own data highlighted earlier, missing information like this can be a big deal; perhaps black officers are actually less likely to have shot black suspects but we just can’t see it here. Since officers of unknown race did not differ from white officers in their killings of black citizens, however, it seems unlikely that white officers would end up being unusually likely to shoot black suspects. Moreover, the racial composition of the police force was unrelated to those killings.

A number of other interesting findings cropped up as well. First, there was no effect of body cameras on police killings. This might suggest that when officers do kill someone – given the extremity and possible consequences of the action – it is something they tend to undertake earnestly, out of fear for their lives. Consistent with that idea, the greater the number of officers on the scene, the less likely the police were to kill anyone (about a 14-18% decline per additional officer present). Further, white female officers (though their numbers were low in the data) were also quite a bit more likely to shoot unarmed citizens (79% more), likely as a byproduct of their reduced ability to prevail in a physical conflict during which their weapon might be taken or they could be killed. To the extent these shootings are being driven by legitimate fears on the part of the officers, all this data would appear to fit together consistently.

“Unarmed” does not always equal “Not Dangerous”

In sum, there doesn’t appear to be particularly strong empirical evidence that white officers are killing black citizens at higher rates than black officers; quite the opposite, in fact. While such information might be viewed as a welcome relief, those who have wedded themselves to the idea that black populations are being targeted for lethal violence by police will likely shrug this data off. It will almost always be possible for someone seeking to find racism to manipulate their expectations into the world of empirical unfalsifiability. For example, given the current data showing a lack of bias against black civilians by white officers, the racism hypothesis could be pushed one step back to some population-level bias whereby all officers, even black ones, are impacted by anti-black prejudice in their judgments (regardless of the department’s racial makeup, the presence of cameras, or any other such factor). It is also entirely possible that any racial biases don’t show up in the patterns of police killings, but might well show up in other patterns of less-lethal aggression or harassment. After all, there are very real consequences for killing a person – even when the killings are deemed justified and lawful – and many people would rather not subject themselves to such complications. Whatever the case, white officers do not appear unusually likely to shoot black suspects.

References: Lott, J. & Moody, C. (2016). Do white officers unfairly target black suspects? (November 15, 2016). Available at SSRN: https://ssrn.com/abstract=2870189

When It’s Not About Race Per Se

We can use facts about human evolutionary history to understand the shape of our minds, and people’s reactions to race are no exception. As I have discussed before, it is unlikely that ancestral human populations ever traveled far enough, consistently enough, throughout our history as a species to have encountered members of other races with any regularity. Different races, in other words, were unlikely to be a persistent feature of our evolutionary history. As such, it seems correspondingly unlikely that human minds contain any modules that function to attend to race per se. Yet we do seem to automatically attend to race on a cognitive level (just as we do with sex and age), so what’s going on here? The best hypothesis I’ve seen yet is that people aren’t paying attention to race itself so much as they are using it as a proxy for something else that likely was recurrently relevant during our history: group membership and social coalitions (Kurzban, Tooby, & Cosmides, 2001). Indeed, when people are provided with alternate visual cues to group membership – such as different colored shirts – the automaticity with which race is attended to appears to be diminished, even to the point of being erased entirely at times.

Bright colors; more relevant than race at times

If people attend to race as a byproduct of our interest in social coalitions, then there are implications here for understanding racial biases as well. Specifically, it would seem unlikely for widespread racial biases to exist simply because of superficial differences like skin color or facial features; instead, it seems more likely that racial biases are a product of other considerations, such as the possibility that different groups – racial or otherwise – simply hold different values as social associates to others. For instance, if the best interests of group X are opposed to those of group Y, then we might expect those groups to hold negative opinions of each other on the whole, since the success of one appears to handicap the success of the other (for an easy example of this, think about how more monogamous individuals tend to come into conflict with promiscuous ones). Importantly, to the extent that those best interests just so happen to correlate with race, people might mistake a negative bias due to varying social values or best interests for one due to race.

In case that sounds a bit too abstract, here’s an example to make it immediately understandable: imagine an insurance company that is trying to set its premiums only in accordance with risk. If someone lives in an area at a high risk of some negative outcome (like flooding or robbery), it makes sense for the insurance company to set a higher premium for them, as there’s a greater chance they will need to pay out; conversely, those in low-risk areas can pay reduced premiums for the same reason. In general, people have no problem with this idea of discrimination: it is morally acceptable to charge different rates for insurance based on risk factors. However, if that high-risk area just so happens to be one in which a particular racial group lives, then people might mistake a risk-based policy for a race-based one. In fact, in previous research, certain groups (specifically liberal ones) generally say it is unacceptable for insurance companies to require that those living in high-risk areas pay higher premiums if those areas happen to be predominately black (Tetlock et al, 2000).

Returning to the main idea at hand, previous research in psychology has tended to associate conservatives – but not liberals – with prejudice. However, there has been something of a confounding factor in that literature (which might be expected, given that academics in psychology are overwhelmingly liberal): specifically, much of that literature on prejudice asks about attitudes towards groups whose values tend to lean towards the liberal side of the political spectrum, like homosexual, immigrant, and black populations (groups that might tend to support things like affirmative action, which conservative groups would tend to oppose). When that confound is present, it’s not terribly surprising that conservatives would look more prejudiced, but that prejudice might ultimately have little to do with the target’s race or sexual orientation per se. More specifically, if animosity between different racial groups is due primarily to a factor like race itself, then you might expect those negative feelings to persist even in the face of compatible values. That is, if a white person happens to not like black people because they are black, then the views of a particular black person shouldn’t be liable to change those racist sentiments too much. However, if those negative attitudes are instead more of a product of a perceived conflict of values, then altering those political or social values should dampen or remove the effects of race altogether.

Shaving the mustache is probably a good place to start

This idea was tested by Chambers et al (2012) over the course of three studies. The first of these involved 170 Mturk participants who indicated their own ideological position (strongly liberal to strongly conservative, on a 5-point scale), their impressions of 34 different groups (in terms of whether they’re usually liberal or conservative on the same scale, as well as how much they liked the target group), and a few other measures related to the prejudice construct, like system justification and modern racism. As it turns out, both liberals and conservatives tended to agree with one another about how liberal or conservative the target groups tended to be (r = .97), so their ratings were averaged. Importantly, when the target group in question tended to be liberal (such as feminists or atheists), liberals tended to have higher favorability ratings of them (M = 3.48) than did conservatives (M = 2.57; d = 1.23); conversely, when the target group was perceived as conservative (such as business people or the elderly), liberals tended to have lower favorability ratings of them (M = 2.99) than conservatives did (M = 3.86; d = 1.22). In short, liberals tended to feel positive about liberals, and conservatives tended to feel positive about conservatives. The more extreme the perceived political differences of the target were, the larger these biases were (r = .84). Further, when group membership was voluntary, the biases were larger than when it was involuntary (e.g., as a group, “feminists” generated more bias from liberals and conservatives than “women” did).

Since that was all correlational, studies 2 and 3 took a more experimental approach. Here, participants were exposed to a target whose race (White/Black) and positions (conservative or liberal) were manipulated across six different issues (welfare, affirmative action, wealth redistribution, abortion, gun control, and the Iraq war). In study 2 this was done on a within-subjects basis with 67 participants, and in study 3 it was done between-subjects with 152 participants. In both cases, however, the results were similar: in general, while the target’s attitudes mattered when it came to how much the participants liked them, the target’s race did not. Liberals didn’t like black targets who disagreed with them any more than conservatives did. Conservatives happened to like the targets who expressed conservative views more, whereas liberals tended to like targets who expressed liberal views more. The participants had also provided scores on measures of system justification, modern racism, and attitudes towards blacks. Even when these factors were controlled for, however, the pattern of results remained: people tended to react favorably towards those who shared their views and unfavorably towards those who did not. The race of the person with those views seemed beside the point for both liberals and conservatives. Not to hammer the point home too much, but perceived ideological agreement – not race – was doing the metaphorical lifting here.

Now perhaps these results would have looked different if the samples in question were comprised of people who held more extreme and explicit racist views; the type of people who wouldn’t want to live next to someone of a different race. While that’s possible, there are a few points to make about that suggestion. First, it’s becoming increasingly difficult to find people who hold such racist or sexist views, despite certain rhetoric to the contrary; that’s the reason researchers ask about “symbolic” or “modern” or “implicit” racism, rather than just racism. Such openly-racist individuals are clearly the exceptions, rather than the rule. This brings me to the second point, which is that, even if biases did look different among the hardcore racists (we don’t know if they do), for more average people, like the kind in these studies, there doesn’t appear to be a widespread problem with race per se; at least not if the current data have any bearing on the matter. Instead, it seems possible that people might be inferring a racial motivation where it doesn’t exist because of correlations with race (just like in our insurance example).

Pictured: unusual people; not everyone you disagree with

For some, the reaction to this finding might be to say that it doesn’t matter. After all, we want to reduce racism, so being incredibly vigilant for it should ensure that we catch it where it exists, rather than miss it or make it seem permissible. Now that’s likely true enough, but there are other considerations to add into that equation. One of them is that by reducing your type-two errors (failing to see racism where it exists) you increase your type-one errors (seeing racism where there is none). As long as accusations of being a racist are tied to social condemnation (not praise; a fact which alone ought to tell you something), you will be harming people by overperceiving the issue. Moreover, if you perceive racism where it doesn’t exist too often, you will end up with people who don’t take your claims of racism seriously anymore. Another point to make is that if you’re actually serious about addressing a social problem you see, accurately understanding its causes will go a long way. That is to say, time and energy invested in interventions to reduce racism is time not spent trying to address other problems. If you have misdiagnosed the issue you seek to tackle as being grounded in race, then your efforts to address it will be less successful than they otherwise could be, not unlike a doctor prescribing the wrong medication to treat an infection.

References: Chambers, J., Schlenker, B., & Collisson, B. (2012). Ideology and prejudice: The role of value conflicts. Psychological Science, 24, 140-149.

Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. PNAS, 98, 15387-15392.

Tetlock, P., Kristel, O., Elson, S., Green, M., & Lerner, J. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78(5), 853-870. DOI: 10.1037//0022-3514.78.5.853

Musings About Police Violence

I was going to write about something else today (the finding from a meta-analysis that artificial surveillance cues do not appear to appreciably increase generosity; the effects fail to reliably replicate), but I decided to switch topics up to something more topical: police violence. My goal today is not to provide answers to this on-going public debate – I certainly don’t know enough about the topic to consider myself an expert – but rather to try and add some clarity to certain features of the discussions surrounding the matter, and hopefully help people think about it in somewhat unusual ways. If you expect me to take a specific stance on the issue, be that one that agrees or disagrees with your own, I’m going to disappoint you. That alone may upset some people who take anything other than definite agreement as a sign of aggression against them, but there isn’t much to do about that. That said, the discussion about police violence itself is a large and complex one, the scope of which far exceeds the length constraints of my usual posts. Accordingly, I wanted to limit my thoughts on the matter to two main domains: important questions worth answering, and addressing the matter of why many people find the “Black Lives Matter” hashtag needlessly divisive.

Which I’m sure will receive a warm, measured response

First, let’s jump into the matter of important questions. One of the questions I’ve never seen explicitly raised in the context of these discussions – let alone answered – is the following: How many people should we expect to get killed by police each year? There is a gut response that many would no doubt have to that question: zero. Surely someone getting killed is a tragedy that we should seek to avoid at all times, regardless of the situation; at best, it’s a regrettable state of affairs that sometimes occurs because the alternative is worse. While zero might be the ideal world outcome, this question is asking more about the world that we find ourselves in now. Even if you don’t particularly like the expectation that police will kill people from time to time, we need to have some expectation of just how often it will happen to put the violence in context. These killings, of course, include a variety of scenarios: there are those in which the police justifiably kill someone (usually in defense of themselves or others), those cases where the police mistakenly kill someone (usually when an error of judgment occurs regarding the need for defense, such as when someone has a toy gun), and those cases where police maliciously kill someone (the killing is aggressive, rather than defensive, in nature). How are we to go about generating these expectations?

One popular method seems to be comparing police shootings cross-nationally. The picture that results from such analyses appears to suggest that US police shoot people much more frequently than police from other modern countries. For instance, The Guardian claims that Canadian police shoot and kill about 25 people a year, compared with approximately 1,000 such shootings in the US in 2015. Assuming those numbers are correct, once we correct for population size (the US is about ten times more populated than Canada), we can see that US police shoot and kill about four times as many people per capita. That sure seems like a lot, probably because it is a lot. We want to do more than note that there is a difference, however; we want to see whether that difference violates our expectations, and to do that, we need to be clear about how those expectations were generated. If, for example, police in the US face threatening situations more often than Canadian police, this is a relevant piece of information.

To begin engaging with that idea, we might consider how many police die each year in the line of duty, cross-nationally as well. In Canada, the number for 2015 looks to be three; adjusting for population size again, we would generate an expectation of 30 US police officer deaths if all else were equal. All else is apparently not equal, however, as the actual number for 2015 in the US is about 130. Not only are US police killing four times as often as their Canadian counterparts, then, but they’re also dying at roughly four times the expected rate as well. That said, those numbers include factors other than homicides, and so that too should be taken into account when generating our expectations (in Canada, the number of police shot was 2 in 2015, compared to 40 in the US, which is still twice as high as one would expect from population size alone; there are also other methods of killing police, such as the 50 US police killed by bombs or cars, compared to 0 for Canada). Given the prevalence of firearm ownership in the US, it might not be too surprising that the rates of violence between police and citizens – as well as between citizens and other citizens – look substantially different than in other countries. There are other facts which might adjust our expectations up or down. For instance, while the US has 10 times the population of Canada, the number of police per 100,000 people (376) is different than that of Canada (202). How we should adjust the numbers to make a comparison based on population differences, then, is a matter worth thinking about (should we expect the ratio of police officers to citizens per se to influence the number of them that are shot, or is population the better metric?). Also worth mentioning is that the general homicide rate per 100,000 people is quite a bit higher in the US (3.9) than in Canada (1.4). While this list of considerations is very clearly not exhaustive, I hope it generates some thoughts regarding the importance of figuring out what our expectations are, as well as why. The numbers of shootings alone are going to be useless without good context.
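For anyone who wants the arithmetic laid out, here is a rough back-of-the-envelope sketch in Python using the 2015 figures quoted above. The numbers are approximations from the text, and the adjustment (a simple 10x population multiplier) is only one of the possible baselines just discussed.

```python
# Approximate 2015 figures quoted above; the US has roughly 10x Canada's population.
US_POP_MULTIPLE = 10

canada = {"civilians_killed_by_police": 25, "officer_deaths": 3, "officers_shot": 2}
us = {"civilians_killed_by_police": 1000, "officer_deaths": 130, "officers_shot": 40}

for key, canadian_count in canada.items():
    expected_if_equal = canadian_count * US_POP_MULTIPLE  # naive population-adjusted expectation
    ratio = us[key] / expected_if_equal
    print(f"{key}: US figure is {ratio:.1f}x the population-adjusted expectation")

# civilians_killed_by_police: ~4.0x, officer_deaths: ~4.3x, officers_shot: ~2.0x.
# The raw counts only become interpretable once you pick and justify a baseline
# (population, number of officers, homicide rates, and so on).
```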

Factor 10: Perceived silliness of uniforms

The second question concerns bias within these shootings in the US. In addition to our expectations for the number of people being killed each year by police, we also want to generate some expectations for the demographics of those who are shot: what should we expect the demographics of those being killed by police to look like? Before we can claim there is a bias in the shooting data, we need to have a sense of what our expectations in that regard are and why they are what they are; only then can we look at whether those expectations are violated.

The obvious benchmark that many people would begin with is the demographics of the US as a whole. We might expect, for instance, that the victims of police violence in the US are 63% white, 12% black, about 50% male, and so on, mirroring the population of the country. Some data I’ve come across suggests that this is not the case, however, with approximately 50% of the victims being white and 26% being black. Now that we know the demographics don’t match up as we’d expect from population alone, we want to know why. One tempting answer that many people fall back on is that police are racially motivated: after all, if black people make up 12% of the population but represent 26% of police killings, this might mean police specifically target black suspects. Then again, males make up about 50% of the population but represent about 96% of police killings. While one could similarly posit that police have a widespread hatred of men and seek to harm them, that seems unlikely. A better explanation for more of the variation is that men are behaving differently than women: less compliant, more aggressive, or something along those lines. After all, the only reasons you’d expect police shootings to match population demographics perfectly would be either if police shot people at random (they don’t) or if police shot people based on some nonrandom factors that did not differ between groups of people (which also seems unlikely).
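To make the benchmark logic concrete, here is a minimal sketch in Python comparing each group’s share of police killings to its share of the population, using the approximate percentages mentioned above.

```python
# Approximate shares quoted above: share of the US population vs. share of
# those killed by police.
share_of_population = {"white": 0.63, "black": 0.12, "male": 0.50}
share_of_killings = {"white": 0.50, "black": 0.26, "male": 0.96}

for group, pop_share in share_of_population.items():
    ratio = share_of_killings[group] / pop_share
    print(f"{group}: killed at {ratio:.1f}x their share of the population")

# white: ~0.8x, black: ~2.2x, male: ~1.9x.
# The naive population benchmark "finds" an even larger disparity for men than
# for black civilians, which is why the benchmark itself (population share vs.
# crime rates vs. police encounters) has to be justified before any claim of
# bias can be made.
```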

One such factor that we might use to adjust our expectations would be crime rates in general; perhaps violent crime in particular, as that class likely generates a greater need for officers to defend themselves. In that respect, men tend to commit much more crime than women, which likely begins to explain why men are also shot by police more often. Along those lines, there are also rather stark differences between racial groups when it comes to involvement in criminal activity: while 12% of the US population is black, approximately 40% of the prison population is, suggesting differences in patterns of offending. While some might claim that prison percentage too is due to racial discrimination against blacks, the arrest records tend to agree with victim reports, suggesting a real differential involvement in criminal activity.

That said, criminal activity per se shouldn’t get one shot by police. When generating our expectations, we also might want to consider factors such as whether people resist arrest or otherwise threaten the officers in some way. In testing theories of racial biases, we would want to consider whether officers of different races are more or less likely to shoot citizens of various demographics (that is, to ask whether, say, black officers are any more or less likely to shoot black civilians than white officers are. I could have sworn I’ve seen data on that before but can’t seem to locate it at this time. What I did find, however, was a case-matched study of NYPD officers reporting that black officers were about three times as likely as white officers at the scene to discharge their weapon, spanning 106 shootings and about 300 officers; Ridgeway, 2016). Again, while this is not a comprehensive list of things to think about, factors like these should help us generate our expectations about what the demographics of police shooting victims should look like, and it is only from there that we can begin to make claims about racial biases in the data.

It’s hard to be surprised at the outcomes sometimes

Regardless of where you settled on your answer to the above expectations, I suspect that many people would nonetheless want to reduce those numbers, if possible. Fewer people getting killed by police is a good thing most of the time. So how do we want to go about seeing that outcome achieved? Some have harnessed the “Black Lives Matter” (BLM) hashtag and suggest that police (and other) violence should be addressed via a focus on, and reductions in, explicit, and presumably implicit, racism (I think; finding an outline of the goals of the movement proves a bit difficult).

One common response to this hashtag has been the notion that BLM is needlessly divisive, suggesting instead that “All Lives Matter” (ALM) would be a more appropriate description. In turn, the reply to ALM from BLM is that the lack of focus on black people is an attempt to turn a blind eye to problems viewed as disproportionately affecting black populations. The ALM idea was recently criticized by the writer Maddox, who compared the ALM expression to a person who, when confronted with the idea of “supporting the troops,” suggests that we should support all people (the latter being a notion that receives quite a bit of support, in fact). This line of argument is not unique to Maddox, of course, and I wanted to address that thought briefly to show why I don’t think it works particularly well here.

First, I would agree that the “support the troops” slogan is met with a much lower degree of resistance than “black lives matter,” at least as far as I’ve seen. So why the differential response? As I see it, the reason this comparison breaks down involves the zero-sum nature of each issue: if you spend $5 to buy a “support the troops” ribbon magnet to attach to your car, that money is usually intended to be designated towards military-related causes. Now, importantly, money that is spent relieving problems in the military domain cannot be spent elsewhere. That $5 cannot be given to military causes and also given to cancer research and also given to teachers and also used to repave roads, and so on. There need to be trade-offs in whom you support in that case. However, if you want to address the problem of police violence against civilians, it seems that tactics which effectively reduce violence against black populations – such as use-of-force training or body cameras – should also be able to reduce violence against non-black populations.

The problems, essentially, have a very high degree of overlap and, in terms of the raw numbers, many more non-black people are killed by police than black ones. If we can alleviate both at the same time with the same methods, focusing on one group seems needless. It is only those killings of civilians that affect black populations (about a quarter of the shootings) and are also driven predominately or wholly by racism (an unknown percentage of that quarter) that could be effectively addressed by a myopic focus on the race of the person being killed per se. I suspect that many people have independently figured that out – consciously or otherwise – and so dislike the specific attention drawn to race. While a focus on race might be useful for virtue signaling, I don’t think it will be very productive in actually reducing police violence.

“Look at how high my horse is!”

To summarize: to talk meaningfully about police violence, we need to articulate our expectations about how much of it we should see, as well as its shape. It makes no sense to talk about how violence is biased against one group or another until those benchmarks have been established (this logic applies to all discussions of bias in data, regardless of topic). None of this is intended to be me telling you how much or what kind of violence to expect; I’m by no means in possession of the necessary expertise. Regardless, if one wants to reduce police violence, inclusive solutions are likely going to be superior to exclusive ones, as a large degree of overlap in causes likely exists between cases, and solving the problems of one group will help solve the problems of another. There is merit to addressing specific problems as well – as that overlap is certainly less than 100% – but in doing so, it is important not to lose sight of the commonalities or to distance those who might otherwise be your allies.

References: Ridgeway, G. (2016). Officer risk factors associated with police shootings: a matched case-control study. Statistics & Public Policy, 3, 1-6.

Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the art of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on engagement rings, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such outrageous behavior is, “why do people apparently waste money like this?” A second, related question that might be asked once we have an answer to the first (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is how we can understand who is most likely to spend money in a conspicuous fashion. Alternatively, this question could be framed by asking what contexts tend to favor conspicuous consumption. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders or, if you’re a bit strange, to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth  instead?

The first question – why do people conspicuously consume – is perhaps the easier question to initially answer, as it’s been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing the tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there was some kind of survival benefit to those long, colorful tail feathers, we would expect that both sexes would develop them; not just the males.

However, it is precisely because these feathers are costly that they are useful signals, since males in relatively poor condition could not shoulder their costs effectively. It takes a healthy, well-developed male to be able to survive and thrive in spite of carrying such a train of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues for such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to simply say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.
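Here is a toy sketch in Python of the costly-signaling logic just described; the payoff and cost numbers are arbitrary assumptions chosen purely for illustration, not values drawn from the biology literature.

```python
# A toy illustration of the handicap/costly-signaling logic described above:
# the signal is only "honest" because low-quality signalers can't afford it.
# All payoff numbers are arbitrary assumptions chosen for illustration.

def net_benefit(quality, signals):
    mating_benefit = 10 if signals else 0                      # attention gained by displaying
    signal_cost = 0 if not signals else (4 if quality == "high" else 12)  # condition-dependent cost
    return mating_benefit - signal_cost

for quality in ("high", "low"):
    should_signal = max((True, False), key=lambda s: net_benefit(quality, s))
    print(f"{quality}-quality male: signaling pays off? {should_signal}")

# Only the high-quality male comes out ahead by displaying (10 - 4 > 0), while
# the low-quality male would lose out (10 - 12 < 0), so observers can treat
# the display as a credible cue of underlying quality.
```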

Costly displays, then, owe their existence to the honesty they impart on a signal. Human consumption patterns should be expected to follow a similar logic: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we would need to begin by thinking about matters such as what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Towards that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups: specifically, the paper examined racial patterns of spending on what were dubbed visible goods: objects which are conspicuous in anonymous interactions and portable, such as jewelry, clothing, and cars. These are goods designed to be luxury items which others will frequently see, as opposed to other, less-visible luxury items, such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al (2008), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than Whites (by about 40% and 25%, respectively), but this difference needs to be viewed with respect to each group’s relative income. After all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated through their reports of their overall spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of their relative spending across a number of different categories emerged. Specifically, it was found that Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size, as well. The average white household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household would spend approximately $9,000 on such purchases. These purchases come at the expense of all other categories as well (which should be expected, as the money has to come from somewhere), meaning that the money spent on visible goods often means less is spent on education, health care, and entertainment.
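The arithmetic behind those dollar figures is straightforward; here is a quick sketch in Python (my own calculation from the percentages above, not the authors’ estimation procedure).

```python
# If a comparably wealthy White household spends about $7,000/year on visible
# goods, a 20-30% premium implies the following range for Black and Hispanic
# households with the same income.
white_visible_spending = 7_000
premium_range = (0.20, 0.30)

low, high = (white_visible_spending * (1 + p) for p in premium_range)
print(f"Implied visible spending: ${low:,.0f} to ${high:,.0f}")
# -> roughly $8,400 to $9,100, consistent with the ~$9,000 figure quoted above.
```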

There are some other interesting findings to mention. One – which I find rather notable, but which the authors don’t seem to spend any time discussing – is that racial differences in consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible consumption gap appears to decline as one goes from single to married. The numbers Charles et al (2008) mention estimate that the average percentage of budgets used on visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that they represent both. Getting older and getting into relationships tended to reduce the racial divide in visible goods consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al (2008) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing, and are interested in signaling that standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than Whites, then people might tend to assume that a given Black person has a lower economic standing. To overcome this assumption, then, Black individuals should be particularly motivated to signal that they do not, in fact, have the lower economic standing more typical of their group. In brief: as the average income of a group drops, those with money should be particularly inclined to signal that they are not as poor as the other people below them in their group.

In support of this idea, Charles et al (2008) further analyzed their data, finding that average spending on visible luxury goods declined in states with higher average incomes, just as it declined among racial groups with higher average incomes. In other words, raising the average income of a racial group within a state tended to strongly reduce the percentage of that group’s consumption that was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first of which being that it’s incomplete as it stands. From my reading, it’s a bit unclear how the explanation accounts for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one is a member of a lower-income group, one’s own income is higher than that group’s average. However, that explanation would not account for the age and marital status patterns I mentioned before without adding other assumptions, nor would it directly explain the benefits which arise from signaling one’s economic status in the first place. Moreover, if I’m understanding the results properly, it wouldn’t directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing the aggregate wealth shouldn’t have much of an impact, as “rich” and “poor” are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer to this question hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, being in command of resources also tends to make one appear to be a more desirable mate as well. A healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your sexual preferences.

However, recognition of that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To place it in an easy example, there’s a big difference between having access to no food and some food; there’s less of a difference between having access to some food and good food; and there’s less of a difference still between good food and great food. The same holds for all manner of other resources. Because the marginal value of resources decreases as overall access to resources increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there’s less value in signaling that one is wealthier than another if that wealth difference isn’t going to amount to the same degree of marginal benefit.
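To put the diminishing-marginal-value point in concrete terms, here is a small illustrative sketch in Python; the logarithmic utility function is just an assumption for illustration (any concave function makes the same point), not something estimated from the paper.

```python
import math

def utility(resources):
    # An illustrative concave utility function: each additional unit of
    # resources is worth less than the one before it.
    return math.log(1 + resources)

# Marginal value of holding one extra unit of resources over one's neighbors,
# at different baseline levels of community wealth.
for baseline in (1, 10, 100):
    marginal_value = utility(baseline + 1) - utility(baseline)
    print(f"Community wealth {baseline:>3}: value of a one-unit advantage = {marginal_value:.3f}")

# The same absolute advantage is worth far less in a wealthy community than in
# a poor one, so advertising that advantage should be less worthwhile there.
```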

So, provided that wealth has a higher marginal value in poorer communities – like Black and Hispanic ones, relative to White ones – we should expect more signaling of it in those contexts. This logic could explain the racial gap in spending patterns. It’s not that people are trying to avoid a negative association with a poor reference group so much as they’re only engaging in signaling to the extent that signaling holds value to others. In other words, it’s not about my signaling to avoid being thought of as poor; it’s about my signaling to demonstrate that I hold high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purposes of attracting a mate, as they already have one. They might engage in such purchases for the purposes of retaining that mate, though such purchases should involve spending money on visible items for other people, rather than for themselves. Further, as people age, their competition in the mating market tends to decline for a number of reasons, such as existing children, an inability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off, again, because the marginal value of sending such signals has surely declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors which might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual’s life history. To the extent that one is following a faster life history strategy – reproducing earlier, taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in such visible consumption, as the marginal value of signaling that you have resources now is higher when the stability of those resources (or your future) is called into question. The current data do not speak to this possibility, however. Additionally, one’s sexual strategy might also be a valuable piece of information, given the links we saw with age and marital status. As these ornaments are predominately used to attract the attention of prospective mates in nonhuman species, it seems likely that individuals with a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly: more attention is important if you’re looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the “signaling so as to not seem as poor as others” hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2008). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.

 

Stereotyping Stereotypes

I’ve attended a number of talks on stereotypes; I’ve read many more papers in which the word was used; I’ve seen still more instances where the term has been used outside of academic settings in discussions or articles. Though I have no data on hand, I would wager that the weight of this academic and non-academic literature leans heavily toward the idea that stereotypes are, by and large, inaccurate. In fact, I would go a bit farther than that: the notion that stereotypes are inaccurate seems to be so common that people often see little need to ensure any checks are put into place to test for their accuracy in the first place. Indeed, one of my major complaints about the talks on stereotypes I’ve attended is just that: speakers rarely mention the possibility that people’s beliefs about other groups happen to, on the whole, match up to reality fairly well in many cases (sometimes they have mentioned this point as an afterthought but, from what I’ve seen, that rarely translates into later going out and testing for accuracy). To use a non-controversial example, I expect that many people believe men are taller than women, on average, because men do, in fact, happen to be taller.

Pictured above: not a perceptual bias or an illusory correlation

This naturally raises the question of how accurate stereotypes – when defined as beliefs about social groups – tend to be. It should go without saying that there will not be a single answer to that question: accuracy is not an either/or matter. If I happen to think it’s about 75 degrees out when the temperature is actually 80, I’m more accurate in my belief than if the temperature were actually 90. Similarly, the degree of that accuracy should be expected to vary with the nature of the stereotype in question; a matter to which I’ll return later. That said, as I mentioned before, quite a bit of the exposure I’ve had to the subject of stereotypes suggests rather strongly and frequently that they’re inaccurate. Much of the writing about stereotypes I’ve encountered focuses on notions like “tearing them down” or “busting myths”, or on how people are unfairly discriminated against because of them; comparatively little of that work has focused on instances in which they’re accurate which, one would think, would represent the first step in attempting to understand them.

According to some research reviewed by Jussim et al (2009), however, that latter point is rather unfortunate, as stereotypes often seem to be quite accurate, at least by the standards set by other research in psychology. In order to test for the accuracy of stereotypes, Jussim et al (2009) report on empirical studies that met two key criteria: first, the research had to compare people’s beliefs about a group to what that group was actually like; that much is a fairly basic requirement. Second, the research had to use an appropriate sample to determine what that group was actually like. For example, if someone was interested in people’s beliefs about some difference between men and women in general, but only tested these beliefs against data from a convenience sample (like men and women attending the local college), this could pose something of a problem to the extent that the convenience sample differs from the reference group that the people holding the stereotype have in mind. If people, by and large, have accurate stereotypes, researchers would never know it if they made use of a non-representative reference group.

Within the realm of racial stereotypes, Jussim et al (2009) summarized the results of 4 papers that met these criteria. The majority of the results fell within what the authors consider the “accurate” range (defined as being 0-10% off from the criterion values) or were near-misses (those between 10-20% off). Indeed, the average correlations between the stereotypes and criterion measures ranged from .53 to .93, which are very high relative to the average correlation uncovered by psychological research. Even the personal stereotypes, while not as high, were appreciably accurate, ranging from .36 to .69. Further, while people weren’t perfectly accurate in their beliefs, those who overestimated differences between racial groups tended to be balanced out by those who underestimated those differences in most instances. Interestingly enough, people’s stereotypes about group differences tended to be a bit more accurate than their within-group stereotypes.
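
To make the accuracy-checking logic concrete, here is a minimal sketch using made-up numbers (not Jussim et al’s data): correlate the average believed group values with the criterion values, then classify each belief by how far it misses, using the same 0-10% and 10-20% bands described above.

```python
# Hypothetical example of the accuracy logic: compare believed group
# values to criterion values, then bin each belief by its discrepancy.
from statistics import correlation  # requires Python 3.10+

criterion = [0.30, 0.55, 0.12, 0.80, 0.45]  # invented "actual" group values
believed  = [0.35, 0.50, 0.27, 0.75, 0.40]  # invented average stereotype

r = correlation(criterion, believed)
labels = []
for est, actual in zip(believed, criterion):
    diff = abs(est - actual)
    labels.append("accurate" if diff <= 0.10 else
                  "near miss" if diff <= 0.20 else "inaccurate")

print(f"stereotype-criterion correlation: r = {r:.2f}")
print(labels)  # mostly 'accurate', one 'near miss' in this toy example
```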

“Ha! Look at all that inaccurate shooting. Didn’t even come close”

The same procedure was used to review research on gender stereotypes as well, yielding 7 papers with larger sample sizes. A similar set of results emerged: the average stereotype was rather accurate, with correlations ranging from .34 to .98, most of which hovered around .7. Individual stereotypes were again less accurate, but most were still heading in the right direction. To put those numbers in perspective, Jussim et al (2009) summarized a meta-analysis examining the average correlation found in psychological research. According to that data, only 24% of social psychology effects represent correlations larger than .3 and a mere 5% exceed a correlation of .5; the corresponding numbers for averaged stereotypes were 100% of the reviewed work meeting the .3 threshold and about 89% of the correlations exceeding the .5 threshold (personal stereotypes: 81% and 36%, respectively).

Now neither Jussim et al (2009) nor I would claim that all stereotypes are accurate (or at least reasonably close); no one I’m aware of has. This brings us to the matter of when we should expect stereotypes to be accurate and when we should expect them to fall short of that point. As an initial note, we should always expect some degree of inaccuracy in stereotypes – indeed, in all beliefs about the world – to the extent that gathering information takes time and improving accuracy is not always worth that investment in the adaptive sense. To use a non-biological example, spending an extra three hours studying to improve one’s grade on a test from a 70 to a 90 might seem worth it, but the same amount of time used to improve from a 90 to a 92 might not. Similarly, if one lacks access to reliable information about the behavior of others in the first place, stereotypes should also tend to be relatively inaccurate. For this reason, Jussim et al (2009) note that cross-cultural stereotypes about national personalities tend to be among the most inaccurate, as people from, say, India might have relatively little exposure to information about people from South Africa, and vice versa.

The second point to make on accuracy is that, to the extent that beliefs guide behavior and that behavior carries costs or benefits, we should expect beliefs to tend toward accuracy (again, regardless of whether they’re about social groups or the world more generally). If you believe, incorrectly, that group A is as likely to assault you as group B (the example that Jussim et al (2009) use involves biker gang members and ballerinas), you’ll either end up avoiding one group more than you need to, not being wary enough around the other, or missing in both directions, all of which involve social and physical costs. One of the only cases in which being wrong might reliably carry benefits is the context in which one’s inaccurate beliefs modify the behavior of other people. In other words, stereotypes can be expected to be inaccurate in the realm of persuasion. Jussim et al (2009) make nods toward this possibility, noting that political stereotypes are among the least accurate ones out there, and that certain stereotypes might have been crafted specifically with the intent of maligning a particular group.

For instance…

While I do suspect that some stereotypes exist specifically to malign a particular group, that possibility does raise another interesting question: namely, why would anyone, let alone large groups of people, be persuaded to accept inaccurate stereotypes? For the same reason that people should prefer accurate information over inaccurate information when guiding their own behaviors, they should also be relatively resistant to adopting stereotypes which are inaccurate, just as they should be when it comes to applying them to individuals when they don’t fit. To the extent that a stereotype is of this sort (inaccurate), then, we should expect that it not be widely held, except in a few particular contexts.

Indeed, Jussim et al (2009) also review evidence suggesting people do not inflexibly make use of stereotypes, preferring individuating information when it’s available: according to the meta-analyses reviewed, the average influence of stereotypes on judgments hangs around r = .1 (which does not, in many instances, have anything to say about the accuracy of the stereotype; just the extent of its effect); by contrast, individuating information had an average effect of about .7 which, again, is much larger than the average psychology effect. Once individuating information is controlled for, stereotypes tend to have next to zero impact on people’s judgments of others. People appear to rely on personal information to a much greater degree than stereotypes, and often jettison ill-fitting stereotypes in favor of personal information. In other words, the knowledge that men tend to be taller than women does not have much of an influence on whether I think a particular woman is taller than a particular man.
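
To illustrate what “controlled for” means in that last point, here is a toy simulation of my own (it does not use or reproduce the reviewed data): when judgments are generated almost entirely from person-specific information, a regression that includes that individuating information shrinks the apparent category effect toward zero.

```python
# Toy illustration of controlling for individuating information.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, n)                 # category cue (e.g., sex)
trait = rng.normal(0, 1, n) + 0.2 * group     # individuating info, weakly tied to category
judgment = 0.7 * trait + rng.normal(0, 1, n)  # judgments track the person, not the category

def coefs(y, predictors):
    """Ordinary least squares; returns [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("category alone:          ", round(coefs(judgment, [group])[1], 3))
print("category + individuating:", round(coefs(judgment, [group, trait])[1], 3))
# The category effect that appears when it is the only predictor largely
# vanishes once the individuating information enters the model.
```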

When should we expect that people will make the greatest use of stereotypes, then? Likely when they have access to the least amount of individuating information. This has been the case in a lot of the previous research on gender bias where very little information is provided about the target individual beyond their sex (see here for an example). In these cases, stereotypes represent an individual doing the best they can with limited information. In some cases, however, people express moral opposition to making use of that limited information, contingent on the group(s) it benefits or disadvantages. It is in such cases that, ironically, stereotypes might be stereotyped as inaccurate (or at least insufficiently accurate) to the greatest degree.

References: Jussim, L., Cain, T., Crawford, J., Harber, K., & Cohen, F. (2009). The unbearable accuracy of stereotypes. In T. Nelson (Ed.), The Handbook of Prejudice, Stereotyping, and Discrimination (pp. 199-227). New York: Psychology Press.

The Implicit Assumptions Test

Let’s say you have a pet cause to which you want to draw attention and support. There are a number of ways you might go about trying to do so, honesty being perhaps the most common initial policy. While your initial campaign is met with a modest level of success, you’d like to grow your brand, so to speak. As you start researching how other causes draw attention to themselves, you notice an obvious trend: big problems tend to get more support than smaller ones: a medical condition affecting 1-in-4 people is a much different draw than one affecting 1-in-10,000. Though you realize it sounds a bit perverse, if you could somehow make your pet problem a much bigger one than it actually is – or at least seem like it is – you would likely attract more attention and funding. There’s only one problem standing in your way: reality. When most people tell you that your problem isn’t much of one, you’re kind of out of luck. Or are you? What if you could convince others that what people are telling you isn’t quite right? Maybe they think your problem isn’t much of one but, if their reports can’t be trusted, now you have more leeway to make claims about the scope of your issue.

You finally get that big fish you always knew you actually caught

This brings us once again to the matter of the Implicit Association Test, or IAT. According to its creators, the IAT “…measures attitudes and beliefs that people may be unwilling or unable to report,” making that jump from “association” to “attitudes” in a timely fashion. This kind of test could serve a valuable end for the fundraiser in the above example, as it could potentially increase the perceived scope of your problem. Not finding enough people who are explicitly racist to make your case that the topic should be getting more attention than it currently is? Well, that could be because racism is, by and large, a socially-undesirable trait to display and, accordingly, many people don’t want to openly say they’re racist even if they hold some racial biases. If you had a test that could plausibly be interpreted as saying that people hold attitudes they explicitly deny, you could talk about how racism is much more common than it seems to be.

That depends on how one interprets the test, though: all the IAT actually measures are very fast reaction times when it comes to pushing buttons. I’ve discussed the IAT on a few occasions: first with regard to what precisely the IAT is (and might not be) measuring and, more recently, with respect to whether IAT-like tests that use response times as measures of racial bias actually predict anything when it comes to real behaviors. The quick version of both of those posts is that we ought to be careful about drawing a connection between measures of reaction time in a lab and racial biases in the real world that cause widespread discrimination. In the case of shooting decisions, for instance, a more realistic task – in which participants went through a simulation holding a gun instead of just pressing buttons at a computer – resulted in the opposite pattern of results from what many IAT tests would predict: participants were actually slower to shoot black suspects and more likely to shoot unarmed white suspects. It’s not enough to just assume that, “of course these different reaction times translate into real-world discrimination”; you need to demonstrate it first.
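
As a rough illustration of what those button presses get condensed into, here is a deliberately simplified sketch of an IAT-style score: the latency difference between the two pairing conditions scaled by its variability. Real scoring procedures also trim trials and penalize errors, and the millisecond values below are invented.

```python
# Simplified IAT-style scoring: everything downstream of this number
# ("implicit attitude", "bias") is interpretation layered on top of a
# reaction-time difference between two button-pressing conditions.
from statistics import mean, stdev

congruent_ms   = [620, 580, 640, 610, 600, 590]  # e.g., stereotype-consistent pairings
incongruent_ms = [700, 720, 680, 710, 690, 705]  # stereotype-inconsistent pairings

pooled_sd = stdev(congruent_ms + incongruent_ms)
score = (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd
print(f"IAT-style score: {score:.2f}")
```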

This brings us to a recent meta-analysis of IAT experiments by Oswald et al (2014) examining how well the IAT did at predicting behaviors, and whether it was substantially better than the explicit measures being used in those experiments. There was, apparently, a previous meta-analysis of IAT research that did find such things – at least for certain, socially-sensitive topics – and this new meta-analysis seems to be a response to the former. Oswald et al (2014) begin by noting that the results of IAT research have been brought out of the lab and into practical applications in law and politics; a matter that would be more than a little concerning if the IAT wasn’t actually measuring what many interpret it to be measuring, such as evidence of discrimination in the real world. They go on to suggest that the previous meta-analysis of IAT effects lacked a degree of analytic and methodological validity that they hope their new analysis would address.

Which is about as close as academic publications come to outright shit-talking

For example, the authors were interested in examining whether various experimental definitions of discrimination were differentially predicted by the IAT and explicit measures, whereas they had previously all been lumped into the same category by the last analysis. Oswald et al (2014) grouped these operationalizations of discrimination into six categories: (1) measured brain activity, which is a rather vague and open-to-interpretation category, (2) response times in other tasks, (3) microbehaviors, like posture or the expression of emotions, (4) interpersonal behavior, like whether one cooperates in a prisoner’s dilemma, (5) person perception (i.e., explicit judgments of others), and (6) political preferences, such as whether one supports policies that benefit certain racial groups or not. Oswald et al (2014) also added in some additional, more recent studies that the previous meta-analysis did not include.

While there is a lot to this paper, I want to skip ahead to a certain set of results. The first of these is that, in most cases, IAT scores correlated very weakly with the discrimination criterion being assessed, averaging a meager correlation of 0.14. To the extent that the IAT is actually measuring implicit attitudes, those attitudes don’t seem to have much of a predictable effect on behavior. The exception to this pattern was the brain activity studies: that correlation was substantially higher (around 0.4). However, as brain activity per se is not a terribly meaningful variable when it comes to its interpretation, whether that tells us anything of interest about discrimination is an open question. Indeed, in the previous post I mentioned, the authors also observed an effect for brain activity, but it did not mean people were biased toward shooting black people; quite the opposite, in fact.

The second finding I would like to mention is that, in most cases, the explicit measures of attitudes toward other races being used by researchers (like this one or this one) were also very weakly correlated with the discrimination criterion being assessed; their average correlation was about the same size as the implicit measures’, at 0.12. Further, this value is apparently substantially below the value achieved by other measures of explicit attitudes, leading the authors to suggest that researchers really ought to think more deeply about which explicit measures they’re using. Indeed, when you’re asking questions about “symbolic racism” or “modern racism”, one might wonder why you’re not just asking about “racism”. The answer, as far as I can tell, is that, proportionately, very few people – and perhaps even fewer undergraduates, the population most often being assessed – actually express openly racist views. If you want to find much racism as a researcher, then, you have to dig deeper and kind of squint a little.

The third finding is that the above two measures – implicit and explicit – really didn’t correlate with each other very well either, averaging only a correlation of 0.14. As Oswald et al (2014) put it:

“These findings collectively indicate, at least for the race domain…that implicit and explicit measures tap into different psychological constructs—none of which may have much influence on behavior…”

In fact, the authors estimate that the implicit and explicit measures collectively accounted for about 2.5% of the variance in discriminatory criterion behaviors concerning race, with each adding about a percentage point or so over and above the other measure. In other words, these effects are small – very small – and do a rather poor job of predicting much of anything.
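
Some quick back-of-the-envelope arithmetic shows where a figure like that comes from: squaring a correlation gives the share of variance it accounts for, and correlations this small square to almost nothing.

```python
# Variance accounted for by each measure, from the correlations above.
r_iat, r_explicit = 0.14, 0.12

print(f"IAT alone:      {r_iat**2:.1%} of variance")       # ~2.0%
print(f"Explicit alone: {r_explicit**2:.1%} of variance")   # ~1.4%
# Because the two measures also barely correlate with each other (~0.14),
# combining them adds only about a percentage point over either alone,
# which is how you end up in the neighborhood of 2.5%.
```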

“Results: Answer was unclear, so we shook the magic ball again”

We’re left with a rather unflattering picture of research in this domain. The explicit measures of racial attitudes don’t seem to do very well at predicting behaviors, perhaps owing to the nature of the questions being asked. For instance, in the symbolic racism scale, the answer one provides to questions like, “How much discrimination against blacks do you feel there is in the United States today, limiting their chances to get ahead?” could have quite a bit to do with matters that have little, if anything, to do with racial prejudice. Sure, certain answers might sound racist if you believe there is an easy answer to that question and anyone who disagrees must be evil and biased, but for those who haven’t already drunk that particular batch of Kool-Aid, some reservations might remain. Using the implicit reaction times also seems to blur the line between actually measuring racist attitudes and measuring many other things, such as whether one holds a stereotype or whether one is merely aware of a stereotype (foregoing the matter of its accuracy for the moment). These reservations appear to be reflected in how poorly both methods do at predicting much of anything.

So why do (some) people like the IAT so much even if it predicts so little? My guess, again, is that a lot of its appeal flows from its ability to provide researchers and laypeople alike with a plausible-sounding story to tell others about how bad a problem is in order to draw more support to their cause. It provides cover for one’s inability to explicitly find what one is looking for – such as many people voicing opinions of racial superiority – and allows a much vaguer measure to stand in for it instead. Since more people fit that vaguer definition, the result is a more intimidating-sounding problem; whether it corresponds to reality can be beside the point if it’s useful.

References: Oswald, F., Blanton, H., Mitchell, G., Jaccard, J., & Tetlock, P. (2014). Predicting racial and ethnic discrimination: A meta-analysis of IAT criterion studies. Journal of Personality & Social Psychology, 105, 171-192.

Why Do People Care About Race?

As I have discussed before, claims about a species’ evolutionary history – while they don’t directly test functional explanations – can be used to inform hypotheses about adaptive function. A good example of this concerns the topic of race, which happens to have been on many people’s minds lately. Along with sex and age, race tends to be encoded by our minds relatively automatically: these are the three primary factors people tend to notice and remember about others immediately. What makes the automatic encoding of race curious is that, prior to the advent of technologies for rapid transportation, our ancestors were unlikely to have consistently traveled far enough in the world to encounter people of other races. If that were the case, then our minds could not possess any adaptations that were selected to attend to race specifically. That doesn’t mean we don’t attend to race (we clearly do), but rather that the attention we pay to it is likely the byproduct of cognitive mechanisms designed to do other things. If, through some functional analysis, we were to uncover what those other things were, this could have some important implications for removing, or at least minimizing, all sorts of nasty racial prejudices.

…in turn eliminating the need to murder others for that skin-suit…

This, of course, raises the question of what the cognitive mechanisms that end up attending to race have been selected to do; what their function is. One plausible candidate explanation, put forth by Kurzban, Tooby, & Cosmides (2001), is that the mechanisms currently attending to race might actually have been designed to attend to social coalitions. Though our ancestors might not have traveled far enough to encounter people of different races, they certainly did travel far enough to encounter members of other groups. Our ancestors also had to successfully manage within-group coalitions; questions concerning who happens to be whose friend and enemy. Knowing the group membership of an individual is a rather important piece of information: it can inform you as to their probability of providing you with benefits or, say, a spear to the chest, among other things. Accordingly, traits that allowed individuals to determine others’ probable group membership, even incidentally, should be attended to, and it just so happens that race gets caught up in that mix in the modern day. That is likely due to shared appearance reflecting probable group membership; just ask any clique of high school children who dress, talk, and act quite similarly to their close friends.

Unlike sex, however, people’s relevant coalitional membership is substantially more dynamic over time. This means that shared physical appearance will not always be a valid cue for determining who is likely to side with whom. In such instances, we should predict that race-based cues will be disregarded in favor of more predictive ones. In simple terms, then, the hypothesis on the table is that (a) race tends to be used by our minds as a proxy for group membership, so (b) when more valid cues to group membership are present, people should pay much less attention to race.

So how does one go about testing such an idea? Kurzban, Tooby, & Cosmides (2001) did so using a memory confusion protocol. In such a design, participants are presented with a number of photos of people, each paired with a sentence the pictured individual is said to have spoken during a conversation about a sporting dispute the group had last year. Following that, participants are given a surprise recall task, during which they are asked to match the sentences to the pictures of the people who said them. The underlying logic is that participants will tend to make a certain pattern of mistakes in their matching: they will confuse individuals with each other more readily to the extent that their mind has placed them in the same group (or, perhaps more accurately, to the extent that their mind has failed to encode differentiating features of the individuals). Framed in terms of race, we might expect that people will mistake a quote attributed to one black person for one attributed to another black person, as they had been mentally grouped together, but will be less likely to mistake that quote for one attributed to a white person. Again, the question of interest here is how our minds might be grouping people: is it done on the basis of race per se, or on the basis of coalitions?
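
The analysis of those mistakes is simple enough to sketch. The data below are invented, but the tally is the whole trick: count whether each misattribution stayed within the true speaker’s category or crossed it, and a surplus of within-category errors marks the dimension the mind was actually using to group people.

```python
# Memory-confusion tally: each error records the true speaker's category
# and the category of the person the statement was misattributed to.
errors = [  # (true speaker's category, guessed speaker's category)
    ("black", "black"), ("white", "white"), ("black", "white"),
    ("white", "white"), ("black", "black"), ("white", "black"),
]

within  = sum(true == guess for true, guess in errors)
between = len(errors) - within
print(f"within-category errors: {within}, between-category errors: {between}")
# Re-run the same tally with coalition labels (e.g., jersey color) in place
# of race to see which dimension dominates the pattern of confusions.
```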

“Yes; it’s Photoshopped. And yes; you’re racist for asking”

In the first experiment, 8 pictures were presented, split evenly between young white and black males. From the verbal statements that accompanied each picture, the men could be classified into one of two coalitions, though participants were not explicitly instructed to attend to that variable. All the men were dressed identically. In this condition, while subjects did appear to pick up on the coalition factor – evidenced by their being somewhat more likely to mistake people who belonged to the same coalition for one another – the size of the race effect was twice as large. In other words, when the only cue to group membership was the statement accompanying each picture, people were more likely to mistake one white man for another than they were to mistake one member of a coalition for another.

In the second experiment, however, participants were given the same pictures, but now there was an additional visual cue to group membership: half of the men were wearing yellow jerseys while the other half wore gray. In this case, the color of the shirt predicted which coalition each man was in, but participants were again not told to pay attention to that explicitly. In this condition, the previous effect reversed: the size of the race effect was only half that of the effect for coalition membership. It seemed that giving people an alternative visual cue to group membership dramatically cut the race effect. In fact, in a follow-up study reported in the paper (using pictures of different men), the race effect disappeared. When provided with alternate visual cues to coalition membership, people seemed to be largely (though not necessarily entirely) disregarding race. This finding demonstrates that racial categorization is not as automatic and strong as it had previously been thought to be.

Importantly, when this experiment was run using sex instead of race (i.e., 4 women and 4 men), the above effects did not replicate. Whether the cues to group membership were only verbal or both verbal and visual, people continued to encode sex automatically and robustly, as evidenced again by their pattern of mistakes. Though white women and black men are both visually distinct from white men, additional visual cues to coalition membership only had an appreciable effect on the latter group, consistent with the notion that the tendency people have to encode race is a byproduct of our coalitional psychology.

“With a little teamwork – black or white – we can all crush our enemies!”

The good news, then, is that people aren’t inherently racist; our evolutionary history wouldn’t allow for it, given how far our ancestors likely traveled (which is to say, not very far). We’re certainly interested in coalitions, and those coalitions are frequently used to benefit our allies at the expense of non-members – that part probably isn’t going away anytime soon – but it has a less morally-sinister tone to it for some reason. It is worth noting that, in the reality outside the lab, coalitions may well (and frequently seem to) form along racial or ethnic lines. Thankfully, as I mentioned initially, coalitions are also fluid things, and it (sometimes) only seems to take a small exposure to other visual indicators of membership to change the way people are viewed by others in that respect. That’s certainly useful information for anyone looking to reduce the impact of race-based categorization.

References: Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. PNAS, 98, 15387-15392.

#HandsUp (Don’t Press The Button)

In general, people tend to think of themselves as not possessing biases or, at the very least, as less susceptible to them than the average person. Roughly paraphrasing from Jason Weeden and Robert Kurzban’s latest book: when it comes to debates, people from both sides tend to agree with the premise that one side of the debate is full of reasonable, dispassionate, objective folk and the other side is full of biased, evil, ignorant ones; the only problem is that people disagree as to which side is which. To quote directly from Mercier & Sperber (2011): “[people in debates] are not trying to form an opinion: They already have one. Their goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness” (p.67). This is a long-winded way of saying that people – you and I included – are biased, and we typically end up seeking to support views we already hold. Now, recently, owing to the events that took place in Ferguson, a case has been made that police officers (as well as people in general) are biased against the black population when it comes to criminal justice. This claim is by no means novel; NWA, for instance, voiced it in 1988 in their hit song “Fuck tha Police”.

 They also have songs about killing people, so there’s that too…

Are the justice system and its representatives, at least here in the US, biased against the black population? I suspect that most of you reading this already have an answer to that question which, to you, likely sounds pretty obvious. Many people have answered that question in the affirmative, as evidenced by such trending Twitter hashtags as #BlackLivesMatter and #CrimingWhileWhite (the former implying that people devalue black lives and the latter implying that people get away with crimes because they’re white, but wouldn’t if they were black). Though I can’t speak to the existence or extent of such biases – or the contexts in which they occur – I did come across some interesting research recently that deals with a related, but narrower, question. This research attempts to answer a question that many people feel they already have the answer to: are police officers (or people in general) quicker to deploy deadly force against black targets, relative to white targets? I suspect many of you anticipate – correctly – that I’m about to tell you that some new research shows people aren’t biased against the black population in that respect. I further suspect that upon hearing that, one of your immediate thoughts will be to figure out why the conclusion must be incorrect.

The first of these papers (James, Vila, & Daratha, 2013) begins by noting that some previous research on the topic (though by no means all) has concluded that a racial bias against blacks exists when it comes to the deployment of deadly force. How did that research come to this conclusion? Experimentally, it would seem, using a method similar to the Implicit Association Test (or IAT): participants come into a lab, sit in front of a computer, and are asked to press a “shoot” button when they see armed targets pop up on screen and a “don’t shoot” button when the target isn’t armed. James, Vila, & Daratha (2013) argue that such a task is, well, fairly artificial and, as I have discussed before, artificial tasks can lead to artificial results. Part of that artificiality is that there is no difference between the two responses in such an experiment: both just involve pushing one button or another. By contrast, actually shooting someone involves unholstering a weapon and pulling a trigger, while not shooting at least does not involve that last step. So shooting is an action and not shooting is an inaction; pressing buttons, however, are both actions, and simple ones at that. Further, sitting at a computer and watching static images pop up on the screen is just a bit less interactive than most police encounters that lead to the use of deadly force. So whether these results concerning people’s biases against blacks translate to anywhere outside the lab is an open question.

Accordingly, what the authors of the current paper did involved what must have been quite the luxurious lab setup. The researchers collected data from around 60 civilians and 40 police and military subjects. During each trial, the subjects were standing in an enclosed shooting range facing a large screen that displayed simulations in which they might or might not have to shoot. Each subject was provided with a modified Glock pistol (that shot lasers instead of bullets), a holster, and instructions on how to use them. The subjects each went through between 10 and 30 simulations that recreated instances where officers had been assaulted or killed; simulations which involved naturalistic filming with paid actors (as opposed to the typical static images). The subjects were supposed to shoot the armed targets in the simulation and avoid shooting unarmed ones. As usual, the race of the targets was varied to be white, black, or Hispanic, as was whether or not the targets were armed.

Across three studies, a clear pattern emerged: the participants were actually slower to shoot the armed black targets, by between 0.7 and 1.35 seconds on average; no difference was found between the white and Hispanic targets. This result held for both the civilians and the police. The pattern of mistakes people made was even more interesting: when they shot unarmed targets, they tended to shoot the unarmed black targets less often than the unarmed white or Hispanic targets; often substantially less. Similarly, subjects were also more likely to fail to shoot an armed black target. To the extent that people were making errors or slowing down, they were doing so in favor of black targets, contrary to what many people shouting things right now would predict.

“That result is threatening my worldview; shoot it!”

As these studies appear to use a more realistic shooting context – relative to sitting at a computer and pressing buttons – they cast some doubt on whether the previous findings, uncovered while subjects were sitting at computer screens, can be generalized to the wider world. Casting further doubt on the validity of the computer-derived results, a second paper by James, Klinger, & Vila (2014) examined the relationship between these subconscious race-based biases and the actual decision to shoot. They did so by reanalyzing some of the data (n = 48) from the previous experiment in which participants had been hooked up to EEGs at the time. The EEG equipment was measuring what the authors call “alpha suppression”. According to their explanation (I’m not a neuroscience expert, so I’m only reporting what they say), the alpha waves being measured by the EEG tend to occur when individuals are relaxed, and reductions of alpha waves are associated with the presence of arousing external stimuli; in this case, the perception of threat. The short version of this study, then, seems to be that reductions in alpha waves equate, in some way, to a greater perception of threat.

The more difficult shooting scenarios resulted in greater alpha suppression than the simpler ones, consistent with a relation to threat level, but, regardless of the scenario difficulty, the race effect remained consistent. The EEG results found that, when faced with a black target, subjects evidenced greater alpha suppression relative to when they were confronting a white or Hispanic target; this result held regardless of whether the target ended up being armed or not. To the extent that these alpha waves measure a threat response on a physiological level, people found the black targets more threatening, but this did not translate into an increased likelihood of shooting them; in fact, it seemed to do the opposite. The authors suggest that this might have something to do with the perception of possible social and legal consequences for harming a member of a historically oppressed racial group.

In other words, people might not be shooting because they’re afraid others will claim the shooting was racially motivated (indeed, if the results had turned out the opposite way, I suspect many people would be making that precise claim, so the worry is hardly unfounded). The authors provide some reasons to think the social costs of shooting might be driving the hesitation, one of which involves this passage from a 1992 interview with a police chief:

“Bouza…. added that in most urban centers in the United States, when a police chief is called “at three in the morning and told, ‘Chief, one of our cops just shot a kid,’ the chief’s first questions are: ‘What color is the cop? What color is the kid?’” “And,” the reporter asked, “if the answer is, ‘The cop is white, the kid is black’?” “He gets dressed,”

“I’m not letting a white on white killing ruin this nap”

Just for some perspective, the subjects in this second study had responded to about 830 scenarios in total. Of those, there were 240 that did not require the use of force. Of those 240, participants accidentally shot a total of 47 times; 46 of those 47 unarmed targets were white (even though around a third of the targets were black). If there was some itchy trigger finger concerning black threats, it wasn’t seen in this study. Another article I came across (but have not fact checked so, you know, caveat there) suggests something similar: that biases against blacks in the criminal justice system don’t appear to exist.
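
As a rough sanity check on those numbers (my own arithmetic, treating the 47 mistaken shots as independent and taking “around a third black” at face value, which glosses over the exact mix of white, black, and Hispanic targets), the odds of such a lopsided split arising from race-blind errors are vanishingly small:

```python
# If errors fell on targets without regard to race, how likely is it that
# 1 or fewer of the 47 mistakenly shot targets would be black?
from math import comb

p_black = 1 / 3  # assumed share of black targets, per the rough figure above
p_at_most_one = sum(comb(47, k) * p_black**k * (1 - p_black)**(47 - k)
                    for k in range(2))
print(f"P(at most 1 black target among 47 race-blind errors) = {p_at_most_one:.1e}")
# An astronomically small probability -- whatever was driving the errors,
# they were not falling evenly across races.
```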

Now the findings I have presented here may, for some reason, be faulty. Perhaps better experiments in the future will provide more concrete evidence concerning racial biases, or the lack thereof. However, if your first reaction to these findings is to assume that something is wrong with them because you know that police target black suspects disproportionately, then I would urge you to consider that, well, maybe some biases are driving your reaction. That’s not to say that others aren’t biased, mind you, or that you’re necessarily wrong; just that you might be more biased than you like to imagine.

References: James, L., Vila, B. & Daratha, K. (2013) Influence of suspect race and ethnicity on decisions to shoot in high fidelity deadly force judgment and decision-making simulations. Journal of Experimental Criminology, 9, 189–212.

 James, L., Klinger, D., & Vila, B. (2014). Racial and ethnic bias in decisions to shoot seen through a stronger lens: Experimental results from high-fidelity laboratory simulations. Journal of Experimental Criminology, 10, 323-340.

 

Up Next On MythBusters: Race And Parenting

Lately, there’s been an article that keeps crossing my field of vision; it’s done so about three or four times in the last week, likely because it was written about fathers and Father’s Day has just come and gone. The article is titled “The Myth of the Absent Black Father”. In it, Tara Culp-Ressler suggests that “hands-on parenting is similar among dads of all races” and, I think, that the idea that any racial differences in the parenting realm might exist is driven by inaccurate racist stereotypes rather than accurate perceptions of reality. There are two points I want to make: one specific to the article itself, and the other about stereotypes and biases as they are spoken about more generally. So let’s start with the myth-busting about parenting across races.

Network TV wasn’t touching this one with a 10-foot pole

The first point I want to make about the article in question is that the title is badly at odds with the data being reported on. The title – The Myth of the Absent Black Father – would seem to strongly suggest that the myth here is that black fathers tend to be absent when it comes to childcare (presumably with respect to other racial groups, rather than in some absolute sense of the word). Now if one wished to label this a myth, one should, presumably, examine data on the percentage of father-present and father-absent homes to demonstrate that rates of absent fathers do not differ substantially by race. What it means to be “present” or “absent” is, of course, a semantic issue that is likely to garner some disagreement. In the interests of maintaining something resembling a precise definition, then, let’s consider matters over which there is likely to be less disagreement, such as, “across different races, are fathers equally likely to be married to the mother of their children?” or, “does the father live in the same household as the child?”.

There exists plenty of data that speaks to those questions. The answer from the data to both is a clear “no; fathers are not equally likely to be living with the mother across races”. According to census data from 2009, for instance, black children were residing in single-mother homes in around 50% of cases, compared to 18% of white children, 8% of Asian children, and 25% of Hispanic children. With respect to births outside of marriage, further data from 2011 found:

…72 percent of all births to black women, 66 percent to American Indian or Alaskan native women, and 53 percent to Hispanic women occurred outside of marriage, compared with 29 percent for white women, and 17 percent for Asian or Pacific Islander women.

In these two cases, then, it seems abundantly clear that, at least relatively speaking, the “myth” of absent black fathers is not a myth at all; it’s a statistical reality (just as, the last time I discussed “myths” about sex differences, most of the “myths” turned out to be true). This would make the title of the article seem more than a little misleading. If the “myth” of the absent black father isn’t referring to whether the father is actually present in the home or not, then what is the article focused on?

The article itself focuses on a report by the CDC which found that, when they are present, fathers tend to report being about equally involved in childcare over the last month, regardless of their race; similar findings emerge for fathers who are absent. In other words, an absent father is an absent father, regardless of race, just as a present father is a present father, regardless of race. There were some slight differences between racial groups, sure, but nothing terribly noteworthy. That said, if one is concerned with the myth of the absent black father, comparing how much work fathers do given that they are present or absent across races seems to miss the mark. Yes, present fathers tend to do more work than absent ones, but the absent ones are disproportionately represented in some groups. That point doesn’t seem to be contested by Tara; instead, she opts to suggest that the reasons many black fathers don’t live with their children come down to social and economic inequalities. Now that explanation may well be true; it may well not be the whole picture, either. The reason(s) this difference exists is likely complicated, as many things related to human social life are. However, even fully explaining the reasons for a discrepancy does not make the discrepancy stop existing, nor does it make it a myth.

But never mind that; your ax won’t grind itself

So the content of the article is a bit of a non-sequitur from the title. The combination of the title and content seemed a bit like me trying to say it’s a myth that it’s cloudy outside because it’s not raining; though the two might be related, they’re not the same thing (and it very clearly is cloudy, in any case…). This brings me to the second, more general point I wanted to discuss: articles like these are common enough to be mundane. It doesn’t take much searching to find people writing about how (typically other) people (who the author disagrees with or dislikes) tend to hold to incorrect stereotypes or have fallen prey to cognitive biases. As Steven Pinker once said, a healthy portion of social psychological research often focuses on:

… endless demonstrations that People are Really Bad at X, which are then “explained” by an ever-lengthening list of Biases, Fallacies, Illusions, Neglects, Blindnesses, and Fundamental Errors, each of which restates the finding that people are really bad at X.

Reading over a lot of the standard psychological literature, one might get the sense that people aren’t terribly good at being right about the world. In fact, one might even get the impression that our brains were designed to be wrong about a number of socially-important things (like how smart, trustworthy, or productive some people are, which might, in turn, affect our decisions about whether they would make good friends or romantic partners). If that were the case, it would pose us with a rather interesting biological mystery.

That’s not to say that being wrong per se is much of a mystery – we lack perfect information and perfect information-processing mechanisms – but rather that it would be strange if people’s brains were designed for such an outcome: if people’s minds were designed to make use of stereotypes as a source of information for decision making, and if those stereotypes are inaccurate, then people should be expected to make worse decisions than if they had not used the stereotype as information in the first place (and, importantly, being wrong tends to carry fitness-relevant consequences). That people continue to make use of these stereotypes (regardless of their race or sex) would require an explanation. Now the most obvious reason for the use of stereotypes would be, as per the example above, that they are not actually wrong. Before wondering why people use bad information to make decisions, it would serve us well to make sure that the information is, well, actually bad (again, not just imperfect, but actually incorrect).

“Bad information! Very bad!”

Unfortunately, as far as I’ve seen, proportionately few projects on topics like biases and stereotypes begin by testing for accuracy. Instead, they seem to begin with their conclusion (which is generally, “people are wrong about quite a number of things related to gender and/or race, and no meaningful differences could possibly exist between these groups, so any differential treatment of said groups must be baseless”) and then go out in search of the confirmatory evidence. That’s not to say that all stereotypes will necessarily be true, of course; just that figuring out whether that’s the case ought to be step one (step two might then involve trying to understand any differences that do emerge in some meaningful way, with the aforementioned knowledge that explaining these differences doesn’t make them disappear). Skipping that first step leads to labeling facts as “myths” or “racist stereotypes”, and that doesn’t get us anywhere we should want to be (though it can get one pretty good publicity, apparently).