Sexism: One More Time With Feeling

For whatever reason, a lot of sexism-related pieces have been crossing my desk lately. It’s not that I particularly mind; writing about these papers is quite engaging, and many people – no matter which side of the issue they tend to find themselves falling on – seem to share a similar perspective when it comes to reading about them (known more colloquially as the Howard Stern Effect). Now, as I’ve said before on several of the occasions I’ve written about them, the interpretations of the research on sexism – or sometimes the research itself – feel rather weak. The main reason I’ve found this research so wanting centers on the rather transparent and socially-relevant persuasive messages that reside in such papers: when people have some vested interest in the outcome of the research – perhaps because it might lend legitimacy to their causes or because it paints a socially-flattering picture of their group – the door opens for research designs and interpretations of data that can get rather selective. Basically, I have a difficult time trusting that truth will fall out of sexism research for the same reason I wouldn’t take a drug company’s report about the safety of its product at face value; there’s just too much on the line socially not to be skeptical.

“50% of the time it worked 100% of the time. Most of the rats didn’t even die!”

Up for consideration today is a paper examining how men and women perceive the quality of sexism research, contingent on its results (Handley et al., 2015). Before getting into the meat of this paper, I want to quote a passage from its introduction to applaud the brilliant tactical move the authors make (and to give you a sense of why I experience a certain degree of distrust concerning sexism research). When discussing how some of the previous research published by one of the authors was greeted with skepticism predominantly by men – at least according to an informal analysis of online comments replying to coverage of it – the authors have this to say:

“…men might find the results reported by Moss-Racusin et al. threatening, because remedying the gender bias in STEM fields could translate into favoring women over men, especially if one takes a zero-sum-gain perspective. Therefore, relative to women, men may devalue such evidence in an unintentional implicit effort to retain their status as the majority group in STEM fields.”

This is just a fantastic passage for a few reasons. First, it subtly affirms the truth of the previous research; after all, if there did not exist a real gender bias, there would be nothing in need of being remedied, so the finding must therefore reflect reality. Second, the passage provides a natural defense against future criticism of their work: anyone who questions the soundness of their research, or their interpretation of the results, is probably just biased against seeing the plainly-obvious truth they have stumbled upon because they’re male and trying to maintain their status in the world. For context, it’s worth noting that I have touched upon the piece in question before, writing, “Off the top of my head, I see nothing glaringly wrong with this study, so I’m fine with accepting the results…”. While I think the study in question seemed fine, I nevertheless questioned how well their results mesh with other findings (I happen to think there are some inconsistencies that would require a rather strange kind of discrimination to be at play in the real world), and I was not overly taken with their interpretation of what they found.

With that context in mind, the three studies in the paper followed the same general method: an abstract of some research was provided to men and women (the first two studies used the abstract from the earlier research by one of the authors; the third used a different one). The subjects were asked to evaluate, on a 1-6 scale, whether they agreed with the authors’ interpretation of the results, whether the research was important, whether the abstract was well written, and what their overall evaluation of the research was. These scores were then averaged into a single measure for each subject. In the third experiment, the abstract itself was modified to suggest either that the research had uncovered a bias favoring men and disfavoring women in STEM fields, or that no bias was found (why no condition existed in which the bias favored women I can’t say, but I think it would have been a nice addition to the paper). Just as with the previous paper, I see nothing glaringly wrong with their methods (beyond that omission), so let’s consider the results.

The first sample comprised 205 MTurk participants and found that men were somewhat less favorable toward the research that found evidence of sexism in STEM fields (M = 4.25) relative to women (M = 4.66). The second sample was made up of 205 academics from an unnamed research university, and the same pattern was observed: overall, male faculty assessed the research somewhat less favorably (M = 4.21) than female faculty (M = 4.65). However, an important interaction emerged: the difference in this second sample was due to male-female differences within STEM fields. Male STEM faculty were substantially less positive about the study (M = 4.02) than their female counterparts (M = 4.80); non-STEM faculty did not differ in this respect, both falling right in between those two points (Ms = 4.55). Now, it is worth mentioning that the difference between the STEM and non-STEM male faculty was statistically significant, but the difference between the female STEM and non-STEM faculty was not. Handley et al. (2015) infer from that result that, “…men in STEM displayed harsher judgments of Moss-Racusin et al.’s research, not that women in STEM exhibited more positive evaluations of it“. This is where I’m going to be sexist and disagree with the authors’ interpretation, as I feel it’s also worth noting that the sample of male STEM faculty (n = 66) was almost twice as large as the female sample (n = 38), which likely contributed to that asymmetry in statistical significance. Descriptively speaking, STEM men were less accepting of the research and STEM women were more accepting of it, relative to the academics for whom this finding would be less immediately relevant.
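To make the sample-size point concrete, here’s a quick back-of-the-envelope calculation. This is a sketch only: the paper’s standard deviations and the non-STEM group sizes aren’t given in the text, so the SD of 1.0 on the 1-6 scale and the non-STEM n of 60 per gender below are my own placeholder assumptions.

```python
# A minimal sketch of the two within-gender contrasts, run from summary
# statistics. The SDs and non-STEM group sizes are ASSUMED values for
# illustration; only the means and STEM ns come from the paper.
from scipy import stats

SD = 1.0          # assumed common standard deviation (1-6 scale)
N_NON_STEM = 60   # assumed non-STEM faculty per gender

# Male faculty: STEM (M = 4.02, n = 66) vs. non-STEM (M = 4.55)
t_m, p_m = stats.ttest_ind_from_stats(4.02, SD, 66, 4.55, SD, N_NON_STEM)

# Female faculty: STEM (M = 4.80, n = 38) vs. non-STEM (M = 4.55)
t_f, p_f = stats.ttest_ind_from_stats(4.80, SD, 38, 4.55, SD, N_NON_STEM)

print(f"Men:   t = {t_m:.2f}, p = {p_m:.3f}")   # larger gap, larger n
print(f"Women: t = {t_f:.2f}, p = {p_f:.3f}")   # smaller gap, half the n
```

Under these made-up-but-plausible numbers, the female contrast carries a noticeably larger standard error (n = 38 rather than 66), which is exactly why a descriptive gap of similar social interest can land on the wrong side of the significance cutoff.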

“The interpretation of this research determines who deserves a raise, so please be honest.”

The third experiment – the one that modified the abstract to report either sexism against women or no sexism – used an MTurk sample of 303 people, rather than faculty. The same basic pattern was found here: when the research reported a bias against women, men were less favorable toward it (M = 3.65) than if it found no bias (M = 3.83); women showed the opposite pattern (Ms = 3.86 and 3.59, respectively). So – taken together – there’s some neat evidence here that the relevance of a research finding affects how that finding is perceived. Those who have something to gain from the research finding sexism (women, particularly those in STEM) tended to be slightly more favorable toward research that found it, whereas those who had something to lose (men, particularly those in STEM) tended to be slightly less favorable toward research finding sexism. This isn’t exactly new – research on the idea dates back at least two decades – but it fits well with what we know about how motivated reasoning works.
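The crossover is easier to see if you difference the reported means within each gender (pure arithmetic on the means quoted above; no inferential claim intended):

```python
# Reported means from the third experiment (1-6 evaluation scale)
men = {"bias_found": 3.65, "no_bias": 3.83}
women = {"bias_found": 3.86, "no_bias": 3.59}

# Within-gender shift when the abstract reports a bias against women
men_shift = men["bias_found"] - men["no_bias"]        # -0.18
women_shift = women["bias_found"] - women["no_bias"]  # +0.27

# The crossover interaction: how differently the reported result moves
# each gender's evaluation of the same research
interaction = women_shift - men_shift                 # 0.45 scale points
print(f"Men shift:   {men_shift:+.2f}")
print(f"Women shift: {women_shift:+.2f}")
print(f"Interaction: {interaction:.2f}")
```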

I want to give credit where credit is due: Handley et al. (2015) do write that they cannot conclude that one gender is more biased than the other; just that gender appears to – sometimes – bias how sexism research is perceived, to some degree. That tentative conclusion would be all well and good were it a consistent theme throughout their paper. However, the examples raised in the write-up universally center on how men might find findings of sexism threatening and how women are known to be disadvantaged by it; not on how women might be strategically inclined toward such research because it suits their goals (as, to remedy an anti-female bias, female-benefiting plans may well have to be enacted). Even a quick reading of the paper should demonstrate that the authors are clearly of the view that sexism is a rather large problem for STEM fields, writing about how female participation needs to be increased and encouraged. That would seem to imply that anyone who denies the importance of the research reporting sexism is the one with the problematic bias, and that is a much less tentative way to think about the results. In the spirit of furthering their own interests, the authors also note how these biases could be a real problem for people publishing sexism research, as many of the people reviewing research articles are likely to be men and, accordingly, not necessarily inclined toward it (which, they note, makes it harder for such researchers to publish in good journals and get tenure).

Handley et al.’s (2015) review of the literature also comes off as rather one-sided, never explicitly discussing other findings that run counter to the idea that women experience a constant stream of sexist discrimination in academia (such as the finding that qualified women are almost universally preferred to qualified men by hiring committees, often by a large margin). Funnily enough, the authors transition from writing about how the evidence of sexism against women in STEM is “mounting” in the introduction to how the evidence is “copious” by the discussion. This one-sided treatment can be seen again near the very end of their discussion (in the “limitations and future directions” section), when Handley et al. (2015) note that they failed to find an effect they were looking for: abstracts ostensibly written by women were not rated any differently than abstracts presented as being written by men (they hoped to find the female-authored abstracts rated as lower quality). For whatever reason, however, they neglected to report this failure in their results section, where it belonged; indeed, they failed to mention in the main paper at all that this was a prediction they were making, even though it was clearly something they were looking to find (else why would they include that factor and analyze the data in the first place?). Not mentioning up front a prediction that didn’t work out strikes me as somewhat less than honest.

“Yeah; I probably should have mentioned I was drunk before right now. Oops”

Taking these results at face value, we can say that people who are motivated to interpret results in a particular way are going to be less than objective about that work, relative to someone with less to gain or lose. With that in mind, I would be inherently skeptical of the way sexist biases are presented in the literature more broadly, and of how they’re discussed in the current paper: the authors clearly have a vested interest in their research uncovering particular patterns of sexism, and in their interpretations of their data being accepted by the general and academic populations. That doesn’t make them unique (you could describe almost all academic researchers that way), nor does it make their results incorrect, but it does make their presentation of these issues seem painfully one-sided. This is especially concerning because these are matters which many feel carry important social implications. Bear in mind, I am not taking issue with the methods or the data presented in the current paper; those seem fine. What I take issue with is the interpretation and presentation of them. Then again, perhaps these only seem like issues to me because I’m a male STEM major…

References: Handley, I., Brown, E., Moss-Racusin, C., & Smith, J. (2015). Quality of evidence revealing subtle gender biases in science is in the eye of the beholder. Proceedings of the National Academy of Sciences, 112, 13201-13206.
