
Imagine If The Results Went The Other Way

One day, three young children are talking about what they want to be when they get older. The first friend says, “I love animals, so I want to become a veterinarian.” The second says, “I love computers, so I want to become a programmer.” The third says, “I love making people laugh, so I want to become a psychology researcher.” Luckily, all three children end up living lives that afford them the opportunity to pursue their desires, and each works happily in the career of their choice for their entire adult life.

The first question I’d like to consider is whether any of those children made choices that were problematic. For instance, should the first child have decided to help animals, or should they have put their own interests aside and pursued another line of work because of their sex and the current ratio of men to women in that field? Would your answer change if you found out the sex of each of the children in question? Answer as if the second child were a boy, then think about whether your answer would change if you found out the child was a girl.

Well, if you wanted to be a vet, you should have been born a boy

This hypothetical example should, hopefully, highlight a fact that some people seem to lose track of from time to time: broad demographic groups are not entities in themselves; they are only made up of their individual members. Once one starts talking about how gender inequality in professions ought to be reduced – such that more fields approach a 50/50 representation of men and women – you are, by default, talking about how some people need to start making choices less in line with their interests, skills, and desires to reach that parity. This can end up yielding strange outcomes, such as a gender studies major telling a literature major she should have gone into math instead.

Speaking of which, a paper I wanted to examine today (Riegle-Crumb, King, & Moore, 2016) begins by laying on the idea of gender inequality across majors rather thick. Unless I misread their meaning, they seem to think that gender segregation in college majors ought to be disrupted and, accordingly, sought to understand what happens to men and women who make non-normative choices in selecting a college major, relative to their more normative peers. Specifically, they set out to examine what happens to men who major in male-dominated fields and those who major in female-dominated ones: are they likely to persist in their chosen field of study at the same or different rates? The same question was asked of women as well. Putting that into a quick example, you might consider how likely a man who initially majors in nursing is to switch or stay in his program, relative to one who majors in computer science. Similarly, you might think about the fate of a woman who majors in physics, compared to one who majors in psychology.

The authors expected that women would be more likely to drop out of male-dominated fields because they encounter a “chilly” social climate there and face stereotype threat, compared to their peers in female-dominated fields. By contrast, men were expected to drop out of female-dominated fields more often as they begin to confront the prospect of earning less money in the future and/or losing social status on account of emasculation brought on by their major (whether perceived or real).

To test these predictions, Riegle-Crumb, King, & Moore (2016) examined a nationally-representative sample of approximately 3,700 college students who had completed their degree. These students had been studied longitudinally, interviewed at the end of their first year of college in 2004, then again in 2006 and 2009. A gender-atypical major was coded as one in which the opposite sex comprised 70% or more of the major. In the sample being examined, 14% of the men selected a gender-atypical field, while 4% of women did likewise. While this isn’t noted explicitly, I suspect some of that difference might have to do with the relative size of certain majors. For instance, psychology is one of the most popular majors in the US, but also happened to fall under the female-dominated category. That would naturally yield more men than women choosing a gender-atypical major if the pattern continued into other fields.
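To make that coding rule concrete, here is a minimal sketch of how the classification might be implemented; the function, threshold handling, and example percentages are my own illustration, not the authors’ code:

```python
# Hypothetical sketch of the paper's coding rule: a major counts as
# gender-atypical for a student when the opposite sex comprises 70%
# or more of that major. Field percentages below are illustrative.
def is_atypical(student_sex: str, pct_female_in_major: float) -> bool:
    opposite_share = (pct_female_in_major if student_sex == "male"
                      else 100 - pct_female_in_major)
    return opposite_share >= 70

print(is_atypical("male", 80))    # True: e.g., a man majoring in nursing
print(is_atypical("female", 15))  # True: e.g., a woman majoring in physics
print(is_atypical("female", 80))  # False: a gender-typical choice
```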

Can’t beat that kind of ratio in the dating pool, though

Moving on to what was found, the researchers were trying to predict whether people would switch majors or not. The initial analysis found that men in male-typical majors switched about 39% of the time, compared to 63% of the men in atypical majors. So the men in atypical fields were more likely to switch. There was a different story for the women, however: those in female-typical majors switched 46% of the time, compared to the 41% who switched in atypical fields. The latter difference was neither statistically nor practically significant. Unsurprisingly, for both men and women, those most likely to switch had lower GPAs than those who stayed, suggesting switching was due, in part, to performance.

When formally examined with a number of control variables (for social background and academic performance) included in the model, men in gender-atypical fields were about 2.6 times as likely to switch majors, relative to those in male-dominated ones. The same analysis run for women found that those in atypical majors were about 0.8 times as likely to switch majors as those in female-dominated ones. Again, this difference wasn’t statistically significant. Nominally, however, women in atypical fields were more likely to stay put.
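For those unfamiliar with where figures like “2.6 times as likely” come from, they are odds ratios from a logistic regression predicting switching. Here is a minimal sketch of that kind of analysis using simulated data; the variable names and the single GPA control are my stand-ins, not the authors’ actual model:

```python
# Minimal sketch: how an odds ratio like "2.6 times as likely to switch"
# falls out of a logistic regression. Data and variables are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "atypical": rng.integers(0, 2, n),   # 1 = gender-atypical major
    "gpa": rng.normal(3.0, 0.5, n),      # control: academic performance
})
# Simulate switching with a true odds ratio of ~2.6 for atypical majors
logit_p = -0.5 + np.log(2.6) * df["atypical"] - 0.8 * (df["gpa"] - 3.0)
df["switched"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("switched ~ atypical + gpa", data=df).fit(disp=0)
print(np.exp(model.params["atypical"]))  # estimated odds ratio, near 2.6
```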

What do the authors make of this finding? Though they correctly note that their analysis says nothing of the reasons for the switch, they view the greater male pattern of switching out of atypical majors as consistent with their expectations. I think this is probably close to the truth: as a greater proportion of a man’s future success is determined by his ability to provision mates and by his social status, we might expect men to migrate from majors with a lower future financial payoff to those with a larger one. Placing that into a personal example, I might have wanted to be a musician, but the odds of landing a job as a respected rockstar seemed slim indeed. Better that I got a degree in something capable of paying the bills consistently if I care about money.

By contrast, the authors also correctly note that they don’t find evidence consistent with their prediction that women in gender-atypical fields would switch more often. This does not, however, cause them to abandon the justifications for their prediction. As far as I can tell, they still believe that factors like a chilly climate and stereotype threat are pushing women out of those majors; they just supplement that expectation with the idea that a number of other factors (like the aforementioned financial ones) might be keeping women in, and that the latter factors are either more common or more influential (though that certainly makes you wonder why women tend to choose lower-paying fields in greater numbers in the first place).

Certainly worth a 20-year career in a field you hate

This strikes me as kind of a fool-proof strategy for maintaining a belief in the prospect of nefarious social forces doing women harm. To demonstrate why, I’d like to take this moment to think about what people’s reactions to these findings might have been if the patterns for men and women were reversed. If it turned out that women in male-dominated majors were more likely to switch than their peers in female-dominated majors, would there have been calls to address the clear sexism causally responsible for that pattern? I suspect that answer is yes, judging from reactions I’ve seen in the past. So, if that result had been found, the authors could point a finger at the assumed culprits. However, even when that result was not found, they can just tack on other assumptions (women remain in these majors for the money) that allow the initial hypothesis of discrimination to be maintained in full force. Indeed, they end their paper by claiming, “Gender segregation in fields of study and related occupations severely constrains the life choices and chances of both women and men,” demonstrating a full commitment to being unfazed by their results.

In other words, there doesn’t seem to be any pattern of data that could have been observed capable of falsifying the initial reasons these expectations were formed. Even nominally contradictory data appears to have been assimilated into their view immediately. Now I’m not going to say it’s impossible that there are large, sexist forces at work trying to push women out of gender-atypical fields that are being outweighed by other forces pulling in the opposite direction; that is something that could, in theory, be happening. What I will say is that granting that possibility makes the current work a poor test of the original hypotheses, since no data could prove them wrong. If you aren’t conducting research capable of falsifying your ideas – asking yourself, “What data could prove me wrong?” – then you aren’t engaged in rigorous science.

References: Riegle-Crumb, C., King, B., & Moore, C. (2016). Do they stay or do they go? The switching decisions of individuals who enter gender atypical college majors. Sex Roles, 74, 436-449.

Diversity: A Follow-Up

My last post focused on the business case for demographic diversity. Summarizing briefly, an attempted replication of a paper claiming that companies with greater gender and racial diversity outperformed those with less diversity failed to reach the same conclusion. Instead, these measures of diversity were effectively unrelated to business performance once you controlled for a few variables. This should make plenty of intuitive sense, as demographic variables per se aren’t related to job performance. While they might prove to be rough proxies if you have no other information (men or women might be better at tasks X or Y, for instance), once you can assess skills, competencies, and interests, the demographic variables cease to be good predictors of much else. Being a man or a woman, African or Chinese, does not itself make you competent or interested in any particular domain. Today, I wanted to tackle the matter of diversity itself on more of a philosophical level. With any luck, we might be able to understand some of the issues that can cloud discussions on the topic.

And if I’m unlucky, well…

Let’s start with the justifications for concerns with demographic diversity. As far as I’ve seen, there are two routes people take with this. The first – and perhaps most common – has been the moral justification for increasing diversity of race and gender in certain professions. The argument here is that certain groups of people have been historically denied access to particular positions, institutions, and roles, and so they need to be proactively included in such endeavors as a means of reparation to make up for past wrongs. While that’s an interesting discussion in its own right, I have not found many people who claim that, say, more women should be brought into a profession no matter the impact. That is, no one has said, “So what if bringing in more women would mess everything up? Bring them in anyway.” This brings us to the second justification for increasing demographic diversity that usually accompanies the first: the focus on the benefits of cognitive diversity. The general idea here is not only that people from all different groups will perform at least as well in such roles, but that having a wider mix of people from different demographic groups will actually result in benefits. The larger your metaphorical cognitive toolkit, the more likely you will successfully meet and overcome the challenges of the world. Kind of like having a Swiss Army knife with many different attachments, just with brains.

This idea is appealing on its face but, as we saw last time, diversity wasn’t found to yield any noticeable benefits. There are a few reasons why we might expect that outcome. The first is that cognitive diversity itself is not always going to be useful. If you’re on a camping trip and you need to saw through a piece of wood, the saw attachment on your Swiss Army knife would work well; the scissors, toothpick, and can opener will all prove ineffective at solving your problem. Even the non-serrated knife will prove inefficient at the task. The solutions to problems in the world are not general-purpose in nature. They require specialized equipment to solve. Expanding that metaphor into the cognitive domain, if you’re trying to extract bitumen from tar sands, you don’t want a team of cognitively diverse individuals including a history major, a psychology PhD, and a computer scientist, along with a middle-school student. Their diverse set of skills and knowledge won’t help you solve your problem. You might do better if you hired a cognitively non-diverse group of petroleum engineers.

This is why companies hiring for positions regularly list rather specific qualification requirements. They understand – as we all should – that cognitive diversity isn’t always (or even usually) useful when it comes to solving particular tasks efficiently. Cognitive specialization does that. Returning this point to demographic diversity, the problem should be clear enough: whatever cognitive diversity exists between men and women, or between different racial groups, it needs to be task-relevant in order for it to even potentially improve performance outcomes. Even if the differences are relevant, in order for diversity to improve outcomes, the different demographic groups in question need to complement one another’s skill sets. If, say, women are better at programming than men, then a diverse mix of men and women wouldn’t improve programming outcomes; the non-diverse outcome of hiring women instead of men would.

Just like you don’t improve your track team’s relay time by including diverse species

Now it’s not impossible that such complementary cognitive demographic differences exist, at least in theory, even though the preceding restrictions are already onerous. However, the next question that arises is whether such cognitive differences would actually exist in practice by the time hiring decisions were made. There’s reason to expect they would not, as people do not specialize in skills or bodies of knowledge at random. While there might be an appreciable amount of cognitive diversity between groups like men and women, or between racial groups, in the entire population (indeed, meaningful differences would need to exist in order for the beneficial-diversity argument to make any sense in the first place), people do not get randomly sorted into groups like professions or college majors.

Most people probably aren’t that interested in art history, or computer science, or psychology, or math to the extent they would pursue it at the expense of everything else they could do. As such, the people who are sufficiently interested in psychology are probably more similar to one another than they are to people who major in engineering. Those who are interested in plumbing are likely more similar to other plumbers than they are to nurses.

As such, whatever differences exist between demographics on the population level may be reduced in part or in whole once people begin to self-select into different groups based on skills, interests, and aptitudes. Even if men and women possess some cognitive differences in general, male and female nurses, or psychologists, or engineers, might not differ in those same regards. The narrower the skill set you’re looking for when it comes to solving a task, the more similar we might expect people who possess those skills to be. Just to use my profession, psychologists might be more similar than non-psychologists; those with a PhD might be more similar than those with just a BA; those who do research may differ from those who enter into the clinical field, and so on.

I think these latter points are where a lot of people get tripped up when thinking about the possible benefits of demographic diversity to task performance. They notice appreciable and real differences between demographic groups on a number of cognitive dimensions, but fail to appreciate that these population differences might (a) not be large once enough self-selection by skills and interests has taken place, (b) not be particularly task-relevant, and (c) not be complementary.

Ironically, one of the larger benefits to cognitive diversity might be the kind that people typically want to see the least: the ability of differing perspectives to help check the personal biases we possess. As people become less reliant on those in their immediate vicinity and increasingly able to self-segregate into similar-thinking social and political groups around the world, they may begin to likewise pursue policies and ideas that are increasingly self-serving and less likely to benefit the population on the whole. Key assumptions may go unchallenged and the welfare of others may be taken into account less frequently, resulting in everyone being worse off. Groups like the Heterodox Academy have been set up to try and counteract this problem, though the extent of their success is debatable.

A noble attempt to hold back the oncoming flood all the same

Condensing this post a little, the basic idea is this: men and women (to use just one example), on average, are likely to show a greater degree of between-group cognitive diversity than are male and female computer science majors. Or male and female literature majors. Or any other pairing you can imagine. Once people are segregating themselves into different groups on the basis of shared abilities and interests, those within the groups should be much more similar to one another than you’d expect on the basis of their demographics. If much of the cognitive diversity between these groups is getting removed through self-selection, then there isn’t much reason to expect that demographic diversity within those groups will have much of an effect one way or the other. If male and female programmers already know the same sets of skills and have fairly similar personalities, making those groups look more male or more female won’t have much of an overall effect on their performance.

For it to even be possible that such diversity might help, we need to grant that meaningful, task-relevant differences between demographic groups exist, are retained throughout a long process of self-selection, and that these differences complement each other, rather than one group being superior. Further, these differences would need to create more benefits than conflicts. While there might be plenty of cognitive diversity in, say, the US Congress in terms of ideology, that doesn’t necessarily mean it helps people achieve useful outcomes all the time once you account for all the dispute-related costs and lack of shared goals.

If qualified and interested individuals are being kept out of a profession simply because of their race or gender, that obviously carries costs and should be stopped. There would be many valuable resources going untapped. If, however, people left to their own devices are simply making choices they feel suit them better – creating some natural demographic imbalances – then just changing their representation in this field or that shouldn’t impact much.

Does Diversity Per Se Pay?

In one of the most interesting short reports I have read recently, some research was conducted in Australia examining what the effect of blind review would be on hiring. The premise of the research, as far as I can surmise, was that a fear existed of conscious or unconscious bias against women and minority groups when it came to getting hired. This bias would naturally make it harder for those groups to find employment, ultimately yielding a less diverse workforce. In the interests of avoiding that bias, the research team compared what happened when candidates were assessed on either standard resumes or de-identified ones. The latter resumes were identical to the former, except they had group-relevant information (like gender and race) removed. If reviewers don’t have information about race or gender available, then they couldn’t possibly assess the candidates on the basis of those traits, whether consciously or unconsciously. That seems straightforward enough. The aim was to compare the results from the blind assessments to those of the standard resumes. As it turned out, there were indeed hints of bias; relatively small in size sometimes, but present nonetheless. However, the bias did not go in the direction that had been feared.

Shocking that the headline wasn’t “Blind review processes are biased”

Specifically, when the participants assessing the resumes had information about gender, they were about 3% more likely to select women, and 3% less likely to select men. Further, minorities were also more likely to be selected when the information was available (about 6% more likely for minority males and 9% for minority females). While there’s more to the picture than that, the primary result seemed to be that, when given the option, these reviewers discriminated in favor of women and minority groups simply because of their group membership. If these results had run in the opposite direction (against women and minorities), there would no doubt have been calls for increasing blind reviews. However, because blind reviews seemed to disfavor women and minorities, the authors had a different suggestion:

Overall, the results indicate the need for caution when moving towards ‘blind’ recruitment processes in the Australian Public Service, as de-identification may frustrate efforts aimed at promoting diversity

It’s hard to interpret that statement as anything other than “we should hire more women and minorities, regardless of qualifications.” Even if sex and race ought to be irrelevant to the demands of the job and candidates should be assessed on their merits, people should also, apparently, be cautious about removing those irrelevant pieces from the application process. The authors seemed to favor discrimination based on sex or race so long as it benefited the right groups. Such discriminatory practices have led to negative reactions on the part of others, as one might expect.

This brings me to another question: why should we value diversity when it comes to hiring decisions? To be clear, the diversity being sought is often strictly demographic in nature (many organizations tout diversity in race, for instance, but not in perspective; I don’t recall the draw of many positions being that you will meet a variety of people who hold fundamental disagreements with your view of the world). It’s also usually the kind of diversity that benefits women and minorities (I’ve never come across calls to get more white males into certain fields dominated by women or other races. Perhaps they exist; I just haven’t seen them). But are there real economic benefits to increasing diversity per se? Could it be the case that more diverse organizations just do better? On the face of it, I would assume the answer is “no” if the diversity in question is simply demographic in nature. What matters when it comes to job performance is not the color of one’s skin or which sex chromosomes one possesses, but rather the skills and competencies one brings along. While some of those skills and competencies might be very roughly approximated by race and gender if you have no additional information about your applicants, we thankfully don’t need to rely on those indirect measures. Rather than asking about gender or race, one could just ask directly about skill sets and interests. When you can do that, the additional value of knowing one’s group membership is likely close to nil. Why bother using a predictor of a variable when you can just use the variable itself?

Do you really love roundabouts that much?

Nevertheless, it has apparently been reported before that demographic diversity predicts the relative success of companies (Herring, 2009). A business case was made for diversity, such that diverse companies were found to generally do better than less diverse ones across a number of different metrics. Not that those in favor of increasing diversity really seemed to need a financial justification, but having one certainly wouldn’t hurt their case. As this paper was apparently popular within the literature (for what I assume is that reason), a replication was attempted (Stojmenovska et al., 2017), beginning in a graduate course as an assignment to help students “learn from the best.” Since it seems “psychology research” and “replications” mix about as well as oil and water as of late, the results turned out a bit worse than hoped. The student wasn’t even trying to look for problems; they just stumbled upon them.

In this instance, the replication attempt failed to find the published result, instead catching two primary mistakes made in the original paper (as opposed to anything malicious): there were a number of coding mistakes within the data, and the sample data itself was skewed. Without going too deeply into why this is a problem, it should suffice to say that coding mistakes are bad for all the obvious reasons. Fixing the coding mistakes by deleting missing data resulted in a substantial reduction in sample size (25-50% smaller). As for the issue of skew, having a skewed sample can result in an underestimation of the relationship between predictors and outcomes. In brief, there were confounding relationships between predictor variables and the outcomes that were not adequately controlled for in the original paper. To correct for the skew issue, a log transformation of the data was carried out, resulting in a dramatic increase in the relationship between particular variables.

In order to provide a concrete sense of that increase: in the original report, the correlation between company size and racial diversity was .14; after the log transformation was carried out, that correlation increased to .41. This means that larger companies tended to be more racially diverse than smaller ones, but that relationship was not fully accounted for in the original paper examining how diversity impacted success. The same issue held for gender diversity and establishment size.
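If it seems surprising that a transformation alone can move a correlation from .14 to .41, here is a minimal sketch of the general phenomenon with simulated numbers (my illustration, not the replication’s actual data): when a predictor like company size is heavily right-skewed and the outcome tracks its logarithm, the raw correlation understates the relationship.

```python
# Minimal sketch of how skew can hide a relationship, and how a log
# transformation can reveal it. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Company size is heavily right-skewed (a few giant firms, many small ones)
log_size = rng.normal(4, 1, n)
size = np.exp(log_size)
# Diversity tracks the *log* of size, plus noise
diversity = 0.4 * log_size + rng.normal(0, 1, n)

raw_r = np.corrcoef(size, diversity)[0, 1]
log_r = np.corrcoef(np.log(size), diversity)[0, 1]
print(f"raw r = {raw_r:.2f}, logged r = {log_r:.2f}")  # logged r is much larger
```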

Once these two issues – coding errors and skewed data – were addressed, the new results showed that gender and racial diversity were effectively unrelated to company performance. The only remaining relationship was a small one between gender diversity and the logged number of customers. While seven of the original eight hypotheses were supported in the first paper, the replication attempt correcting these errors only found one of the eight to be statistically supported. As most of the effects no longer existed, and the one that did exist was small in size, the business justification for increasing racial and gender diversity failed to receive any real support.

Very colorful, but they ultimately all taste the same

As I initially mentioned, I don’t see a very good reason to expect that a more demographically diverse group of employees should yield better outcomes. They don’t yield worse outcomes either. However, the study from Australia suggests that the benefits of diversity (or the lack thereof) are basically beside the point in many instances. That is, not only do I imagine this failure to replicate won’t have a substantial impact on many people’s views about whether diversity should be increased, but I don’t think it would even if diversity were found to be a bad thing, financially speaking. This is because I don’t suspect many views about whether diversity should be increased are based on the premise that it’s good for people economically in the first place. Increasing diversity isn’t viewed as a tricky empirical matter so much as a moral one; one in which certain groups of people are viewed as owing or deserving various things.

This is only looking at the outcomes of adding diversity, of course. The causes of such diverse levels of diversity across different walks of life are another beast entirely.

References: Stojmenovska, D., Bol, T., & Leopold, T. (2017). Does diversity pay? Replication of Herring (2009). American Sociological Review, 82, 857-867.

Herring, C. (2009). Does diversity pay? Race, gender, and the business case for diversity. American Sociological Review, 74, 208–224.

If You Got It, Think Hard About Flaunting It

I’ve attended the Gay Pride Parade in New York on more than one occasion. The event itself holds a special significance for many people who have been close to me and I’m always happy to see them happy, even if parades normally aren’t my cup of tea. That said, I have found certain aspects of the event a little peculiar, at least with regard to its execution. I had this to say about it some years ago:

One could be left wondering what a straight pride parade would even look like anyway, and admittedly, I have no idea. Of course, if I didn’t already know what gay pride parades do look like, I don’t know why I would assume they would be populated with mostly naked men and rainbows, especially if the goal is fostering acceptance and rejection of bigotry. The two don’t seem to have any real connection, as evidenced by black civil rights activists not marching mostly naked for the rights afforded to whites, and suffragettes not holding any marches while clad in assless leather chaps.

Colorful exaggerations aside, there’s something very noteworthy to think about here. While it might seem normal for gay pride events to be rather flamboyant affairs, there need not be any displays of promiscuous sexuality inherent to the event. That is, if people were celebrating a straight, monogamous relationship style with a parade, I don’t think we’d see many people dressing down or, in some cases, going without clothing at all. I imagine the event would be substantially more modest as, well, most other parts of life tend to be.

“From: Straight Pride Boat Ride, 2016”

The relevance of this point comes when one begins to consider what types of people in the world are most opposed to homosexual lifestyles and, accordingly, pose the largest obstacles to things like marriage and adoption rights for the gay community. When considering who those people are, the idea that will no doubt spring to many minds is the conservative, religious type (likely because that would be the correct answer). But why are such people most likely to condemn homosexuality on a moral level? A tempting answer would be to make reference to religious texts condemning homosexuality, but that’s a rather circular explanation: religious people condemn homosexuality because they believe in a doctrine that condemns homosexuality. It’s also not entirely complete, as many parts of the doctrine are only selectively followed in other contexts. We’re also left wondering why those doctrines condemned homosexuality in the first place, placing us back at square one.

A more detailed picture begins to emerge when you consider what predicts religiosity in the first place; what type of person is most drawn to such groups. As it turns out, one of the better predictors of who ends up associating themselves with religious groups and who does not is sexual strategy. Those who are more inclined to monogamy (or, more precisely, opposed to promiscuity) tend to be more religious, and this holds across cultures and religions. By contrast, religiosity is not well predicted by general cooperative morals or behavior. It would be remarkable if religions from all parts of the world ended up stumbling upon a common distaste for promiscuity if it was not inherently tied to religious belief. Something about sexual behavior is uniquely predictive of religiosity, which ought to be strange when you consider that one’s sexual behavior should have little bearing on whether a deity (or several deities) exist. It has even been proposed that religious groups themselves function to support particular kinds of relatively monogamous mating arrangements. In that light, religious groups can be viewed as a support structure for monogamous couples who plan on having many children.

With that perspective in mind, the religious opposition to promiscuity becomes substantially clearer: promiscuity makes monogamous arrangements more difficult to sustain, and vice versa. If you plan on having a lot of children, men face risks of cuckoldry (raising a child that was unknowingly sired by another man) while women face risks of abandonment (if their husband runs off with another woman, leaving her to care for the children alone). As such, having lots of promiscuous men and women around who might lure your partner away or stop them from investing in you in the first place does the monogamous type no favors. In order to support their more monogamous lifestyle, then, these people begin to punish those who engage in promiscuous behaviors to make such strategies more costly to engage in and, accordingly, more rare.

The first punishment for promiscuity – spankings – didn’t have the intended effect

While homosexual individuals themselves don’t exactly pose direct risks to heterosexual, long-term mating couples, they may nevertheless be condemned to the extent that the gay community is viewed as promiscuous. There are a few possible reasons for that outcome to obtain. Perhaps homosexuals are viewed as supporting and encouraging promiscuity, and to let that go unpunished would start other people down a path towards promiscuity (similar to how recreational drug use is also condemned by the long-term maters). Perhaps all sorts of non-traditional sexual behavior is condemned by the conservative groups and homosexuality just ends up condemned as a byproduct. Whatever the explanation for this condemnation, however, a key prediction falls out of this framework: moral condemnation of homosexuality ought to increase to the extent that homosexuals are viewed as promiscuous and decrease to the extent that they are viewed as monogamous. As homosexual groups (particularly men) are viewed as more promiscuous than their heterosexual counterparts (because they are, in every data set I’ve seen), this might help explain the condemnation and, in turn, suggest what could be done about it.

This is exactly what a new paper by Pinsof & Haselton (2017) sought to test. The pair recruited approximately 1,000 participants online. The participants read either an article reporting that gay men had more sexual partners than straight men, or an article reporting that gay men and straight men had the same number of partners. Participants were also asked about their own perceptions of how promiscuous gay men are, their stance on gay rights, and their own mating orientation (whether they thought short-term sexual encounters were acceptable or not).

As expected, there was an appreciable relationship between one’s mating orientation and one’s support of gay rights: the more long-term their mating strategy, the less supportive of gay rights they were (r = -0.4). That said, despite men being more accepting of promiscuity than women, there was no relationship between gender and support for gay rights. Crucially, an interaction was observed between experimental condition and mating orientation when it came to predicting support for gay rights: Those who were particularly accepting of short-term mating arrangements opposed gay rights very little regardless of which article they had read regarding gay men’s sexual behavior (Ms = approximately 2.25 in both groups, on a scale from 1-7). However, among those who were relatively less accepting of short-term mating, there was a significant difference between the two conditions: when reading an article about how gay men were more promiscuous, opposition to gay rights was higher (M = 4.25) than it was in the condition where they read about how gay men were equally promiscuous (M = 3.5).
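For the statistically inclined, the crucial result here is an interaction term: the effect of the article manipulation depends on mating orientation. A minimal sketch of that kind of test, with simulated data and variable names of my own invention (not Pinsof & Haselton’s actual analysis), might look like this:

```python
# Hypothetical sketch of a condition-by-orientation interaction test:
# does the effect of the promiscuity article depend on mating orientation?
# All data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "promiscuous_article": rng.integers(0, 2, n),  # 1 = "gay men more promiscuous"
    "long_term": rng.normal(0, 1, n),              # higher = more long-term oriented
})
# Opposition rises with long-term orientation, more so in the
# "more promiscuous" condition (the interaction)
df["opposition"] = (2.25 + 0.9 * df["long_term"] * df["promiscuous_article"]
                    + 0.4 * df["long_term"] + rng.normal(0, 1, n))

fit = smf.ols("opposition ~ promiscuous_article * long_term", data=df).fit()
print(fit.params["promiscuous_article:long_term"])  # the interaction term
```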

Acceptable

By manipulating perceptions of whether gay men were promiscuous, the researchers were also able to manipulate opposition to gay rights. So, if one is interested in achieving greater support for the homosexual community, that’s important information to bear in mind. It also brings me back to the initial point I mentioned about the Gay Pride events I have attended. While I was there, I couldn’t help but wonder whether the atmosphere of sexual promiscuity surrounding the parade would be off-putting to a substantial percentage of the population (even within the gay community), and it seems that intuition was borne out by the present data. Gay Pride events go beyond a simple celebration and acceptance of homosexuality at points, as they are frequently coupled with displays of sexual promiscuity. It seems that many people might have less of a problem with the former if the latter wasn’t tagging along.

Then again, perhaps promiscuity will be a bit more closely linked with the homosexual community in general, given that children do not result from such unions (making casual encounters less costly to engage in) and because heterosexual men are usually only as promiscuous as women allow them to be. If women were just as interested in casual sex as men, there would likely be a lot more casual sex going on. When men are attracted to other men, however, the barriers that usually hold promiscuity in check (children and women’s desires) are much weaker. That does raise the interesting question of whether a different pattern holds for lesbian relationships (which are less promiscuous than gay male ones), and it’s certainly one worth pursuing.

References: Pinsof, D. & Haselton, M. (2017). The effect of the promiscuity stereotype on opposition to gay rights. PLoS ONE 12(7): e0178534. https://doi.org/10.1371/journal.pone.0178534

Not-So-Leaky Pipelines

There’s an interesting perspective many people take when trying to understand the distribution of jobs in the world, specifically with respect to men and women: they look at the percentage of men and women in a population (usually in terms of country-wide percentages, but sometimes more localized), make note of any deviations from those percentages in terms of representation in a job, and then use those deviations to suggest that certain desirable fields (but not usually undesirable ones) are biased against women. So, for instance, if women make up 50% of the population but only represent 30% of lawyers, there are some who would conclude this means the profession (and associated organizations) is likely biased against women, usually because of some implicit sexism (as evidence of explicit and systematic sexism in training or hiring practices is exceptionally hard to come by). Similar methods have been used when substituting race for gender as well.

Just another gap, no doubt caused by sexism

Most of the ostensible demonstrations of this sexism issue are wanting, and I’ve covered a number of these examples before (see here, here, here, and here). Simply put, there are a lot of factors in the world that determine where people ultimately end up working (or whether they’re working at all). Finding a consistent gap between groups tells you something is different, just not what. As such, you don’t just get to assume that the cause of the difference is sexism and call it a day. My go-to example in that regard has long been plumbing. As a profession, it is almost entirely male dominated: something like 99% of the plumbers in the US are men. That’s as large of a gender gap as you could ask for, yet I have never once seen a campaign to get more women into plumbing or complaints about sexism in the profession keeping otherwise-interested women out. Similarly, men make up about 96% of the people shot by police, but the focus on police violence has never been on getting officers to shoot fewer men per se. In those cases, most people seem to recognize that factors other than sex are the primary determinants of the observed sex differences. Correlation isn’t causation, and maybe women aren’t as interested in digging around through human waste or committing violent felonies as men are. Not to say that many men are interested, just that more of those who are end up being men.

If that was the case and these sex differences aren’t caused by sexism, any efforts that sought to “fix” the gap by focusing on sexism would ultimately be unsuccessful. At the risk of saying something too obvious, you change outcomes by changing their causes; not unrelated issues. If we have the wrong idea as to what is causing an outcome, we end up wasting time and money (which often does not belong to us) trying to change it and accomplishing very little in the process (outside of getting people annoyed at us for wasting their time and money).

Today I wanted to add to that pile of questionable claims of sexism concerning an academic neighbor to psychology: philosophy. Though I was unaware of this debate, there is apparently some contention within the field concerning the perceived under-representation of women. As is typical, the apparent under-representation of women in this field has been chalked up to sexist biases keeping women discouraged and out of a job. To be clear about things, some people are looking at the percentage of men and women in the field of philosophy, noting that it differs from their expectations (whatever those are and however they were derived), calling it under-representation because of those expectations, and then further assuming a culprit in the form of sexism. As it turns out, the data has something to say about that.

It also has some great jokes about Polish people if you’re a racist.

The data in question come from a paper by Allen-Hermanson (2017), which examined sex differences in tenure-track hiring and academic publishing in philosophy departments. The reasoning behind this line of research was that if insidious forces are at work against women in philosophy departments, we ought to expect something of a leaky pipeline: women should not be as successful as men at landing desirable, tenure-track jobs, relative to the rates at which each sex earns philosophy degrees. So, if women earned, say, 40% of the philosophy PhDs during the last year, we might expect that they get 40% of the tenure-track jobs in the next, all else being equal. Across the 10-year period examined (2005-2014), there were three years in which women were hired very slightly below their relative percentage into the tenure-track jobs (and by “very slightly” I’m talking in the range of about 1-2%), one year in which it was dead even, and during the remaining six years women were hired above the expected rate by much more substantial margins (in the range of 5-10%).

Putting some rough numbers to that, women earned about 28% of the PhDs and received about 36% of the jobs in the most recent hiring seasons. It seems, then, women tended to be over-represented in those positions, on average. Other data discussed in the paper corresponds to those findings, again suggesting that women had about a 25% advantage over men in finding desirable positions (in terms of less desirable positions, men and women were hired in about equal numbers).

This finding is made all the stranger by Allen-Hermanson (2017) noting that male and female degree holders differed with respect to how often they published. On average, the new tenure-track female candidates who had never held such a position before had 0.77 publications. The comparable male number was 1.37. Of those who secured a job in 2012-2013, men averaged 2.4 publications to women’s 1.17. Not only were the men publishing about twice as much, then, but they were also modestly less successful at landing a job (and this effect did not appear to be driven by particularly prolific publishers). While one could possibly make the case that female publications are in some sense higher quality, that remains to be seen. One could more easily make the case that female candidates were held to lower standards than male ones.

As the data currently stand, I can’t imagine many people will be making a fuss about them and crying sexism. Perhaps the men with the degrees went out to seek work elsewhere and that explains why women are over-represented. Perhaps there are other causes. The world is a complicated place, after all. The point here is that there won’t be talk about how philosophy departments are biased against men, just like there wasn’t much talk I saw last time research found a much larger academic bias in favor of women, holding candidate quality constant. I think that is largely because the data apparently favor women with respect to hiring. If the results had run in the opposite direction, I can imagine that a lot more noise would have been made about them and many people would be getting scolded right now about their tolerance of sexism. But that’s just an intuition.

“Now, if you’ll excuse me, I’m off to find bias against my group somewhere else”

When asking a question of under-representation, the most pressing matter should always be, “under-represented with respect to what expectation?” In order to say that a group is under-represented, you need to make it clear what the expected degree of representation is as well as why. We shouldn’t expect that men and women be killed by police in equal numbers unless we also expect that both groups behave more-or-less identically. We similarly shouldn’t expect that men and women enter into certain fields in the same proportion unless they have identical sets of interests. On the other hand, if the two groups are different with respect to some key factor that determines an outcome, such as interests, using sex itself is just a poor variable choice. Compared to interest in fixing toilets (and other such relevant factors), I imagine sex itself uniquely predicts very little about who ultimately ends up becoming a plumber. If we can use those better, more directly-relevant factors, we should. You don’t build your predictive model with irrelevant factors; not if accuracy is your goal, in any case.

References: Allen-Hermanson, S. (2017). Leaking pipeline myths: In search of gender effects on the job market and early career publishing in philosophy. Frontiers in Psychology, 8, doi: 10.3389/fpsyg.2017.00953

Understanding Sex In Advertising

When people post videos on YouTube, one major point of interest for content creators and aggregators is to capture as much attention as possible. Your video is adrift in a sea of information and you’re trying to get as many eyes/clicks on your work as possible. In that realm, first impressions are all-important: you want your video to have an attention-grabbing thumbnail image, as that will likely be the only thing viewers see before they actually click (or don’t) on it. So how do people go about capturing attention in that realm? One popular method is to ensure the thumbnail has a very emotive expression on it: a face of shock, embarrassment, stress, or any similar emotion. That’s certainly one way of attracting attention: trying to convince people there is something worth looking at, not unlike articles titled along the lines of five shocking tips for a better sex life (and number 3 will blow your mind!). Speaking of sex, that’s another popular method of grabbing attention: it’s fairly common for video thumbnails to feature people or body parts in various stages of undress. Not much will pull eyes towards a video like the promise of sex (and if you’re feeling an urge to click on that link, you’ll have experienced exactly what I’m talking about).

Case in point: most of that content is unrelated to the featured women

If sex happens to be attention-grabbing, the natural question arises concerning what you might do with that attention once you have it. Much of the time, that answer will involve selling some good or service. In other words, sex is used as a form of advertising to try and sell things. “If you enjoyed that picture of a woman wearing a thong, you’ll surely love our reasonably-priced laptops!” Something along those lines, anyway. Provided that’s your goal, lots of questions naturally start to crop up: How effective is sex at these goals? Does it capture attention well? Does it help people notice and remember your product or brand? Are those who viewed your sexy advert more likely to buy the product you’re selling? How do other factors – like the sex of the person viewing the ad – contribute to your success in these realms?

These are some of the questions examined in a recent meta-analysis by Wirtz, Sparks, & Zimbres (2017). The researchers searched the literature and found about 80 studies, representing about 18,000 participants. They sought to find out what effects featuring sexually provocative material had, on average. Such material was defined in terms of style of dress, sexual behavior, innuendo, or sexual embeds – hidden messages or images placed within the ad, like the word “sex” worked somewhere into the picture, which is something people apparently think is a good idea sometimes. To be included in the analysis, each sexual ad had to have been compared against a comparable, non-sexual ad for the same product, to determine which was more effective.

The effectiveness of these ads was assessed across a number of domains as well, including ad recognition (in aided and unaided contexts), whether the brand being advertised could be recalled (i.e., were people paying attention to just the sex, or did they remember the product?), the positive or negative response people had to the ad, what people thought about the brand being advertised with sex, and whether the ad actually got them interested in purchasing the product (does sex sell?).

Finally, a number of potentially moderating factors that might influence these effects were considered. The first of these was gender: did these ads have different impacts on men and women? Other factors included the gender of the model used in the advertisement, the date the article was published (to see if attitudes shifted over time), the sample used (college students or not), and – most interestingly – product/ad congruity: did the type of product being advertised matter when it came to whether sex was effective? Perhaps sex might help sell a product like sun-tan lotion (as the beach might be a good place to pick up mates), but be much less effective for selling, say, laptops.

Maybe even political views

In terms of capturing attention, sex works. Of the 20 effects looking at recall for the ads, the average size was d = .38. Interestingly, this effect was slightly larger for the congruent ads (d = .45), but completely reversed for the incongruent ones (d = -.45). Sex was good at getting people to remember ads selling a sex-related product, but not generally useful beyond that. That said, sexual appeals seemed better at getting people to remember just the ads: when the researchers turned to the matter of whether the brands within the ads were more likely to be recalled, the 31 effects looking at brand recognition turned out to barely break zero (d = .09). While sex might be attention-grabbing, it didn’t seem especially good at getting people to remember the objects being sold.
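Since these d values do the heavy lifting throughout this section, a quick illustration may help. Below is a minimal sketch of Cohen’s d – the standardized mean difference the meta-analysis aggregates – computed on simulated recall scores (the numbers are mine, for illustration only):

```python
# Minimal sketch of Cohen's d: the difference between two group means
# in units of their pooled standard deviation. Simulated data only.
import numpy as np

rng = np.random.default_rng(3)
sexual_ad = rng.normal(5.4, 1.0, 200)   # e.g., recall scores, sexual ads
control_ad = rng.normal(5.0, 1.0, 200)  # recall scores, non-sexual ads

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) +
                  (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(cohens_d(sexual_ad, control_ad))  # should land near d = .4
```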

Regarding people’s attitudes towards the ads, sex seems like something of a wash (d = -.07). Digging a little deeper revealed a more nuanced picture of these reactions, though: while sexual ads seemed to be a modest hit with the men (d = .27), they had the opposite effect on women (d = -.38). Women seemed to dislike the ads modestly more than men liked them, as sexual strategies theory would suggest. (For the record, the type of model being depicted didn’t make much of a difference. In order, people liked male models the least (d = -.28), then female models (d = -.20), while couples were mildly positive (d = .08).)

Curiously, both the men and women seemed to be in agreement regarding their stance towards brands that used sex to sell things: negative, on the whole (d = -.22). For women, this makes some intuitive sense: they didn’t seem to be fans of the sexual ads, so they weren’t exactly feeling too ingratiated towards the brand itself. But why were the men negatively inclined towards the brand if they were favorably inclined towards the ads? I can only speculate on that front, but I assume it would have something to do with their inevitable disappointment: either the brands were promising sex the male customers likely knew they couldn’t deliver on, or perhaps the men simply wanted to enjoy the sex part and the brand itself ended up getting in their way. I can’t imagine men would be too happy with their porn time being interrupted by an ad for toilet paper or fruit snacks mid-video.

Finally, turning to the matter of purchase intentions – whether the ads encouraged people to want to buy the product or not – it seemed that sex didn’t really sell, but it didn’t really seem to hurt, either (d = .01). One interesting exception in that realm was that sexual appeals were actually less likely to get people to buy a product when the product being sold was incongruent with the sexual appeal (d = -.24). Putting that into a simple example, the phrase “strip club buffet” probably doesn’t whet many appetites, and wouldn’t be a strong selling point for such a venue. Sex can be something of a disease vector, and associating your food with that might elicit more than a bit of disgust.

“Oh good, I was starving. This seems like as good a place as any”

As I’ve noted before, context matching matters in advertising. If you’re looking to sell people something that highlights their individuality, then doing so in a mating context works better than in a context of fear (as animals aren’t exactly aiming to look distinct when predators are nearby). The same seems to hold for using sex. While it might be useful for getting eyes on your advertisement, sex is by no means guaranteed to ensure that people like what they see once you have their attention. In that regard, sex – like any other advertising tool – needs to be used selectively, targeting the correct audience in the correct context if it’s going to succeed at increasing people’s interest in buying. Sex in general doesn’t sell. However, it might prove more effective for those with more promiscuous attitudes than for those with more monogamous ones; it might prove useful when advertising a product related to sex or mating, but not for selling domain names (like the old GoDaddy commercials; coincidentally, GoDaddy was also the brand I used to register this site); it might work better if you associate your product with things that lead to sex (like status), rather than sex itself. These are all avenues worth pursuing further to see when, where, and why sex works or fails.

That said, it is still possible that sex might prove useful, even in some inappropriate contexts. Consider the following hypothetical example: people will consider buying a product only after they have seen an advertisement for it. Advertisement X isn’t sexual, but when paired with the product will increase people’s intentions to buy it by 10%. However, it will also not get noticed by many people, as the content is bland. By contrast, advertisement Y is sexual, will decrease people’s intentions to buy the product by 10%, but will also get four times as many eyes on it. The latter ad might well be more successful, as it will capture the attention of more potential customers, many of whom may still buy the product despite the inappropriate use of sex. While targeted advertising might be more effective, the attention model of advertising shouldn’t be ruled out entirely, especially if targeting ads would prove too cumbersome.
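Putting rough numbers to that thought experiment makes the point clear. The sketch below assumes a 5% baseline purchase rate and an arbitrary audience size for ad X; both numbers are my own assumptions, not anything from the literature:

```python
# A quick check of the hypothetical above: which ad yields more purchases?
# The 10% lift/cut and the 4x audience come from the thought experiment;
# the baseline rate and audience size are assumed for illustration.
base_rate = 0.05           # assumed baseline chance a viewer buys
viewers_x = 10_000         # bland ad X reaches some audience...
viewers_y = 4 * viewers_x  # ...sexual ad Y reaches four times as many eyes

purchases_x = viewers_x * base_rate * 1.10  # ad X lifts intent by 10%
purchases_y = viewers_y * base_rate * 0.90  # ad Y cuts intent by 10%
print(purchases_x, purchases_y)  # 550.0 vs 1800.0: ad Y wins on volume
```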

References: Wirtz, J., Sparks, J., & Zimbres, T. (2017). The effect of exposure to sexual appeals in advertisements on memory, attitude, and purchase intention: A meta-analytic review. International Journal of Advertising, https://doi.org/10.1080/02650487.2017.1334996


Divorced Dads And Their Daughters

Despite common assumptions, parents have less of an impact on their children’s future development than they’re often credited with. Twins reared apart usually aren’t much different than twins reared together, and adopted children don’t end up resembling their adoptive parents substantially more than strangers. While parents can indeed affect their children’s happiness profoundly, a healthy (and convincing) literature exists supporting the hypothesis that differences in parenting behaviors don’t do a whole lot of shaping in terms of children’s later personalities (at least when the child isn’t around the parent; Harris, 2009). This makes a good deal of theoretical sense, as children aren’t developing to be better children; they’re developing to become adults in their own right. What children learn works when it comes to interacting with their parents might not readily translate to the outside world. If you assume your boss will treat you the same way your parents would, you’re likely in for some unpleasant clashes with reality. 

“Who’s a good branch manager? That’s right! You are!”

Not that this has stopped researchers from seeking to find ways that parent-child interactions might shape children’s future personalities, mind you. Indeed, just this past week I came upon a very new paper purporting to do just that. It suggested that the quality of a father’s investment in his daughters causes shifts in their willingness to engage in risky sexual behavior (DelPriore, Schlomer, & Ellis, 2017). The analysis in the paper is admittedly a bit tough to follow, as the authors examine three- and even four-way interactions (which are difficult to keep straight in one’s mind: the importance of variable A changes contingent on the interaction between B, C, & D), so I don’t want to delve too deeply into the specific details. Instead, I want to discuss the broader themes and design of the paper.

Previous research looking at parenting effects on children’s development often suffers from the problem of relatedness, as genetic similarities between parents and children make it hard to tease apart the unique effects of parenting behaviors (how the parents treat their children) from natural resemblances (nice parents have nice children). In a simple example, parents who love and nurture their children tend to have children who grow up kinder and nicer, while parents who neglect their children tend to have children who grow up to be mean. However, it seems likely that parents who care for their children are different in some important regards than those who neglect them, and those tendencies are perfectly capable of being passed on through shared genes. So are the nice kids nice because of how their parents treated them or because of inheritance? The adoption studies I mentioned previously tend to support the latter interpretation. When you control for genetic factors, parenting effects tend to drop out.

What’s good about the present research is its innovative design to try and circumvent this issue of genetic similarities between children and parents. To accomplish this goal, the authors examined (among other things) how divorce might affect the development of different daughters within the same family. The reasoning for doing so seems to go roughly as follows: daughters should base their sexual developmental trajectory, in part, on the extent of paternal investment they’re exposed to during their early years. When daughters are regularly exposed to fathers that invest in them and monitor their behavior, they should come to expect that subsequent male parental investment will be forthcoming in future relationships and avoid peers who engage in risky sexual behavior. The net result is that such daughters will engage in less risky sexual behavior themselves. By contrast, when daughters lack proper exposure to an investing father, or have one who does not monitor their peer behavior as tightly (due to divorce), they should come to view future male investment as unlikely, associate with those who engage in riskier sexual behavior, and engage in such behavior themselves.

Accordingly, if a family with two daughters experiences a divorce, the younger daughter’s development might be affected differently than the older daughter’s, as they have different levels of exposure to their father’s investment. The larger this age gap between the daughters, the larger this effect should be. After recruiting 42 sister pairs from intact families and 59 sister pairs from divorced families and asking them some retrospective questions about what their life was like growing up, this is basically the result the authors found. Younger daughters tended to receive less monitoring than older daughters in families of divorce and, accordingly, tended to associate with more sexually-risky peers and engage in such behaviors themselves. This effect was not present in biologically intact families. Do we finally have some convincing evidence of parenting behaviors shaping children’s personalities outside the home?
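The logic of that sibling-comparison design can be made concrete with a short simulation; everything below – the effect size, the noise levels, the age-gap range – is my own illustrative assumption, not anything taken from the paper:

```python
import random

random.seed(2)

# Hypothetical sister-pair simulation of the sibling-comparison design:
# within divorced families, the younger sister loses more years of paternal
# investment, and (by assumption here) her risky behavior rises with that loss.
def simulate_pairs(divorced, n_pairs=1000):
    diffs = []
    for _ in range(n_pairs):
        age_gap = random.randint(1, 8)             # years between sisters
        base = random.gauss(0, 1)                  # shared family risk level
        older = base + random.gauss(0, 0.5)
        effect = 0.2 * age_gap if divorced else 0  # assumed exposure effect
        younger = base + effect + random.gauss(0, 0.5)
        diffs.append((age_gap, younger - older))
    return diffs

for label, divorced in (("intact", False), ("divorced", True)):
    diffs = simulate_pairs(divorced)
    mean_diff = sum(d for _, d in diffs) / len(diffs)
    print(f"{label}: mean younger-minus-older risk difference {mean_diff:+.2f}")
```

Because each comparison is between sisters who share the same family (and, on average, half their genes), any systematic younger-minus-older difference that appears only in divorced families can’t easily be chalked up to heredity, which is the design’s selling point.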

Look at this data and tell me the first thing that comes to your mind

I don’t think so. The first concern I would raise regarding this research is the monitoring measure utilized. Monitoring, in this instance, represented a composite score of how much information the daughters reported their parents had about their lives – rated as (1) didn’t know anything, (2) knew a little, or (3) knew a lot – in five domains: who their friends were, how they spent their money, where they spent their time after school, where they were at night, and how they spent their free time. While one might conceptualize that as monitoring (i.e., parents taking an active interest in their children’s lives and seeking to learn about/control what they do), one could just as easily think of that measure as capturing how often children independently shared information with their parents. After all, the measure doesn’t ask, “how often did your parents try to learn about your life and keep track of your behavior?” It just asks how much they knew.
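To make the measure concrete, here’s a minimal sketch of how such a composite might be computed; the item names and the simple averaging are my reconstruction from the description above, not the authors’ actual scoring code:

```python
# Hypothetical reconstruction of the parental "monitoring" composite:
# five items, each rated 1 (didn't know anything) to 3 (knew a lot).
monitoring_items = {
    "who_friends_were": 3,
    "how_money_spent": 2,
    "where_after_school": 2,
    "where_at_night": 1,
    "how_free_time_spent": 2,
}

# Averaging the ratings yields a composite monitoring score. Note that the
# items only capture what parents ended up *knowing*, not whether they
# actively sought that knowledge or the child volunteered it.
composite = sum(monitoring_items.values()) / len(monitoring_items)
print(f"Monitoring composite: {composite:.1f}")  # 2.0
```

Nothing in that score distinguishes a vigilant parent from a forthcoming child, which is precisely the interpretive problem.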

To put that point concretely, my close friends might know quite a bit about what I do, where I go, and so on, but it’s not because they’re actively monitoring me; it’s because I tell them about my day voluntarily. So, rather than talking about how a father’s monitoring of his daughter might have a causal effect on her sexual behavior, we could just as easily talk about how daughters who engage in risky behavior prefer not to tell their parents about what they’re doing, especially if their personal relationship is already strained by divorce.

The second concern involves divorce itself. Divorce can indeed affect the personal relationships of children with their parents. However, that’s not the only thing that happens after a divorce; there are other effects that extend beyond emotional closeness. An important example is the financial one. If a father has been working while the mother took care of the children – or if both parents were working – divorce can result in massive financial hits for the children (as most end up living with their mother or in a joint custody arrangement). Layering additional economic problems onto an already emotionally-upsetting divorce can entail not only additional resentment between children and parents (and, accordingly, less sharing of information between them; the reduced monitoring), but also major alterations to the living conditions of the children. These lifestyle shifts could include moving to a new home, upsetting existing peer relations, entering new social groups, and presenting children with new logistical problems to solve.

Any observed changes in a daughter’s sexual behavior in the years following a divorce, then, can be thought of as a composite of all the changes that take place post-divorce. While the quality and amount of the father-daughter relationship might indeed change during that time, there are additional and important factors that aren’t controlled for in the present paper.

Too bad the house didn’t split down the middle as nicely

The final concern I wanted to discuss was more of a theoretical one, and it’s slightly larger than the methodological points above. According to the theory proposed at the beginning of the paper:

“…the quality of fathering that daughters receive provides information about the availability and reliability of male investment in the local ecology, which girls use to calibrate their mating behavior and expectations for long-term investment from future mates.”

This strikes me as a questionable foundation for a few reasons. First, it would require that the relationship of a daughter’s parents is substantially predictive of the relationships she is likely to encounter in the world with regard to male investment. In other words, if your father didn’t invest in your mother (or you) that heavily (at least not during your childhood), that needs to mean that many other potential fathers are likely to do the same to you (if you’re a girl). This would further require, then, that male investment be appreciably uniform across men and stable across time. If male investment isn’t stable between males and across time within a given male, then trying to predict the general availability of future male investment from your father’s behavior seems like a losing formula for accuracy.

It seems unlikely the world is that stable. For similar reasons, I suggested that children probably can’t accurately gauge future food availability from their access to food at a young age. Making matters even worse in this regard is that, unlike food shortages, the presence or absence of male parental investment doesn’t seem like the kind of thing that will be relatively universal. Some men in a local environment might be perfectly willing to invest heavily in women while others are not. But that’s only considering the broad level: men who are willing to invest in general might be unwilling to invest in a particular woman, or might be willing or unwilling to invest in that woman at different stages in her life, contingent on her mate value shifting with age. Any kind of general predictive power that could be derived about men in a local ecology seems weak indeed, especially if you are basing that decision off a single relationship: the one between your parents. In short, if you want to know what men in your environment are generally like, one relationship should be as informative as another. There doesn’t seem to be a good reason to assume your parents will be particularly informative.

Matters get even worse for the predictive power of father-daughter relationships when one realizes the contradiction between that theory and the predictions of the authors. The point can be made crystal clear simply by considering the families examined in this very study. The sample of interest was comprised of daughters from the same family who had different levels of exposure to paternal investment. That ought to mean, if I’m following the predictions properly, that the daughters – the older and the younger one – should develop different expectations about future paternal investment in their local ecology. Strangely, however, these expectations would have been derived from the same father’s behavior. This is a problem because both daughters cannot be right about the general willingness of males to invest if they hold different expectations. If the older daughter with more years of exposure to her father comes to believe male investment will be available and the younger daughter with fewer years of exposure comes to believe it will be unavailable, these are opposing expectations of the world.

However, if those different expectations are derived from the same father, that alone should cast doubt on the ability of a single parental relationship to predict broad trends about the world. It doesn’t even seem to be right within families, let alone between them. (It’s probably worth mentioning at this point that, if children are going to be right about the quality of male investment in their local ecology more generally, all the children in the same area should develop similar expectations, regardless of their parents’ behavior. It would be strange for literal neighbors to develop different expectations of general male behavior in their local environment just because the parents of one home got divorced while the other stayed together. Then again, it should seem strange for daughters of the same home to develop different expectations, too.)

Unless different ecologies have rather sharp borders

On both a methodological and theoretical level, then, there are some major concerns with this paper that render its interpretation suspect. Indeed, at the heart of the paper is a large contradiction: if you’re going to predict that two girls from the same family develop substantially different expectations about the wider world from the same father, then it seems impossible that the data from that father is very predictive of the world. In any case, the world doesn’t seem as stable as it would need to be for that single data point to be terribly useful. There ought not be anything special about the relationship of your parents (relative to other parents) if you’re looking to learn something about the world in general.

While I fully expect that children’s lives following their parents’ divorce will be different – and those differences can affect development, depending on when they occur – I’m not so sure that the personal relationship between fathers and daughters is the causal variable of primary interest.

References: DelPriore, D., Schlomer, G., & Ellis, B. (2017). Impact of Fathers on Parental Monitoring of Daughters and Their Affiliation With Sexually Promiscuous Peers: A Genetically and Environmentally Controlled Sibling Study. Developmental Psychology. Advance online publication. http://dx.doi.org/10.1037/dev0000327

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, NY.

Why Do So Many Humans Need Glasses?

When I was very young, I was given an assignment in school to write a report on the Peregrine Falcon. One interesting fact about this bird happens to be that it’s quite fast: when the bird spots prey (sometimes from over a mile away), it can enter into a high-altitude dive, reaching speeds in excess of 200 mph, and snatch its prey out of midair. The Peregrine would be much less capable of achieving these tasks – both the location and capture of prey – if its vision were not particularly acute: failures of eyesight can result in not spotting the prey in the first place, or failing to capture it if distances and movements aren’t properly tracked. For this reason I suspect (though am not positive) that you’ll find very few Peregrines with bad vision: their survival depends very heavily on seeing well. These birds would probably not be in need of corrective lenses, like the glasses and contacts that humans regularly rely upon in modern environments. This raises a rather interesting question: why do so many humans wear glasses?

And why does this human wear so many glasses?

What I’m referring to in this case is not the general degradation of vision with age. As organisms age, all their biological systems should be expected to break down and fail with increasing regularity, and eyes are no exception. Crucially, all these systems should be expected to break down at more-or-less the same time. This is because there’s little point in a body investing loads of metabolic resources into maintaining a completely healthy heart that will last for 100 years if the liver is going to shut down at 60. The whole body will die if the liver does, healthy heart (or eyes) included, so it would be adaptive to allocate those developmental resources differently. The mystery posed by frequently-poor human eyesight is appreciably different, as poor vision can develop early in life, often before puberty. When you observe apparently maladaptive development early in life like that, it requires another type of explanation.

So what might explain why human visual acuity appears so lackluster early in life (to the tune of over 20% of teenagers using corrective lenses)? There are a number of possible explanations we might entertain. The first of these is that visual acuity hasn’t been terribly important to human populations for some time, meaning that having poor eyesight did not have an appreciable impact on people’s ability to survive and reproduce. This strikes me as a rather implausible hypothesis on the face of it, not only because vision seems rather important for navigating the world, but also because it ought to predict that poor vision should be something of a species universal. While 20% of young people using corrective lenses is a lot, eyes (and the associated brain regions dedicated to vision) are costly organs to grow and maintain. If they truly weren’t that important to have around, then we might expect that everyone would need glasses to see better, not just pockets of the population. Humans don’t seem to resemble the troglobites that have lost their vision after living in caves away from sunlight for many generations.

Another possibility is that visual acuity has been important – it’s adaptive to have good vision – but people’s eyes sometimes fail to develop properly because of developmental insults, like infectious organisms. While this isn’t implausible in principle – infectious agents have been known to disrupt development and result in blindness, deafness, and even death on the extreme end – the sheer number of people who need corrective lenses seems a bit high to be caused by some kind of infection. Further, the number of younger children and adults who need glasses appears to have been rising over time, which might seem strange as medical knowledge and technologies have been steadily improving. If the need for glasses were caused by some kind of infectious agent, we would need to have been unaware of its existence and to have not accidentally treated it with antibiotics or other such medications. Further, we might expect glasses to be associated with other signs of developmental stress, like bodily asymmetries, low IQ, or other such outcomes. If your immune system didn’t fight off the bugs that harmed your eyes, it might not be good enough to fight off other development-disrupting infections. However, there seems to be a positive correlation between myopia and intelligence, which would be strange under a disease hypothesis.

The negative correlation with fashion sense begs for explanation, too

A third possible explanation is that visual acuity has indeed been important for humans, but our technologies have been relaxing the selection pressures that were keeping it sharp. In other words, once humans invented glasses and granted those who cannot see as well a crutch to overcome the issue, any reproductive disadvantage associated with poor vision was effectively removed. It’s an interesting hypothesis, and it predicts that people’s eyesight in a population should begin to get worse following the invention and/or proliferation of corrective lenses. So, if glasses were invented in Italy around 1300, that should have led to the Italian population’s eyesight growing worse, followed by the eyesight of other cultures to which glasses spread, but not beforehand. I don’t know much about the history of vision across time in different cultures, but something tells me that pattern wouldn’t show up if it could be assessed. In no small part, that intuition is driven by the relatively brief window of historical time between when glasses were invented – and subsequently refined, produced in sufficient numbers, and distributed globally – and today. A window of only about 700 years for all of that to happen and reduce selection pressures for vision isn’t a lot of time. Further, there seems to be evidence that myopia can develop rather rapidly in a population, sometimes in as little as a generation:

One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition.

That’s much too fast for a relaxation of selection pressures to be responsible for the change.

This brings us to the final hypothesis I wanted to cover today: an evolutionary mismatch hypothesis. In the event that modern environments differ in some key ways from the typical environments humans have faced ancestrally, it is possible that people will develop along an atypical path. In this case, the body is (metaphorically) expecting certain inputs during its development, and if they aren’t received, things can go poorly. For instance, it has been suggested that people develop allergies, in part, as a result of improved hygiene: our immune systems are expecting a certain level of pathogen threat which, when not present, can result in their attacking inappropriate targets, like pollen.

There does seem to be some promising evidence on this front for understanding human vision issues. A paper by Rose et al. (2008) reports on myopia in two samples of similarly-aged Chinese children: 628 children living in Singapore and 124 living in Sydney. Of those living in Singapore, 29% appeared to display myopia, relative to only 3% of those living in Sydney. These dramatic differences in rates of myopia are all the stranger when you consider that the rates of myopia in their parents were quite comparable. For the Sydney/Singapore samples, respectively, 32/29% of the children had no parent with myopia, 43/43% had one parent with myopia, and 25/28% had two parents with myopia. If myopia were simply the result of inherited genetic mutations, its frequencies shouldn’t differ between countries as much as they do, disqualifying hypotheses one and three from above.
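One way to see the force of that argument: if myopia risk were driven mainly by parental genes, two samples with nearly identical parental profiles should show nearly identical child rates, whatever the true transmission probabilities happen to be. A minimal sketch, with per-category risks invented purely for illustration (only the parental distributions come from the paper):

```python
# If myopia were mainly inherited, child rates should track parental myopia.
# The per-category risks below are made-up illustrative values; the parental
# distributions are those reported by Rose et al. (2008).

# Hypothetical risk of child myopia given number of myopic parents:
risk_by_parent_count = {0: 0.10, 1: 0.25, 2: 0.45}

# Proportion of children with 0, 1, or 2 myopic parents in each sample:
parents_sydney    = {0: 0.32, 1: 0.43, 2: 0.25}
parents_singapore = {0: 0.29, 1: 0.43, 2: 0.28}

def expected_rate(parent_dist):
    """Expected child myopia rate under a purely hereditary model."""
    return sum(p * risk_by_parent_count[k] for k, p in parent_dist.items())

print(f"Sydney expected:    {expected_rate(parents_sydney):.1%}")     # ~25.2%
print(f"Singapore expected: {expected_rate(parents_singapore):.1%}")  # ~26.3%
# Observed: Sydney ~3%, Singapore ~29% -- far apart despite similar parents.
```

Because the two parental distributions differ by only a few percentage points per category, any purely hereditary model keeps the expected rates within a few points of each other; the observed 26-point gap has to come from somewhere else.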

When examining what behavioral correlates of myopia existed between countries, several were statistically – but not practically – significant, including number of books read and hours spent on computers or watching TV. The only appreciable behavioral difference between the two samples was the number of hours the children tended to spend outdoors. In Sydney, the children spent an average of about 14 hours a week outside, compared to a mere 3 hours in Singapore. It might be the case, then, that the human eye requires exposure to certain kinds of stimulation provided by outdoor activities to develop properly, and some novel aspects of modern culture (like spending lots of time indoors in a school when children are young) reduce such exposure (which might also explain the aforementioned IQ correlation: smarter children may be sent to school earlier). If that were true, we should expect that providing children with more time outdoors when they are young is preventative against myopia, which it actually seems to be.

Natural light and no Wifi? Maybe I’ll just go blind instead…

It should always strike people as strange when key adaptive mechanisms appear to develop along an atypical path early in life that ultimately makes them worse at performing their function. An understanding of what types of biological explanations can account for these early maladaptive outcomes goes a long way in helping you understand where to begin your searches and what patterns of data to look out for.

References: Rose, K., Morgan, I., Smith, W., Burlutsky, G., Mitchell, P., & Saw, S. (2008). Myopia, lifestyle, and schooling in students of Chinese ethnicity in Singapore and Sydney. Archives of Ophthalmology, 126, 527-530.

More About Dunning-Kruger

Several years back I wrote a post about the Dunning-Kruger effect. At the time I was still getting my metaphorical sea legs for writing and, as a result, I don’t think the post turned out as well as it could have. In the interests of holding myself to a higher standard, today I decided to revisit the topic, both to improve upon the original post and to generate a future reference for me (and hopefully you) when discussing it with others. This is something of a time-saver for me because people talk about the effect frequently despite, ironically, not really understanding it too deeply.

First things first: what is the Dunning-Kruger effect? As you’ll find summarized just about everywhere, it refers to the idea that people who are below-average performers in some domains – like logical reasoning or humor – will tend to judge their performance as being above average. In other words, people are inaccurate at judging how well their skills stack up to their peers’ or, in some cases, to some objective standard. Moreover, this effect gets larger the more unskilled one happens to be. Not only are the worst performers worse at the task than others, but they’re also worse at understanding that they’re bad at the task. This effect was said to obtain because people need to know what good performance is before they can accurately assess their own. So, because below-average performers don’t understand how to perform a task correctly, they also lack the skills to judge their performance accurately, relative to others.

Now available at Ben & Jerry’s: Two Scoops of Failure

As mentioned in my initial post (and by Kruger & Dunning themselves), this type of effect shouldn’t extend to domains where production and judging skills can be uncoupled. Just because you can’t hit a note to save your life on karaoke night, that doesn’t mean you’ll be unable to figure out which other singers are bad. This effect should also be primarily limited to domains in which the feedback you receive isn’t objective or standards for performance aren’t clear. If you’re asked to re-assemble a car engine, for instance, unskilled people will quickly realize they cannot do this unassisted. That said, to highlight the reason why the original explanation for this finding doesn’t quite work – not even for the domains that were studied in the original paper – I wanted to examine a rather important graph of the effect from Kruger & Dunning (1999) with respect to their humor study:

My crudely-added red arrows demonstrate the issue. On the left-hand side, we see what people refer to as the Dunning-Kruger effect: those who were the worst performers in the humor realm were also the most inaccurate in judging their own performance, compared to others. They were unskilled and unaware of it. However, the right-hand side betrays the real issue that caught my eye: the best performers were also inaccurate. The pattern you should expect, according to the original explanation, is that the higher one’s performance, the more accurately they estimate their relative standing; but what we see is that the best performers aren’t quite as accurate as those who are only modestly above average. At this point, some of you might be thinking that what I’m raising is basically a non-issue, because the best performers were still more accurate than the worst performers, and the right-hand inaccuracy isn’t appreciable. Let me try to persuade you otherwise.

Assume for a moment that people were just guessing as to how they performed, relative to others. Because having a good sense of humor is a socially-desirable skill, people all tend to rate themselves “modestly above-average” in the domain to try and persuade others they actually are funny (and because, in that moment, there are no consequences to being wrong). Despite these just being guesses, those who actually are modestly above-average will appear to be more accurate in their self-assessment than those who are in the bottom half of the population; that accuracy just doesn’t have anything to do with their true level of insight into their abilities (referred to as their meta-cognitive skills). Likewise, those who are more than modestly above average (i.e. are underestimating their skills) will be less accurate as well; there will just be fewer of them than those who overestimated their abilities.
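This guessing artifact is easy to demonstrate with a short simulation; the sketch below assumes (as a hypothetical, not as anything from the original studies) that everyone reports roughly the same “modestly above-average” percentile regardless of actual performance:

```python
import random

random.seed(1)

# Actual percentile ranks are uniform; self-estimates ignore performance
# entirely, clustering around "modestly above average" (60th percentile).
n = 10_000
actual = [random.uniform(0, 100) for _ in range(n)]
guess = [min(100, max(0, random.gauss(60, 10))) for _ in range(n)]

# Average miscalibration (guess - actual) within each actual-performance quartile.
for lo in (0, 25, 50, 75):
    errors = [g - a for a, g in zip(actual, guess) if lo <= a < lo + 25]
    print(f"Actual {lo:>2}-{lo + 25:<3} percentile: mean error "
          f"{sum(errors) / len(errors):+5.1f}")
```

Despite zero insight anywhere in the model, the bottom quartile looks wildly overconfident (roughly +47 points) and the top quartile underconfident (roughly -27), reproducing the shape of the classic graph.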

Considering the findings of Kruger & Dunning (1999) on the whole, the scenario I just outlined doesn’t reflect reality perfectly. There was a positive correlation between people’s performance and their rating of their relative standing (r = .39) but, for the most part, people’s judgments of their own ability (the black line) appear relatively uniform. Then again, if you consider the results in studies two and three of that same paper (logical reasoning and grammar), the correlations between performance and judgments of performance relative to others drop, ranging from a low of r = .05 up to a peak of r = .19, the latter being statistically significant. People’s judgments of their relative performance were almost flat across several such tasks. To the extent these meta-cognitive judgments of performance use actual performance as an input for determining relative standing, it’s clearly not the major factor for either low or high performers.

They all shop at the same cognitive store

Indeed, actual performance shouldn’t be expected to be the primary input for these meta-cognitive systems (the ones that generate relative judgments of performance) for two reasons. The first of these is the original performance explanation posited by Kruger & Dunning (1999): if the system generating the performance doesn’t have access to the “correct” answer, then it would seem particularly strange that another system – the meta-cognitive one – would have access to the correct answer, but only use it to judge performance, rather than to help generate it.

To put that in a quick memory example, say you were experiencing a tip-of-the-tongue state, where you are sure you know the right answer to a question but can’t quite recall it. In this instance, we have a long-term memory system generating performance (trying to recall an answer) and a meta-cognitive system generating confidence judgments (the tip-of-the-tongue state). If the meta-cognitive system had access to the correct answer, it should just share it with the long-term memory system, rather than using that access merely to tell the other system to keep looking. The latter path is clearly inefficient and redundant. Instead, the meta-cognitive system should use cues other than direct access to the relevant information in generating its judgments.

The second reason actual performance (relative to others) wouldn’t be an input for these meta-cognitive systems is that people don’t have reliable and accurate access to population-level data. If you’re asking people how funny they are relative to everyone else, they might have some sense for it (how funny are you, relative to some particular people you know), but they certainly don’t have access to how funny everyone is because they don’t know everyone; they don’t even know most people. If you don’t have the relevant information, then it should go without saying that you cannot use it to help inform your responses.

Better start meeting more people to do better in the next experiment

So if these meta-cognitive systems are using inputs other than accurate information in generating their judgments about how we stack up to others, what would those inputs be? One possible input would be task difficulty, not in the sense of how hard the task objectively is for a person to complete, but rather in terms of how difficult a task feels. This means that factors like how quickly an answer can be called to mind likely play a role in these judgments, even if the answer itself is wrong. If judging the humor value of a joke feels easy, people might be inclined to say they are above average in that domain, even if they aren’t.

This yields an important prediction: if you provide people with tasks that feel difficult, you should see them largely begin to guess they are below-average in that domain. If everyone is effectively guessing that they are below average (regardless of their actual performance), this means that those who perform the best will be the most inaccurate in judging their relative ability. In tasks that feel easy, people might be unskilled and unaware; for those that feel hard, people might be skilled but still unaware.
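Extending the earlier sketch makes the prediction concrete: if felt difficulty simply shifts everyone’s anchor (the anchor values below are, again, my own illustrative assumptions, not figures from any paper), the miscalibration pattern should flip for hard-feeling tasks:

```python
import random

random.seed(1)

def mean_error_by_quartile(anchor, n=10_000):
    """Mean (guess - actual) per actual-performance quartile when
    self-estimates just cluster around a felt-difficulty anchor."""
    actual = [random.uniform(0, 100) for _ in range(n)]
    guess = [min(100, max(0, random.gauss(anchor, 10))) for _ in range(n)]
    quartiles = []
    for lo in (0, 25, 50, 75):
        errors = [g - a for a, g in zip(actual, guess) if lo <= a < lo + 25]
        quartiles.append(sum(errors) / len(errors))
    return quartiles

# Easy-feeling task: everyone guesses ~60th percentile; hard-feeling: ~40th.
for label, anchor in (("easy", 60), ("hard", 40)):
    errs = mean_error_by_quartile(anchor)
    print(label, [f"{e:+.0f}" for e in errs])
```

With the easy anchor, the bottom quartile is the most miscalibrated; with the hard anchor, the top quartile is: unskilled-and-unaware or skilled-and-unaware, depending on nothing but how the task feels.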

This is precisely what Burson, Larrick, & Klayman (2006) tested, across three studies. While I won’t go into details about the specifics of all their studies (this is already getting long), I will recreate a graph from one of their three studies that captures their overall pattern of results pretty well:

As we can see, when the domains being tested became harder, it was now the case that the worst performers were more accurate in estimating their percentile rank than the best ones. On tasks of moderate difficulty, the best and worst performers were equally calibrated. However, it doesn’t seem that this accuracy is primarily due to their real insights into their performance; it just so happened to be the case that their guesses landed closer to the truth. When people think, “this task is hard,” they all seem to estimate their performance as being modestly below average; when the task feels easy instead, they all seem to estimate their performance as being modestly above average. The extent to which that matches reality is largely due to chance, relative to true insight.

Worth noting is that when you ask people to make different kinds of judgments, there is (or at least can be) a modest average advantage for top performers, relative to bottom ones. Specifically, when you ask people to judge their absolute performance (i.e., how many of these questions did you get right?) and compare that to their actual performance, the best performers sometimes had a better grasp on that estimate than the worst ones, but the size of that advantage varied depending on the nature of the task and wasn’t entirely consistent. Averaged across the studies reported by Burson et al (2006), top-half performers displayed a better correlation between their perceived and actual absolute performance (r = .45), relative to bottom performers (r = .05). The corresponding correlations for actual and relative percentiles were in the same direction, but lower (rs = .23 and .03, respectively). While there might be some truth to the idea that the best performers are more sensitive to their relative rank, the bulk of the miscalibration seems to be driven by other factors.

Driving still feels easy, so I’m still above-average at it

These judgments of one’s relative standing compared to others appear rather difficult for people to get accurate. As they should, really: for the most part we lack access to the relevant information/feedback, there are possible social-desirability issues to contend with, and there’s a lack of consequences for being wrong. This is basically a perfect storm for inaccuracy. Perhaps worth noting is that the correlation between one’s estimated relative performance and their actual performance was pretty close for one domain in particular in Burson et al (2006): knowledge of pop music trivia. As pop music is the kind of thing people have more experience learning and talking about with others, it is a good candidate for a case where these judgments might be more accurate, because people do have more access to the relevant information.

The important point to take away from this research is that people don’t appear to be particularly good at judging their abilities relative to others, and this obtains regardless of whether the judges are themselves skilled or unskilled. At least for most of the contexts studied, anyway; it’s perfectly plausible that people – again, skilled and unskilled – will be better able to judge their relative (and absolute) performance when they have experience with the domain in question and have received meaningful feedback on their performance. This is why people sometimes drop out of a major or job after receiving consistent negative feedback, opting to believe they aren’t cut out for it rather than persisting in the belief that they are actually above average in that context. You will likely see the least miscalibration in domains where people’s judgments of their ability need to hit reality and there are consequences for being wrong.

References: Burson, K., Larrick, R., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it: How perceptions of difficulty drive miscalibration in relative comparisons. Journal of Personality & Social Psychology, 90, 60-77.

Kruger, J. & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality & Social Psychology, 77, 1121-1134.

Why Do We Roast The Ones We Love?

One very interesting behavior that humans tend to engage in is murder. While we’re far from the only species that does this (as there are some very real advantages to killing members of your species – even kin – at times), it does tend to garner quite a bit of attention, and understandably so. One very interesting piece of information about this behavior concerns motives; why people kill. If you were to hazard a guess as to some of the most common motives for murder, what would you suggest? Infidelity is a good one, as is murder resulting from other deliberate crimes, like when a robbery is resisted or witnesses are killed to reduce the probability of detection. Another major factor that many might not guess is minor slights or disagreements, such as one person stepping on another person’s foot by accident, followed by an insult (“watch where you’re going, asshole!”), which is responded to with an additional insult, and things kind of get out of hand until someone is dead (Daly & Wilson, 1988).

Understanding why seemingly minor slights get blown so far out of proportion is a worthwhile matter in its own right. The short version of the answer is that one’s social status (especially if you’re a male) can be determined, in large part, by whether other people know they can push you around. If I know you will tolerate negative behavior without fighting back, I might be encouraged to take advantage of you in more extreme ways more often. If others see you tolerating insults, they too may exploit you, knowing you won’t fight back. On the other hand, if I know you will respond to even slight threats with violence, I have a good reason to avoid inflicting costs on you. The more dangerous you are, the more people will avoid harming you.

“Anyone else have something to say about my shirt?! Didn’t think so…”

This is an important foundation for understanding why another facet of human behavior is strange (and, accordingly, interesting): friends frequently insult each other in a manner intended to be cordial. This behavior is exemplified well by the popular Comedy Central Roasts, where a number of comedians will get together to publicly make fun of each other and their guest of honor. If memory serves, the (unofficial?) motto of these events is, “We only roast the ones we love,” which is intended to capture the idea that these insults are not intended to burn bridges or truly cause harm. They are insults born of affection, playful in nature. This is an important distinction because, as the murder statistics help demonstrate, strangers often do not tolerate these kinds of insults. If I were to go up to someone I didn’t know well (or knew well as an enemy) and started insulting their drug habits, dead loved ones, or even something as simple as their choice of dress, I could reasonably expect anything from hurt feelings to a murder. This raises an interesting series of mysteries: why does the stranger want to kill me while my friends laugh, and when might my friends be inclined to kill me, too?

Insults can be spoken in two primary manners: seriously and in jest. In the former case, harm is intended, while in the latter it often isn’t. As many people can attest to, however, the line between serious and jesting insults is not always as clear as we’d like. Despite our best intentions, ill-phrased or poorly-timed jokes can do harm in much the same way that a serious insult can. This suggests that the nature of the insults is similar between the two contexts. As the function of a serious insult between strangers would seem to be to threaten or lower the insulted target’s status, this is likely the same function of an insult made in jest between friends, though the degree of intended threat is lower in those contexts. The closest analogy that comes to mind is the difference between a serious fight and a friendly tussle, where the combatants either are, or are not, trying to inflict serious harm on each other. Just like play fighting, however, things sometimes go too far and people do get hurt. I think joking insults between friends go much the same way.

This raises another worthwhile question: as friends usually have a vested interest in defending each other from outside threats and being helpful, why would they then risk threatening the well-being of their allies through such insults? It would be strange if these insults were all risk and no reward, so it is up to us to explain what that reward is. There are a few explanations that come to mind, all of which focus on one crucial facet of friendships: they are dynamic. While friendships can be – and often are – stable over time, who you are friends with, as well as the degree of those friendships, changes over time. Given that friendships are important social resources that do shift, it’s important that people have reliable ways of assessing the strength of these relationships. If you are not assessing these relationships now and again, you might come to believe that your social ties are stronger than they actually are, which can be a problem when you find yourself in need of social support and realize you don’t have it. Better to assess what kind of support you have before you actually need it, so you can tailor your behavior more appropriately.

“You guys got my back, right?….Guys?….”

Insults between friends can help serve this relationship-monitoring function. As insults – even the joking kind – carry the potential to inflict costs on their target, the willingness of an individual to tolerate the insult – to endure those costs – can serve as a credible signal of friendship quality. After all, if I’m willing to endure the costs of being insulted by you without responding aggressively in turn, this likely means I value your friendship more than I dislike the costs being inflicted. Indeed, if these insults did not carry costs, they would not be reliable indications of friendship strength: anyone could tolerate behavior that didn’t inflict costs in order to maintain a friendship, but not everyone will tolerate behaviors that do. This yields another prediction: the degree of friendship strength can also be assessed by the degree of insult one is willing to tolerate. In other words, the more it takes to “go too far” when it comes to insults, the closer and stronger the friendship between two individuals. Conversely, if you were to make a joke about your friend that they became incredibly incensed over, this might result in your reevaluating the strength of that bond: if you thought the bond was stronger than it was, you might either take steps to remedy the cost you just inflicted and make the friendship stronger (if you value the person highly) or spend less time investing in the relationship, perhaps even walking away from it entirely (if you do not).

Another possible related function of these insults could be to ensure that your friends don’t start to think too highly of themselves. As mentioned previously, friendships are dynamic things based, in part, on what each party can offer to the other. If one friend begins to see major changes to their life in a positive direction, the other friend may no longer be able to offer the same value they did previously. To put that in a simple example, if two friends have long been poor, but one suddenly gets a new, high-paying job, the new status that job affords will allow that person to make friends he likely could not before. Because the job makes them more valuable to others, others will now be more inclined to be their friend. If the lower-status friend wishes to retain their friendship with the newly-employed one, they might use these insults to potentially undermine the confidence of their friend in a subtle way. It’s an indirect way of trying to ensure the high-status friend doesn’t begin to think he’s too good for his old friends.

Such a strategy could be risky, though. If the lower-status party can no longer offer the same value to the higher-status one, relative to their new options, that might also not be the time to test the willingness of the higher-status one to tolerate insults. At the same time, times of change are also precisely when the value of reassessing relationship strength can be at its highest. There’s less of a risk of a person abandoning a friendship when nothing has changed, relative to when it has. In either case, the assessment and management of social relationships is likely the key for understanding the tolerance of insults from friends and intolerance of them from strangers.

“Enjoy your new job, sellout. You used to be cool”

This analysis can speak to another interesting facet of insults as well: they’re sometimes directed towards the speaker, referred to as self-deprecating humor when done in jest (and just self-deprecation when not). It might seem strange that people would insult themselves, as doing so directly threatens their own status. That people do so with some regularity suggests there might be some underlying logic to these self-directed insults as well. One possibility is that these insults do what was just discussed: signal that one doesn’t hold oneself in too high esteem and, accordingly, that one isn’t “too good” to be your friend. This seems like a profitable place from which to understand self-deprecating jokes. When such insults directed towards the self are not made in jest, they likely carry additional implications, such as that expectations should be set lower (e.g., “I’m really not able to do that”) or that one is in need of additional investment, relative to the joking kind.

References: Daly, M. & Wilson, M. (1988). Homicide. Aldine De Gruyter: NY.