Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the practice of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on an engagement ring, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such behavior is, “why do people apparently waste money like this?” A second, related question that might be asked once we have an answer to the first (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is how we can understand who is most likely to spend money in a conspicuous fashion. Alternatively, this question could be framed by asking what contexts tend to favor conspicuous consumption. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders or, if you’re a bit strange, to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth instead?

The first question – why do people conspicuously consume – is perhaps the easier one to answer initially, as it’s been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there were some kind of survival benefit to those long, colorful tail feathers, we would expect that both sexes would develop them, not just the males.

However, it is because these feathers are costly that they are useful signals, since males in relatively poor condition could not shoulder their costs effectively. It takes a healthy, well-developed male to be able to survive and thrive in spite of carrying these trains of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues for such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to just say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.
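
To make the handicap logic concrete, here is a toy payoff sketch (entirely my own construction; the benefit and cost numbers are invented for illustration, not taken from Zahavi, 1975): a signal carries a fixed benefit, but its cost falls with the signaler’s condition, so only individuals in good condition come out ahead by signaling.

```python
# Toy sketch of the handicap principle. All numbers are illustrative
# assumptions, not values from the biological literature.

SIGNAL_BENEFIT = 10  # assumed payoff from attracting a mate

def signal_cost(condition):
    # Assumption: the same ornament burdens a low-condition male more heavily.
    return 20 / condition

def should_signal(condition):
    # Signaling pays only when the benefit outweighs the condition-dependent cost.
    return SIGNAL_BENEFIT - signal_cost(condition) > 0

for condition in (1, 2, 4):
    print(f"condition {condition}: signal pays? {should_signal(condition)}")
```

Because the cost falls with condition, the signal separates high-condition from low-condition individuals, which is what keeps it honest.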

Costly displays, then, owe their existence to the honesty they impart to a signal. Human consumption patterns should be expected to follow a similar logic: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we would need to begin by thinking about matters such as what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Toward that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups: specifically, the paper examined racial patterns of spending on what were dubbed visible goods: objects which are portable and conspicuous in anonymous interactions, such as jewelry, clothing, and cars. These are goods designed to be luxury items which others will frequently see, relative to other, less-visible luxury items, such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al (2009), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than White ones (about 40% and 25% less, respectively), but this difference needs to be viewed with respect to each group’s relative income. After all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated through their reports of their overall spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of its relative spending across a number of different categories emerged. Specifically, it was found that Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size, as well. The average white household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household would spend approximately $9,000 on such purchases. These purchases come at the expense of all other categories as well (which should be expected, as the money has to come from somewhere), meaning that the money spent on visible goods often means less is spent on education, health care, and entertainment.
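
As a quick back-of-the-envelope check on those figures (the ~$7,000 baseline and the 20-30% relative gap are the paper’s numbers; the rest is simple arithmetic):

```python
# Back-of-the-envelope check on the visible-spending gap discussed above.
# Only the baseline and the 20-30% range come from Charles et al.

white_visible_spending = 7_000  # approximate annual visible spending, USD

for gap in (0.20, 0.30):
    implied = white_visible_spending * (1 + gap)
    print(f"At a {gap:.0%} gap: ~${implied:,.0f}")
```

The implied range of roughly $8,400-$9,100 is where the approximate $9,000 figure for a comparably-wealthy Black or Hispanic household comes from.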

There are some other interesting findings to mention. One – which I find rather notable, but the authors don’t seem to spend any time discussing – is that racial differences in consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible consumption gap appears to decline as one goes from single to married. The numbers Charles et al (2009) mention estimate that the average percentage of budgets used on visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that they represent both. Getting older and getting into relationships tended to reduce the racial divide in visible good consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al (2009) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing. People are interested in signaling their economic standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than Whites, then people might tend to assume that a Black person has a lower economic standing. To overcome this assumption, then, Black individuals should be particularly motivated to signal that they do not, in fact, have a lower economic standing more typical of their group. In brief: as the average income of a group drops, those with money should be particularly inclined to signal that they are not as poor as other people below them in their group.

In support of this idea, Charles et al (2009) further analyzed their data, finding that average spending on visible luxury goods declined in states with higher average incomes, just as it also declined among racial groups with higher average incomes. In other words, the higher the average income of a racial group within a state, the smaller the percentage of that group’s consumption that was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first of which being that it’s incomplete as it stands. From my reading, it’s a bit unclear how the explanation works for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one is a member of a lower-income group, one’s income is higher than that group’s average. However, that explanation would not account for the age/marital status information I mentioned before without adding on other assumptions, nor would it directly explain the benefits which arise from signaling one’s economic status in the first place. Moreover, if I’m understanding the results properly, it wouldn’t directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing the aggregate wealth shouldn’t have much of an impact, as “rich” and “poor” are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer to this question hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, being in command of resources also tends to make one appear to be a more desirable mate as well. A healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your sexual preferences.

However, recognition of that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To put it in an easy example, there’s a big difference between having access to no food and some food; there’s less of a difference between having access to some food and good food; there’s less of a difference still between good food and great food. The same holds for all manner of other resources. Because the marginal value of resources decreases as overall access to resources increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there’s less value in signaling that one is wealthier than another if that wealth difference isn’t going to amount to the same degree of marginal benefit.
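
One way to make this diminishing-returns point concrete is with a toy model assuming a concave (here, logarithmic) utility curve; the curve and all of the numbers are my own illustrative assumptions, not anything from the paper:

```python
import math

# Toy illustration of diminishing marginal value: the same absolute wealth
# advantage is worth less, in utility terms, in a richer community.
# The logarithmic curve and the figures are assumptions for illustration.

def utility(wealth):
    return math.log(wealth)

advantage = 10_000  # the same absolute wealth edge in both communities

for baseline in (20_000, 100_000):
    gain = utility(baseline + advantage) - utility(baseline)
    print(f"baseline ${baseline:,}: utility gain from the edge = {gain:.3f}")
```

Under any concave curve like this, the identical $10,000 edge buys a much larger utility gain against the poorer baseline, which is why the same signal should be worth sending in poorer communities but not richer ones.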

So, provided that wealth has a higher marginal value in poorer communities – like Black and Hispanic ones, relative to Whites – we should expect more signaling of it in those contexts. This logic could explain the racial gap on spending patterns. It’s not that people are trying to avoid a negative association with a poor reference group as much as they’re only engaging in signaling to the extent that signaling holds value to others. In other words, it’s not about my signaling to avoid being thought of as poor; it’s about my signaling to demonstrate that I hold a high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purposes of attracting a mate, as they already have one. They might engage in such purchases for the purposes of retaining that mate, though such purchases should involve spending money on visible items for other people, rather than for themselves. Further, as people age, their competition in the mating market tends to decline for a number of reasons, such as existing children, inability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off, again, because the marginal value of sending such signals has surely declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors which might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual’s life history. To the extent that one is following a faster life history strategy – reproducing earlier, taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in such visible consumption, as the marginal value of signaling that you have resources now is higher when the stability of those resources (or your future) is called into question. The current data do not speak to this possibility, however. Additionally, one’s sexual strategy might also be a valuable piece of information, given the links we saw with age and marital status. As these ornaments are predominantly used to attract the attention of prospective mates in nonhuman species, it seems likely that individuals with a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly; more attention is important if you’re looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the “signaling to not seem as poor as others” hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2009). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.


Stereotyping Stereotypes

I’ve attended a number of talks on stereotypes; I’ve read many more papers in which the word was used; I’ve seen still more instances where the term has been used outside of academic settings in discussions or articles. Though I have no data on hand, I would wager that the weight of this academic and non-academic literature leans heavily toward the idea that stereotypes are, by and large, inaccurate. In fact, I would go a bit further than that: the notion that stereotypes are inaccurate seems to be so common that people often see little need to ensure any checks are put into place to test for their accuracy in the first place. Indeed, one of my major complaints about the talks on stereotypes I’ve attended is just that: speakers never mention the possibility that people’s beliefs about other groups happen to, on the whole, match up to reality fairly well in many cases (sometimes they have mentioned this point as an afterthought but, from what I’ve seen, that rarely translates into later going out and testing for accuracy). To use a non-controversial example, I expect that many people believe men are taller than women, on average, because men do, in fact, happen to be taller.

Pictured above: not a perceptual bias or an illusory correlation

This naturally raises the question of how accurate stereotypes – when defined as beliefs about social groups – tend to be. It should go without saying that there will not be a single answer to that question: accuracy is not an either/or matter. If I happen to think it’s about 75 degrees out when the temperature is actually 80, I’m more accurate in my belief than if the temperature were actually 90. Similarly, the degree of that accuracy should be expected to vary with the nature of the stereotype in question; a matter to which I’ll return later. That said, as I mentioned before, quite a bit of the exposure I’ve had to the subject of stereotypes suggests rather strongly and frequently that they’re inaccurate. Much of the writing about stereotypes I’ve encountered focuses on notions like “tearing them down” or “busting myths”, or on how people are unfairly discriminated against because of them; comparatively little of that work has focused on instances in which they’re accurate which, one would think, would represent the first step in attempting to understand them.

According to some research reviewed by Jussim et al (2009), however, that latter point is rather unfortunate, as stereotypes often seem to be quite accurate, at least by the standards set by other research in psychology. In order to test for the accuracy of stereotypes, Jussim et al (2009) report on empirical studies that met two key criteria: first, the research had to compare people’s beliefs about a group to what that group was actually like; that much is a fairly basic requirement. Second, the research had to use an appropriate sample to determine what that group was actually like. For example, if someone was interested in people’s beliefs about some difference between men and women in general, but only tested these beliefs against data from a convenience sample (like men and women attending the local college), this could pose something of a problem to the extent that the convenience sample differs from the group the stereotypes are actually about. If people, by and large, have accurate stereotypes, researchers would never know it if they tested those stereotypes against a non-representative reference group.

Within the realm of racial stereotypes, Jussim et al (2009) summarized the results of 4 papers that met these criteria. The majority of the results fell within what the authors consider the “accurate” range (defined as being 0-10% off from the criterion values) or were near-misses (those between 10-20% off). Indeed, the average correlations between the stereotypes and criterion measures ranged from .53 to .93, which is very high relative to the average correlation uncovered by psychological research. Even the personal stereotypes, while not as accurate, were appreciably so, ranging from .36 to .69. Further, while people weren’t perfectly accurate in their beliefs, those who overestimated differences between racial groups tended to be balanced out by those who underestimated those differences in most instances. Interestingly enough, people’s stereotypes about between-group differences tended to be a bit more accurate than their within-group stereotypes.

“Ha! Look at all that inaccurate shooting. Didn’t even come close”

The same procedure was used to review research on gender stereotypes as well, yielding 7 papers with larger sample sizes. A similar set of results emerged: the average stereotype was rather accurate, with correlations ranging between .34 and .98, most of which hovered around .7. Individual stereotypes were again less accurate, but most were still heading in the right direction. To put those numbers in perspective, Jussim et al (2009) summarized a meta-analysis examining the average correlation found in psychological research. According to that data, only 24% of social psychology effects represent correlations larger than .3, and a mere 5% exceed a correlation of .5; the corresponding numbers for averaged stereotypes were 100% of the reviewed work meeting the .3 threshold and about 89% of the correlations exceeding the .5 threshold (personal stereotypes hit 81% and 36%, respectively).

Now neither Jussim et al (2009) nor I would claim that all stereotypes are accurate (or at least reasonably close); no one I’m aware of has. This brings us to the matter of when we should expect stereotypes to be accurate and when we should expect them to fall short of that mark. As an initial note, we should always expect some degree of inaccuracy in stereotypes – indeed, in all beliefs about the world – to the extent that gathering information takes time and improving accuracy is not always worth that investment in the adaptive sense. To use a non-biological example, spending an extra three hours studying to improve one’s grade on a test from a 70 to a 90 might seem worth it, but the same amount of time used to improve from a 90 to a 92 might not. Similarly, if one lacks access to reliable information about the behavior of others in the first place, stereotypes should also tend to be relatively inaccurate. For this reason, Jussim et al (2009) note that cross-cultural stereotypes of national personalities tend to be among the most inaccurate, as people from, say, India, might have relatively little exposure to information about people from South Africa, and vice versa.

The second point to make on accuracy is that, to the extent that beliefs guide behavior and that behavior carries costs or benefits, we should expect beliefs to tend toward accuracy (again, regardless of whether they’re about social groups or the world more generally). If you believe, incorrectly, that group A is as likely to assault you as group B (the example that Jussim et al (2009) use involves biker gang members and ballerinas), you’ll either end up avoiding one group more than you need to, not being wary enough around the other, or missing in both directions, all of which involve social and physical costs. One of the only cases in which being wrong might reliably carry benefits is the context in which one’s inaccurate beliefs modify the behavior of other people. In other words, stereotypes can be expected to be inaccurate in the realm of persuasion. Jussim et al (2009) make nods toward this possibility, noting that political stereotypes are among the least accurate ones out there, and that certain stereotypes might have been crafted specifically with the intent of maligning a particular group.

For instance…

While I do suspect that some stereotypes exist specifically to malign a particular group, that possibility raises another interesting question: namely, why would anyone, let alone large groups of people, be persuaded to accept inaccurate stereotypes? For the same reason that people should prefer accurate information over inaccurate information when guiding their own behaviors, they should also be relatively resistant to adopting stereotypes which are inaccurate, just as they should be when it comes to applying them to individuals they don’t fit. To the extent that a stereotype is of this sort (inaccurate), then, we should not expect it to be widely held, except in a few particular contexts.

Indeed, Jussim et al (2009) also review evidence suggesting that people do not inflexibly make use of stereotypes, preferring individuating information when it’s available. According to the meta-analyses reviewed, the average influence of stereotypes on judgments hovers around r = .1 (which does not, in many instances, have anything to say about the accuracy of the stereotype; just the extent of its effect); by contrast, individuating information had an average effect of about .7 which, again, is much larger than the average psychology effect. Once individuating information is controlled for, stereotypes tend to have next to zero impact on people’s judgments of others. People appear to rely on personal information to a much greater degree than stereotypes, and often jettison ill-fitting stereotypes in favor of personal information. In other words, the knowledge that men tend to be taller than women does not have much of an influence on whether I think a particular woman is taller than a particular man.

When should we expect that people will make the greatest use of stereotypes, then? Likely when they have access to the least amount of individuating information. This has been the case in a lot of the previous research on gender bias where very little information is provided about the target individual beyond their sex (see here for an example). In these cases, stereotypes represent an individual doing the best they can with limited information. In some cases, however, people express moral opposition to making use of that limited information, contingent on the group(s) it benefits or disadvantages. It is in such cases that, ironically, stereotypes might be stereotyped as inaccurate (or at least insufficiently accurate) to the greatest degree.

References: Jussim, L., Cain, T., Crawford, J., Harber, K., & Cohen, F. (2009). The unbearable accuracy of stereotypes. In T. Nelson (Ed.), The Handbook of Prejudice, Stereotyping, and Discrimination (pp. 199-227). New York: Psychology Press.

Some Bathwater Without A Baby

When reading psychology papers, I am often left with the same dissatisfaction: the lack of any grounding theories in them and their inability to deliver what I would consider a real explanation for their findings. While it’s something I have harped on for a few years now, this dissatisfaction is hardly confined to me, as others have voiced similar concerns for at least the last two decades, and I suspect it has gone on quite a bit longer than that. A healthy amount of psychological research strikes me as empirical bathwater without a theoretical baby, in a manner of speaking; no matter how interesting that empirical bathwater might be – whether it’s ignored or the flavor of the week – almost all of it will eventually be thrown out and forgotten if there’s no baby there. Some new research that has crossed my path a few times lately follows that same trend: a paper examining the reactions of individuals who were feeling powerful to inequality that disadvantaged them or others. I wanted to review that paper today and help fill in the missing sections where explanations should go.

Next step: add luxury items, like skin and organs

The paper, by Sawaoka, Hughes, & Ambady (2015), contained four or five experiments – depending on how one counts a pilot study – in which participants were primed to think of themselves as powerful or not. This was achieved, as it so often is, by having the participants in each experiment write about a time they had power over another person or about a time that other people had power over them, respectively. In the pilot study, about 20 participants were primed as powerful and another 20 as relatively powerless. Subsequently, they were told they would be playing a dictator game with another person, in which the other person (who was actually not a person at all) would serve as the dictator in charge of dividing up 10 experimental tokens between the two; tokens which, presumably, were supposed to be redeemed for some kind of material reward. Those participants who had been primed to feel more powerful expected to receive a higher average number of these tokens (M = 4.2) relative to those primed to feel less powerful (M = 2.2). Feeling powerful, it seemed, led participants to expect better treatment from others.

In the next experiment, participants (N = 227) were similarly primed before completing a fairness reaction task. Specifically, participants were presented with three pictures representing distributions of tokens: one of which represented the participant’s payment while the other two represented the payments to others. It was the job of participants to indicate whether these tokens were distributed equally among the three people or whether the distribution was unequal. The distributions could be (a) equal, (b) unequal, favoring the participant, or (c) unequal, disfavoring the participant. The measure of interest here was how quickly the participants were able to identify equal and unequal distributions. As it turns out, participants primed to feel powerful were quicker to identify unfair arrangements that disfavored them – by about a tenth of a second, relative to less powerful participants – but were not quicker to do so when the unequal distributions favored them.

The next two studies followed pretty much the same format and echoed the same conclusions, so I won’t spend too much time on their details. The final experiment, however, examined not just reaction times in assessments of equality, but also how quickly participants were willing to do something about inequality. In this case, participants were told they were being paid by an experimental employer. The employer to whom they were randomly assigned would be responsible for distributing a payment amount between them and two other participants over a number of rounds (just like the experiment I just mentioned). However, participants were also told that there were other employers they could switch to after each round if they wanted. The question of interest, then, was how quickly participants would switch away from employers who disfavored them. Those participants who were primed to feel powerful didn’t wait around very long in the face of unfair treatment that disfavored them, leaving after the first round, on average; by contrast, those primed to feel less powerful waited about 3.5 rounds to switch if they were getting a bad relative deal. If the inequality favored them, however, the powerful participants were about as likely to stay over time as the less powerful ones. In short, those who felt powerful not only recognized poor treatment of themselves (but not others) quicker; they also did something about it sooner.

They really took Shia’s advice about doing things to heart

These experiments are quite neat but, as I mentioned before, they are missing a deeper explanation to anchor them anywhere. Sawaoka, Hughes, & Ambady (2015) attempt an explanation for their results, but I don’t think they get very far with it. Specifically, the authors suggest that power makes people feel entitled to better treatment, subsequently making them quicker to recognize worse treatment and do something about it. Further, the authors make some speculations about how unfair social orders are maintained: powerful people are motivated to do things that maintain their privileged status, while the disadvantaged sections of the population are sent messages about being powerless, resulting in their coming to expect unfair treatment and being less likely to change their station in life. These speculations, however, naturally yield a few important questions, chief among them being, “if feeling entitled yields better treatment on the part of others, then why would anyone ever not feel that way? Do, say, poor people really want to stay poor and not demand better treatment from others as well?” It seems that there are very real advantages being forgone by people who don’t feel as entitled as powerful people do, and we would not expect a psychology that behaved that way – that simply passed up available benefits – to have been selected for.

In order to craft something approaching a real explanation for these findings, then, one would need to begin with a discussion of the trade-offs involved: if feeling entitled were always good for business, everyone would feel entitled all the time; since they don’t, there are likely some costs associated with feeling entitled that, at least in certain contexts, prevent its occurrence. One of the most likely trade-offs involves the costs associated with conflict: if you feel you’re entitled to a certain kind of treatment you’re not receiving, you need to take steps to ensure the correction of that treatment, since other people aren’t just going to start giving you more benefits for no reason. To use a real-life example, if you feel your boss isn’t compensating you properly for your work, you need to demand a raise, threatening to inflict costs on him – such as your quitting – if your demands aren’t met.

The problems with such a course of action are two-fold: first, your boss might disagree with your assessment and let you quit, and losing that job could pose other, very real costs (like starvation and homelessness). Sometimes an unfair arrangement is better than no arrangement at all. Second, the person with whom you’re bargaining might attempt to inflict costs on you in turn. For instance, if you begin a dispute with law enforcement officers because you believe they have treated you unfairly and are seeking to rectify that situation, they might encourage your compliance with the arrangement with a well-placed fist to your nose. In other words, punishment is a two-way street, and trying to punish stronger individuals – whether physically or socially stronger – is often a poor course of action to take. While “punching up” might be appealing to certain sensibilities in, say, comedy, it works less well when you’re facing down that bouncer with a few inches and a few dozen pounds of muscle on you.

I’m sure he’ll find your arguments about equality quite persuasive

Indeed, this is the same kind of evolutionary explanation offered by Sell, Tooby, & Cosmides (2009) for understanding the emotion of anger and its associated entitlement: one’s formidability – physically and/or socially – should be a key factor in understanding the emotional systems underlying how they resolve their conflicts; conflicts which may well have to do with distributions of material resources. Those who are better suited to inflict costs on others (e.g., the powerful) are also likely to be treated better by others who wish to avoid the costs of conflicts that accompany poor treatment. This could suggest, however, that making people feel more powerful than they actually are would, in the long-term, tend to produce quite a number of costs for the powerful-feeling, but actually-weak, individuals: making that 150-pound guy think he’s stronger than the 200-pound one might encourage the former to initiate a fight, but not make him more likely to win it. Similarly, encouraging your friend who isn’t that good at their job to demand that raise could result in their being fired. In other words, it’s not that social power structures in society are maintained simply on the basis of inertia or people getting sent particular kinds of social messages, but rather that they reflect (albeit imperfectly) important realities in the actual value people are able to demand from others. While the idea that some of the power dynamics observed in the social world reflect non-arbitrary differences between people might not sit well with certain crowds, it is a baby capable of keeping this bathwater around.

References: Sawaoka, T., Hughes, B., & Ambady, N. (2015). Power heightens sensitivity to unfairness against the self. Personality & Social Psychology Bulletin, 41, 1023-1035.

Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-15078.

Examining Arousal And Homophobia

In my last post, I mentioned that the idea of people misplacing or misinterpreting their arousal is a silly one (as I also did previously here). Today, I wanted to talk about that arousal issue again. In the wake of the Supreme Court’s legalization of same-sex marriage here in the US, let’s consider arousal in the context of straight men’s penises reacting to gay, straight, and lesbian pornography. Specifically, I wanted to discuss a rather strange instance where some people have interpreted men’s physiological arousal as sexual arousal, despite the protests of those men themselves, in the apparent interest of making a political point about homophobia. The political point in question happens to be that a disproportionate number of homophobes are actually latent homosexuals themselves who, in true Freudian fashion, are trying to deny and suppress their gay urges in the form of their homophobic attitudes (see here and here for some examples).

Homosexual individuals, on the other hand, are only repressing a latent homophobia

The paper I wanted to examine today is a 1996 piece by Adams, Wright, & Lohr. The paper was designed to test a Freudian idea about homophobia: namely, as mentioned above, that individuals might express homophobic attitudes as a result of their own internal struggle regarding some unresolved homosexual desires. As an initial note, this idea seems rather on the insane side of things, as many Freudian ideas tend to seem. I won’t get too mired in the reasons the idea is crazy, but it should be sufficient to note that the underlying idea appears to be that people develop maladaptive sexual desires in early childhood (long before puberty, when they’d be relevant) which then need to be suppressed by different mechanisms that don’t actually do that job very well. In other words, the idea seems to be positing that we have cognitive mechanisms whose function is to generate maladaptive sexual behavior, only to develop different mechanisms later that (poorly and inconsistently) suppress the maladaptive ones. If that isn’t tortured logic, I don’t know what would be.

In any case, the researchers recruited 64 men from their college’s subject pool who had all previously self-identified as 100% straight. These men were then given the Internalized Homophobia Scale (IHP), which, though I can’t access the original paper with the questions, appears to contain 25 questions aimed at assessing people’s emotional reactions to homosexuals, largely focused on their level of comfort/dread around them. The men were divided into two groups: those who scored above the midpoint on the scale (the men labeled as homophobes) and those who scored below it (the non-homophobes). Each subject was provided with a strain gauge to attach to his penis, which functioned to measure changes in penile diameter – basically, how erect the men were getting. Each subject then watched three four-minute pornographic scenes: one depicting heterosexual intercourse, another gay intercourse, and a third lesbian intercourse. After each clip, the men were asked how sexually aroused they were and how erect their penis was, before being given a chance to return to flaccid before the next clip was shown.

In terms of arousal to the heterosexual and lesbian pornography, there was no difference between the homophobic and non-homophobic groups with respect to how erect the men got or how aroused they reported being. However, in the gay porn condition, the homophobic men became more erect. Framed in terms of the degree of tumescence (engorgement), the non-homophobic men displayed no tumescence 66% of the time, modest tumescence 10% of the time, and definite tumescence 24% of the time in response to the gay porn; the corresponding numbers for the homophobic group were 20%, 26%, and 55%, respectively. In other words, while there was no difference between the homophobic and non-homophobic groups with respect to how aroused they reported being, the physiological arousal did seem to differ. So what’s going on here? Does homophobia have its roots in some latent homosexual desires being denied?

And does ignoring those desires place you in the perfect position for penetration?

I happen to think that such an idea is highly implausible. There are a few reasons I feel that way, but let’s start with the statistical arguments for why that interpretation probably isn’t right. In terms of the number of men who identify as homosexual or bisexual at a population level, we’re only looking at about 1-3%. Given that rough estimate, with a sample size of 64 individuals, you should expect about one or two gay men if you were sampling randomly. However, this sampling was anything but random: the subjects were selected specifically because they identified as straight, which should bias the number of gay or bisexual participants in the study downward. Simply put, this sample is not large enough to expect that any gay or bisexual male participants were in it at all, let alone in large enough numbers to detect any kind of noticeable effect. The problem gets even worse when you consider that they’re looking for participants who are both bisexual/gay and homophobic, which cuts the probability down even further.
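To put rough numbers on that intuition, here’s a quick back-of-the-envelope sketch. The 1-3% base rates are the population estimates discussed above; treating the sample as a random draw is a simplifying assumption (and a generous one, since the actual sampling was biased against including gay or bisexual men):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k 'successes' in n independent draws."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 64  # sample size in Adams, Wright, & Lohr (1996)
for p in (0.01, 0.03):  # rough population rates of gay/bisexual men
    expected = n * p
    p_zero = binom_pmf(0, n, p)
    print(f"base rate {p:.0%}: expected count = {expected:.2f}, "
          f"P(zero gay/bi men in sample) = {p_zero:.2f}")
```

Even before filtering for men who are also homophobic and publicly identify as straight, the expected number of gay or bisexual participants is under two, and there’s a substantial chance the sample contains none at all.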

The second statistical reason to be wary of these results is that bisexual men tend to be less common than gay men, by a ratio of approximately 1:2. However, the pattern of results observed in the paper from the homophobic group could better be described as bisexual than gay: each group reported the same degree of subjective and physiological arousal to the straight and lesbian porn; there was only the erection difference observed during the homosexual porn. This means the sample would need to have been composed of many bisexual homophobes who publicly identified as straight, which seems outlandishly unlikely.

Moreover, the sheer number of participants displaying “definite tumescence” requires some deeper consideration. If we assume that the physiological arousal translates directly into some kind of sexual desire, then about 25% of non-homophobic men and 55% of homophobic men are sexually interested in homosexual intercourse despite, as I mentioned before, only about 1-3% of the population saying they are gay or bisexual. Perhaps that rather strange state of affairs holds, but a much likelier explanation is that something has gone wrong in the realm of interpretation somewhere. Adams et al. (1996) note in their discussion that another interpretation of their results involves the genital swelling being the result of other arousing emotions, such as anxiety, rather than sexual arousal per se. While I can’t say whether such an explanation is true, I can say that it certainly sounds a hell of a lot more plausible than the idea that most homophobes (and about 1-in-4 non-homophobes) are secretly harboring same-sex desires. At least the anxiety-arousal explanation could, in principle, explain why 25% of non-homophobic men’s penises wiggled a little when viewing guy-on-guy action; they’re actually uncomfortable.
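The mismatch between the 55% figure and the 1-3% base rate can be made concrete with another rough sketch. The group size of 30 here is purely illustrative (the paper split 64 men into two groups, but I don’t have the exact counts), and the calculation again assumes tumescence directly indicates latent homosexual desire – the very assumption being questioned:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative homophobic group of 30 men, 55% of whom showed definite
# tumescence -> at least 17 men. How likely is that many latent gay/bi
# men in the group if the true population rate is a generous 3%?
print(binom_sf(17, 30, 0.03))  # vanishingly small
```

Under those assumptions, the probability is effectively zero, which is just a formal way of saying the “latent homosexuality” reading requires the sample to look wildly unlike the population it was drawn from.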

Maybe they’re not as comfortable with gay people as they like to say they are…

Now don’t get me wrong: to the extent that one perceives there to be social costs associated with a particular sexual orientation (or social attitude), we should expect people to try and send the message that they do not possess such things. Likewise, if I’ve stolen something, there might be a good reason for me to lie about having stolen it publicly if I don’t want to suffer the costs of moral condemnation for having done so. I’m not saying that everyone will be accurate or truthful about themselves at all times; far from it. However, we should also expect that people will not always be accurate or truthful about others either, at least to the extent they are trying to persuade people of things. In this case, I think people are misinterpreting data on physiological arousal to imply a non-existent sexual arousal for the purposes of making some kind of social progress. After all, if homophobes are secretly gay, you don’t need to take their points into consideration to quite the same degree you might have otherwise (since once we reach a greater level of societal acceptance, they’ll just come out anyway and probably thank you for it, or something along those lines). I’m all for social acceptance; just not at the expense of accurately understanding reality.

References: Adams, H., Wright, L., & Lohr, B. (1996). Is homophobia associated with homosexual arousal? Journal of Abnormal Psychology, 105, 440-445.