Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the art of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on an engagement ring, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such seemingly outrageous behavior is, “why do people apparently waste money like this?” A second, related question that might be asked once we have an answer to the first (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is how we can understand who is most likely to spend money in a conspicuous fashion. Alternatively, this question could be framed by asking what contexts tend to favor conspicuous consumption. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders; alternatively, if you’re a bit strange, you could use it to try to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth instead?

The first question – why do people conspicuously consume – is perhaps the easier one to answer initially, as it has been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing the tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there were some kind of survival benefit to those long, colorful tail feathers, we would expect both sexes to develop them, not just the males.

However, it is precisely because these feathers are costly that they are useful signals: males in relatively poor condition cannot shoulder their costs effectively. It takes a healthy, well-developed male to survive and thrive in spite of carrying such a train of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues of such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to simply say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.

Costly displays, then, owe their existence to the honesty they impart to a signal. Human consumption patterns should be expected to follow a similar logic: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we need to begin by thinking about matters such as what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Toward that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups. Specifically, the paper examined racial patterns of spending on what were dubbed visible goods: objects which are conspicuous in anonymous interactions and portable, such as jewelry, clothing, and cars. These are luxury items designed to be seen frequently by others, in contrast to less-visible luxury items, such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al (2009), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households with heads between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than White ones (about 40% and 25% less, respectively), but this difference needs to be viewed with respect to each group’s relative income; after all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated from their reported overall spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of its relative spending across a number of different categories emerged. Specifically, Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size as well. The average White household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household spends somewhere around $9,000 on them. These purchases also come at the expense of all other categories (which should be expected, as the money has to come from somewhere), meaning that money spent on visible goods often translates into less being spent on education, healthcare, and entertainment.
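
For concreteness, here’s that back-of-the-envelope arithmetic spelled out in Python (a minimal sketch; the $7,000 baseline and the 20-30% gap are the figures reported above, not new data):

```python
# Rough arithmetic for the visible-spending gap reported by Charles et al. (2009).
# The baseline and gap figures come from the text above; nothing else is assumed.
white_visible_spending = 7000   # approximate average annual spending on visible goods
gap_low, gap_high = 0.20, 0.30  # estimated Black/Hispanic premium, conditional on income

implied_low = white_visible_spending * (1 + gap_low)
implied_high = white_visible_spending * (1 + gap_high)
print(f"Implied visible spending: ${implied_low:,.0f} to ${implied_high:,.0f}")
# -> roughly $8,400 to $9,100, consistent with the ~$9,000 figure above
```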

There are some other interesting findings to mention. One – which I find rather notable, but the authors don’t seem to spend any time discussing – is that racial differences in the consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible-consumption gap appears to decline as one goes from single to married. The numbers Charles et al (2009) mention estimate that the average percentage of budgets devoted to visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that they represent both. Getting older and getting into relationships both tended to reduce the racial divide in visible-goods consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al (2009) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing, and they are interested in signaling that standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than White people, then others might tend to assume that any given Black person has a lower economic standing. To overcome this assumption, Black individuals should be particularly motivated to signal that they do not, in fact, have the lower economic standing more typical of their group. In brief: as the average income of a group drops, those members with money should be particularly inclined to signal that they are not as poor as the people below them in their group.

In support of this idea, Charles et al (2009) further analyzed their data, finding that average spending on visible luxury goods declined in states with higher average incomes, just as it declined among racial groups with higher average incomes. In other words, raising the average income of a racial group within a state strongly reduced the percentage of that group’s consumption that was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first being that it’s incomplete as it stands. From my reading, it’s a bit unclear how the explanation accounts for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one is a member of a lower-income group, one’s income is higher than that group’s average. However, that explanation would not account for the age and marital-status findings I mentioned before without adding on further assumptions, nor would it directly explain the benefits which arise from signaling one’s economic status in the first place. Moreover, if I’m understanding the results properly, it wouldn’t directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing the aggregate wealth shouldn’t have much of an impact, as “rich” and “poor” are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer to this question hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, commanding resources tends to make one appear a more desirable mate as well; a healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your preferences.

However, recognizing that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To put it in an easy example, there’s a big difference between having access to no food and some food; there’s less of a difference between having access to some food and good food; there’s less of a difference still between good food and great food. The same holds for all manner of other resources. Because the marginal value of resources decreases as overall access to them increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there’s less value in signaling that one is wealthier than another if that wealth difference isn’t going to amount to the same degree of marginal benefit.
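
One way to see this logic at a glance is with a concave utility function. Here’s a minimal sketch (my own illustration, not anything from the paper) using logarithmic utility, a standard stand-in for diminishing marginal value:

```python
import math

# Illustrative only: log utility captures "each additional unit is worth less".
def utility(wealth):
    return math.log(wealth)

# The same absolute resource advantage (10 units) matters less as overall
# wealth rises, so a signal of "I have 10 more than you" is worth less
# in richer groups:
for baseline in (1, 10, 100):
    gain = utility(baseline + 10) - utility(baseline)
    print(f"baseline {baseline:>3}: marginal value of +10 units = {gain:.3f}")
# -> 2.398 at baseline 1, 0.693 at baseline 10, 0.095 at baseline 100
```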

So, provided that wealth has a higher marginal value in poorer communities – like Black and Hispanic ones, relative to White ones – we should expect more signaling of it in those contexts. This logic could explain the racial gap in spending patterns. It’s not that people are trying to avoid a negative association with a poor reference group so much as that they’re only engaging in signaling to the extent that the signal holds value to others. In other words, it’s not about my signaling to avoid being thought of as poor; it’s about my signaling to demonstrate that I hold high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purpose of attracting a mate, as they already have one. They might engage in such purchases for the purpose of retaining that mate, though such purchases should then involve spending money on visible items for the other person, rather than for themselves. Further, as people age, their competition in the mating market tends to decline for a number of reasons, such as existing children, an inability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off, again, because the marginal value of sending such signals has declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors which might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual’s life history. To the extent that one is following a faster life history strategy – reproducing earlier, taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in such visible consumption, as the marginal value of signaling that you have resources now is higher when the stability of those resources (or of your future) is called into question. The current data do not speak to this possibility, however. Additionally, one’s sexual strategy might also be a valuable piece of information, given the links we saw with age and marital status. As these ornaments are predominantly used to attract the attention of prospective mates in nonhuman species, it seems likely that individuals with a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly; more attention is important if you’re looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the “signaling to not seem as poor as others” hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2009). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.


Stereotyping Stereotypes

I’ve attended a number of talks on stereotypes; I’ve read many more papers in which the word was used; I’ve seen still more instances where the term has been used outside of academic settings in discussions or articles. Though I have no data on hand, I would wager that the weight of this academic and non-academic literature leans heavily towards the idea that stereotypes are, by and large, inaccurate. In fact, I would go a bit farther than that: the notion that stereotypes are inaccurate seems to be so common that people often see little need to ensure any checks are put into place to test for their accuracy in the first place. Indeed, one of my major complaints about the talks on stereotypes I’ve attended is just that: speakers never mention the possibility that people’s beliefs about other groups happen to, on the whole, match up to reality fairly well in many cases (sometimes they have mentioned this point as an afterthought but, from what I’ve seen, that rarely translates into later going out and testing for accuracy). To use a non-controversial example, I expect that many people believe men are taller than women, on average, because men do, in fact, happen to be taller.

Pictured above: not a perceptual bias or an illusory correlation

This naturally raises the question of how accurate stereotypes – when defined as beliefs about social groups – tend to be. It should go without saying that there will not be a single answer to that question: accuracy is not an either/or matter. If I happen to think it’s about 75 degrees out when the temperature is actually 80, I’m more accurate in my belief than if the temperature were actually 90. Similarly, the degree of that accuracy should be expected to vary with the intended nature of the stereotype in question; a matter to which I’ll return later. That said, as I mentioned before, quite a bit of the exposure I’ve had to the subject of stereotypes suggests rather strongly and frequently that they’re inaccurate. Much of the writing about stereotypes I’ve encountered focuses on notions like “tearing them down”, “busting myths”, or on how people are unfairly discriminated against because of them; comparatively little of that work has focused on instances in which they’re accurate which, one would think, would represent the first step in attempting to understand them.

According to some research reviewed by Jussim et al (2009), however, that latter point is rather unfortunate, as stereotypes often seem to be quite accurate, at least by the standards set by other research in psychology. In order to test for the accuracy of stereotypes, Jussim et al (2009) report on empirical studies that met two key criteria: first, the research had to compare people’s beliefs about a group to what that group was actually like; that much is a fairly basic requirement. Second, the research had to use an appropriate sample to determine what that group was actually like. For example, if someone was interested in people’s beliefs about some difference between men and women in general, but only tested these beliefs against data from a convenience sample (like men and women attending the local college), this could pose something of a problem to the extent that the convenience sample differs from the population the stereotype is actually about. Even if people, by and large, have accurate stereotypes, researchers would never know it if they made use of a non-representative reference group.

Within the realm of racial stereotypes, Jussim et al (2009) summarized the results of four papers that met these criteria. The majority of the results fell within what the authors consider the “accurate” range (defined as being 0-10% off from the criterion values) or were near-misses (those between 10-20% off). Indeed, the average correlations between the stereotypes and criterion measures ranged from .53 to .93, which is very high relative to the average correlation uncovered by psychological research. Even the personal stereotypes, while not as accurate, were appreciably so, with correlations ranging from .36 to .69. Further, while people weren’t perfectly accurate in their beliefs, those who overestimated differences between racial groups tended to be balanced out by those who underestimated them in most instances. Interestingly enough, people’s stereotypes about between-group differences tended to be a bit more accurate than their within-group stereotypes.

“Ha! Look at all that inaccurate shooting. Didn’t even come close”

The same procedure was used to review research on gender stereotypes as well, yielding seven papers with larger sample sizes. A similar set of results emerged: the average stereotype was rather accurate, with correlations ranging from .34 to .98, most of which hovered around .7. Individual stereotypes were again less accurate, but most still headed in the right direction. To put those numbers in perspective, Jussim et al (2009) summarized a meta-analysis examining the average correlation found in psychological research. According to that data, only 24% of social psychology effects represent correlations larger than .3, and a mere 5% exceed a correlation of .5; the corresponding numbers for averaged stereotypes were 100% of the reviewed work meeting the .3 threshold and about 89% of the correlations exceeding the .5 threshold (personal stereotypes hit 81% and 36%, respectively).
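
To make those two accuracy checks concrete – the discrepancy bands and the stereotype-criterion correlations – here’s a minimal sketch using made-up numbers (the beliefs and criterion values below are hypothetical, not data from Jussim et al):

```python
# Hypothetical example of the two accuracy checks described above.
# beliefs: what perceivers estimate about a group; criteria: what the group is actually like.
beliefs  = [0.55, 0.30, 0.72, 0.40, 0.65]   # made-up estimated proportions
criteria = [0.50, 0.35, 0.60, 0.42, 0.70]   # made-up actual proportions

# 1) Discrepancy bands: within 10 percentage points = "accurate"; 10-20 = "near-miss".
for b, c in zip(beliefs, criteria):
    gap = abs(b - c)
    label = "accurate" if gap <= 0.10 else "near-miss" if gap <= 0.20 else "off"
    print(f"belief {b:.2f} vs. actual {c:.2f}: {label}")

# 2) Pearson correlation between beliefs and criteria.
n = len(beliefs)
mb, mc = sum(beliefs) / n, sum(criteria) / n
cov = sum((b - mb) * (c - mc) for b, c in zip(beliefs, criteria))
var_b = sum((b - mb) ** 2 for b in beliefs)
var_c = sum((c - mc) ** 2 for c in criteria)
r = cov / (var_b * var_c) ** 0.5
print(f"stereotype-criterion correlation: r = {r:.2f}")
```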

Now neither Jussim et al (2009) nor I would claim that all stereotypes are accurate (or at least reasonably close); no one I’m aware of has. This brings us to the matter of when we should expect stereotypes to be accurate and when we should expect them to fall short of that mark. As an initial note, we should always expect some degree of inaccuracy in stereotypes – indeed, in all beliefs about the world – to the extent that gathering information takes time and improving accuracy is not always worth that investment in the adaptive sense. To use a non-biological example, spending an extra three hours studying to improve one’s grade on a test from a 70 to a 90 might seem worth it, but the same amount of time used to improve from a 90 to a 92 might not. Similarly, if one lacks access to reliable information about the behavior of others in the first place, stereotypes should also tend to be relatively inaccurate. For this reason, Jussim et al (2009) note that cross-cultural stereotypes about national personalities tend to be among the most inaccurate, as people from, say, India might have relatively little exposure to information about people from South Africa, and vice versa.

The second point to make on accuracy is that, to the extent that beliefs guide behavior and that behavior carries costs or benefits, we should expect beliefs to tend towards accuracy (again, regardless of whether they’re about social groups or the world more generally). If you believe, incorrectly, that group A is as likely to assault you as group B (the example Jussim et al (2009) use involves biker gang members and ballerinas), you’ll either end up avoiding one group more than you need to, not being wary enough around the other, or missing in both directions, all of which involve social and physical costs. One of the only cases in which being wrong might reliably carry benefits is the context in which one’s inaccurate beliefs modify the behavior of other people. In other words, stereotypes can be expected to be inaccurate in the realm of persuasion. Jussim et al (2009) make nods toward this possibility, noting that political stereotypes are among the least accurate ones out there, and that certain stereotypes might have been crafted specifically with the intent of maligning a particular group.

For instance…

While I do suspect that some stereotypes exist specifically to malign a particular group, that possibility raises another interesting question: namely, why would anyone, let alone large groups of people, be persuaded to accept inaccurate stereotypes? For the same reason that people should prefer accurate information over inaccurate information when guiding their own behaviors, they should also be relatively resistant to adopting stereotypes which are inaccurate, just as they should be reluctant to apply them to individuals they don’t fit. To the extent that a stereotype is of this sort (inaccurate), then, we should expect that it not be widely held, except in a few particular contexts.

Indeed, Jussim et al (2009) also review evidence suggesting that people do not inflexibly make use of stereotypes, preferring individuating information when it’s available. According to the meta-analyses reviewed, the average influence of stereotypes on judgments hangs around r = .1 (which, in many instances, says nothing about the accuracy of the stereotype; just the extent of its effect); by contrast, individuating information had an average effect of about .7 which, again, is much larger than the average psychology effect. Once individuating information is controlled for, stereotypes tend to have next to zero impact on people’s judgments of others. People appear to rely on personal information to a much greater degree than stereotypes, and often jettison ill-fitting stereotypes in favor of it. In other words, the knowledge that men tend to be taller than women does not have much of an influence on whether I think a particular woman is taller than a particular man.

When should we expect that people will make the greatest use of stereotypes, then? Likely when they have access to the least amount of individuating information. This has been the case in a lot of the previous research on gender bias where very little information is provided about the target individual beyond their sex (see here for an example). In these cases, stereotypes represent an individual doing the best they can with limited information. In some cases, however, people express moral opposition to making use of that limited information, contingent on the group(s) it benefits or disadvantages. It is in such cases that, ironically, stereotypes might be stereotyped as inaccurate (or at least insufficiently accurate) to the greatest degree.

References: Jussim, L., Cain, T., Crawford, J., Harber, K., & Cohen, F. (2009). The unbearable accuracy of stereotypes. In T. Nelson (Ed.), The Handbook of Prejudice, Stereotyping, and Discrimination (pp. 199-227). New York: Psychology Press.

Some Bathwater Without A Baby

When reading psychology papers, I am often left with the same dissatisfaction: the lack of any grounding theories in them, and their resulting inability to deliver what I would consider a real explanation for their findings. While it’s something I have harped on for a few years now, this dissatisfaction is hardly confined to me, as others have voiced similar concerns for at least the last two decades, and I suspect it has gone on quite a bit longer than that. A healthy amount of psychological research strikes me as empirical bathwater without a theoretical baby, in a manner of speaking; no matter how interesting that empirical bathwater might be – whether it’s ignored or the flavor of the week – almost all of it will eventually be thrown out and forgotten if there’s no baby there. Some new research that has caught my eye a few times lately follows that same trend: a paper examining the reactions of individuals who were feeling powerful to inequality that disadvantaged them or others. I wanted to review that paper today and help fill in the missing sections where explanations should go.

Next step: add luxury items, like skin and organs

The paper, by Sawaoka, Hughes, & Ambady (2015), contained four or five experiments – depending on how one counts a pilot study – in which participants were primed to think of themselves as powerful or not. This was achieved, as it so often is, by having participants write about a time they had power over another person or about a time other people had power over them, respectively. In the pilot study, about 20 participants were primed as powerful and another 20 as relatively powerless. Subsequently, they were told they would be playing a dictator game with another person, in which the other party (who was actually not a person) would serve as the dictator in charge of dividing 10 experimental tokens between the two; tokens which, presumably, were supposed to be redeemed for some kind of material reward. Those participants who had been primed to feel more powerful expected to receive a higher average number of these tokens (M = 4.2) than those primed to feel less powerful (M = 2.2). Feeling powerful, it seemed, led participants to expect better treatment from others.

In the next experiment, participants (N = 227) were similarly primed before completing a fairness reaction task. Specifically, participants were presented with three pictures representing distributions of tokens: one representing the participant’s payment, while the other two represented the payments to others. It was the participants’ job to indicate whether the tokens were distributed equally between the three people or whether the distribution was unequal. The distributions could be (a) equal, (b) unequal, favoring the participant, or (c) unequal, disfavoring the participant. The measure of interest here was how quickly participants were able to identify equal and unequal distributions. As it turns out, participants primed to feel powerful were quicker than less-powerful participants – by about a tenth of a second – to identify unfair arrangements that disfavored them, but were not quicker when the unequal distributions favored them.

The next two studies followed pretty much the same format and echoed the same conclusion, so I won’t spend much time on their details. The final experiment, however, examined not just reaction times to assessments of equality, but also how quickly participants were willing to do something about inequality. In this case, participants were told they were being paid by an experimental employer. The employer to whom they were randomly assigned would be responsible for distributing a payment between them and two other participants over a number of rounds (just like the experiment I just mentioned). However, participants were also told there were other employers they could switch to after each round if they wanted. The question of interest, then, was how quickly participants would switch away from employers who disfavored them. Those participants who were primed to feel powerful didn’t wait around very long in the face of unfair treatment that disfavored them, leaving after the first round, on average; by contrast, those primed to feel less powerful waited about 3.5 rounds to switch if they were getting a bad relative deal. If the inequality favored them, however, the powerful participants were about as likely to stay over time as the less-powerful ones. In short, those who felt powerful not only recognized poor treatment of themselves (but not of others) quicker; they also did something about it sooner.

They really took Shia’s advice about doing things to heart

These experiments are quite neat but, as I mentioned before, they are missing a deeper explanation to anchor them anywhere. Sawaoka, Hughes, & Ambady (2015) attempt an explanation for their results, but I don’t think they get very far with it. Specifically, the authors suggest that power makes people feel entitled to better treatment, subsequently making them quicker to recognize worse treatment and do something about it. Further, the authors speculate about how unfair social orders are maintained: powerful people are motivated to do things that maintain their privileged status, while the disadvantaged sections of the population are sent messages about being powerless, resulting in their coming to expect unfair treatment and being less likely to change their station in life. These speculations, however, naturally yield a few important questions, chief among them being: “if feeling entitled yields better treatment on the part of others, then why would anyone ever not feel that way? Do, say, poor people really want to stay poor and not demand better treatment from others as well?” It seems that there are very real advantages being forgone by people who don’t feel as entitled as powerful people do, and we would not expect a psychology that behaved that way – one that simply left such welfare benefits on the table – to have been selected for.

In order to craft something approaching a real explanation for these findings, then, one would need to begin with a discussion of the trade-offs that have to be made: if feeling entitled were always good for business, everyone would feel entitled all the time; since they don’t, there are likely some costs associated with feeling entitled that, at least in certain contexts, prevent it. One of the most likely trade-offs involves the costs associated with conflict: if you feel you’re entitled to a certain kind of treatment you’re not receiving, you need to take steps to ensure the correction of that treatment, since other people aren’t just going to start giving you more benefits for no reason. To use a real-life example, if you feel your boss isn’t compensating you properly for your work, you need to demand a raise, threatening to inflict costs on him – such as your quitting – if your demands aren’t met.

The problems with such a course of action are two-fold: first, your boss might disagree with your assessment and let you quit, and losing that job could pose other, very real costs (like starvation and homelessness). Sometimes an unfair arrangement is better than no arrangement at all. Second, the person with whom you’re bargaining might attempt to inflict costs on you in turn. For instance, if you begin a dispute with law enforcement officers because you believe they have treated you unfairly and are seeking to rectify that situation, they might encourage your compliance with the arrangement with a well-placed fist to your nose. In other words, punishment is a two-way street, and trying to punish stronger individuals – whether physically or socially stronger – is often a poor course of action to take. While “punching up” might appeal to certain sensibilities in, say, comedy, it works less well when you’re facing down a bouncer with a few inches and a few dozen pounds of muscle on you.

I’m sure he’ll find your arguments about equality quite persuasive

Indeed, this is the same kind of evolutionary explanation offered by Sell, Tooby, & Cosmides (2009) for understanding the emotion of anger and its associated sense of entitlement: one’s formidability – physical and/or social – should be a key factor in understanding the emotional systems underlying how one resolves conflicts; conflicts which may well have to do with distributions of material resources. Those who are better suited to inflict costs on others (e.g., the powerful) are also likely to be treated better by others who wish to avoid the costs of the conflicts that accompany poor treatment. This suggests, however, that making people feel more powerful than they actually are would, in the long term, tend to produce quite a number of costs for the powerful-feeling, but actually weak, individuals: making the 150-pound guy think he’s stronger than the 200-pound one might encourage the former to initiate a fight, but it won’t make him more likely to win it. Similarly, encouraging your friend who isn’t very good at his job to demand that raise could result in his being fired. In other words, it’s not that social power structures are maintained simply on the basis of inertia or people being sent particular kinds of social messages, but rather that they reflect (albeit imperfectly) important realities in the actual value people are able to demand from others. While the idea that some of the power dynamics observed in the social world reflect non-arbitrary differences between people might not sit well with certain crowds, it is a baby capable of keeping this bathwater around.

References: Sawaoka, T., Hughes, B., & Ambady, N. (2015). Power heightens sensitivity to unfairness against the self. Personality & Social Psychology Bulletin, 41, 1023-1035.

Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-15078.

Examining Arousal And Homophobia

In my last post, I mentioned that the idea of people misplacing or misinterpreting their arousal is a silly one (as I also argued previously here). Today, I wanted to talk about that arousal issue again. In the wake of the Supreme Court’s legalization of same-sex marriage here in the US, let’s consider arousal in the context of straight men’s penises reacting to gay, straight, and lesbian pornography. Specifically, I wanted to discuss a rather strange instance where some people have interpreted men’s physiological arousal as sexual arousal, despite the protests of those men themselves, in the apparent interest of making a political point about homophobia. The political point in question happens to be that a disproportionate number of homophobes are actually latent homosexuals themselves who, in true Freudian fashion, are trying to deny and suppress their gay urges in the form of their homophobic attitudes (see here and here for some examples).

Homosexual individuals, on the other hand, are only repressing a latent homophobia

The paper I wanted to examine today is a 1996 piece by Adams, Wright, & Lohr. It was designed to test a Freudian idea about homophobia: namely, as mentioned above, that individuals might express homophobic attitudes as a result of their own internal struggle with unresolved homosexual desires. As an initial note, this idea seems rather on the insane side of things, as many Freudian ideas tend to. I won’t get too mired in the reasons the idea is crazy, but it should be sufficient to note that the underlying claim appears to be that people develop maladaptive sexual desires in early childhood (long before puberty, when they’d be relevant) which then need to be suppressed by different mechanisms that don’t actually do that job very well. In other words, the idea seems to posit that we have cognitive mechanisms whose function is to generate maladaptive sexual behavior, only to develop different mechanisms later that (poorly and inconsistently) suppress the earlier ones. If that isn’t torturous logic, I don’t know what would be.

In any case, the researchers recruited 64 men from their college’s subject pool, all of whom had previously self-identified as 100% straight. These men were then given the Internalized Homophobia Scale (IHP) which, though I can’t access the original paper with the questions, appears to contain 25 items aimed at assessing people’s emotional reactions to homosexuals, largely focused on their level of comfort or dread around them. The men were divided into two groups: those who scored above the midpoint of the scale (the men labeled homophobes) and those who scored below it (the non-homophobes). Each subject was fitted with a strain gauge attached to his penis, which measured changes in penile diameter; basically, how erect the men were getting. Each subject then watched three four-minute pornographic clips: one depicting heterosexual intercourse, one gay intercourse, and one lesbian intercourse. After each clip, the men were asked how sexually aroused they were and how erect their penis was, before being given a chance to return to flaccid before the next clip was shown.

In terms of arousal to the heterosexual and lesbian pornography, there was no difference between the homophobic and non-homophobic groups with respect to how erect the men got or how aroused they reported being. However, in the gay porn condition, the homophobic men became more erect. Framed in terms of degree of tumescence (engorgement), the non-homophobic men displayed no tumescence 66% of the time, modest tumescence 10% of the time, and definite tumescence 24% of the time in response to the gay porn; the corresponding numbers for the homophobic group were 20%, 26%, and 55%, respectively. So while there was no difference between the groups with respect to how aroused they reported being, the physiological arousal did seem to differ. What’s going on here? Does homophobia have its roots in some latent homosexual desires being denied?

And does ignoring those desires place you in the perfect position for penetration?

I happen to think that such an idea is highly implausible. There are a few reasons I feel that way, but let’s start with the statistical arguments for why that interpretation probably isn’t right. In terms of the number of men who identify as homosexual or bisexual at the population level, we’re only looking at about 1-3%. Given that rough estimate, with a sample size of 64 individuals, you should expect about one or two gay people if you were sampling randomly. However, this sampling was anything but random: the subjects were selected specifically because they identified as straight, which should bias the number of gay or bisexual participants in the study downward. Simply put, this sample is not large enough to expect that any gay or bisexual male participants were in it at all, let alone in large enough numbers to detect any kind of noticeable effect. That problem gets even worse when you consider that they’re looking for participants who are both bisexual/gay and homophobic, which cuts the probability down even further.
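
The expected-count argument is easy to make explicit. Here’s a minimal sketch (the base rate and sample size are the rough figures from the text; treating the sample as a simple binomial draw is my own framing):

```python
# How many gay/bisexual men should a random sample of 64 contain?
# The 1-3% base rate comes from the text; binomial sampling is assumed.
n = 64
for base_rate in (0.01, 0.02, 0.03):
    expected = n * base_rate
    p_zero = (1 - base_rate) ** n  # chance the sample contains none at all
    print(f"base rate {base_rate:.0%}: expect {expected:.1f} such men; "
          f"P(none in sample) = {p_zero:.2f}")
# -> expectations of roughly 0.6 to 1.9 men per sample -- far too few to
#    drive group-level differences, before even conditioning on the men
#    identifying as straight and scoring as homophobic.
```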

The second statistical reason to be wary of these results is that bisexual men tend to be less common than gay men, by a ratio of approximately 1:2. However, the pattern of results observed in the paper’s homophobic group would better be described as bisexual than gay: each group reported the same degree of subjective and physiological arousal to the straight and lesbian porn; there was only the erection difference observed during the homosexual porn. This means the sample would need to have been composed of many bisexual homophobes who publicly identified as straight, which seems outlandishly unlikely.

Moreover, the sheer number of the participants displaying “definite tumescence” requires some deeper consideration. If we assume that the physiological arousal translates directly into some kind of sexual desire, then about 25% of non-homophobic men and 55% of homophobic men are sexually interested in homosexual intercourse despite, as I mentioned before, only about 1-3% of the population saying they are gay or bisexual. Perhaps that rather strange state of affairs holds, but a much likelier explanation is that something has gone wrong in the realm of interpretation somewhere. Adams et al (1996) note in their discussion that another interpretation of their results involves the genital swelling being the result of other arousing emotions, such as anxiety, rather than sexual arousal per se. While I can’t say whether such an explanation is true, I can say that it certainly sounds a hell of a lot more plausible than the idea that most homophobes (and about 1-in-4 non-homophobes) are secretly harboring same-sex desires. At least the anxiety-arousal explanation could, in principle, explain why 25% of non-homophobic men’s penises wiggled a little when viewing guy-on-guy action; they’re actually uncomfortable.

Maybe they’re not as comfortable with gay people as they like to say they are…

Now don’t get me wrong: to the extent that one perceives there to be social costs associated with a particular sexual orientation (or social attitude), we should expect people to try to send the message that they do not possess such things. Likewise, if I’ve stolen something, there might be a good reason for me to lie about it publicly if I don’t want to suffer the costs of moral condemnation for having done so. I’m not saying that everyone will be accurate or truthful about themselves at all times; far from it. However, we should also expect that people will not always be accurate or truthful about others either, at least to the extent they are trying to persuade. In this case, I think people are misinterpreting data on physiological arousal as implying a non-existent sexual arousal for the purposes of making some kind of social progress. After all, if homophobes are secretly gay, you don’t need to take their points into consideration to quite the same degree you might have otherwise (since once we reach a greater level of societal acceptance, they’ll just come out anyway and probably thank you for it, or something along those lines). I’m all for social acceptance; just not at the expense of accurately understanding reality.

References: Adams, H., Wright, L., & Lohr, B. (1996). Is homophobia associated with homosexual arousal? Journal of Abnormal Psychology, 105, 440-445.

Evolutionary Marketing

There are many popular views about the human mind that, roughly, treat it as a rather general-purpose kind of tool: one that’s not particularly suited to this task or that, but more a Jack of all trades and master of none. In fact, many such perspectives view the mind as (bafflingly) being wrong about the world almost all the time. If one views the mind this way, one can be led into making some predictions about how it ought to behave. As one for-instance, some people might predict that our minds will, essentially, mistake one kind of arousal for another. A common example of this thinking involves experiments in which people are placed in a fear-arousal condition in the hopes that they will subsequently report more romantic or sexual attraction to certain partners they meet at that time. The explanation for this finding often hinges on some notion of people “misplacing” their arousal – since both kinds of arousal involve some degree of overlapping physiological response – or reinterpreting a negative arousal as a positive one (e.g., “I dislike being afraid, so I must actually be turned on instead”). I happen to think that such explanations can’t possibly be close to true, largely because the responses to arousal generated by fear and by sexual interest should motivate categorically different kinds of behavior.

Here’s one instance where an arousal mistake like that can be costly

Bit by bit, this view of the human mind is being eroded (though progress can be slow), as it fits neither the empirical evidence nor any solid theoretical grounding. As a great example of this forward progress, consider the experiments demonstrating that learning mechanisms appear to be elegantly tailored to specific kinds of adaptive problems, since learning to, say, avoid poisonous foods requires much different cognitive rules, inputs, and outputs than learning to avoid predator attacks. Learning, in other words, represents a series of rather domain-specific tasks which a general-purpose mechanism could not navigate successfully. As psychological hypotheses get tailored more closely to considerations of recurrent adaptive problems, new, previously-unappreciated features of our minds come into stark relief.

So let’s return to the matter of arousal and think about how it might impact our day-to-day behavior, specifically with respect to persuasion; a matter of interest to anyone in the fields of marketing or advertising. If your goal is to sell something to someone else – to persuade them to buy what you’re offering – the message you use to try to sell it is going to be crucial. You might, for example, try to appeal to someone’s desire to stand out from the crowd in order to get them interested in your product (e.g., “Think different”); alternatively, you might try to appeal to the popularity of a product (e.g., “The world’s most popular computer”). Importantly, you can’t send both of these messages at once (“Be different by doing that thing everyone else is doing”), so which message should you use, and in what contexts should you use it?

A paper by Griskevicius et al (2009) sought to provide an answer to that very question by considering the adaptive functions of particular arousal states. Previous accounts examining how arousal affects information processing were on the general side of things: general arousal-based accounts would predict that arousal – irrespective of its source – should yield shallower processing of information, causing people to rely more on mental heuristics, like scarcity or popularity, when assessing a product; affect valence-based accounts took this idea one step further, suggesting that positive emotions, like happiness, should yield shallower processing, whereas negative emotions, like fear, should yield deeper processing. The authors, however, proposed a new way of thinking about arousal – based on evolutionary theory – which suggests those previous accounts are too vague to help us truly understand how arousal shapes behavior. Instead, one needs to consider what adaptive functions particular arousal states serve in order to understand which type of message will be persuasive in a given context.

Don’t worry; if this gets too complicated, you can just fall back on using sex

To demonstrate this point, Griskevicius et al (2009) examined two arousal-inducing contexts: the aforementioned fear and romantic desire. If the general arousal-based accounts are correct, both the scarcity and popularity appeals should become more persuasive as people become aroused by romance or fear; by contrast, if the affect valence-based accounts are correct, the positively-valenced romantic feelings should make both sorts of heuristics more persuasive, whereas the negatively-valenced fear arousal should make both less persuasive. The evolutionary account instead focuses on the functional aspects of fear and romance: fear activates self-defense-relevant behavior, one form of which is seeking safety in numbers; a common animal defense tactic. If one were motivated to seek safety in numbers, a popularity appeal might be particularly persuasive (since that’s where a lot of other people are), whereas a scarcity appeal would not be; in fact, sending the message that a product will help one stand out from the crowd when one is afraid could actually be counterproductive. By contrast, if one is in a romantic state of mind, positively differentiating oneself from one’s competition can be useful for attracting and subsequently retaining attention. Accordingly, romance-based arousal might have the reverse effect, making popularity appeals less persuasive while making scarcity appeals more so.

To test these ideas, Griskevicius et al (2009) induced romantic desire or fear in about 300 participants by having them read stories or watch movie clips related to each domain. Following the arousal induction, participants were asked to briefly examine an advertisement for a museum or restaurant which contained a message appealing to popularity (e.g., “visited by over 1,000,000 people each year”), to scarcity (“stand out from the crowd”), or to neither, and then to report on how appealing the location was and whether or not they would be likely to go there (on a 9-point scale across a few questions).

As predicted, the fear condition made popularity messages more persuasive (M = 6.5) than the control advertisements (M = 5.9). However, fear had the opposite effect on the scarcity messages (M = 5.0), making them less appealing than the control ads. That pattern of results was flipped in the romantic desire condition: scarcity appeals (M = 6.5) were more persuasive than controls (M = 5.8), whereas popularity appeals were less persuasive than either (M = 5.0). Without getting too bogged down in the details of their second experiment, the authors also reported that these effects were even more specific than that: in particular, appeals to scarcity and popularity only had their effects when framed in behavioral terms (stand out from the crowd/everyone’s doing it); when framed in terms of attitudes (everyone’s talking about it) or opportunities (limited-time offer), popularity and scarcity did not differ in their effectiveness, regardless of the type of arousal being experienced.

One condition did pose interpretive problems, though…

Thinking about the adaptive problems and selection pressures that shaped our psychology is critical for constructing hypotheses and generating theoretically plausible explanations of its features. Expecting some kind of general arousal, emotional valence, or other such factor to explain much about the human (or nonhuman) mind is unlikely to pan out well; indeed, it hasn’t been working out for the field for many decades now. I don’t suspect such general explanations will disappear in the near future, despite their lack of explanatory power; they have saturated much of the field of psychology, and many psychologists lack the necessary theoretical background to fully appreciate why such explanations are implausible to begin with. Nevertheless, I remain hopeful that the future of psychology might someday not include reams of thinking about misplaced arousal and general information-processing mechanisms that are, apparently, quite bad at solving important adaptive problems.

References: Griskevicius, V., Goldstein, N., Mortensen, C., Sundie, J., Cialdini, R., & Kenrick, D. (2009). Fear and loving in Las Vegas: Evolution, emotion, and persuasion. Journal of Marketing Research, 46, 384-395.

A Curious Case Of Welfare Considerations In Morality

There was a stage in my life, several years back, where I was a bit of a chronic internet debater. As anyone who has engaged in such debates – online or off, for that matter – can attest, progress can be quite slow, if any is observed at all. Owing to the snail’s pace of such disputes, I found myself investing more time in them than I probably should have. In order to free up my time while still allowing me to express my thoughts, I created my own site (this one) where I could write about topics that interested me, express my viewpoints, and then be done with them, freeing me from the quagmire of debate. Happily, this tactic has not only proven to be effective, but I like to think it has produced some positive externalities for my readers in the form of several years’ worth of posts that, I am told, some people enjoy. Occasionally, however, I do still wander back into a debate here and there, since I find them fun and engaging. Sharing ideas and trading intellectual blows is nice recreation.

My other hobbies follow a similar theme

In the wake of the recent shooting in Charleston, the debate I found myself engaged in concerned the arguments for the moral and legal removal of guns from polite society, and I wanted to write a bit about it here, serving the dual purposes of cleansing it from my mind and, hopefully, making an interesting point about our moral psychology in the process. The discussion itself centered around a clip from one of my favorite comedians, Jim Jefferies, who happens to not be a fan of guns himself. While I recommend watching the full clip and associated stand-up because Jim is a funny man, for those not interested in investing the time and itching to get to the moral controversy, here’s the gist of Jim’s view on guns:

“There’s one argument and one argument alone for having a gun, and this is the argument: Fuck off; I like guns”

While Jim notes that there’s nothing wrong with saying, “I like something; don’t take it away from me”, the rest of the routine goes through various discussions of how other arguments for owning guns are, in Jim’s words, bullshit (including owning guns for self-defense or for overthrowing an oppressive government; for a different comedic perspective, see Bill Burr).

Laying my cards on the table, I happen to be one of those people who enjoys shooting recreationally (just target practice; I don’t get fancy with it, and I have no interest in hunting). That said, I’m not writing today to argue with any of Jim’s points; in fact, I’m quite sympathetic to many of the concerns and comments he makes: on the whole, I feel the expected value of guns, in general, to be a net cost for society. I further feel that if guns were voluntarily abandoned by the population, there would probably be many aggregate welfare benefits, including reduced rates of suicide, homicide, and accidental injury (owing to the possibility that many such conflicts are heat-of-the-moment issues, and lacking the momentary ability to employ deadly force might mean it’s never employed at all later). I’m even going to grant the point I quoted above: the best justification for owning a gun is recreational in nature. I don’t ask that you agree or disagree with all of this; just that you follow the logical form of what’s to come.

Taking all of that together, the argument for enacting some kind of legal ban on guns – or at the very least for moral condemnation of owning them – goes something like this: because the only real benefit of having a gun is that you get to have some fun with it, and because the expected costs of all those guns being around tend to be quite high, we ought to do away with the guns. The welfare balance simply shifts away from having lots of deadly weapons around. Jim even notes that while most gun owners will never use their weapons, intentionally or accidentally, to inflict costs on others or themselves, the law nevertheless needs to cater to the 1% or so of people who would do such things. So, this thing – X – generates welfare costs for others which far outstrip its welfare benefits, and therefore should be removed. The crux of this argument, then, would seem to be these welfare concerns.

Coincidentally, owning a gun may make people put a greater emphasis on your concerns

The interesting portion of this debate is that the logical form of the argument can be applied to many other topics, yet it will not carry the same moral weight; a point I tried to make over the course of the discussion with a very limited degree of success. Ideas die one person at a time, the saying goes, and this debate did not carry on to the point of anyone losing their life.

In this case, we can try to apply the above logic to the very legal, condoned, and often celebrated topic of alcohol. On the whole, I would expect that the availability of alcohol is a net cost for society: drunk driving deaths in the US yield about 10,000 bodies a year (a number comparable to homicides committed with a firearm), directly inflicting costs on non-drinkers. While it’s more difficult to put numbers on other costs, there are a few non-trivial matters to consider, such as the number of suicides, assaults, and non-traffic accidents encouraged by the use of alcohol, the number of unintended pregnancies and STIs spread through more casual and risky drunk sex, as well as the number of alcohol-related illnesses and cases of liver damage. Broken homes, abused and neglected children, spirals of poverty, infidelity, and missed work could also factor into these calculations somewhere. Both of these products – guns and booze – tend to inflict costs on individuals other than the actor when they’re available, and these costs appear to be substantial.

So, in the face of all those costs, what’s the argument in favor of alcohol being approved of, legally or morally? Well, the best and most common argument seems to be, as Jim might say, “Fuck off; I like drinking”. Now, of course, there are some notable differences between drinking and owning guns, the main one being that people don’t often drink to inflict costs on others, while many people do use guns to intentionally do harm. While the point is well taken, it’s worth bearing in mind that the arguments against guns are not the same arguments as those against murder. The argument as it pertains to guns seemed to be, as I noted above, that regular people should not be allowed to own guns because some small portion of the population that does have one around will do something reprehensible or stupid with it, and that these concerns trump the ability of the responsible owners to do what they enjoy. Well, presumably, we could say the same thing about booze: even if most people who drink don’t drive while drunk, and even if not all drunk drivers end up killing someone, our morals and laws need to cater to that percentage of people who do.

(As an aside, I spent the past few years at New Mexico State University. One day, while standing outside a classroom in the hall, I noticed a poster about drunk driving. The intended purpose of the flyer seemed to be to inform students that most people don’t drive drunk; in fact, about 75% of students reported not driving under the influence, if I recall correctly. That does mean, of course, that about 1 in 4 students did at some point, which is a worrying figure; perhaps enough to make a solid argument for welfare concerns)

There is also the matter of enforcement: making alcohol illegal didn’t work out well in the past; making guns illegal could arguably be more successful on a logistical level. While such a point is worth thinking about, it is also a bit of a red herring from the heart of the issue: that is, most people are not opposed to the banning of alcohol because it’s difficult in practice but otherwise supportive of the measure on principle; instead, people seem as if they would oppose the idea even if it could be implemented efficiently. People’s moral judgments can be quite independent of enforcement capacity. Computationally, it seems like judgments concerning whether something is worth condemning in the first place ought to precede judgments about whether condemnation could be enacted feasibly, simply because the latter estimation is useless without the former. Spending time thinking about what one could punish effectively without any interest in following through would be like thinking about all the things one could chew and swallow when hungry, even if one wouldn’t want to eat them.

Plenty of fiber…and there’s lots of it….

There are two points to bear in mind from this discussion to try and tie it back to understanding our own moral psychology and making a productive point. The first is that there is some degree of variance in moral judgments that is not being determined by welfare concerns. Just because something ends up resulting in harm to others, people are not necessarily going to be willing to condemn it. We might (not) accept a line of reasoning for condemning a particular act because we have some vested interest in (encouraging) preventing it, while categorically (accepting) rejecting that same line in other cases where our strategic interests run in the opposite direction; interests which we might not even be consciously aware of in many cases. This much, I suspect, will come as no surprise to anyone, especially because other people in debates are known for being so clearly biased to you, the dispassionate observer. Strategic interests lead us to favor our own concerns.

The other point worth considering, though, is that people raise or deny these welfare concerns in the interest of being persuasive to others. The welfare of other people appears to have some impact on our moral judgments; if welfare concerns were not used as inputs, it would seem rather strange that so many arguments about morality lean so heavily and explicitly upon them. I don’t argue that you should accept my moral argument because it’s Sunday, as that fact seems to have little bearing on my moral mechanisms. While this too might seem obvious to people (“of course other people’s suffering matters to me!”), understanding why the welfare of others matters to our moral judgments is a much trickier explanatory issue than understanding why our own welfare matters to us. Both of these are matters that any complete theory of morality needs to deal with.

The Morality Of Guilt

Today, I wanted to discuss the topic of guilt; specifically, what the emotion is, whether we should consider it to be a moral emotion, and whether it generates moral behavioral outputs. The first part of that discussion will be somewhat easier to handle than the latter. In the most common sense, guilt appears to be an emotion aroused by the perception, on the part of the individual experiencing it, of wrongdoing that has harmed someone else. The negative feelings that accompany guilt often lead the guilty party to desire to make amends to the injured one, so as to compensate for the damage done and repair the relationship between the two (e.g., “I’m sorry I totaled your car by driving it into your house; I feel like a total heel. Let me buy you dinner to make up for it”). Because the emotion appears to be aroused by the perception of a moral transgression – that is, someone feels they have done something wrong, or impermissible – it seems like guilt could rightly be considered a moral emotion; specifically, an emotion related to moral conscience (a self-regulating mechanism), rather than moral condemnation (an other-regulating mechanism).

Nothing beats packing for a nice, relaxing guilt trip

The understanding that guilt is a moral emotion, then, allows us to inform our opinion about what kind of thing morality is by examining how guilt works in greater, proximate detail. In other words, we can infer what adaptive value our moral sense might have had by studying the form of the emotional guilt mechanisms: what inputs they use and what outputs they produce. This brings us to some rather interesting work I recently dug out of my backlog of papers to read, by de Hooge et al (2011), that focused on figuring out what kinds of effects guilt tends to have on people’s behavior when you take guilt out of a dyadic (two-person) relationship and drop it into larger groups of people. The authors were interested, in part, in deciding whether or not guilt could be classified as a morally good emotion. While they acknowledge guilt is a moral emotion, they question whether it produces morally good outcomes in certain types of situations.

This leads naturally to the following question: what is a morally good outcome? The answer to that question is going to depend on what type of function one thinks morality has. In this case, de Hooge et al (2011) write as if our moral sense is an altruism device – one that functions to deliver benefits to others at a cost to one’s self. Accordingly, a morally good outcome is going to be one that results in benefits flowing to others at a cost to the actor. Framed in terms of guilt, we might expect that individuals experiencing guilt will behave more altruistically than individuals who are not; the guilty’s regard for the welfare of others will be regulated upwards, with a corresponding down-regulation placed on their own welfare. The authors note that much of the previous research on guilt has uncovered evidence consistent with that pattern: guilty parties tend to forgo benefits to themselves or suffer costs in order to deliver benefits to the party they have wronged. This makes guilt look rather altruistic.

Such research, however, was typically conducted in a two-party context: the guilty party and their victim. This presents something of an interpretative issue, inasmuch as the guilty party only has that one option available to them: if, say, I want to make you better off, I need to suffer a cost myself. While that might make the behavior look altruistic in nature, in the social world we reside within, that is usually not the only option available; I could, for instance, also make you better off not at an expense to myself, but rather at the expense of someone else – an outcome most people wouldn’t exactly call altruism, and one de Hooge et al (2011) wouldn’t consider morally good either. To the extent a guilty party is merely interested in making their victim better off, both options would serve equally well, and both would look the same in a two-party case; to the extent the guilty party is interested in behaving altruistically towards the victimized party, though, things would look different in a three-party context.

As they usually do…

de Hooge et al (2011) report the results of three pilot studies and four experiments examining how guilt affects behavior in these three-party contexts in terms of welfare-relevant choices. While I don’t have time to discuss all of what they did, I wanted to highlight one of their experiments in more detail while noting that each of them generated data consistent with the same general pattern. The experiment I will discuss is their third one. In that experiment, 44 participants were assigned to either a guilt or a control condition. In both conditions, the participants were asked to complete a two-part joint effort task with another person to earn payment rewards. Colored letters (red or green) would pop up on each player’s screen, and the participant and their partner had to click a button quickly in order to complete the task: the participant would push the button if the letter was green, whereas their partner would have to push if the letter was red. In the first part of the task, the performance of both the participant and their partner would be earning rewards for the participant; in the second part, the pair would be earning rewards for the partner instead. Each reward was worth 8 units of what I’ll call welfare points.

The participants were informed that while they would receive the bonus from the first round, their partner would not receive a bonus from the second. In the control condition, the partner did not earn the bonus because of their own poor performance; in the guilt condition, the partner did not earn the bonus because of the participant’s poor performance. In the next phase of the experiment, the participants were presented with three payoffs: their own, their partner’s, and that of an unrelated individual from the experiment who had also earned the bonus. The participants were told that one of the three would be randomly assigned the chance to redistribute the earnings, though, of course, the participants always received that assignment. This allowed participants to give a benefit to their partner, but to do so at either a cost to themselves or a cost to someone else.

Out of the 8 welfare units the participants had earned, they opted to give an average of 2.2 of them to their partner in the guilt condition, but only 1 unit in the control condition, so guilt did seem to make the participants somewhat more altruistic. Interestingly, however, guilt made participants even more willing to take from the outside party: guilty parties took an average of 4.2 units from the third party for their partner, relative to the 2.5 units they took in the control condition. In short, the participants appeared to be interested in repairing the relationship between themselves and their partners, but were more interested in doing so via taking from someone else, rather than giving up their own resources. Participants also viewed the welfare of the third party as being relatively unimportant as compared to the welfare of the partner they had ostensibly failed.
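To make that pattern concrete, here’s a quick back-of-the-envelope calculation in Python using only the condition means reported above; the variable names are mine, and this is a sketch of the arithmetic rather than the authors’ analysis:

```python
# Mean transfers reported for de Hooge et al's (2011) third experiment.
# Labels are mine; the values are the means described above.
guilt   = {"given_up_by_self": 2.2, "taken_from_third_party": 4.2}
control = {"given_up_by_self": 1.0, "taken_from_third_party": 2.5}

for label, c in (("guilt", guilt), ("control", control)):
    total = c["given_up_by_self"] + c["taken_from_third_party"]
    print(f"{label}: partner ends up {total:.1f} units better off")

# How was the *extra* relationship repair under guilt financed?
extra_from_self = guilt["given_up_by_self"] - control["given_up_by_self"]               # 1.2 units
extra_from_third = guilt["taken_from_third_party"] - control["taken_from_third_party"]  # 1.7 units
print(f"extra repair paid by the guilty party: {extra_from_self:.1f} units; "
      f"paid by the uninvolved third party: {extra_from_third:.1f} units")
```

In other words, guilt nearly doubled the partner’s total compensation, but the larger share of that increase was financed by the innocent third party rather than by the guilty participant.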

“To make up for hurting Mike, I think it’s only fair that Karen here suffers”

This returns us to the matter of what kind of thing morality is. de Hooge et al (2011) appear to view morality as an altruism device and view guilt as a moral emotion, yet, strangely, guilt did not appear to make people substantially more altruistic; instead, it seemed to make them partial. Given that guilt was not making people behave more altruistically, we might want to reconsider the adaptive function of morality. What if, rather than acting as an altruism device, morality functions as an association management mechanism? If our moral sense functions to build and manage partial relationships, benefiting someone you’ve harmed at the expense of other targets of investment might make more sense. There are good reasons to suspect that friendships represent partial alliances maintained in the service of being able to win potential future disputes (DeScioli & Kurzban, 2009). These partial alliances are rank-ordered, however: I have a best friend, close friends, and more distant ones. In order to signal that I rank you highly as a friend, then, I need to demonstrate that I value you more than other people. Showing that I value you highly relative to myself – as would be the case with acts of altruism – would not necessarily tell you much about your value as my friend, relative to other friends. By contrast, behaving in ways that signal I value you more than others, at least temporarily – as appeared to be the case in the current experiments – could serve to repair a damaged alliance. Morality as an altruism device doesn’t fit the current pattern of data; an alliance management device does, though.

References: DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE, 4(6), e5802. doi:10.1371/journal.pone.0005802

de Hooge, I., Nelissen, R., Breugelmans, S., & Zeelenberg, M. (2011). What is moral about guilt? Acting “prosocially” at the disadvantage of others. Journal of Personality & Social Psychology, 100, 462-473.

 

Privilege And The Nature Of Inequality

Recently, there’s been a new comic floating around my social news feeds claiming that it will forever change the way I think about something. It’s not as if there’s ever a shortage of such articles on my feeds, really, but I decided it would provide me with the opportunity to examine some research I’ve wanted to write about for some time. In the case of this mind-blowing comic, the concept of privilege is explained through a short story. The concept itself is not a hard one to understand: privilege here refers to cases in which an individual goes through their life with certain advantages they did not earn. The comic in question looks at an economic privilege: two children are born, but one has parents with lots of money and social connections. As expected, the one with the privilege ends up doing fairly well for himself, as many burdens of life have been removed, while the one without ends up working a series of low-paying jobs, eventually in service to the privileged one. The privileged individual declares that nothing has ever been handed to him in life as he is literally being handed some food on a silver platter by the underprivileged individual, apparently oblivious to what his parents’ wealth and connections have brought him.

Stupid, rich baby…

In the interests of laying my cards on the table at the outset, I would count myself among those born into privilege. While my family is not rich or well-connected the way people typically think about those things, there haven’t been any necessities of life I have wanted for; I have even had access to many additional luxuries that others have not. Having those burdens removed is something I am quite grateful for, and it has allowed me to invest my time in ways other people could not. I have the hard work and responsibility of my parents to thank for these advantages. These are not advantages I earned, but they are certainly not advantages which just fell from the sky; if my parents had made different choices, things likely would have worked out differently for me. I want to acknowledge my advantages without downplaying their efforts at all.

That last part raises a rather interesting question that pertains to the privilege debate, however. In the aforementioned comic, the implication seems to be – unless I’m misunderstanding it – that things likely would have turned out equally well for both children had they been given access to the same advantages in their lives. Some of the differences each child starts with seem to be the results of their parents’ work, while other parts of that difference are the result of happenstance. The comic appears to suggest the differences in that case were just due to chance: both sets of parents love their children, but one set seems to have better jobs. Luck of the draw, I suppose. However, is that the case for life more generally; you know, the thing about which the comic intends to make a point?

For instance, if one set of parents happen to be more short-term oriented – interested in taking rewards now rather than foregoing them for possibly larger rewards in the future, i.e., not really savers – we could expect that their children will, to some extent, inherit those short-term psychological tendencies; they will also inherit a more meager amount of cash. Similarly, the child of the parents who are more long-term focused should inherit their proclivities as well, in addition to the benefits those psychologies eventually accrued.

Provided that happened to be the case, what would become of these two children if they both started life in the same position? Should we expect that they both end up in similar places? Putting the question another way, let’s imagine that, all of a sudden, the wealth of this world was evenly distributed among the population; no one had more or less than anyone else. In this imaginary world, how long would that state of relative equality last? I can’t say for certain, but my expectation is that it wouldn’t last very long at all. While the money might be equally distributed in the population, the psychological predispositions for spending, saving, earning, investing, and so on are unlikely to be. Over time, inequalities will again begin to assert themselves as those psychological differences – be they slight or large – accumulate from decision after decision.
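For what it’s worth, that intuition is easy to demonstrate with a toy simulation. The sketch below (in Python, with every parameter invented purely for illustration) hands everyone identical starting wealth but slightly different savings propensities, then lets the decisions compound; it’s a cartoon of the thought experiment, not a model of any real economy:

```python
import random

# Toy model: identical starting wealth, small individual differences in how
# much of each period's income gets saved rather than spent. All numbers here
# are arbitrary illustrative choices.
random.seed(1)
N, PERIODS, INCOME = 1000, 40, 100
# Savings propensities drawn around a common mean, clipped to [0, 1]
save_rate = [min(max(random.gauss(0.10, 0.05), 0.0), 1.0) for _ in range(N)]
wealth = [1000.0] * N  # perfectly equal starting point

for _ in range(PERIODS):
    for i in range(N):
        wealth[i] += INCOME * save_rate[i]  # only the saved share accumulates
        wealth[i] *= 1.03                   # modest return on existing holdings

wealth.sort()
top_decile, bottom_decile = sum(wealth[-N // 10:]), sum(wealth[:N // 10])
print(f"Top decile holds {top_decile / bottom_decile:.2f}x "
      f"the wealth of the bottom decile after {PERIODS} periods")
```

Even with no differences in income, the equal distribution doesn’t stay equal for long; how quickly it spreads depends entirely on the parameters, which is exactly why I can’t say how long such a state would last.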

Clearly, this is an experiment that couldn’t be run in real life – people are quite attached to their money – but there are naturally occurring versions of it in everyday life. If you want to find a context in which people might randomly come into possession of a sum of money, look no further than the lottery. Winning the lottery – both whether one wins at all and how much one wins – is as close to randomly determined as we’re going to get. If the differences between the families in the mind-blowing comic are due to chance factors, we would predict that people who win more money in the lottery should, subsequently, be doing better in life, relative to those who won smaller amounts. By contrast, if chance factors are relatively unimportant, then the amount won should be less important: whether they win large or small amounts, people might spend it (or waste it) at similar rates.

Nothing quite like a dose of privilege to turn your life around

This was precisely what was examined by Hankins et al (2010): the authors sought to assess the relationship between the amount of money won in a lottery and the probability of the winner filing for bankruptcy within a five year period of their win. Rather than removing inequalities and seeing how things shake out, then, this research took the opposite approach: examining a process that generated inequalities and seeing how long it took for them to dissipate.

The primary sample for this research was the set of Fantasy 5 winners in Florida from April 1993 to November 2002 who had won $600 or more: approximately 35,000 people after certain screening measures had been implemented. These lottery winners were grouped into those who won between $10,000 and $50,000 and those who won between $50,000 and $150,000 (subsequent analyses would examine those who won $10,000 or less as well, leading to small, medium, and large winner groups).

Of those 35,000 winners, about 2,000 were linked to a bankruptcy filing within five years of their win, meaning that a little more than 1% of winners were filing each year on average; a rate comparable to that of the broader Florida population. The first step was to examine whether the large winners were doing comparable amounts of bankruptcy filing prior to their win, relative to the small winners, which, thankfully, they were. In pretty much all respects, those who won a lot of money did not differ, before their win, from those who won less (including race, gender, marital status, educational attainment, and nine other demographic variables). That’s what one would expect from the lottery, after all.

Turning to what happened after their win: within the first two years, those who won larger sums of money were less likely to file for bankruptcy than smaller winners; however, in years 3 through 5 that pattern reversed itself, with larger winners becoming more likely to file. The end result of this shifting pattern was that, in five years’ time, large winners were equally likely to have filed for bankruptcy, relative to smaller winners. As Hankins et al (2010) put it, large cash payments did not prevent bankruptcy; they only postponed it. This result was consistently obtained across a number of different analyses, suggesting that the finding is fairly robust. In fact, when the winners eventually did file for bankruptcy, the big winners didn’t have much more to show for it than the small winners: those who won between $25,000 and $150,000 only had about $8,000 more in assets than those who had won less than $1,500, and the two groups had comparable debts.
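For readers curious about the shape of such an analysis, here’s an illustrative sketch in Python with pandas. To be clear, this is not the authors’ code: the file name and column names are hypothetical stand-ins for whatever their linked lottery and bankruptcy records actually contained:

```python
import pandas as pd

# Hypothetical table, one row per winner:
#   win_amount (dollars), filed_bankruptcy (bool), years_to_filing (float, NaN if never)
winners = pd.read_csv("florida_fantasy5_winners.csv")  # placeholder file name

# Group winners by prize size, roughly mirroring the paper's cuts
bins = [600, 10_000, 50_000, 150_000]
winners["win_group"] = pd.cut(winners["win_amount"], bins,
                              labels=["small", "medium", "large"])

def filing_rate(group, lo, hi):
    """Share of a group filing for bankruptcy between lo and hi years post-win."""
    filed = group["filed_bankruptcy"] & group["years_to_filing"].between(lo, hi)
    return filed.mean()

summary = winners.groupby("win_group", observed=True).apply(
    lambda g: pd.Series({
        "years_0_2": filing_rate(g, 0, 2),  # large winners should look better here...
        "years_3_5": filing_rate(g, 3, 5),  # ...and worse here, per the reported reversal
    }))
print(summary)
```

The substantive claim is simply that the year 0-2 and year 3-5 columns move in opposite directions across winner groups, washing out to similar five-year totals.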

Not much of an ROI on making it rain these days, it seems

At least when it came to one of the most severe forms of financial distress, large sums of cash did not appear to stop people from falling back into poverty in the long term, suggesting that there’s more going on in the world than just poor luck and unearned privilege. Whatever this money was being spent on, it did not appear to be sound investments. Maybe people were making more of their luck than they realized.

It should be noted that this natural experiment does pose certain confounds, perhaps the most important of which is that not everyone plays the lottery. In fact, given that the lottery itself is quite a bad investment, we are likely looking at a non-random sample of people who choose to play it in the first place; people who already aren’t prone to making wise, long-term decisions. Perhaps these results would look different if everyone played the lottery but, as it stands, thinking about these results in the context of the initial comic about privilege, I would have to say that my mind remains un-blown. Unsurprisingly, deep truths about social life can be difficult to sum up in a short comic.

References: Hankins, S., Hoekstra, M., & Skiba, P. (2010). The ticket to easy street? The financial consequences of winning the lottery. Vanderbilt Law and Economics Research Paper, 10-12.

Relaxing With Some Silly Research

In psychology, by all estimates, there is a lot of bad research out there. The poor quality of this research can be attributed to concerns about ideology-driven research agendas, research bias, demand characteristics, lack of any real theory guiding the research itself, p-hacking, file-drawer effects, failures to replicate, small sample sizes, and reliance on undergraduate samples, among others. Arguably, there is more bad (or at least inaccurate) research than good research floating around since, in principle, there are many more ways of being wrong about the human mind than there are of being right about it (even given our familiarity with it); a problem made worse by the fact that being (or appearing) wrong or reporting null findings does not tend to garner one social status in the world of academia. If many of the incentives reside in finding particular kinds of results – and those kinds are not necessarily accurate – the predictable result is a lot of misleading papers. Determining what parts of the existing psychological literature are an accurate description of human psychology can be something of a burden, however, owing to the obscure nature of some of these issues: it’s not always readily apparent that a paper found a fluke result or that certain shady research practices have been employed. Thankfully, it doesn’t take a lot of effort to see why some particular pieces of psychological research are silly; criticizing that stuff can be as relaxing as a day off at the beach.

Kind of like this, but indoors and with fewer women

The last time I remember coming across some research that can easily be recognized as silly was when one brave set of researchers asked if leaning to the left made the Eiffel tower look smaller. The theory behind that initial bit of research is called, I think, number line theory, though I’m not positive on that. Regardless of the name, the gist of the idea seems to be that people – and chickens, apparently – associate smaller numbers with a relatively leftward direction and larger numbers with a rightward one. For humans, such a mental representation might make sense in light of our using certain systems of writing; for nonhumans, this finding would seem to make zero sense. To understand why this finding makes no sense, try to place it within a functional framework by asking (a) why might humans and chickens (and perhaps other animals as well) represent smaller quantities with their left, and (b) why might leaning to the left be expected to bias one’s estimate of size? Personally, I’m drawing a blank on the answers to those questions, especially because biasing one’s estimate of size on the basis of how one is leaning is unlikely to yield more accurate estimates. A decrease in accuracy seems like it could only carry costs in this case; not benefits. So, at best, we’re left calling those findings a developmental byproduct for humans and likely a fluke for the chickens. In all likelihood, the human finding is probably a fluke as well.

Thankfully, for the sake of entertainment, silly research is not to be deterred. One of the more recent tests of this number line hypothesis (Anelli et al, 2014) makes an even bolder prediction than the Eiffel tower paper: people will actually get better at performing certain mathematical operations when they’re traveling to the left or the right: specifically, going right will make you better at addition and going left better at subtraction. Why? Because smaller numbers are associated with the left? How does that make one better at subtraction? I don’t know, and the paper doesn’t really go into that part. On the face of it, this seems like a great example of what I have nicknamed “dire straits thinking”. Named after the band’s song “Money for Nothing”, this type of thinking leads people to hypothesize that others can get better (or worse) at tasks without any associated costs. The problem with this kind of thinking is that if people did possess the cognitive capacities to be better at certain tasks, one might wonder why people ever perform worse than they could. This would lead me to pose questions like, “why do I have to be traveling right to be better at addition; why not just be better all the time?” Some kind of trade-offs need to be referenced to explain that apparent detriment/bonus to performance, but none ever are in dire straits thinking.

In any case, let’s look at the details of the experiment, which was quite simple. Anelli et al (2014) had a total of 48 participants walk with an experimenter (one at a time; not all 48 at once). The pair would walk together for 20 seconds in a straight line, at which point the experimenter would call out a three-digit number, tell the participant to repeatedly add or subtract 3 from it aloud for 22 seconds, give them a direction to turn (right or left), and tell them to begin. At that point, the participant would turn and start doing the math. Each participant completed four trials: two congruent (right/addition or left/subtraction) and two incongruent (right/subtraction or left/addition). The researchers hoped to uncover a congruency effect, such that more correct calculations would be performed in the congruent, relative to incongruent, trials.

Now put the data into the “I’m right” program and it’s ready to publish

Indeed, just such an effect was found: when participants were moving in a direction congruent with their mathematical operation, they performed more correct calculations on average (M = 10.1), relative to when they were traveling in an incongruent direction (M = 9.6). However, when this effect was broken down by direction, it turned out that the effect only existed when participants were doing addition (M = 11.1 when going right, 10.2 when going left); there was no difference for subtraction (M = 9.0 and 9.1, respectively). Why was there no effect for subtraction? Well, the authors postulate a number of possibilities – one being that perhaps participants needed to be walking backwards – though none of them include the possibility of the addition finding being a statistical fluke. It’s strange how infrequently this possibility is ever mentioned in published work, especially in the face of inconsistent findings.
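Since the headline congruency effect is just arithmetic over four cell means, it’s easy to see where it comes from. A minimal sketch in Python, using the means reported above (the labels are mine):

```python
# Cell means from Anelli et al (2014), as reported above: correct calculations
# by operation and walking direction.
means = {
    ("addition", "right"): 11.1, ("addition", "left"): 10.2,
    ("subtraction", "right"): 9.0, ("subtraction", "left"): 9.1,
}

# Congruent cells: addition/right and subtraction/left
congruent = (means[("addition", "right")] + means[("subtraction", "left")]) / 2
incongruent = (means[("addition", "left")] + means[("subtraction", "right")]) / 2
print(f"congruent: {congruent:.1f}, incongruent: {incongruent:.1f}")  # 10.1 vs 9.6

# ...but the entire effect lives in the addition cells:
print("addition difference:",
      round(means[("addition", "right")] - means[("addition", "left")], 1))        # 0.9
print("subtraction difference:",
      round(means[("subtraction", "left")] - means[("subtraction", "right")], 1))  # 0.1
```

Averaging over the cells makes a one-sided addition difference look like a general congruency effect; the subtraction cells contribute essentially nothing.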

Now one obvious criticism of this research is that the participants were never traveling right or left; they were walking straight ahead in all cases. Right or left, unlike East or West, depends on perspective. When I am facing my computer, I feel I am facing ahead; when I turn around to walk to the bathroom, I don’t feel like I’m walking behind me. The current research would thus rely on the effects of a momentary turn affecting participant’s math abilities for about half a minute. Accordingly, participants shouldn’t even have needed to be walking; asking them to turn and stand in place should be expected to have precisely the same effect. If the researchers wanted to measure walking to the right or left, they should have had participants moving to the side by sliding, rather than turning and walking forward.

Other obvious criticisms of the research could include the small sample size, the small effect size, the inconsistency of the effect (works for addition but not subtraction and is inconsistent with other research they cite which was itself inconsistent – people being better at addition when going up in an elevator but not walking up stairs, if I understand correctly), or the complete lack of anything resembling a real theory guiding the research. But let’s say for a moment that my impression of these results as silly is incorrect; let’s assume that these results accurately describe the workings of human mind in some respect. What are the implications of that finding? What, in other words, happens to be at stake here? Why would this research be published, relative to the other submissions received by Frontiers in Psychology? Even if it’s a true effect – which already seems unlikely, given the aforementioned issues – it doesn’t seem particularly noteworthy. Should people be turning to the right and left while taking their GREs? Do people need to be doing jumping jacks to improve their multiplication skills so as to make their body look more like the multiplication symbol? If so, how could you manage to do them while you’re supposed to be sitting down quietly while taking your GREs without getting kicked out of the testing site? Perhaps someone more informed on the topic could lend a suggestion, because I’m having trouble seeing the importance of it.

Maybe the insignificance of the results is supposed to make the reader feel more important

Without wanting to make a mountain out of a molehill, this paper was authored by five researchers and presumably made it past an editor and several reviewers before it saw publication. At a minimum, that’s probably about 8 to 10 people. That seems like a remarkable feat, given how strange the paper happens to look on its face. I’m not just mindlessly poking fun at the paper, though: I’m bringing attention to it because it seems to highlight a variety of problems in the world of psychological research. There are, of course, many suggestions as to how these problems might be ferreted out, though many of those I have seen focus more on statistical solutions or combating researcher degrees of freedom. While such measures might reduce the quantity of bad research (like pre-registering studies), they will be unlikely to increase the absolute quality of good work (since one can pre-register silly ideas like this), which I think is an equally valuable goal. For my money, the requirement of some theoretical, functional grounding for research would be the strongest candidate for improving work in psychology. I imagine many people would find it harder to propose such an idea in the first place if they needed to include some kind of functional considerations as to why turning right makes you better at addition. Even if such a feat were accomplished, it seems those considerations would make the rationale for the paper even easier to pick apart by reviewers and readers.

Instead of asking for silly research to be conducted on larger, more diverse samples, it seems better to ask that silly research not be conducted at all.

References: Anelli, F., Lugli, L., Baroni, G., Borghi, A., & Nicoletti, R. (2014). Walking boosts your performance in making additions and subtractions. Frontiers in Psychology, 5. doi:10.3389/fpsyg.2014.01459

Do Moral Violations Require A Victim?

If you’ve ever been a student of psychology, chances are pretty good that you’ve heard about or read a great many studies concerning how people’s perceptions about the world are biased, incorrect, inaccurate, erroneous, and other such similar adjectives. A related sentiment exists in some parts of the morality literature as well. Perhaps the most notable instance is the unpublished paper on moral dumbfounding, by Haidt, Bjorklund, & Murphy (2000). In that paper, the authors claim to provide evidence that people first decide whether an act is immoral and then seek to find victims or harms for the act post hoc. Importantly, the point seems to be that people seek out victims and harm despite them not actually existing. In other words, people are mistaken in perceiving harm or victims. We could call such tendencies the “fundamental victim error” or the “harm bias”, perhaps. If that interpretation of the results is correct, it would carry a number of implications, chief among which (for my present purposes) is that harm is not a required input for moral systems. Whatever cognitive systems are in charge of processing morally-relevant information, they seem to be able to do so without knowledge of who – if anyone – is getting harmed.

Just a little consensual incest. It’s not like anyone is getting hurt.

Now I’ve long found that implication to be a rather interesting one. The reason it’s interesting is because, in general, we should expect that people’s perceptions of the world are relatively accurate. Not perfect, mind you, but we should expect them to be as accurate as available information allows. If our perceptions weren’t generally accurate, this would likely yield all sorts of negative fitness consequences: for example, believing you can achieve a goal you actually cannot could lead to the investment of time and resources in a fruitless endeavor; resources which could be more profitably spent elsewhere. Sincerely believing you’re going to win the lottery does not make the tickets wise investments. Given these negative consequences of acting on inaccurate information, we should expect that our perceptual systems evolved to be as accurate as they can be, given certain real-world constraints.

The only context I’ve seen in which being wrong about something could consistently lead to adaptive outcomes is the realm of persuasion. In this case, however, it’s not so much that being wrong about something per se helps you, as that someone else being wrong helps you. If people happen to think my future prospects are bright – even if they’re not – it might encourage them to see me as an attractive social partner or mate; an arrangement from which I could reap benefits. So, if some part of me happens to be wrong, in some sense, about my future prospects, and being wrong doesn’t cause me to behave in too many maladaptive ways, and it also helps persuade you to treat me better than you would given accurate information, then being wrong (or biased) could be, at times, adaptive.

How does persuasion relate to morality and victimhood, you may well be wondering? Consider again the initial point about people, apparently, being wrong about the existence of harms and victims of acts they deem to be immoral. If one were to suggest that people are wrong in this realm – indeed, that our psychology appears to be designed in such a way as to consistently be wrong – one would also need to couch that suggestion in the context of persuasion (or some entirely new hypothesis about why being wrong is a good thing). In other words, the argument would need to go something like this: by perceiving victims and harms where none actually exist, I could be better able to persuade other people to take my side in a moral dispute. The implications of that suggestion would seem, in a rather straightforward way, to rely on people taking sides on moral issues on the basis of harm in the first place; if they didn’t, claims of harm wouldn’t be very persuasive. This would leave the moral dumbfounding work in a bit of a bind, theoretically speaking, with respect to whether harms are required inputs for moral systems: that people perceive something as immoral and then later perceive harms would suggest harms are not required inputs; that arguments about harms are rather persuasive would suggest that they are.

Enough about implications; let’s get to some research 

At the very least, perceptions of victimhood and harm appear intimately tied to perceptions of immorality. The connection between the two was further examined recently by Gray, Schein, & Ward (2014) across five studies, though I’m only going to discuss one of them. In the study of interest, 82 participants each rated 12 actions on whether they were wrong (1-5 scale, from ‘not wrong at all’ to ‘extremely wrong’) and whether the act had a victim (1-5 scale, from ‘definitely not’ to ‘definitely yes’). These 12 actions were broken down into three groups of four acts each: the harmful group (including items like kicking a dog or hitting a spouse), the impure group (including masturbating to a picture of your dead sister or covering a bible with feces), and the neutral group (such as eating toast or riding a bus). The interesting twist in this study involved the time frame in which participants answered: one group was placed under a time constraint in which they had to read the question and provide their answers within seven seconds; the other group was not allowed to answer until at least a seven-second delay had passed, and was given an unlimited amount of time in which to answer. So one group was relying on, shall we say, their gut reaction, while the other was given ample time to reason about things consciously.

Unsurprisingly, there appeared to be a connection between harm and victimhood: the directly harmful scenarios generated more certainty about a victim (M = 4.8) than the impure ones (M = 2.5), and the neutral scenarios didn’t generate any victims (M = 1). More notably, the time constraint did have an effect, but only in the impure category: when answering under time constraints in the impure category, participants reported more certainty about the existence of a victim (M = 2.9) relative to when they had more time to think (M = 2.1). By contrast, the perceptions of victims in the harm (M = 4.8 and 4.9, respectively) and neutral categories (M = 1 and 1) did not differ across time constraints.
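Laying those cell means out side by side makes the pattern easier to see. Here’s a minimal sketch in Python (the labels are mine; the values are the means reported above):

```python
# Victim-certainty means from Gray, Schein, & Ward (2014), by act category
# and answering condition, as reported above.
victim_certainty = {
    "harmful": {"time_pressure": 4.8, "delay": 4.9},
    "impure":  {"time_pressure": 2.9, "delay": 2.1},
    "neutral": {"time_pressure": 1.0, "delay": 1.0},
}

for category, cells in victim_certainty.items():
    shift = cells["time_pressure"] - cells["delay"]
    print(f"{category:>8}: gut = {cells['time_pressure']}, "
          f"deliberate = {cells['delay']}, shift = {shift:+.1f}")
# Only the impure category moves: fast, gut answers find *more* victims than
# deliberate ones -- the opposite of what post-hoc victim invention predicts.
```

The harmful and neutral rows barely budge across conditions; the impure row is the only one where extra deliberation changes the answer, and it changes it downward.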

This finding puts a different interpretive spin on the moral dumbfounding literature: when people had more time to think about (and perhaps invent) victims for more ambiguous violations, they came up with fewer victims. Rather than people reaching a conclusion about immorality first and then consciously reasoning about who might have been harmed, it seems that people could have instead been reaching implicit conclusions about both harm and immorality quite early on, and only later consciously reasoning about why an act which seemed immoral isn’t actually making any worthy victims. If representations about victims and harms are arising earlier in this process than would be anticipated by the moral dumbfounding research, this might speak to whether or not harms are required inputs for moral systems.

Turns out that piece might have been more important than we thought

It is possible, I suppose, that morality could simply use harm as an input sometimes without it being a required input. That possibility would allow harm to be both persuasive and not required, though it would require some explanation as to why harm is only expected to matter in moral judgments at times. At present, I know of no such argument having ever been made, so there’s not too much to engage with on that front.

It is true enough that, at times, when people perceive victims, they perceive them in a rather broad sense, naming entities like “society” as harmed by certain acts. Needless to say, it seems rather difficult to assess such claims, which makes one wonder how people perceive such entities as being harmed in the first place. One possibility, obviously, is that such entities (to the extent they can be said to exist at all) aren’t really being harmed, and people are using unverifiable targets to persuade others to join a moral cause without the risk of being proved wrong. Another possibility, of course, is that the part of the brain doing the reporting isn’t quite able to articulate the underlying reason for the judgment well to others. That is, one part of the brain is (accurately) finding harm, but the talking part isn’t able to report on it. Yet another possibility still is that harm befalling different groups is strategically discounted (Marczyk, 2015). For instance, members of a religious group might find disrespect towards a symbol of their faith (rubbing feces on the bible, in this case) to be indicative of someone liable to do harm to their members; those opposed to the religious group might count that harm differently – perhaps not as harm at all. Such an explanation could, in principle, explain the time-constraint effect I mentioned before: the part of the brain discounting harm towards certain groups might not have had enough time to act on the perceptions of harm yet. While these explanations are not necessarily mutually exclusive, they are all ideas worth thinking about.

References: Gray, K., Schein, C., & Ward, A. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology, 143, 1600-1615.

Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript. 

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.