Inequality Aversion, Evolution, And Reproduction

Here’s a scenario that’s not too difficult to imagine: a drug company has recently released a paper claiming that a product they produce is both safe and effective. It would be foolish of any company with such a product to release a report saying their drugs were in any way harmful or defective, as doing so would likely lead to a reduction in sales and, potentially, a banning or withdrawal of the drugs from the wider market. Now, one day, an outside researcher claims to have some data suggesting that the drug company’s interpretation of their data isn’t quite right; once a few other data points are considered, it becomes clear that the drug is only contextually effective and, in other cases, not really effective at all. Naturally, were some representatives of the drug company asked about the quality of this new data, one might expect them to engage in a bit of motivated reasoning: concerns might be raised about the quality of the new research that would not have been raised were its conclusions different. In fact, the drug company would likely wish to see the new research written up to be more supportive of their initial conclusion that the drug works. Because of their conflict of interest, however, expecting those representatives to give an unbiased appraisal of research suggesting the drug is much less effective than previously stated would be unrealistic. For this reason, you probably shouldn’t ask representatives from the drug company to serve as reviewers for the new research, as they’d be assessing both the quality of their own work and the quality of the work of others with factors like ‘money’ and ‘prestige’ on the table.

“It must work, as it’s been successfully making us money; a lot of money”

On an entirely unrelated note, I was the lucky recipient of a few comments about some work of mine concerning inequality aversion: the idea that people dislike inequality per se (or at least when they get the short end of the inequality stick) and are willing to actually punish it. Specifically, I happen to have some data suggesting that people do not punish inequality per se: they are much more interested in punishing losses, with inequality only playing a secondary role in – occasionally – increasing the frequency of that punishment. To place this in an easy example, let’s consider TVs. If someone broke into your house and destroyed your TV, you would likely want to see the perpetrator punished, regardless of whether they were richer or poorer than you. Similarly, if someone went out and bought themselves a TV (without having any effect on yours), you wouldn’t really have any urge to punish them at all, whether they were poorer or richer than you. If, however, someone broke into your house and took your TV for themselves, you would likely want to see them punished for their actions; if the thief happened to be poorer than you, though, that might incline you to go after them a bit less. This example isn’t perfect, but it basically describes what I found.

Inequality aversion would posit that people show a different pattern of punitive sentiments: that you would want to punish people who end up better off than you, regardless of how they got that way. This means that you’d want to punish the guy who bought the TV for himself if it meant he ended up better off than you, even though he had no effect on your well-being. Alternatively, you wouldn’t be particularly inclined to punish the person who stole/broke your TV either unless they subsequently ended up better off than you. If they were poorer than you to begin with and were still poorer than you after stealing/destroying the TV, you ought not to be particularly interested in seeing them punished.

In case that wasn’t clear, the argument being put forth is that how well you are doing relative to others ought to be used as an input for punishment decisions to a greater extent – a far greater one – than absolute losses or gains are.

Now there’s a lot to say about that argument. The first thing to say is that, empirically, it is not supported by the data I just mentioned. If people were interested in punishing inequality itself, they ought to be willing to punish that inequality regardless of how it came about; stealing a TV, buying a TV, or breaking a TV should be expected to prompt very similar punishment responses. They don’t: punishment is almost entirely absent when people create inequality by benefiting themselves at no cost to others. By contrast, punishment is rather common when costs are inflicted on someone, whether those costs involve taking (where one party benefits while the other suffers) or destruction (where one party suffers a loss at no benefit to anyone else). On those grounds alone we can conclude that something is off about the inequality aversion argument: the theory does not match the data. Thankfully – for me, anyway – there are also many good theoretical justifications for rejecting inequality aversion.
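To make the contrast concrete, here is a minimal sketch in Python of the two accounts’ diverging predictions, using the TV scenarios from above. The scenario labels and the “punish”/“don’t” outputs are qualitative illustrations of each account’s logic, not data from the study.

```python
# A toy contrast of the two accounts, using the TV scenarios above.
# Each scenario records whether you suffered a loss and whether the
# actor ends up better off than you. Labels are illustrative, not data.
scenarios = [
    # (description, you suffered a loss, actor ends up better off than you)
    ("richer neighbor steals your TV", True,  True),
    ("poorer thief steals your TV",    True,  False),
    ("richer neighbor breaks your TV", True,  True),
    ("neighbor buys a nicer TV",       False, True),
]

for name, you_lost, actor_better_off in scenarios:
    loss_account = "punish" if you_lost else "don't"
    inequality_account = "punish" if actor_better_off else "don't"
    print(f"{name:31} loss-based: {loss_account:7} "
          f"inequality-based: {inequality_account}")
```

The rows where the two columns disagree – the poorer thief and the self-benefiting buyer – are exactly the cases where the data described above side with the loss-based account.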

“It’s a great home in a good neighborhood; pay no mind to the foundation”

The next thing to say about the inequality argument is that, in one regard, it is true: relative reproduction rates determine how quickly the genes underlying an adaptation spread – or fail to spread – throughout the population. As resources are not unlimited, a gene that reproduces itself 1.1 times for each time an alternative variant reproduces itself once will eventually replace the other in the population entirely, assuming that the reproductive rates stay constant. It’s not enough for genes to reproduce themselves, then; they need to reproduce themselves more frequently than their competitors if they metaphorically hope to stick around in the population over time. That this much is true might lure people into accepting the rest of the line of reasoning, though to do so would be a mistake for a few reasons.

Notable among these reasons is that “relative reproductive advantage” does not have three modes of “equal”, “better”, or “worse”. Instead, relative advantage is a matter of degree: a gene that reproduces itself twice as frequently as other variants is doing better than a gene that does so with 1.5 times the frequency; a gene that reproduces itself three times as frequently will do better still, and so on. As relative reproductive advantages can be large or small, we ought to expect mechanisms that generate larger relative reproductive advantages to be favored over those which generate smaller ones. On that point, it’s worth bearing in mind that the degree of relative reproductive advantage is an abstract quantity composed of absolute differences between variants. This is the same point as noting that, even if the average woman in the US has 2.2 children, no woman actually has two-tenths of a child lying around; they only come in whole numbers. That means, of course, that evolution (metaphorically) must care about absolute advantages to precisely the same degree it cares about relative ones, as maximizing a relative reproductive rate is the same thing as maximizing an absolute reproductive rate.
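That compounding is easy to see in a minimal simulation. Here is a sketch in Python; the 1.1 advantage comes from the example above, while the starting frequency and the bare-bones replicator model are assumptions chosen for illustration.

```python
# A minimal sketch: how a gene variant with a 1.1x relative
# reproductive advantage spreads toward fixation over generations.
# The 1.1 figure comes from the example above; the starting
# frequency and population model are illustrative assumptions.

def variant_frequency(p0=0.01, relative_fitness=1.1, generations=200):
    """Track the frequency of a variant under simple replicator dynamics."""
    p = p0
    history = [p]
    for _ in range(generations):
        # Mean fitness: variant carriers reproduce 1.1 times for every
        # 1.0 reproduction of the alternative variant.
        mean_fitness = p * relative_fitness + (1 - p) * 1.0
        p = p * relative_fitness / mean_fitness
        history.append(p)
    return history

freqs = variant_frequency()
for gen in (0, 50, 100, 150, 200):
    print(f"generation {gen:3d}: frequency = {freqs[gen]:.3f}")
# Starting at 1%, the variant passes 50% around generation 50 and is
# effectively fixed by generation 100.
```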

The question remains, however, as to what kind of cognitive adaptations would arise from that state of affairs. On the one hand, we might expect adaptations that primarily monitor one’s own state of affairs and make decisions based on those calculations. For instance, if a male with two mates has an option to pursue a third and the expected fitness benefits of doing so outweigh the expected costs, then the male in question would likely pursue the opportunity. On the other hand, we might follow the inequality aversion line of thought and say that the primary driver of the decision to pursue this additional mate should be how well the male in question is doing, relative to his competitors. If most (or should it be all?) of his competitors currently have fewer than two mates, then the cognitive mechanisms underlying his decision should generate a “don’t pursue” output, even if the expected fitness costs are smaller than the benefits. It’s hard to imagine how this latter strategy is expected to do better (much less far better) than the former, especially in light of the fact that calculating how everyone else is doing is more costly and error-prone than calculating how you are doing. It’s similarly hard to imagine how the latter strategy would do better if the state of the world changes: after all, just because someone is not currently doing as well as you, it does not mean they won’t eventually be. If you miss an opportunity to be doing better today, you may end up being relatively disadvantaged in the long run.
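To make the contrast between the two candidate mechanisms concrete, here is a toy sketch in Python; all the payoff and mate numbers are assumptions chosen purely for illustration.

```python
# A toy contrast of the two decision rules described above. All
# numbers are illustrative assumptions, not estimates from any data.

def absolute_rule(expected_benefit, expected_cost):
    """Pursue an opportunity whenever expected benefits exceed costs."""
    return expected_benefit > expected_cost

def relative_rule(expected_benefit, expected_cost, own_mates, rival_mates):
    """Pursue only when not already ahead of one's rivals, as the
    inequality-aversion logic would seem to require."""
    ahead_of_rivals = own_mates > max(rival_mates)
    return (expected_benefit > expected_cost) and not ahead_of_rivals

# A male with two mates weighs pursuing a third; his rivals have fewer.
benefit, cost = 10.0, 4.0            # expected fitness units (assumed)
print(absolute_rule(benefit, cost))                # True: net +6
print(relative_rule(benefit, cost, 2, [1, 1, 0]))  # False: forgoes the +6
```

Run repeatedly over a lifetime, the second rule systematically leaves fitness on the table whenever its bearer happens to be ahead, which is the core of the problem described above.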

“I do see her more than the guy she’s cheating on me with, so I’ll let it slide…”

I’m having a hard time seeing how a mechanism that operates on an expected fitness cost/benefit analysis would get out-competed by a more cognitively-demanding strategy that either ignores such a cost/benefit analysis or takes it and adds something irrelevant into the calculations (e.g., “get that extra benefit, but only so long as other people are currently doing better than you”). As I mentioned initially, the data show that the absolute cost/benefit pattern predominates: people do not punish others primarily on the basis of whether they’re doing better than them or not; they primarily punish on the basis of whether they experienced losses. Nevertheless, inequality does play a secondary role – sometimes – in the decision regarding whether to punish someone for taking from you. I happen to think I have an explanation as to why that’s the case but, as I’ve also been informed by another helpful comment (which might or might not be related to the first one), speculating about such things is a bit on the taboo side and should be avoided. Unless one is speculating that inequality, and not losses, primarily drives punishment, that is.

No Such Thing As A Free Evolutionary Lunch

Perceiving the world does not typically strike people as a particularly demanding task. All you need to do is open your eyes to see, put something in your mouth to taste, run your hand along an object to feel, and hearing requires less effort still. Perhaps somewhat less appreciated, but similar in spirit, is the ease with which other kinds of social perceptions are generated, such as perceiving a moral dimension and intentions in the behavior of others. Unless the cognitive mechanisms underlying such perceptions are damaged, all this perceiving feels as if it takes place simply, easily, and automatically. It would be strange for someone to return home from a long day of work and complain that their ears can’t possibly listen to anything else, as they’re too worn out (quite a different complaint from not wanting to listen to someone’s particular speech about their day). Indeed, we ought to expect such processes to work quite efficiently and quickly, owing to the historical adaptive value of generating such perceptions. Being able to see and hear, as well as read the minds of others, turns out to be pretty important when it comes to the day-to-day business of survival and reproduction. If one were unable to accomplish such tasks quickly and automatically, they would frequently find themselves suffering costs they could have avoided.

“Nothing to it; I can easily perceive the world all day”

That these tasks might feel easy – in fact, perception often doesn’t feel like anything at all – does not mean they are actually easy, either computationally or, importantly, metabolically. Growing, maintaining, and running the appropriate cognitive and physiological mechanisms for generating perception is not free for a body to do. Accordingly, we ought to expect that these perceptual mechanisms are only maintained in the population to the extent that they are continuously useful for doing adaptive things. Now for us, the value of hearing or seeing in our environment is unlikely to change, and so these mechanisms are maintained in the population. However, that status quo is not always maintained in different species or across time. One example of when this is not the case – one I used in my undergraduate evolutionary psychology course – involves cave-dwelling organisms; specifically, organisms which did not always live in caves exclusively, but came to reside there over time.

What’s notable about these underground caves is that light does not regularly reach the creatures that live there. Without any light, the physiological mechanisms designed to process such information – specifically, the eyes – no longer grant an adaptive benefit to the cave-dwellers. Similarly, the neural tissue required for processing this visual information would not provide any advantage to the bearer either. When the adaptive value of vision is removed, the value of growing the eyes and associated brain regions is compromised and, as a result, many cave-dwelling species either fail to develop eyes altogether, or develop reduced, non-functional ones. Similarly, if there’s no light in the environment, other organisms cannot see you, which is why many of these cave dwellers have lost any skin pigmentation as well. (In a parallel fashion, people tend to lose track of their grooming and dressing habits when they know they aren’t going to leave the house. Now just imagine you would never leave the house again…)

Some recent research attempted to estimate the metabolic costs avoided by cave-dwelling fish which fail to develop functioning eyes and possess a reduced optic tectum (the brain region associated with vision in the surface-dwelling varieties). To do so, researchers removed the brains from surface- and cave-dwelling (Pachón) varieties of Mexican fish and placed them in individual respirometry chambers. The oxygenated fluid that filled these chambers was replaced every 10 minutes, allowing measurements to be taken of how much oxygen was consumed by each brain over time. The floating brain and eyes of the surface fish consumed about 23% of the fish’s estimated resting metabolism (for smaller fish; for larger fish, this percentage was closer to 10%). By contrast, the eyeless brains of the cave fish only consumed about 10% of their metabolism (again, for the smaller fish; larger fish used about 5%). Breaking the numbers down for an estimate of vision specifically, the cost of vision mechanisms was estimated to be about 5-15% of the resting metabolism in the surface fish. The cost of vision, it would seem, is fairly substantial.
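As a quick back-of-the-envelope reading of those reported figures (the subtraction below is my own illustration, not the authors’ actual estimation procedure):

```python
# Back-of-the-envelope check on the figures reported above; the
# percentages are approximate values taken from the text, and this
# simple subtraction is not the authors' actual estimation method.
surface = {"small": 0.23, "large": 0.10}  # brain + eyes, fraction of resting metabolism
cave    = {"small": 0.10, "large": 0.05}  # eyeless brains

for size in ("small", "large"):
    vision_cost = surface[size] - cave[size]
    print(f"{size} fish: vision costs roughly {vision_cost:.0%} of resting metabolism")
# small fish: ~13%; large fish: ~5% -- consistent with the 5-15% range above
```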

Much better; just think of the long-term savings!

It is also worth noting that the other organs (hearts, digestive systems, and gonads) of the fish did not tend to differ between surface- and cave-dwelling varieties, suggesting that the selective pressure against vision was rather specific, as one should expect given the domain-specific nature of adaptive problems: just because you don’t have to see, it doesn’t mean you don’t have to circulate blood, eat, and mate. One lesson to take from the current results, then, is to appreciate that adaptive problems are rather specific, instead of being more general. Organisms don’t need to just “do reproductively useful things”, as such a problem space is too under-specified to result in any kind of useful adaptations. Instead, organisms need to do a variety of specific things, like avoid predators, locate food, and remove rivals (and each of those larger classes of problems is made up of very many sub-problems).

The second, and larger, important point to draw out from this research is that the features of an organism – from physiological to cognitive – are not free to develop or use. While perceptions like vision, taste, morality, theory of mind, and so on might feel as if they come to us effortlessly, they certainly do not come free. Vision might not feel like lifting weights or going on a run, but the systems required to make it happen need fuel all the same; quite a lot of it, in fact, if the current results are any indication. The implication of this idea is that we are not allowed to take perceptions, or other psychological functioning, for granted; not if we want to understand them, that is. It’s not enough to say that such feelings or perceptions are “natural” or, in some sense, the default. There need to be reproductively-relevant benefits that justify the existence of any cognitive mechanism. Even a relatively-minor but consistent drain on metabolic resources can become appreciable when considered over the span of an organism’s life.

To apply this thinking to a topic I’ve written about recently, we could briefly consider stereotypes. There are many psychologists who – and I am glossing this issue broadly – believe that the human mind contains mechanisms for generating beliefs about other groups which end up being, in general, very wrong. A mechanism which uses metabolic resources to generate beliefs that do not correspond well to reality would be a strange find indeed; kind of like a visual mechanism in the surface fish that does not actually result in the ability to navigate the world successfully. When it comes to the related notion of stereotype threat, there are researchers positing the existence of a cognitive mechanism that generates anxiety in response to the existence of stereotypes, resulting in its bearer performing worse at socially-important tasks. Now you have a metabolically costly cognitive mechanism which seems to be handicapping its host. These would be strange mechanisms to posit the existence of when one is not making (or testing) claims about how and why they might compensate their bearer in other, important ways. It is when you stop taking the existence of cognitive functioning for granted and need to justify it that new, better research questions and clearer thinking about the matter begin to emerge.

References: Moran, D., Softley, R., & Warrant, E. (2015). The energetic cost of vision and the evolution of eyeless Mexican cavefish. Science Advances, 1, e1500363. DOI: 10.1126/sciadv.1500363

Tilting At The Windmills Of Stereotype Threat

If I had the power to reach inside your mind and affect your behavior, this would be quite the adaptive skill for me. Imagine being able to effortlessly make your direct competitors less effective than you, make those who you find appealing more interested in associating with you, and, perhaps, even reach inside your own mind, improving your performance to levels you couldn’t previously reach. While it would be good for me to possess these powers, it would be decidedly worse for other people if I did. Why? Simply put, because my adaptive best interests and theirs do not overlap 100%. Improving my standing in the evolutionary race will often come at their expense, and being able to manipulate them effectively would do just that. This means that they would be better off if they possessed the capacity to resist my fictitious mind-control powers. To bring this idea back down to reality, we could consider the relationship between parasites and hosts: parasites often make their living at their host’s expense, and the hosts, in turn, evolve defense mechanisms – like immune systems – to fight off the parasites.

Now with 10% more Autism!

This might seem rather straightforward: avoiding manipulative exploitation is a valuable skill. However, the same kind of magical thinking present in the preceding paragraph seems to be present in psychological research from time to time; the line of reasoning that goes, “people have this ability to reach into the minds of others and change their behavior to suit their own ends”. Admittedly, the reasoning is a lot more subtle and requires some digging to pick up on, as very few psychologists would ever say that humans possess such magical powers (with Daryl Bem being one notable exception). Instead, the line of thinking seems to go something like this: if I hold certain beliefs about you, you will begin to conform to those beliefs; indeed, even if such beliefs merely exist in your culture more generally, you will bend your behavior to meet them. If I happen to believe you’re smart, for example, you will become smarter; if I happen to believe you are a warm, friendly person, you will become warmer. This, of course, is expected to work in the opposite direction as well: if I believe you’re stupid, you will subsequently get dumber; if I believe you’re hostile, you will in turn become more hostile. This is a bit of an oversimplification, perhaps, but it captures the heart of these ideas well.

The problem with this line of thinking is precisely the same as the problem I outlined initially: there is a less than perfect (often far less than perfect) overlap between the reproductive best interests of the believers and the targets. If I allowed your beliefs about me to influence my behavior, I could be pushed and pulled in all sorts of directions I would rather not go in. Those who would rather not see me succeed could believe that I will fail, which would, generally, have negative implications for my future prospects (unless, of course, other people could fight that belief by believing I would succeed, leading to an exciting psychic battle). It would be better for me if I ignored their beliefs and simply proceeded forward on my own. In light of that, it would be rather strange to expect that humans possess cognitive mechanisms which use the beliefs of others as inputs for deciding our own behavior in a conformist fashion. Not only are the beliefs of others hard to accurately assess directly, but conforming to them is not always a wise idea even if they’re inferred correctly.

This hasn’t stopped some psychologists from suggesting that we do basically that, however. One such line of research that I wanted to discuss today is known as “stereotype threat”. Pulling a quick definition from reducingstereotypethreat.org: “Stereotype threat refers to being at risk of confirming, as self-characteristic, a negative stereotype about one’s group”. From the numerous examples they list, a typical research paradigm involves some variant of the following: (1) get two groups together that happen to differ with respect to cultural stereotypes about who will do well on some test, (2) give them that test, and (3) make salient their group membership in some way. The expected result is that the group on the negative end of the stereotype will perform worse when they’re aware of their group membership. To turn that into an easy example, men are believed to be better at math than women, so if you remind women about their gender prior to a math test, they ought to do worse than women not so reminded. The stereotype of women doing poorly at math actually makes women perform worse.

The psychological equivalent of getting Nancy Kerrigan’d

In the interests of understanding more about stereotype threat – specifically, its developmental trajectory with regard to how children of different ages might be vulnerable to it – Ganley et al (2013) ran three stereotype threat experiments with 931 male and female students, ranging from 4th to 12th grade. In their introduction, Ganley et al (2013) noted that some researchers regularly talk about the conditions under which stereotype threat is likely to have its negative impact: perhaps on hard questions, relative to easy ones; on math-identified girls but not non-identified ones; on girls in mixed-sex groups but not single-sex ones, and so on. While some psychological phenomena are indeed contextually specific, one could also view all that talk of the rather specific contexts required for stereotype threat to obtain as a post-hoc justification for some sketchy data analysis (didn’t find the result you wanted? Try breaking the data into different groups until you do find it). Nevertheless, Ganley et al (2013) set up their experiments with these ideas in mind, doing their best to find the effect: they selected high-performing boys and girls who scored above the mid-point of math identification, used evaluative testing scenarios, and used difficult math questions.

Ganley et al (2013) even used some rather explicit stereotype threat inductions: rather than just asking students to check off their gender (or not do so), their stereotype-threat conditions often outright told the participants who were about to take the test that boys outperform girls. It doesn’t get much more threatening than that. Their first study had 212 middle school students who were told either that boys showed more brain activation associated with math ability and, accordingly, performed better than girls, or that both sexes performed equally well. In this first experiment, there was no effect of condition: the girls who were told that boys do better on math tests did not under-perform, relative to the girls who were told that both sexes do equally well. In fact, the data went in the opposite direction, with girls in the stereotype threat condition performing slightly, though not significantly, better. Their next experiment had 224 seventh-graders and 117 eighth-graders. In this study’s stereotype threat condition, students were asked to indicate their gender before beginning the test because, they were told, boys tended to outperform girls on these measures (this wasn’t mentioned in the control condition). Again, the results found no stereotype threat at either grade and, again, the data went in the opposite direction, with the stereotype threat groups performing better.

Finally, their third study contained 68 fourth-graders, 105 eighth-graders, and 145 twelfth-graders. In this study’s stereotype threat condition, students first solved an easy math problem concerning many more boys than girls being on the math team before taking their test (the control condition’s problem did not contain the sex manipulation). The researchers also tried to make the test seem more evaluative in the stereotype threat condition (referring to it as a “test”, rather than “some problems”). Yet again, no stereotype threat effects emerged at any grade level, with two of the three means going in the wrong direction. No matter how they sliced it, no stereotype threat effects fell out; the data weren’t even consistently in the direction of stereotype threat being a negative thing. Ganley et al (2013) took their analysis a little further in the discussion section, noting that published studies of such effects found some significant effect 80% of the time. However, these effects were also reported among other, non-significant findings; in other words, they were likely found after cutting the data up in different ways. By contrast, the three unpublished dissertations on stereotype threat all found nothing, suggesting that both data cheating and publication bias were probably at work in the literature (and they’re not the only ones).

“Gone fishing for P-values”

The current findings appear to build upon the trend of the frequently non-replicable nature of psychological research. More importantly, however, the type of thinking that inspired this research doesn’t seem to make much sense in the first place, though that part doesn’t seem to get discussed at all. There are good reasons not to let the beliefs of others affect your performance; an argument needs to be made as to why we would be sensitive to such things – especially when they’re hypothesized to make us worse – and that argument isn’t present. To make that point crystal clear, try to apply stereotype threat thinking to any non-human species and see how plausible it sounds. By contrast, a real theory, like kin selection, applies with just as much force to humans as it does to other mammals, birds, insects, and even single-celled organisms. If there’s no solid (and plausible) adaptive reasoning in which one grounds their work – as there isn’t with stereotype threat – it should come as no surprise that effects flicker in and out of existence.

References: Ganley, C., Mingle, L., Ryan, A., Ryan, K., Vasilyeva, M., & Perry, M. (2013). An examination of stereotype threat effects on girls’ mathematical performance. Developmental Psychology, 49, 1886-1897.

Why Do We Torture Ourselves With Spicy Foods?

As I write this, my mouth is currently a bit aflame, owing to a side of beans which had been spiced with a hot pepper (serrano, to be precise). Across the world (and across YouTube), people partake in the consumption of spicy – and spiced – foods. On the surface, this behavior seems rather strange, owing to the pain and other unpleasant feelings induced by such foods. To get a real quick picture of how unpleasant these food additives can be, you could always try to eat a whole raw onion or spicy pepper, though just imagining the experience is likely enough (just in case it isn’t, YouTube will again be helpful). While this taste for spices might be taken for granted – it just seems normal that some people like different amounts of spicy foods – it warrants a deeper analysis to understand this ostensibly strange taste. Why do people love/hate the experience of eating spicy foods?

Word of caution: don’t touch your genitals afterwards. Trust me.

Food preferences do not just exist in a vacuum; the cognitive mechanisms which generate such preferences need to have evolved owing to some adaptive benefits inherent in seeking out or avoiding certain potential food sources. Some of these preferences are easier to understand than others: for example, our taste for certain foods we perceive as sweet – sugars – likely owes its existence to the high caloric density that such foods historically provided us (which used to be quite valuable when they were relatively rare. As they exist in much higher concentrations in the first world – largely due to our preferences leading us to cultivate and refine them – these benefits can now dip over into costs associated with overconsumption and obesity). By contrast, our aversion to foods which appear spoiled or rotten helps us avoid potentially harmful pathogens which might reside in them; pathogens which we would rather not purposefully introduce into our bodies. Similar arguments can be made for avoiding foods which contain toxic compounds and taste correspondingly unpleasant. When such toxins are introduced into our bodies, the typical physiological response is nausea and vomiting; behaviors which help remove the offending material as best we can.

So where do spicy foods fall with respect to what costs they avoid or benefits they provide? As many such foods do indeed taste unpleasant, it is unlikely that they are providing us with direct nutritional benefits the way that more pleasant-tasting foods do. That is to say, we don’t like spicy foods because they are rich sources of calories or vital nutrients. Indeed, the spiciness associated with such foods represents chemical weaponry evolved on the part of the plants. As it turns out, these plants have their own set of adaptive best interests, which often include not being eaten at certain times or by certain species. Accordingly, they have evolved chemical weapons that dissuade would-be predators from chowing down (this is the reason that selectively breeding plants for natural insect resistance ends up making them more toxic for humans to eat as well; just because pesticides aren’t being used, that doesn’t mean you’re avoiding toxic compounds). Provided this analysis is correct, then, the natural question arises of why people would have a taste for plants that possess certain types and amounts of chemical weaponry designed to prevent their being eaten. On a hedonic level, growing crops of jalapenos seems as peculiar as growing a crop of edible razor blades.

The most likely answer to this mystery comes from understanding what these chemical weapons do, not to humans, but to the pathogens that tend to accompany our foods. If these chemical weapons are damaging to our bodies – as evidenced by the painful or unpleasant tastes that accompany them – it stands to reason they are also damaging to some of the pathogens which might reside in our food. Provided our bodies are better able to withstand certain doses of these harmful chemicals, relative to the microbes in our food, then eating spicy foods could represent a trade-off between killing food-borne pathogens and the risk of poisoning ourselves. Provided the harm done to our bodies by the chemicals is less than the expected damage done by the pathogens, a certain perverse taste for spicy foods could evolve.

As before, you should still be wary of genital contact with such perverse tastes

A healthy amount of empirical evidence from around the world is consistent with such an adaptive hypothesis. One of the most extensive data sets focuses on recipes found in 93 traditional cookbooks from 36 different countries across the world (Sherman & Billing, 1999). The recipes in these cookbooks were examined for which of 43 spices were added to meat dishes. Of the approximately 4,500 different meat dishes present in these books, the average number of spices called for by the recipes was 4, with 93% of recipes calling for at least one. Importantly, the distribution of these spices was anything but random: recipes coming from warmer climates tended to call for a much greater use of spices. The probable reason this finding emerged is that, in warmer climates, food – especially meat – which would have been unrefrigerated for most of human history (alien as that idea sounds currently) tends to spoil more quickly, relative to cooler climates. Accordingly, as the degree and speed of spoilage increased in warmer climates, a greater use of antimicrobial spices could be introduced to dishes to help combat food-borne illness. To use one of their examples, the typical Norwegian recipe called for 1.6 spices per dish and the recipes only mentioned 10 different spices; in Hungary, the average number of spices per dish was 3, and up to 21 different spices were referenced. It is not too far-fetched to go one step further and suggest that people indigenous to such regions might also have evolved slightly different tolerances for spices in their meals.
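For readers who like to see the shape of such an analysis, here is a minimal sketch in Python of the kind of country-level comparison described above. The file name and column names are my assumptions for illustration; this is not Sherman & Billing’s actual code or data.

```python
# A sketch of the country-level analysis described above: do meat
# recipes from hotter climates call for more spices? The file and
# column names are assumptions for illustration, not the real data.
import pandas as pd

recipes = pd.read_csv("meat_recipes.csv")  # assumed file; one row per recipe
# Assumed columns: country, n_spices, mean_annual_temp_c

by_country = recipes.groupby("country").agg(
    spices_per_dish=("n_spices", "mean"),
    temp=("mean_annual_temp_c", "first"),
)
# A positive correlation here would match the reported pattern.
print(by_country["spices_per_dish"].corr(by_country["temp"]))
```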

Even more interestingly, those spices with the strongest anti-microbial effects (such as garlic and onions) also tended to be the ones used more often in warmer climates, relative to cooler ones. Among the spices which had weaker effects, the correlation between temperature and spice use ceased to exist. Nevertheless, the most inhibitory spices were also the ones that people tended to use most regularly across the globe. Further, the authors also discuss the trade-off between balancing the fighting of pathogens against the possible toxicity of such spices when consumed in large quantities. A very interesting point bearing on that matter concerns the dietary preferences of pregnant women. While an adult female’s body might be able to tolerate the toxicity inherent in such compounds fairly well, the developing fetus might be poorly equipped for the task. Accordingly, women in their first trimester tend to show a shift in food preferences towards avoiding a variety of spices, just as they also tend to avoid meat dishes. This shift in taste preferences could well reflect the new variable of the fetus being introduced to the usual cost/benefit analysis of adding spices to foods.

An interesting question related to this analysis was also posed by Sherman & Billing (1999): do carnivorous animals ingest similar kinds of spices? After all, if these chemical compounds are effective at fighting food-borne pathogens, carnivores – especially scavengers – might have an interest in using such dietary tricks as well (provided they did not stumble upon a different adaptive solution). While animals do not appear to spice their foods the way humans do, the authors do note that vegetation makes up a small portion of many carnivores’ diets. Having owned cats my whole life, I confess I have always found their behavior of eating the grass outside to be quite a bit odd: not only does the grass not seem to be a major part of a cat’s diet, but it often seems to make them vomit with some regularity. While they present no data bearing on this point, Sherman & Billing (1999) do float the possibility that supplementing their diet with vegetation might be a variant of that same kind of spicing behavior: carnivores eat vegetation not necessarily for its nutritional value, but rather for its possible antimicrobial benefits. It’s certainly an idea worth examining further, though I know of no research at present to have tackled the matter. (As a follow-up, it seems that ants engage in this kind of behavior as well.)

It’s a point I’ll bear in mind next time she’s vomiting outside my window.

I find this kind of analysis fascinating, frankly, and would like to take this moment to mention that such ideas would be quite unlikely to have been stumbled upon without the use of evolutionary theory as a guide. The typical explanation you might get when asking people why we spice food would sound something like “because we like the taste the spice adds”; a response as uninformative as it is incorrect, which is to say “mostly” (and if you don’t believe that last part, go ahead and enjoy your mouthfuls of raw onion and garlic). The proximate taste explanation would fail to predict the regional differences in spice use, the aversion to eating large quantities of them (though this is a comparative “large”, as a slice of jalapeno can be more than some people can handle), and the maternal data concerning aversions to spices during critical fetal developmental windows. Taste preferences – like any psychological preferences – are things which require deeper explanations. There’s a big difference between knowing that people tend to add spices to food and knowing why people tend to do so. I would think that findings like these would help psychology researchers understand the importance of adaptive thinking. At the very least, I hope they serve as food for thought.

References: Sherman, P. & Billing, J. (1999). Darwinian gastronomy: Why we use spices. BioScience, 49, 453–463.

Examining The Performance-Gender Link In Video Games

Like many people around my age or younger, I’m a big fan of video games. I’ve been interested in these kinds of games for as long as I can remember, and they’ve been the most consistent form of entertainment in my life, often winning out over the company of other people and, occasionally, food. As I – or pretty much anyone who has spent time within the gaming community – can attest, the experience of playing these games with others can frequently lead to, shall we say, less-than-pleasant interactions with those who are upset by losses. Whether one is derided for poor performance, good performance, good luck, or tactics of choice, negative comments are a frequent occurrence in the competitive online gaming environment. There are some people, however, who believe that simply being a woman in such environments yields a negative reception from a predominately-male community. Indeed, some evidence consistent with this possibility was recently published by Kasumovic & Kuznekoff (2015) but, as you will soon see, the picture of hostile behavior towards women that emerges is much more nuanced than it is often credited as being.

Aggression, video games, and gender relations; what more could you want to read about?

As an aside, it is worth mentioning that some topics – sexism being among them – tend to evade clear thinking because people have some kind of vested social interest in what they have to say about the association value of particular groups. If, for instance, people who play video games are perceived negatively, I would likely suffer socially by extension, since I enjoy video games myself (so there’s my bias). Accordingly, people might report or interpret evidence in ways that aren’t quite accurate so as to paint certain pictures. This issue seems to rear its head in the current paper on more than one occasion. For example, one claim made by Kasumovic & Kuznekoff (2015) is that “…men and women are equally likely to play competitive video games”. The citation for this claim is listed as “Essential facts about the computer and video game industry (2014)”. However, in that document, the word “competitive” does not appear at all, let alone a gender breakdown of competitive game play. Confusingly, the authors subsequently claim that competitive games are frequently dominated by male players, directly contradicting the former idea. Another claim made by Kasumovic & Kuznekoff (2015) is that women are “more often depicted as damsels in distress”, though the paper they cite in support of that claim does not appear to contain any breakdown of women’s actual representation as video game characters, instead measuring people’s perceptions of women’s representation. While such a claim may indeed be true – women may be depicted as in need of rescue more often than they’re depicted in other roles and/or relative to men’s depictions – it’s worth noting that the citation does not contain the data they imply it does.

Despite these inaccuracies, Kasumovic & Kuznekoff (2015) take a step in the right direction by considering how the reproductive benefits of competition have shaped male and female psychologies when approaching the women-in-competitive-video-games question. For men, one’s place in a dominance hierarchy was quite relevant for determining eventual reproductive success, leading to more overt strategies of social hierarchy navigation. These overt strategies include the development of larger, more muscular upper bodies in men, suited for direct physical contests. By contrast, women’s reproductive fitness was often less affected by their status within the social hierarchy, especially with respect to direct physical competition. As men and women begin to compete in the same venues – venues where differences in physical strength no longer determine the winner, as is the case in online video games – this could lead to some unpleasant situations for the particular men who have the most to lose by having their status threatened by female competition.

In the interests of being more explicit about why female involvement in typically male-style competitions might be a problem for some men, let’s employ some Bayesian reasoning. In terms of physical contests, larger men tend to dominate smaller ones; this is why most fighting sports are separated into different classes based on the weight of the combatants. So what are we to infer when a smaller fighter consistently beats a larger one? Though these aren’t mutually exclusive, we could infer either that the smaller fighter is very skilled or that the larger fighter is particularly unskilled. Indeed, if the larger fighter is losing both to people of his own weight class and of a weight class below him, the latter interpretation becomes more likely. It doesn’t take much of a jump to replace size with sex in this example: because men tend to be stronger than women, our Bayesian priors should lead us to expect that men will win in direct physical competition over women, on average. A man who performs poorly against both men and women in physical competition is going to suffer a major blow to his social status and reputation as a fighter.
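To show that Bayesian logic at work, here is a toy update in Python; all the prior and likelihood numbers are assumptions chosen purely for illustration.

```python
# A toy version of the Bayesian inference described above.
# All priors and likelihoods are assumed numbers for illustration.

# Hypotheses about a larger fighter who keeps losing to smaller ones:
prior = {"typical": 0.8, "unskilled": 0.2}

# Assumed probability of losing to a smaller opponent under each hypothesis.
p_loss = {"typical": 0.2, "unskilled": 0.7}

def update(prior, likelihood):
    """One round of Bayes' rule: P(H | loss) ∝ P(loss | H) * P(H)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

posterior = dict(prior)
for match in range(3):  # three consecutive losses to smaller fighters
    posterior = update(posterior, p_loss)
    print(f"after loss {match + 1}: P(unskilled) = {posterior['unskilled']:.2f}")
# Each loss to a smaller opponent makes "particularly unskilled" the
# increasingly probable explanation (roughly 0.47, 0.75, then 0.92).
```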

It’ll be embarrassing for him to see that replayed five times from three angles.

While winning in competitive video games does not rely on physical strength, a similar type of logic applies there as well: if men tend to be the ones overwhelmingly dominating a video game in terms of their performance, then a man who performs poorly has the most to lose from women becoming involved in the game, as he now might compare poorly both to the standard reference group and to the disfavored minority group. By contrast, men who are high performers in these games would not be bothered by women joining in, as they aren’t terribly concerned about losing to them and having their status threatened. This yields some interesting predictions about what kind of men are going to become hostile towards women. By comparison, other social and lay theories (which are often hard to separate) do not tend to yield such predictions, instead suggesting that both high- and low-performing men might be hostile towards women in order to remove them from a type of male-only space; what one might consider a more general sexist discrimination.

To test these hypotheses, Kasumovic & Kuznekoff (2015) reported on some data collected while they were playing Halo 3, during which time all matches and conversations within the game were recorded. During these games, the authors had approximately a dozen neutral phrases prerecorded in either a male or female voice, which they would play at appropriate times in the match. These phrases served to cue the other players as to the ostensible gender of the researcher. The matches themselves were 4 vs 4 games in which the objective for each team is to kill more members of the enemy team than they kill of yours. All in-game conversations were transcribed, with two coders examining the transcripts for comments directed towards the researcher playing the game and classifying them as positive, negative, or neutral. The performance of the players making these comments was also recorded – whether the game was won or lost, the player’s overall skill level, and the number of their kills and deaths in the match – so as to get a sense for the type of player making them.

The data represented 163 games of Halo, during which 189 players directed comments towards the researcher across 102 of the games. All 189 of those commenters were male. Only the 147 commenters who were teammates of the researcher were retained for analysis. In total, then, 82 players directed comments towards the female-voiced player, whereas 65 directed comments towards the male-voiced player.

A few interesting findings emerged with respect to the gender manipulation. While I won’t mention all of them, I wanted to highlight a few. First, when the researcher used the female voice, higher-skill male players tended to direct significantly more positive comments towards them, relative to low-skill players (β = -.31); no such trend was observed for the male-voiced character. Additionally, as the difference between the female-voiced researcher and the commenting player grew larger (specifically, as the person making the comment was of progressively higher rank than the female-voiced player), the number of positive comments tended to increase. Similarly, high-skill male players tended to direct fewer negative comments towards the female-voiced researcher as well (β = -.18). Finally, in terms of their kills during the match, poorly-performing males directed more negative comments towards female-voiced characters, relative to high-performing men (β = .35); no such trend was evident for the male-voiced condition.
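To picture the kind of model that could sit behind coefficients like those, here is a minimal sketch; the file name, column names, and model specification are my assumptions for illustration, not the authors’ actual analysis code.

```python
# A minimal sketch of the kind of regression implied above: do in-game
# performance and voice condition predict comment valence? The file,
# column names, and model form are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("halo_comments.csv")  # assumed file; one row per commenter
# Assumed columns: negative_comments, positive_comments, kills, skill,
# condition ("female" or "male" voice)

model = smf.ols("negative_comments ~ kills * condition", data=df).fit()
print(model.summary())  # the interaction term asks whether the
                        # kills-negativity slope differs by voice condition
```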

“I’m bad at this game and it’s your fault people know it!”

Taken together, the results seem to point in a pretty consistent direction: low-performing men tended to be less welcoming of women in their competitive game of choice, perhaps because it highlighted their poor performance to a greater degree. By contrast, high-performing males were relatively untroubled by the ostensible presence of women, dipping over into being quite welcoming of them. After all, being good at the game might well be an attractive quality to women who also enjoy the world of Esports, and what better way to kick off a potential relationship than with a shared hobby? As a final point, it is worth noting that the truly sexist types might present a different pattern of data, relative to people who were just making positive or negative comments: only 11 of the players (out of 83 who made negative comments and 189 who made any comments) were classified as making comments considered “hostile sexism”, which did not yield a large enough sample for a proper analysis. The good news, then, seems to be that such comments are at least relatively rare.

References: Kasumovic, M. & Kuznekoff, J. (2015). Insights into sexism: Male status and performance moderates female-directed hostile and amicable behavior. PLoS One, 10: e0131613. doi:10.1371/journal.pone.0131613

Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the art of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on an engagement ring, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such outrageous behavior is, “why do people apparently waste money like this?” A second, related question that might be asked once we have an answer to the first (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is how we can understand who is most likely to spend money in a conspicuous fashion. Alternatively, this question could be framed by asking about what contexts tend to favor conspicuous consuming behavior. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders or, if you’re a bit strange, to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth instead?

The first question – why do people conspicuously consume – is perhaps the easier one to answer initially, as it has been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing the tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there were some kind of survival benefit to those long, colorful tail feathers, we would expect both sexes to develop them; not just the males.

However, it is because these feathers are costly that they are useful signals, since males in relatively poor condition could not shoulder their costs effectively. It takes a healthy, well-developed male to be able to survive and thrive in spite of carrying these trains of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues for such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to just say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.
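One stripped-down way to formalize that honesty argument (a textbook-style sketch; the benefit term B and the cost terms are assumed symbols, not anything drawn from the sources discussed here):

```latex
% Let B be the benefit of being treated as high quality, and let the
% cost of producing the display depend on underlying condition:
\[
  c_{\text{high}} < B < c_{\text{low}}
\]
% A high-condition male nets B - c_high > 0 by displaying, while a
% low-condition male would net B - c_low < 0 and is better off not
% displaying. Only when the display is costly enough to separate the
% two types does it remain an honest cue of quality.
```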

Costly displays, then, owe their existence to the honesty they impart on a signal. Human consumption patterns should be expected to follow a similar pattern: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we would need to begin by thinking about matters such as what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Towards that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups. Specifically, the paper examined racial patterns of spending on what were dubbed visible goods: objects which are conspicuous in anonymous interactions and portable, such as jewelry, clothing, and cars. These are goods designed to be luxury items which others will frequently see, relative to other, less-visible luxury items, such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al (2009), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than White ones (about 40% and 25% less, respectively), but this difference needs to be viewed with respect to each group’s relative income. After all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated through their reports of their overall spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of their relative spending on a number of different categories emerged. Specifically, it was found that Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size as well. The average White household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household would spend approximately $9,000 on such purchases. This spending has to come from somewhere, meaning that money spent on visible goods is money not spent on other categories, like education, health care, and entertainment.
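As a quick arithmetic check on how those two figures hang together (both numbers are approximations taken from the text, not the paper’s raw data):

```python
# Quick check: a 20-30% visible-spending premium applied to the
# ~$7,000 White-household baseline. Numbers are approximate values
# taken from the text above, not the paper's raw data.
white_visible = 7_000            # average annual visible-goods spending ($)
for premium in (0.20, 0.30):
    print(f"at a {premium:.0%} premium: ${white_visible * (1 + premium):,.0f}")
# at a 20% premium: $8,400
# at a 30% premium: $9,100  (close to the ~$9,000 cited above)
```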

There are some other interesting findings to mention. One – which I find rather notable, but the authors don’t seem to spend any time discussing – is that racial differences in consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible consumption gap appears to decline as one goes from single to married. The numbers Charles et al (2009) mention estimate that the average percentage of budgets used on visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that they represent both. Getting older and getting into relationships tended to reduce the racial divide in visible goods consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al (2009) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing, and are interested in signaling that standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than Whites, then people might tend to assume that a given Black person has a lower economic standing. To overcome this assumption, then, Black individuals should be particularly motivated to signal that they do not, in fact, have the lower economic standing more typical of their group. In brief: as the average income of a group drops, those members with money should be particularly inclined to signal that they are not as poor as the other people below them in their group.

In support of this idea, Charles et al (2009) further analyzed their data, finding that the average spending on visible luxury goods declined in states with higher average incomes, just as it also declined among racial groups with higher average incomes. In other words, the average income of a racial group within a state strongly predicted what percentage of that group’s consumption was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first of which being that it's incomplete as it stands. From my reading, it's a bit unclear how the explanation accounts for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one is a member of a lower-income group, one's income is higher than that group's average. However, that explanation would not account for the age/marital status information I mentioned before without adding other assumptions, nor would it directly explain the benefits which arise from signaling one's economic status in the first place. Moreover, if I'm understanding the results properly, it wouldn't directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing the aggregate wealth shouldn't have much of an impact, as "rich" and "poor" are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, commanding resources also tends to make one appear a more desirable mate. A healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your sexual preferences.

However, recognizing that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To place it in an easy example, there's a big difference between having access to no food and some food; there's less of a difference between having access to some food and good food; and less of a difference still between good food and great food. The same holds for all manner of other resources. Because the marginal value of resources decreases as overall access to them increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there's less value in signaling that one is wealthier than another if that wealth difference isn't going to amount to the same degree of marginal benefit.
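To put some simple, illustrative math on that point (my own formalization; nothing like it appears in the paper): suppose the value of wealth to a prospective ally is concave – say, u(w) = log(w). The marginal value of extra wealth is then u′(w) = 1/w, which shrinks as w grows. A $1,000 edge over one's competition adds log(11,000) − log(10,000) ≈ 0.095 units of value in a community where typical wealth is $10,000, but only log(101,000) − log(100,000) ≈ 0.01 where typical wealth is $100,000 – roughly a tenth as much. Any concave value function delivers the same qualitative prediction: the same absolute wealth gap buys a smaller signaling payoff as the baseline rises.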

So, provided that wealth has a higher marginal value in poorer communities – like Black and Hispanic ones, relative to White ones – we should expect more signaling of it in those contexts. This logic could explain the racial gap in spending patterns. It's not that people are trying to avoid a negative association with a poor reference group so much as that they're only engaging in signaling to the extent that the signal holds value to others. In other words, it's not about my signaling to avoid being thought of as poor; it's about my signaling to demonstrate that I hold a high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purposes of attracting a mate, as they already have one. They might engage in such purchases for the purposes of retaining that mate, though those purchases should involve spending money on visible items for other people, rather than for themselves. Further, as people age, they tend to compete less on the mating market for a number of reasons, such as existing children, a reduced ability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off with age, again, because the marginal value of sending such signals has presumably declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors which might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual's life history. To the extent that one is following a faster life history strategy – reproducing earlier and taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in visible consumption, as the marginal value of signaling that you have resources now is higher when the stability of those resources (or of your future) is in question. The current data does not speak to this possibility, however. Additionally, one's sexual strategy might be a valuable piece of information, given the links we saw with age and marital status. As costly ornaments of this kind are predominantly used to attract the attention of prospective mates in nonhuman species (Zahavi, 1975), it seems likely that individuals pursuing a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly; more attention matters if you're looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the "signaling to not seem as poor as others" hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2009). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.


Some Bathwater Without A Baby

When reading psychology papers, I am often left with the same dissatisfaction: the lack of any grounding theory and the resulting inability to deliver what I would consider a real explanation for the findings. While it's something I have harped on for a few years now, this dissatisfaction is hardly confined to me, as others have voiced similar concerns for at least the last two decades, and I suspect it has gone on quite a bit longer than that. A healthy amount of psychological research strikes me as empirical bathwater without a theoretical baby, in a manner of speaking; no matter how interesting that empirical bathwater might be – whether it's ignored or the flavor of the week – almost all of it will eventually be thrown out and forgotten if there's no baby there. Some new research that has crossed my desk a few times lately follows that same trend: a paper examining how individuals who were feeling powerful reacted to inequality that disadvantaged them or others. I wanted to review that paper today and help fill in the missing sections where explanations should go.

Next step: add luxury items, like skin and organs

The paper, by Sawaoka, Hughes, & Ambady (2015), contained four or five experiments – depending on how one counts a pilot study – in which participants were primed to think of themselves as powerful or not. This was achieved, as it so often is, by having the participants in each experiment write about a time they had power over another person or about a time other people had power over them, respectively. In the pilot study, about 20 participants were primed as powerful and another 20 as relatively powerless. Subsequently, they were told they would be playing a dictator game with another person, in which the other person (who was actually not a person) would serve as the dictator in charge of dividing up 10 experimental tokens between the two; tokens which, presumably, were supposed to be redeemed for some kind of material reward. Those participants who had been primed to feel more powerful expected to receive a higher average number of these tokens (M = 4.2) than those primed to feel less powerful (M = 2.2). Feeling powerful, it seemed, led participants to expect better treatment from others.

In the next experiment, participants (N = 227) were similarly primed before completing a fairness reaction task. Specifically, participants were presented with three pictures representing distributions of tokens: one representing the participant's payment, the other two representing the payments to others. It was the participants' job to indicate whether the tokens were distributed equally between the three people or not. The distributions could be (a) equal, (b) unequal, favoring the participant, or (c) unequal, disfavoring the participant. The measure of interest was how quickly participants were able to identify equal and unequal distributions. As it turns out, participants primed to feel powerful were about a tenth of a second quicker than less powerful participants to identify unfair arrangements that disfavored them, but were no quicker when the unequal distributions favored them.

The next two studies followed pretty much the same format and echoed the same conclusions, so I won't spend much time on their details. The final experiment, however, examined not just reaction times in assessments of equality, but how quickly participants were willing to do something about inequality. In this case, participants were told they were being paid by an experimental employer. The employer to whom they were randomly assigned would be responsible for distributing a payment between them and two other participants over a number of rounds (just like the experiment I just mentioned). However, participants were also told there were other employers they could switch to after each round if they wished. The question of interest, then, was how quickly participants would switch away from employers who disfavored them. Those primed to feel powerful didn't wait around long in the face of unfair treatment that disfavored them, leaving after the first round, on average; by contrast, those primed to feel less powerful waited about 3.5 rounds to switch if they were getting a bad relative deal. If the inequality favored them, however, the powerful participants were about as likely to stay over time as the less powerful ones. In short, those who felt powerful not only recognized poor treatment of themselves (but not of others) more quickly, they also did something about it sooner.

They really took Shia’s advice about doing things to heart

These experiments are quite neat but, as I mentioned before, they are missing a deeper explanation to anchor them anywhere. Sawaoka, Hughes, & Ambady (2015) attempt an explanation for their results, but I don't think they get very far with it. Specifically, the authors suggest that power makes people feel entitled to better treatment, subsequently making them quicker to recognize worse treatment and do something about it. Further, the authors speculate about how unfair social orders are maintained by powerful people being motivated to preserve their privileged status while the disadvantaged sections of the population are sent messages about being powerless, resulting in their coming to expect unfair treatment and being less likely to change their station in life. These speculations, however, naturally yield a few important questions, chief among them being, "If feeling entitled yields better treatment from others, then why would anyone ever not feel that way? Do, say, poor people really want to stay poor and not demand better treatment from others as well?" There are very real advantages being forgone by people who don't feel as entitled as the powerful do, and we would not expect a psychology that behaved that way – one that simply left available benefits on the table – to have been selected for.

In order to craft something approaching a real explanation for these findings, then, one would need to begin with a discussion of the trade-offs involved: if feeling entitled were always good for business, everyone would feel entitled all the time; since they don't, there are likely costs associated with feeling entitled that, at least in certain contexts, prevent its occurrence. One of the most likely trade-offs involves the costs of conflict: if you feel you're entitled to a certain kind of treatment you're not receiving, you need to take steps to ensure that treatment is corrected, since other people aren't just going to start giving you more benefits for no reason. To use a real-life example, if you feel your boss isn't compensating you properly for your work, you need to demand a raise, threatening to inflict costs on him – such as your quitting – if your demands aren't met.

The problems with such a course of action are two-fold: first, your boss might disagree with your assessment and let you quit, and losing that job could pose other, very real costs (like hunger and homelessness). Sometimes an unfair arrangement is better than no arrangement at all. Second, the person with whom you're bargaining might attempt to inflict costs on you in turn. For instance, if you begin a dispute with law enforcement officers because you believe they have treated you unfairly and are seeking to rectify the situation, they might encourage your compliance with a well-placed fist to your nose. In other words, punishment is a two-way street, and trying to punish stronger individuals – whether physically or socially stronger – is often a poor course of action. While "punching up" might appeal to certain sensibilities in, say, comedy, it works less well when you're facing down a bouncer with a few inches and a few dozen pounds of muscle on you.
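A minimal way of formalizing that trade-off (my own sketch, not anything from the paper): demanding better treatment only pays, in expectation, when p × G > (1 − p) × C, where p is the probability your demand succeeds, G is what you gain if it does, and C is the cost inflicted on you if it fails. Since p and C both ought to track relative formidability – stronger or better-connected individuals are more likely to prevail and less likely to be punished for trying – the same inequality that recommends assertiveness to the powerful recommends acquiescence to the weak, without any need for misperception on either side.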

I’m sure he’ll find your arguments about equality quite persuasive

Indeed, this is the same kind of evolutionary explanation offered by Sell, Tooby, & Cosmides (2009) for understanding the emotion of anger and its associated sense of entitlement: one's formidability – physical and/or social – should be a key factor regulating the emotional systems that determine how one resolves conflicts; conflicts which may well concern distributions of material resources. Those who are better suited to inflict costs on others (e.g., the powerful) are also likely to be treated better by others who wish to avoid the costs of conflict that accompany poor treatment. This suggests, however, that making people feel more powerful than they actually are would, in the long term, tend to produce quite a number of costs for the powerful-feeling, but actually weak, individuals: making the 150-pound guy think he's stronger than the 200-pound one might encourage the former to initiate a fight, but it won't make him more likely to win it. Similarly, encouraging a friend who isn't very good at their job to demand a raise could result in their being fired. In other words, it's not that social power structures are maintained simply on the basis of inertia or people being sent particular kinds of social messages, but rather that they reflect (albeit imperfectly) real differences in the value people are able to demand from others. While the idea that some of the power dynamics observed in the social world reflect non-arbitrary differences between people might not sit well with certain crowds, it is a baby capable of keeping this bathwater around.

References: Sawaoka, T., Hughes, B., & Ambady, N. (2015). Power heightens sensitivity to unfairness against the self. Personality & Social Psychology Bulletin, 41, 1023-1035.

Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-15078.

Evolutionary Marketing

There are many popular views about the human mind that, roughly, treat it as a rather general-purpose kind of tool: one not particularly suited to this task or that, but more a Jack of all trades and master of none. In fact, many such perspectives view the mind as (bafflingly) being wrong about the world almost all of the time. If one views the mind this way, one can be led into making certain predictions about how it ought to behave. For instance, some might predict that our minds will, essentially, mistake one kind of arousal for another. A common example of this thinking involves experiments in which people are placed in a fear-arousing condition in the hopes that they will subsequently report more romantic or sexual attraction to certain partners they meet at that time. The explanation for this finding often hinges on some notion of people "misplacing" their arousal – since both kinds of arousal involve some degree of overlapping physiological response – or reinterpreting a negative arousal as a positive one (e.g., "I dislike being afraid, so I must actually be turned on instead"). I happen to think that such explanations can't possibly be close to true, largely because the responses called for by fear and by sexual interest should motivate categorically different kinds of behavior.

Here’s one instance where an arousal mistake like that can be costly

Bit by bit, this view of the human mind is being eroded (though progress can be slow), as it neither fits the empirical evidence nor possesses any solid theoretical grounding. As a great example of this forward progress, consider the experiments demonstrating that learning mechanisms appear to be elegantly tailored to specific kinds of adaptive problems, since learning to, say, avoid poisonous foods requires very different cognitive rules, inputs, and outputs than learning to avoid predator attacks. Learning, in other words, represents a series of rather domain-specific tasks which a general-purpose mechanism could not navigate successfully. As psychological hypotheses get tailored more closely to considerations of recurrent adaptive problems, new, previously-unappreciated features of our minds come into stark relief.

So let's return to the matter of arousal and think about how it might impact our day-to-day behavior, specifically with respect to persuasion; a matter of interest to anyone in marketing or advertising. If your goal is to sell something to someone – to persuade them to buy what you're offering – the message you use to sell it is going to be crucial. You might, for example, appeal to someone's desire to stand out from the crowd in order to get them interested in your product (e.g., "Think different"); alternatively, you might appeal to a product's popularity (e.g., "The world's most popular computer"). Importantly, you can't send both of these messages at once ("Be different by doing that thing everyone else is doing"), so which message should you use, and in what contexts should you use it?

A paper by Griskevicius et al (2009) sought to provide an answer to that very question by considering the adaptive functions of particular arousal states. Previous accounts of how arousal affects information processing were on the general side of things: general arousal-based accounts predicted that arousal – irrespective of its source – should yield shallower processing of information, causing people to rely more on mental heuristics, like scarcity or popularity, when assessing a product; affect valence-based accounts took this idea one step further, suggesting that positive emotions, like happiness, should yield shallower processing, whereas negative emotions, like fear, should yield deeper processing. The authors, however, proposed a new way of thinking about arousal – based on evolutionary theory – which suggests those previous accounts are too vague to help us truly understand how arousal shapes behavior. Instead, one needs to consider the adaptive function a particular arousal state serves in order to understand which type of message will be persuasive in that context.

Don’t worry; if this gets too complicated, you can just fall back on using sex

To demonstrate this point, Griskevicius et al (2009) examined two arousal-inducing contexts: the aforementioned fear and romantic desire. If the general arousal-based accounts are correct, both scarcity and popularity appeals should become more persuasive as people become aroused by romance or fear; by contrast, if the affect valence-based accounts are correct, the positively-valenced romantic feelings should make both kinds of heuristic appeals more persuasive, whereas the negatively-valenced fear arousal should make both less persuasive. The evolutionary account instead focuses on the functional aspects of fear and romance: fear activates self-defense-relevant behavior, one form of which is seeking safety in numbers; a common animal defense tactic. If one were motivated to seek safety in numbers, a popularity appeal might be particularly persuasive (since that's where a lot of other people are), whereas a scarcity appeal would not be; in fact, sending the message that a product will help one stand out from the crowd when one is afraid could actually be counterproductive. By contrast, if one is in a romantic state of mind, positively differentiating oneself from one's competition can be useful for attracting and subsequently retaining attention. Accordingly, romance-based arousal might have the reverse effect, making popularity appeals less persuasive while making scarcity appeals more so.

To test these ideas, Griskevicius et al (2009) induced romantic desire or fear in about 300 participants by having them read stories or watch movie clips related to each domain. Following the arousal induction, participants were asked to briefly examine an advertisement for a museum or restaurant containing a message that appealed to popularity (e.g., "visited by over 1,000,000 people each year"), to scarcity ("stand out from the crowd"), or to neither, and then to report on how appealing the location was and how likely they would be to go there (across a few questions on 9-point scales).

As predicted, the fear condition led popularity messages to be more persuasive (M = 6.5) than the control advertisements (M = 5.9). However, fear had the opposite effect on the scarcity messages (M = 5.0), making them less appealing than the control ads. That pattern of results flipped in the romantic desire condition: scarcity appeals (M = 6.5) were more persuasive than controls (M = 5.8), whereas popularity appeals were less persuasive than either (M = 5.0). Without getting too bogged down in the details of their second experiment, the authors also reported that these effects were even more specific than that: in particular, appeals to scarcity and popularity only had their effects when referencing behavior (stand out from the crowd/everyone's doing it); when referencing attitudes (everyone's talking about it) or opportunities (limited-time offer), popularity and scarcity appeals did not differ in their effectiveness, regardless of the type of arousal being experienced.

One condition did pose interpretive problems, though…

Thinking about the adaptive problems and selection pressures that shaped our psychology is critical for constructing hypotheses and generating theoretically plausible explanations of its features. Expecting some kind of general arousal, emotional valence, or other such factor to explain much about the human (or nonhuman) mind is unlikely to pan out well; indeed, it hasn't been working out for the field for many decades now. I don't suspect such general explanations will disappear in the near future, despite their lack of explanatory power; they have saturated much of the field of psychology, and many psychologists lack the theoretical background necessary to fully appreciate why such explanations are implausible to begin with. Nevertheless, I remain hopeful that the future of psychology might someday not include reams of thinking about misplaced arousal and general information-processing mechanisms that are, apparently, quite bad at solving important adaptive problems.

References: Griskevicius, V., Goldstein, N., Mortensen, C., Sundie, J., Cialdini, R., & Kenrick, D. (2009). Fear and loving in Las Vegas: Evolution, emotion, and persuasion. Journal of Marketing Research, 46, 384-395.

The Morality Of Guilt

Today, I wanted to discuss the topic of guilt; specifically, what the emotion is, whether we should consider it a moral emotion, and whether it generates moral behavioral outputs. The first part of that discussion will be somewhat easier to handle than the latter. In the most common sense, guilt appears to be an emotion aroused by an individual's perception that their own wrongdoing has harmed someone else. The negative feelings that accompany guilt often lead the guilty party to desire to make amends to the injured one, so as to compensate for the damage done and repair the relationship between the two (e.g., "I'm sorry I totaled your car by driving it into your house; I feel like a total heel. Let me buy you dinner to make up for it"). Because the emotion appears to be aroused by the perception of a moral transgression – that is, someone feels they have done something wrong, or impermissible – it seems that guilt could rightly be considered a moral emotion; specifically, an emotion related to moral conscience (a self-regulating mechanism), rather than moral condemnation (an other-regulating mechanism).

Nothing beats packing for a nice, relaxing guilt trip

The understanding that guilt is a moral emotion, then, allows us to inform our opinion about what kind of thing morality is by examining how guilt works in greater, proximate detail. In other words, we can infer what adaptive value our moral sense might have through studying the form of guilt's emotional mechanisms: what inputs they use and what outputs they produce. This brings us to some rather interesting work I recently dug out of my backlog of papers to read, by de Hooge et al (2011), which focused on figuring out what effects guilt tends to have on people's behavior when you take it out of a dyadic (two-person) relationship and drop it into larger groups of people. The authors were interested, in part, in deciding whether or not guilt could be classified as a morally good emotion. While they acknowledge guilt is a moral emotion, they question whether it produces morally good outcomes in certain types of situations.

This leads naturally to the following question: what is a morally good outcome? The answer is going to depend on what type of function one thinks morality has. In this case, de Hooge et al (2011) write as if our moral sense were an altruism device – one that functions to deliver benefits to others at a cost to oneself. Accordingly, a morally good outcome is going to be one in which benefits flow to others at a cost to the actor. Framed in terms of guilt, we might expect that individuals experiencing guilt will behave more altruistically than individuals who are not: the guilty party's regard for the welfare of others will be regulated upwards, with a corresponding down-regulation of regard for their own welfare. The authors note that much of the previous research on guilt has uncovered evidence consistent with that pattern: guilty parties tend to forgo benefits to themselves or suffer costs in order to deliver benefits to the party they have wronged. This makes guilt look rather altruistic.

Such research, however, was typically conducted in a two-party context: the guilty party and their victim. This presents something of an interpretive issue, inasmuch as the guilty party has only one option available to them: if, say, I want to make you better off, I need to suffer a cost myself. While that might make the behavior look altruistic in nature, in the social world we actually inhabit, that is usually not the only option available; I could, for instance, make you better off not at my own expense, but at the expense of someone else; an outcome most people wouldn't exactly call altruism, and one de Hooge et al (2011) wouldn't consider morally good either. To the extent a guilty party is simply interested in making their victim better off, both options serve equally well; to the extent the guilty party is interested in behaving altruistically towards the victimized party, though, things would look different in a three-party context.

As they usually do…

de Hooge et al (2011) report the results of three pilot studies and four experiments examining how guilt affects welfare-relevant choices in these three-party contexts. While I don't have time to discuss all of what they did, I wanted to highlight one of their experiments in more detail while noting that each of them generated data consistent with the same general pattern. The experiment I will discuss is their third. In it, 44 participants were assigned to either a guilt or a control condition. In both conditions, the participants were asked to complete a two-part joint-effort task with another person to earn payment rewards. Colored letters (red or green) would pop up on each player's screen, and the pair had to respond quickly to complete the task: the participant would push a button if the letter was green, whereas their partner would push if the letter was red. In the first part of the task, the performance of both the participant and their partner earned rewards for the participant; in the second part, the pair earned rewards for the partner instead. Each reward was worth 8 units of what I'll call welfare points.

The participants were then informed that while they would receive the bonus from the first round, their partner would not receive a bonus from the second. In the control condition, the partner missed the bonus because of their own poor performance; in the guilt condition, because of the participant's poor performance. In the next phase of the experiment, the participants were presented with three payoffs: their own, their partner's, and that of an unrelated individual from the experiment who had also earned the bonus. The participants were told that one of the three would be randomly assigned the chance to redistribute the earnings, though, of course, the participants always received that assignment. This allowed participants to give a benefit to their partner either at a cost to themselves or at a cost to someone else.

Out of the 8 welfare units the participants had earned, they opted to give an average of 2.2 to their partner in the guilt condition, but only 1 unit in the control condition, so guilt did seem to make the participants somewhat more altruistic. Interestingly, however, guilt made participants even more willing to take from the outside party: guilty participants took an average of 4.2 units from the third party for their partner, relative to the 2.5 units taken in the control condition. In short, the participants appeared interested in repairing the relationship between themselves and their partners, but were more interested in doing so by taking from someone else than by giving up their own resources. Participants also appeared to treat the welfare of the third party as relatively unimportant compared to the welfare of the partner they had ostensibly failed.
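Tallying those numbers up (my own back-of-the-envelope reading of the means above): the wronged partner recouped 2.2 + 4.2 = 6.4 of the 8 forgone units in the guilt condition, versus 1 + 2.5 = 3.5 in the control condition. Yet in both conditions only about a third of the compensation came out of the participant's own pocket (2.2/6.4 ≈ 34% versus 1/3.5 ≈ 29%). Guilt, in other words, seemed to scale up how much the victim was compensated without appreciably shifting who paid for that compensation.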

“To make up for hurting Mike, I think it’s only fair that Karen here suffers”

This returns us to the matter of what kind of thing morality is. de Hooge et al (2011) appear to view morality as an altruism device and guilt as a moral emotion, yet, strangely, guilt did not make people substantially more altruistic; instead, it seemed to make them partial. Given that, we might want to reconsider the adaptive function of morality. What if, rather than acting as an altruism device, morality functions as an association management mechanism? If our moral sense functions to build and manage partial relationships, benefiting someone you've harmed at the expense of other targets of investment makes more sense. There are good reasons to suspect that friendships represent partial alliances maintained in the service of winning potential future disputes (DeScioli & Kurzban, 2009). These partial alliances are rank-ordered, however: I have a best friend, close friends, and more distant ones. In order to signal that I rank you highly as a friend, then, I need to demonstrate that I value you more than I value other people. Showing that I value you highly relative to myself – as acts of altruism would – does not necessarily tell you much about your value as my friend relative to my other friends. By contrast, behaving in ways that signal I value you more than others, at least temporarily – as appeared to be the case in the current experiments – could serve to repair a damaged alliance. Morality as an altruism device doesn't fit the current pattern of data; morality as an alliance management device does.

References: DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE, 4(6), e5802. doi:10.1371/journal.pone.0005802

de Hooge, I., Nelissen, R., Breugelmans, S., & Zeelenberg, M. (2011). What is moral about guilt? Acting "prosocially" at the disadvantage of others. Journal of Personality & Social Psychology, 100, 462-473.


Do Moral Violations Require A Victim?

If you've ever been a student of psychology, chances are pretty good that you've heard about or read a great many studies concerning how people's perceptions of the world are biased, incorrect, inaccurate, erroneous, and other such similar adjectives. A related sentiment exists in some parts of the morality literature as well. Perhaps the most notable instance is the unpublished paper on moral dumbfounding by Haidt, Bjorklund, & Murphy (2000). In that paper, the authors claim to provide evidence that people first decide whether an act is immoral and then seek to find victims or harms for the act post hoc. Importantly, the point seems to be that people seek out victims and harms despite neither actually existing; in other words, people are mistaken in perceiving harm or victims. We could call such tendencies the "fundamental victim error" or the "harm bias", perhaps. If that interpretation of the results is correct, it carries a number of implications, chief among which (for my present purposes) is that harm is not a required input for moral systems. Whatever cognitive systems are in charge of processing morally-relevant information, they seem able to do so without knowledge of who – if anyone – is getting harmed.

Just a little consensual incest. It’s not like anyone is getting hurt.

Now I've long found that implication to be a rather interesting one. The reason it's interesting is that, in general, we should expect people's perceptions of the world to be relatively accurate. Not perfect, mind you, but as accurate as the available information allows. If our perceptions weren't generally accurate, this would likely yield all sorts of negative fitness consequences: for example, believing you can achieve a goal you actually cannot could lead to investing time and resources in a fruitless endeavor; resources which could be more profitably spent elsewhere. Sincerely believing you're going to win the lottery does not make the tickets wise investments. Given the negative consequences of acting on inaccurate information, we should expect our perceptual systems to have evolved to be as accurate as they can be, given certain real-world constraints.

The only context I've seen in which being wrong about something could consistently lead to adaptive outcomes is the realm of persuasion. In this case, however, it's not that being wrong per se helps you so much as someone else being wrong helps you. If people happen to think my future prospects are bright – even if they're not – it might encourage them to see me as an attractive social partner or mate; an arrangement from which I could reap benefits. So, if some part of me happens to be wrong, in some sense, about my future prospects, and being wrong doesn't cause me to behave in too many maladaptive ways, and it also helps persuade you to treat me better than you would given accurate information, then being wrong (or biased) could be, at times, adaptive.

How does persuasion relate to morality and victimhood, you may well be wondering? Consider again the initial point about people, apparently, being wrong about the existence of harms and victims of acts they deem immoral. If one were to suggest that people are wrong in this realm – indeed, that our psychology appears designed to be consistently wrong – one would also need to couch that suggestion in the context of persuasion (or some entirely new hypothesis about why being wrong is a good thing). In other words, the argument would need to go something like this: by perceiving victims and harms where none actually exist, I am better able to persuade other people to take my side in a moral dispute. The implication of that suggestion, in a rather straightforward way, is that people take sides on moral issues on the basis of harm in the first place; if they didn't, claims of harm wouldn't be very persuasive. This leaves the moral dumbfounding work in a bit of a bind, theoretically speaking, with respect to whether harms are required inputs for moral systems: that people perceive something as immoral and only later perceive harms would suggest harms are not required inputs; that arguments about harms are rather persuasive suggests that they are.

Enough about implications; let’s get to some research 

At the very least, perceptions of victimhood and harm appear intimately tied to perceptions of immorality. The connection between the two was further examined recently by Gray, Schein, & Ward (2014) across five studies, though I'm only going to discuss one of them. In the study of interest, 82 participants each rated 12 actions on whether they were wrong (1-5 scale, from 'not wrong at all' to 'extremely wrong') and whether the act had a victim (1-5 scale, from 'definitely not' to 'definitely yes'). These 12 actions were broken down into three groups of four acts each: the harmful group (including items like kicking a dog or hitting a spouse), the impure group (including masturbating to a picture of your dead sister or covering a bible with feces), and the neutral group (such as eating toast or riding a bus). The interesting twist in this study involved the time frame in which participants answered: one group was placed under a time constraint, having to read each question and provide their answer within seven seconds; the other group was not allowed to answer until at least a seven-second delay had passed, and was then given an unlimited amount of time in which to answer. So one group was relying on, shall we say, their gut reaction, while the other was given ample time to reason about things consciously.

Unsurprisingly, there appeared to be a connection between harm and victimhood: the directly harmful scenarios generated more certainty about a victim (M = 4.8) than the impure ones (M = 2.5), and the neutral scenarios generated essentially none (M = 1). More notably, the time constraint did have an effect, but only in the impure category: when answering under time constraints, participants reported more certainty about the existence of a victim for impure acts (M = 2.9) than when they had more time to think (M = 2.1). By contrast, perceptions of victims in the harmful (M = 4.8 and 4.9, respectively) and neutral categories (M = 1 and 1) did not differ across time constraints.

This finding puts a different interpretive spin on the moral dumbfounding literature: when people had more time to think about (and perhaps invent) victims for the more ambiguous violations, they actually came up with fewer victims. Rather than people reaching a conclusion about immorality first and then consciously reasoning about who might have been harmed, it seems that people could instead be reaching implicit conclusions about both harm and immorality quite early on, and only later consciously reasoning about why an act which seemed immoral isn't actually producing any worthy victims. If representations of victims and harms arise earlier in this process than the moral dumbfounding research would anticipate, this might speak to whether or not harms are required inputs for moral systems.

Turns out that piece might have been more important than we thought

It is possible, I suppose, that morality could simply use harm as an input sometimes without it being a required input. That possibility would allow harm to be both persuasive and not required, though it would require some explanation as to why harm is only expected to matter in moral judgments at times. At present, I know of no such argument having ever been made, so there’s not too much to engage with on that front.

It is true enough that, at times, when people perceive victims, they perceive them in a rather broad sense, naming entities like "society" as being harmed by certain acts. Needless to say, such claims are rather difficult to assess, which makes one wonder how people perceive such entities as being harmed in the first place. One possibility, obviously, is that such entities (to the extent they can be said to exist at all) aren't really being harmed, and people are using unverifiable targets to persuade others to join a moral cause without the risk of being proven wrong. Another possibility is that the part of the brain doing the reporting isn't quite able to articulate the underlying reason for the judgment to others; that is, one part of the brain is (accurately) finding harm, but the talking part isn't able to report on it. Yet another possibility is that harm befalling different groups is strategically discounted (Marczyk, 2015). For instance, members of a religious group might find disrespect towards a symbol of their faith (rubbing feces on the bible, in this case) indicative of someone liable to do harm to their members; those opposed to the religious group might count that harm differently – perhaps not as harm at all. Such an explanation could, in principle, explain the time-constraint effect I mentioned before: the part of the brain discounting harm towards certain groups might not have had enough time to act on the perception of harm yet. While these explanations are not necessarily mutually exclusive, they are all ideas worth thinking about.

References: Gray, K., Schein, C., & Ward, A. (2014). The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General, 143, 1600-1615.

Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished Manuscript. 

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.