How Many Foundations Of Morality Are There?

If you want to understand and explain morality, the first useful step is to be sure you’re clear about what kind of thing morality is. This first step has, unfortunately, been a stumbling point for many researchers and philosophers. Many writers on the topic of morality, for example, have primarily discussed (and subsequently tried to explain) altruism: behaviors that involve actors suffering costs to benefit someone else. While altruistic behavior can often be moralized, altruism and morality are not the same thing; a mother breastfeeding her child is engaged in altruistic behavior, but this behavior does not appear to be driven by moral mechanisms. Other writers (as well as many of the same ones) have also discussed morality in conscience-centric terms. Conscience refers to self-regulatory cognitive mechanisms that use moral inputs to influence one’s own behavior. As a result of that focus, many moral theories have not been able to adequately explain moral condemnation: the belief that others ought to be punished for behaving immorally (DeScioli & Kurzban, 2009). Despite the considerable value of being clear about what one is actually discussing, treatises on morality sadly tend not to begin by stating what their authors think morality is, nor do they tend to avoid conflating morality with other things, like altruism.

“It is our goal to explain the function of this device”

When one is not quite clear on what morality happens to be, one can end up at a loss when trying to explain it. For instance, Graham et al (2012), in their discussion of how many moral foundations there are, write:

We don’t know how many moral foundations there really are. There may be 74, or perhaps 122, or 27, or maybe only five, but certainly more than one.

Sentiments like these suggest a lack of focus on what, precisely, the authors are trying to understand. If you are unsure whether the thing you are trying to explain is 2, 5, or over 100 things, then it is likely time to take a step back and refine your thinking a bit. As Graham et al (2012) do not begin their paper with a mention of what kind of thing morality is, they leave me wondering what precisely it is they are trying to explain with 5 or 122 parts. What they do posit is that morality is innate (organized in advance of experience), modified by culture, the result of intuitions first and reasoning second, and that it has multiple foundations; none of that, however, tells me what precisely they mean when they write “morality”.

The five moral foundations discussed by Graham et al (2012) include kin-directed altruism (what they call the harm foundation), mechanisms for dealing with cheaters (fairness), mechanisms for forming coalitions (loyalty), mechanisms for managing coalitions (authority), and disgust (sanctity). While I would agree that navigating these different adaptive problems is important for meeting the challenges of survival and reproduction, there seems to be little indication that they represent different domains of moral functioning, rather than simply different domains upon which a single, underlying moral psychology might act (in much the same way that a kitchen knife is capable of cutting a variety of foods, so one need not carry a potato knife, a tomato knife, a celery knife, and so on). In the interests of being clear where others are not, by morality I am referring to the existence of the moral dimension itself: the ability to perceive “right” and “wrong” in the first place and to generate the associated judgments that people who engage in immoral behaviors ought to be condemned and/or punished (DeScioli & Kurzban, 2009). This distinction is important because it would appear that species are capable of navigating the above five problems without requiring the moral psychology humans possess. Indeed, as Graham et al (2012) mention, many non-human species face one or many of these problems, yet whether those species possess a moral psychology is debatable. Chimps, for instance, do not appear to punish others for engaging in harmful behavior if said behavior has no effect on them directly (though chimps do take revenge for personal slights). Why, then, might a human moral psychology lead us to condemn others when no comparable psychology seems to exist in chimps, despite our sharing most of those moral foundations? That answer is not provided, or even discussed, anywhere in the moral foundations paper.

To summarize up to this point: the moral foundations piece is never clear about what type of thing morality is, which leaves its case that many – not one – distinct moral mechanisms exist similarly unclear. It does not seriously tackle how many of these distinct mechanisms might exist, and it does not address the matter of why human morality appears to differ from whatever nonhuman morality there might – or might not – be. Importantly, the matter of what adaptive function morality has – what adaptive problems it solved and how it solved them – is left all but untouched. Graham et al (2012) seem to fall into the same pit trap as so many before them: believing they have explained the adaptive value of morality because they have outlined an adaptive value for things like kin-directed altruism, reciprocal altruism, and disgust, despite these concepts not being the same thing as morality per se.

Such pit traps often prove fatal for theories

Making explicit hypotheses of function is crucial for understanding morality – as it is for all of psychology. While Graham et al (2012) compare these different hypothetical domains of morality to the different types of taste receptors on our tongues (one each for sweet, bitter, sour, salty, and umami), that analogy glosses over the fact that those different taste receptors serve entirely separate functions by solving unique adaptive problems related to food consumption. Without any analysis of which unique adaptive problems are solved by morality in the domain of disgust, as opposed to, say, harm-based morality, as opposed to fairness-based morality – and so on – the analogy does not work. The question of importance in this case is what function(s) these moral perceptions serve and whether that function (or those functions) varies when our moral perceptions are raised in the realm of harm or disgust. If the function is consistent across domains, then it is likely handled by a single moral mechanism, not many of them.

However, one thing Graham et al (2012) appear sure about is that morality cannot be understood through a single dimension, meaning they are putting their eggs in the many-different-functions basket; a claim with which I take issue. One prediction the multiple-morality hypothesis put forth by moral foundations theory might make, if I am understanding it correctly, is that you ought to be able to selectively impair people’s moral cognitions via brain damage. For example, were you to lesion some hypothetical area of the brain, you would be able to remove a person’s ability to process harm-based morality while leaving their disgust-based morality otherwise unaffected (likewise for fairness, sanctity, and loyalty). Now I know of no data bearing on this point, and none is mentioned in the paper, but it seems that, were such an effect possible, it likely would have been noticed by now.

Such a prediction also seems unlikely to hold true in light of a particular finding: one curious facet of moral judgments is that, given someone perceives an act to be immoral, they almost universally perceive (or rather, nominate) someone – or a group of someones – to have been harmed by it. That is to say they perceive one or more victims when they perceive wrongness. If morality, at least in some domains, were not fundamentally concerned with harm, this would be a very strange finding indeed; people ought not need to perceive a victim at all for certain offenses. Nevertheless, people do not appear to perceive victimless moral wrongs (even if they cannot always consciously articulate such perceptions), and they will occasionally update their moral stances when their perceptions of harm are successfully challenged by others. The idea of victimless moral wrongs, then, appears to originate much more from researchers declaring that an act is without a victim than from their subjects’ perceptions.

Pictured: a PhD, out for an evening of question begging

There’s a very real value to being precise about what one is discussing if one hopes to make any forward progress in a conversation. It’s not good enough for a researcher to use the word morality when it’s not at all clear what that word is referring to. When such specifications are not made, people seem to end up doing all sorts of things, like explaining altruism, or disgust, or social status, rather than achieving their intended goal. A similar problem arose when another recent paper on morality attempted to define “moral” as “fair” and then never really defined what it meant by “fair”: the predictable result was a discussion of why people are altruistic, rather than why they are moral. Moral foundations theory seems to offer only a collection of topics about which people hold moral opinions, not a deeper understanding of how our morality functions.

References: DeScioli, P. & Kurzban, R. (2009). Mysteries of morality. Cognition, 112, 281-299.

Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S., & Ditto, P. (2012). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130.

The Drug Addictions Of Mice And Men

In a previous post, I mentioned the very real possibility that as people’s personal biases make their way into research, the quality of that research might begin to decline. More generally, I believe that such an issue arises because of what the interpretation of some results says about the association value of particular groups or individuals. After all, if I believed (correctly) that people like you tend to be more or less [cooperative/aggressive/intelligent/promiscuous/etc] than others, it would be a fairly rational strategy for me to adjust my behavior around you accordingly if I had no other information about you. Anyone who has feared being mugged by a group of adolescent males at night but not by a group of children at a playground during the day understands this point intuitively. As a result, some people might – intentionally or not – game their research towards obtaining patterns of results that reflect positively or negatively on certain groups or, as in today’s case, highlight some research by other people as being particularly important because it encourages us to treat others in a certain way. So let’s talk about giving drugs to mice and men.

Way to set a positive example for the kids, Mickey

The article which inspired this post was written by Johann Hari, and its message is that the likely cause of drug addiction (and, perhaps, other addictions as well) is that people fail to bond with other humans, bonding instead with drugs. This is, according to Johann, quite a distinct explanation from the one many people favor: that chemical hooks in the drugs alter our brains in such a way as to make us crave them. To make this point, Johann highlights the importance of the Rat Park experiment, in which rats placed in enriched environments failed to develop addictions to opiates (which were placed in one of their water bottles), whereas rats placed in isolated and stressful environments tended to develop addictions to the drugs readily. However, when the isolated rats were moved into the enriched environments, their preference for the drug all but vanished.

The conclusion drawn from this research is that the rats – and, by extension, humans – only really use drugs when their environments are harsh. One passage in particular drew my attention:

“A heroin addict has bonded with heroin because she couldn’t bond as fully with anything else. So the opposite of addiction is not sobriety. It is human connection.”

I find this interpretation to be incomplete and stated far too boldly. One rather troublesome thorn for this explanation rears its head only a few passages later, when Johann discusses how the nicotine patch does not help most smokers successfully quit; about 18% is the quoted percentage of those who quit through the patch, though that figure is not properly sourced. From the Gallup poll data I dug up, approximately 5% of those who have quit smoking attribute their success to the patch. That seems like a low number, and one that doesn’t quite fit with the chemical hook hypothesis. Another number sticks out, though: the number of people who attribute their success in quitting to support from friends and family. If Johann’s hypothesis is correct and addicted people are like isolated rats in a cage, we might expect the number who quit successfully through social support to be quite high; if addiction is the opposite of human connection, then as human connections increase, addiction should drop. Unfortunately for his hypothesis, only 6% of ex-smokers attribute their success to those social factors. By contrast, about 50% of ex-smokers cited just deciding it was time and quitting cold turkey as their preferred method. Now it’s possible that they’re incorrect – that has been known to happen when you ask people to introspect – but I don’t see any reason to assume they are incorrect by default. In fact, many of the habitual smokers I’ve known did not seem like people lacking social connections to begin with; smoking was quite the social activity, and many people started smoking because their friends did. That is, they might have developed their addiction through building social connections, not through lacking them.

Indeed, his hypothesis is all the stranger when considering the failure of people using the patch to successfully kick their habit. If, as Johann suggested, people are bonding with chemicals instead of people, it would seem that giving them the chemical in question should correspondingly cut down on their urge to smoke. That it doesn’t seem to do so very much is rather peculiar, suggesting something is wrong with either the patches or the hypothesis. So what’s going on here? Is addiction to cigarettes different from addiction to opiates, explaining the disconnect between the Rat Park results and the cigarette data? That might be one possibility, though there is another: it is possible that, like quite a bit of psychology research, the Rat Park results don’t replicate so nicely.

“Reply still hazy; try controlling for gender and look again”

Petrie (1996) reports on attempted replications of the Rat Park style of research that did not quite pan out as one might hope. In the first experiment, two groups of 10 rats were tested. The first group was raised in isolated conditions from weaning (21 days old), in relatively small cages without much to do; the second group was raised collectively in a much larger and more comfortable enclosure. Both enclosures contained food and water dispensers, freely available at all hours. In order to measure how much water was being consumed, each rat was marked for identification, and each trip to the drinking spout triggered a recording device; the weight of the water consumed was automatically recorded after each trip to the spout as well. The testing began when the rats were 113 days old and lasted about 30 days, at which point the rats were all killed (which I assume is standard operating procedure for this kind of thing).

During that testing period, animals had access to two kinds of water: tap water and the experimental batch. The experimental batch was flavored with a sweetener initially, whereas on later trials various concentrations of morphine were also added to the bottle (in decreasing amounts from 1 mg to 0.125 mg, cutting by half each time). Across every concentration of morphine, the socially-reared rats drank somewhat more than their isolated counterparts: at 1 mg of morphine, the average amount of experimental fluid consumed daily by the social group was 3.6 grams, compared to 0 for the isolated rats; at 0.5 mg of morphine, these numbers were 1.3 and 0.5, respectively; at 0.25 mg, 18.3 and 15.7; at 0.125 mg, 42.8 and 30.2. In a second follow-up study without the automated measuring tools, this pattern was reversed, with the isolated rats tending to drink slightly more of the morphine water during 3 of the 4 testing phases (those numbers, at the same concentrations of morphine as before, for the social/isolated rats, were: 4.3/0.3; 3.0/9.4; 10.9/17.4; and 33.1/44.4). So the results seem somewhat inconsistent, and the differences aren’t all that large; they did not even come close to the original reports of previous research claiming that the isolated rats drank up to 7 times as much.
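To keep those figures straight, here is a minimal sketch (my own arrangement of the numbers quoted above, not anything taken from Petrie’s paper) laying out the reported daily consumption values and computing the isolated-to-social ratios:

```python
# Daily grams of morphine-laced fluid consumed, as quoted above (Petrie, 1996).
# Keys are morphine concentration (mg); values are (social, isolated) group means.
experiment_1 = {1.0: (3.6, 0.0), 0.5: (1.3, 0.5), 0.25: (18.3, 15.7), 0.125: (42.8, 30.2)}
experiment_2 = {1.0: (4.3, 0.3), 0.5: (3.0, 9.4), 0.25: (10.9, 17.4), 0.125: (33.1, 44.4)}

for name, data in (("Experiment 1", experiment_1), ("Experiment 2", experiment_2)):
    print(name)
    for dose, (social, isolated) in sorted(data.items(), reverse=True):
        # The original Rat Park claim was that isolated rats drank several times as much,
        # so the relevant comparison is the isolated/social ratio at each dose.
        ratio = isolated / social if social else float("inf")
        print(f"  {dose} mg: social={social}, isolated={isolated}, isolated/social={ratio:.2f}")
```

None of those ratios come anywhere near the 7-to-1 difference attributed to the original research, and several run in the opposite direction entirely.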

To explain at least some of this difference in results, Petrie (1996) notes that some genetic differences might exist between the rat strains used in the two lines of research. If that is the case, then the implication – as always – is that the story is not nearly as simple as “bad environments cause people to use drugs”; there are other factors to think about, which I’ll get to in a moment. Suffice it to say for now that, in humans, it seems clear that recreational drug use is inherently more pleasant for certain people. Petrie (1996) also notes that the rats tended to consume the same absolute amount of morphine during each phase, regardless of its concentration in the water. The rats seemed to prefer the sweetened water to the tap water by a huge margin when it contained only sucrose, but drank less of the sweetened water when the morphine (or another bitter additive) was added, so it’s likely that the rats did not much enjoy the taste of the morphine. The author concludes that the rats probably enjoyed the taste of the sugar more than they enjoyed the morphine’s effects.

An affinity many humans seem to share as well

The Petrie (1996) paper and the cigarette data, then, ought to give us some pause when assessing the truth value of Johann’s claims concerning the roots of addiction. Also worrying is the moralization Johann engages in when he writes the following:

“The rise of addiction is a symptom of a deeper sickness in the way we live — constantly directing our gaze towards the next shiny object we should buy, rather than the human beings all around us.”

This hypothesis strikes me as the strangest of all. He is suggesting, if I am understanding him correctly, that people, like the rats, (a) find human connections more pleasurable than material items or drugs, but (b) voluntarily forgo human connections in pursuit of things that bring them less pleasure. That is akin to finding, in terms of our rats, that the rats enjoy the taste of the sweetened water more than the bitter water, but choose to regularly drink out of the bitter bottle instead, despite both options being available. It would be a very odd psychology that generates that kind of behavior. It would be the same kind of psychology that would drive rats in the enriched cages to leave them for the isolated morphine cages if given the choice; the very thing Johann is claiming would not, and does not, happen. It would require that some other force – presumably some vague and unverifiable entity, like “society” – is pushing people to make a choice they otherwise would not (and which, presumably, we must change to be better off).

This moralization is worrying because it sheds some light on the motivation of the author: it is possible that evidence is being selectively interpreted so as to fit a particular worldview that has social implications for others. For instance, the failure to replicate I discussed is not new; it was published in 1996. Did Johann not have access to this data? Did he not know about it? Was it just ignored? I can’t say, but none of those explanations paint a flattering picture of someone who claims expertise in the area. When the reputations of others are on the line, truth can often be compromised in the service of furthering a social agenda; this could include people stopping the search for contrary evidence, ignoring it, or downplaying its importance.

A more profitable route for research would be to begin by considering what adaptive function the cognitive systems underlying drug use might serve. By understanding that function, we can make some insightful predictions. To attempt to do so, let’s start by asking the question, “why don’t people use drugs more regularly?” Why do so many smokers wish to stop smoking? Why do many people tend to restrict most of their drinking to the weekends? The most obvious answer to these questions is that drinking and smoking entail some costs to be suffered at a later date, whether those costs arrive tomorrow (in the form of a hangover) or years from now (in the form of lung cancer and liver damage). Most of the people who wanted to quit smoking, for instance, cited health concerns as their reason. In other words, people don’t engage in these behaviors more often because there are trade-offs to be made between the present and the future: the short-term benefits of smoking need to be measured against their long-term costs.

“No thanks; I need all my energy for crack”

It might follow, then, that those who value short-term rewards more heavily in general – those who do not view the future as particularly worth investing in – are more likely to use drugs; the type of people who would rather have $5 today instead of $6 tomorrow. They’d probably also be more oriented towards short-term sexual relationships, explaining the interesting connection between the two variables. It would also explain other points mentioned in the Johann piece: soldiers in Vietnam using (and then stopping) heroin, and hospital patients not ending up addicted to their painkillers once they leave the hospital. In the former case, soldiers in wartime are in environments where their future is less than certain, to say the least. When people are actively trying to kill you, it makes less sense to put off rewards today for rewards tomorrow, since you can’t claim them if you’re dead. In the latter case, people being administered these painkillers are not necessarily short-term oriented to begin with. In both cases, the value of pursuing those drugs further once the temporary threat has been neutralized (the war ends/the treatment ends) is deemed to be relatively low, just as it was before the threat appeared. People might value those drugs very much when they’re in the situation, but not when the threat subsides.
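To make that trade-off concrete, here is a minimal sketch of the standard hyperbolic-discounting idea (my own illustration, with made-up discount rates; nothing from the Hari piece or the rat data): a steep discounter ends up preferring $5 now to $6 tomorrow, while a shallow discounter does not, and an uncertain future behaves much like a higher discount rate.

```python
def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: subjective value = amount / (1 + k * delay)."""
    return amount / (1 + k * delay_days)

for k in (0.05, 0.5):  # shallow vs. steep discounter (k is a free parameter here)
    now = discounted_value(5, 0, k)      # $5 today
    later = discounted_value(6, 1, k)    # $6 tomorrow
    choice = "$5 today" if now > later else "$6 tomorrow"
    print(f"k={k}: $5 now is worth {now:.2f}, $6 tomorrow is worth {later:.2f} -> picks {choice}")
```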

It would even explain why drug addiction fell when decriminalization and treatment came to Portugal: throwing people in jail introduces new complications to life that reduce the value of the future (like the difficulty of getting a job with a conviction, or the threats posed by other, less-than-pleasant inmates). If people are instead given some stability and resources are channeled to them, this might increase the perceived value of investing in the future versus taking the reward today. It’s not connecting with other people per se that helps with the addiction, then, so much as the way one’s situation can change one’s valuation of the present as compared with the future.

Such a conclusion might be resisted by some on the grounds that it implies that drug addicts, to some extent, have self-selected into that pattern of behavior – that their preferences and behaviors, in a meaningful sense, are responsible for which “cage” they ended up in, to use the metaphor. Indeed, those preferences might explain both why addicts like drugs and why some might fail to develop deep connections with others. That might not paint the most flattering picture of a group such writers are trying to help. However, it would be dangerous to try to treat the very real problem of drug addiction by targeting the wrong factors, similar to how just giving people large sums of windfall money won’t necessarily keep them from going broke in the long term.

References: Petrie, B. (1996). Environment is not the most important variable in determining oral morphine consumption in Wistar rats. Psychological Reports, 78, 391-400.

Real Diversity Means Disagreement

Diversity is one of the big buzzwords of recent decades. Institutions, both public and private, often take great pains to emphasize their inclusive stances and the colorful cast of their staff. I have long found these displays of diversity to be rather queer in one major respect, however: they almost always focus on diversity in the realms of race and gender. The underlying message behind such displays would seem to be that men and women, or members of different ethnic groups, are, in some relevant psychological respects, different from one another. What’s strange about that idea is that, as many of the same people might also like to point out, there’s less diversity between those groups than within them, while others are entirely uncomfortable with the claim of sex or racial differences from the start. The ambivalent feelings many people have about such a message were captured well by Principal Skinner on The Simpsons:

It’s the differences…of which…there are none…that make the sameness… exceptional

Regardless of how one feels about such a premise, the fact remains that diversity in race or gender per se is not what people are seeking to maximize in many cases; they’re trying to increase diversity of thought (or, as Maddox put it many years ago: “people who look different must think different because of it; otherwise, why the hell embrace anything? Why not just assume that diversity comes from within, regardless of their skin color, sex, age or religion?”).

Renting that wheelchair was a nice touch, but it’s time to get up and return it before we lose the deposit

If diversity in perspective is what most people are after when they talk about seeking diversity, it seems like a reasonable step to assess people’s perspectives directly, rather than trying to use proxies for them, like race and gender (or clothing, or hair styles, or musical tastes, or…). If, for instance, one was hiring a number of people for a job involving problem solving, it’s quite possible for the person doing the hiring to select a group of men and women from different races who all end up thinking about things in pretty much the same way: not only would the hires likely have the same kinds of educational background, but they’d probably also have comparable interests, since they applied for the same job. On top of that initial similarity, the person doing the hiring might be partial towards those who hold agreeable points of view. After all, why would you hire someone who holds a perspective you don’t agree with? It sounds as if that decision would make work that much more unpleasant during the day-to-day operations of the company, even if the disagreement was irrelevant to the work itself.

Speaking of areas in which diversity of thought seems to be lacking in certain respects, an interesting new paper from Duarte et al (2015) puts forth the proposition that social psychology – as a field – isn’t all that politically diverse, and that this is probably something of a problem for research quality. For example, if social psychologists are a rather politically homogeneous bunch, particular (and important) questions might never get asked because of how the answers might pan out for the images of liberals and their political rivals. After all, if the conclusions of psychology research, by some happy coincidence, tend to demonstrate that liberals (and, by extension, the liberal researchers conducting the work) have a firm grasp on reality, whereas their more conservative counterparts are hopelessly biased and delusional, all the better for the liberal group’s public image; all the worse for the truth value of psychological research, however, if those results are obtained by only asking about scenarios in which conservatives, but not liberals, are likely to look biased. If liberal assumptions about what is right or good are shaping research to point in certain directions, we’re going to end up drawing a number of unwarranted interpretive conclusions.

The problems could mount further if research purporting to deliver conclusions counter to certain liberal interests is reviewed with disproportionate scrutiny, whereas research supporting those interests is given a pass when its methods are equivalent or worse. Indeed, Duarte et al (2015) discuss some good reasons to think this might be the state of affairs in psychology, not least of which is that quite a number of social psychologists will explicitly admit they would discriminate against those who do not share their beliefs. When surveyed about their self-assessed probability of voting either for or against a known conservative job applicant (when both candidates are equally qualified for the job), about 82% of social psychologists indicated they would be at least a little more likely to vote against the conservative hire, with about 43% indicating a fairly high degree of certainty that they would (above the midpoint of the scale). These kinds of attitudes might well dissuade more conservatives from wanting to enter the field, especially given that the liberals inclined to discriminate against them outnumber the conservatives by about 10-to-1.

“Don’t worry, buddy; you can take ‘em”

Not to put too fine a point on it, but if these ratios were discovered elsewhere – say, a 10:1 ratio of men to women in a field, with about half of the men explicitly saying they would vote against hiring women – I imagine that many social psychologists would be tripping over themselves to inject some justice and moral outrage into the mix. Compared with some other explicit racist tendencies (4% of respondents wouldn’t vote for a black presidential candidate), or sexist ones (5% wouldn’t vote for a woman), there’s a bit of a gulf in discrimination. While the way the question is asked is not quite the same, social psychologists might be about as likely to want to vote for the conservative job candidate as Americans are to vote for a Muslim or an atheist, if we assumed equivalence (which is to say “not very”).

It is at least promising, then, to see that the reactions to this paper were fairly universal in recognizing that there might be something of a political diversity problem in psychology, both in terms of its existence and its possible consequences. There was more disagreement with respect to the cause of this diversity problem and whether including more conservative minds would increase research quality, but that’s to be expected. I – like the authors – am happy enough that even social psychologists, by and large, seem to accept that social psychology is not all that politically diverse and that such a state of affairs is likely – or at least potentially – harmful to research in some respects (yet another example where stereotypes seem to track reality well).

That said, there is another point to which I want to draw attention. As I mentioned initially, seeking diversity for diversity’s sake is a pointless endeavor, and one that is certainly not guaranteed to improve the quality of work produced. This is the case regardless of the criteria on which candidates are selected, be they physical, political, or something else. For example, psychology departments could strive to hire people from a variety of different cultural or ethnic groups, but unless those new hires are better at doing psychology, this diversity won’t improve their products. Similarly, psychology departments could strive to hire people with degrees in other fields, like computer science, chemistry, and fine arts; that would likely increase the diversity of thought in psychology, but since there are many more ways of doing poor psychology than there are of doing good psychology, this diversity in backgrounds wouldn’t necessarily be desirable.

Say “Hello” to your new collaborators

Put bluntly, I wouldn’t want people to begin hiring those from non-liberal groups in greater numbers and believe this will, by itself, improve the quality of their research. More specifically, while greater political diversity might, to some extent, reduce the number of bad research projects by diluting or checking existing liberal biases, I don’t know that it would substantially increase the number of good papers; the relative numbers might change, but I’m more concerned with the absolutes, as a field which fails to produce quality research in sufficient quantities is not demonstrating much value (just as the guy without a particular failing doesn’t necessarily offer much as a dating prospect). In my humble (and no doubt biased, but not necessarily incorrect) view, there is an important dimension of thought along which I do not wish psychologists to differ, and that is in their application of evolutionary theory as a guiding foundation for their work. Evolutionary theory not only allows one to find previously unappreciated aspects of psychological functioning through considerations of adaptive value, but also allows for building on previous research in a meaningful way and for effectively rooting out problematic underlying assumptions. In that sense, even failed research projects can contribute in a more meaningful way when framed in an evolutionary perspective, relative to failed projects lacking one.

Evolutionary theory is by no means a cure-all for the bias problem; people will still sometimes get caught up trying to rationalize behaviors or preferences they morally approve of – like homosexuality – as adaptive, for example. In spite of that, I do not particularly hope to see a diversity of perspectives in psychology regarding the theoretical language we all ought to be speaking by this point. There are many more ways to think about psychology unproductively than there are to think about it well, and more diversity in those respects will make for a much weaker science.

References: Duarte, J., Crawford, J., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. (2015). Political diversity will improve social psychological science. Behavioral & Brain Sciences, 38, 1-58.

No Such Thing As A Free Evolutionary Lunch

Perceiving the world does not typically strike people as a particularly demanding task. All you need to do is open your eyes to see, put something in your mouth to taste, run your hand along an object to feel, and hearing requires less effort still. Perhaps somewhat less appreciated, but similar in spirit, is the ease with which other kinds of social perceptions are generated, such as perceiving a moral dimension and intentions in the behavior of others. Unless the cognitive mechanisms underlying such perceptions are damaged, all this perceiving feels as if it takes place simply, easily, and automatically. It would be strange for someone to return home from a long day of work and complain that their ears can’t possibly listen to anything else because they’re too worn out (quite a different complaint from not wanting to listen to someone’s particular speech about their day). Indeed, we ought to expect such processes to work quickly and efficiently, owing to the historical adaptive value of generating such perceptions. Being able to see and hear, as well as to read the minds of others, turns out to be pretty important when it comes to the day-to-day business of survival and reproduction. If one were unable to accomplish such tasks quickly and automatically, one would frequently suffer costs that could have been avoided.

“Nothing to it; I can easily perceive the world all day”

That these tasks might feel easy – in fact, perception often doesn’t feel like anything at all – does not mean they are actually easy, either computationally or, importantly, metabolically. Growing, maintaining, and running the appropriate cognitive and physiological mechanisms for generating perception is not free for a body to do. Accordingly, we ought to expect that these perceptual mechanisms are only maintained in a population to the extent that they are continuously useful for doing adaptive things. Now for us the value of hearing or seeing in our environment is unlikely to change, and so these mechanisms are maintained. However, that status quo does not always hold in different species or across time. One example of when it does not – one I used in my undergraduate evolutionary psychology course – involves cave-dwelling organisms; specifically, organisms which did not always live in caves exclusively, but came to reside there over time.

What’s notable about these underground caves is that light does not reach the creatures that live there. Without any light, the physiological mechanisms designed to process such information – specifically, the eyes – no longer grant an adaptive benefit to the cave-dwellers. Similarly, the neural tissue required for processing this visual information would not provide any advantage to its bearer either. When the adaptive value of vision is removed, the value of growing the eyes and the associated brain regions is compromised and, as a result, many cave-dwelling species either fail to develop eyes altogether, or develop reduced, non-functional ones. Similarly, if there’s no light in the environment, other organisms cannot see you, with the result that many of these cave dwellers lose their skin pigmentation as well. (In a parallel fashion, people tend to lose track of their grooming and dressing habits when they know they aren’t going to leave the house. Now just imagine you would never leave the house again…)

Some recent research attempted to estimate the metabolic costs avoided by cave-dwelling fish that fail to develop functioning eyes and possess a reduced optic tectum (the brain region associated with vision in the surface-dwelling variety). To do so, researchers removed the brains and eyes from surface and cave (Pachón) forms of the fish and placed them in individual respirometry chambers. The oxygenated fluid that filled these chambers was replaced every 10 minutes, allowing measurements to be taken of how much oxygen was consumed by each brain over time. The brain and eyes of the surface fish consumed about 23% of the fish’s estimated resting metabolism (for smaller fish; for larger fish, this percentage was closer to 10%). By contrast, the eyeless brains of the cave fish only consumed about 10% of their metabolism (again, for the smaller fish; larger fish used about 5%). Breaking the numbers down for an estimate of vision specifically, the cost of the vision mechanisms was estimated to be about 5-15% of the resting metabolism in the surface fish. The cost of vision, it would seem, is fairly substantial.
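As a quick back-of-envelope check (my own arithmetic, using only the rounded percentages quoted above rather than the paper’s raw data), the surface-minus-cave difference in brain costs lands right in that 5-15% range attributed to vision:

```python
# Approximate share of resting metabolism consumed by brain + eyes,
# using the rounded figures quoted above (Moran et al., 2015).
surface = {"small fish": 0.23, "large fish": 0.10}
cave    = {"small fish": 0.10, "large fish": 0.05}

for size in surface:
    vision_cost = surface[size] - cave[size]  # crude estimate of what vision alone costs
    print(f"{size}: implied cost of vision ~ {vision_cost:.0%} of resting metabolism")
# Prints roughly 13% and 5%, consistent with the paper's 5-15% estimate for vision.
```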

Much better; just think of the long-term savings!

It is also worth noting that the other organs (hearts, digestive systems, and gonads) of the fish did not tend to differ between the surface- and cave-dwelling varieties, suggesting that the selective pressure against vision was rather specific, as one should expect given the domain-specific nature of adaptive problems: just because you don’t have to see, it doesn’t mean you don’t have to circulate blood, eat, and mate. One lesson to take from the current results, then, is to appreciate that adaptive problems are rather specific, instead of being more general. Organisms don’t need to just “do reproductively useful things”, as such a problem space is too under-specified to result in any kind of useful adaptations. Instead, organisms need to do a variety of specific things, like avoid predators, locate food, and remove rivals (and each of those larger classes of problems is made up of very many sub-problems).

The second, and larger, point to draw out from this research is that the features of an organism – from the physiological to the cognitive – are not free to develop or use. While perceptions like vision, taste, morality, theory of mind, and so on might feel as if they come to us effortlessly, they certainly do not come free. Vision might not feel like lifting weights or going on a run, but the systems required to make it happen need fuel all the same; quite a lot of it, in fact, if the current results are any indication. The implication of this idea is that we are not allowed to take perceptions, or other psychological functioning, for granted; not if we want to understand them, that is. It’s not enough to say that such feelings or perceptions are “natural” or, in some sense, the default. There need to be reproductively-relevant benefits that justify the existence of any cognitive mechanism. Even a relatively minor but consistent drain on metabolic resources can become appreciable when considered over the span of an organism’s life.

To apply this thinking to a topic I’ve written about recently, we could consider stereotypes briefly. There are many psychologists who – and I am glossing this issue broadly – believe that the human mind contains mechanisms for generating beliefs about other groups which end up being, in general, very wrong. A mechanism which uses metabolic resources to generate beliefs that do not correspond well to reality would be a strange find indeed; kind of like a visual mechanism in the surface fish that does not actually result in the ability to navigate the world successfully. When it comes to the related notion of stereotype threat, there are researchers positing the existence of a cognitive mechanism that generates anxiety in response to the existence of stereotypes, resulting in its bearer performing worse at socially-important tasks. Now you have a metabolically costly cognitive mechanism which seems to be handicapping its host. These would be strange mechanisms to posit the existence of when one is not making (or testing) claims about how and why they might compensate their bearer in other, important ways. It is when you stop taking the existence of cognitive functioning for granted and need to justify it that new, better research questions and clearer thinking about the matter will begin to emerge.

References: Moran, D., Softley, R., & Warrant, E. (2015). The energetic costs of vision and the evolution of eyeless Mexican cavefish. Science Advances, 1, e1500363. DOI: 10.1126/sciadv.1500363

Tilting At The Windmills Of Stereotype Threat

If I had the power to reach inside your mind and affect your behavior, this would be quite the adaptive skill for me. Imagine being able to effortlessly make your direct competitors less effective than you, make those you find appealing more interested in associating with you, and, perhaps, even reach inside your own mind, improving your performance to levels you couldn’t previously reach. While it would be good for me to possess these powers, it would be decidedly worse for other people if I did. Why? Simply put, because my adaptive best interests and theirs do not overlap 100%. Improving my standing in the evolutionary race will often come at their expense, and being able to manipulate them effectively would do just that. This means they would be better off if they possessed the capacity to resist my fictitious mind-control powers. To bring this idea back down to reality, we could consider the relationship between parasites and hosts: parasites often make their living at their hosts’ expense, and the hosts, in turn, evolve defense mechanisms – like immune systems – to fight off the parasites.

 Now with 10% more Autism!

This might seem rather straightforward: avoiding manipulative exploitation is a valuable skill. However, the same kind of magical thinking present in the previous paragraph seems to be present in psychological research from time to time; the line of reasoning that goes, “people have this ability to reach into the minds of others and change their behavior to suit their own ends”. Admittedly, the reasoning is a lot more subtle and requires some digging to pick up on, as very few psychologists would ever say that humans possess such magical powers (with Daryl Bem being one notable exception). Instead, the line of thinking seems to go something like this: if I hold certain beliefs about you, you will begin to conform to those beliefs; indeed, even if such beliefs merely exist in your culture more generally, you will bend your behavior to meet them. If I happen to believe you’re smart, for example, you will become smarter; if I happen to believe you are a warm, friendly person, you will become warmer. This, of course, is expected to work in the opposite direction as well: if I believe you’re stupid, you will subsequently get dumber; if I believe you’re hostile, you will in turn become more hostile. This is a bit of an oversimplification, perhaps, but it captures the heart of these ideas well.

The problem with this line of thinking is precisely the same as the problem I outlined initially: there is a less than perfect (often far less than perfect) overlap between the reproductive best interests of the believers and those of their targets. If I allowed your beliefs about me to influence my behavior, I could be pushed and pulled in all sorts of directions I would rather not go. Those who would rather not see me succeed could believe that I will fail, which would generally have negative implications for my future prospects (unless, of course, other people could fight that belief by believing I would succeed, leading to an exciting psychic battle). It would be better for me to ignore their beliefs and simply proceed forward on my own. In light of that, it would be rather strange to expect humans to possess cognitive mechanisms which use the beliefs of others as inputs for deciding their own behavior in a conformist fashion. Not only are the beliefs of others hard to assess accurately, but conforming to them is not always a wise idea even if they’re inferred correctly.

This hasn’t stopped some psychologists from suggesting that we do basically that, however. One such line of research that I wanted to discuss today is known as “stereotype threat”. To pull a quick definition: “Stereotype threat refers to being at risk of confirming, as self-characteristic, a negative stereotype about one’s group”. From the numerous examples listed, a typical research paradigm involves some variant of the following: (1) get two groups together to take a test, where (2) the groups happen to differ with respect to cultural stereotypes about who will do well. Following that, you (3) make their group membership salient in some way. The expected result is that the group on the negative end of the stereotype will perform worse when its members are reminded of their group membership. To turn that into an easy example: men are believed to be better at math than women, so if you remind women of their gender prior to a math test, they ought to do worse than women not so reminded. The stereotype of women doing poorly at math actually makes women perform worse.

The psychological equivalent of getting Nancy Kerrigan’d

In the interests of understanding more about stereotype threat – specifically, its developmental trajectory with regard to how children of different ages might be vulnerable to it – Ganley et al (2013) ran three stereotype threat experiments with 931 male and female students, ranging from 4th to 12th grade. In their introduction, Ganley et al (2013) noted that some researchers regularly talk about the conditions under which stereotype threat is likely to have its negative impact: perhaps on hard questions, relative to easy ones; on math-identified girls but not non-identified ones; in mixed-sex groups but not single-sex groups, and so on. While some psychological phenomena are indeed contextually specific, one could also view all that talk of the rather specific contexts required for stereotype threat to obtain as a post-hoc justification for some sketchy data analysis (didn’t find the result you wanted? Try breaking the data into different groups until you do find it). Nevertheless, Ganley et al (2013) set up their experiments with these ideas in mind, doing their best to find the effect: they selected high-performing boys and girls who scored above the mid-point of math identification, used evaluative testing scenarios, and used difficult math questions.

Ganley et al (2013) even used some rather explicit stereotype threat inductions: rather than just asking students to check off their gender (or not do so), their stereotype-threat conditions often outright told the participants who were about to take the test that boys outperform girls. It doesn’t get much more threatening than that. Their first study had 212 middle school students who were told either that boys showed more brain activation associated with math ability and, accordingly, performed better than girls, or that both sexes performed equally well. In this first experiment, there was no effect of condition: the girls who were told that boys do better on math tests did not underperform relative to the girls who were told that both sexes do equally well. In fact, the data went in the opposite direction, with girls in the stereotype threat condition performing slightly, though not significantly, better. Their next experiment had 224 seventh-graders and 117 eighth-graders. In this stereotype threat condition, students were asked to indicate their gender on the test before they began it because, they were told, boys tended to outperform girls on these measures (this wasn’t mentioned in the control condition). Again, the results showed no stereotype threat at either grade and, again, the data went in the opposite direction, with the stereotype threat groups performing better.

Finally, their third study contained 68 fourth-graders, 105 eighth-graders, and 145 twelfth-graders. In this stereotype threat condition, students first solved an easy math problem concerning many more boys than girls being on the math team before taking their test (the control condition’s problem did not contain the sex manipulation). They also tried to make the test seem more evaluative in the stereotype threat condition (referring to it as a “test”, rather than “some problems”). Yet again, no stereotype threat effects emerged at any grade level, with two of the three means going in the wrong direction. No matter how they sliced it, no stereotype threat effects fell out; the data weren’t even consistently in the direction of stereotype threat being a negative thing. Ganley et al (2013) took their analysis a little further in the discussion section, noting that published studies of such effects found some significant effect 80% of the time. However, these effects were typically reported among other, non-significant findings; in other words, they were likely found after cutting the data up in different ways. By contrast, the three unpublished dissertations on stereotype threat all found nothing, suggesting that both data cheating and publication bias were probably at work in the literature (and they’re not the only ones to suggest as much).

“Gone fishing for P-values”

The current findings add to the growing list of psychological effects that fail to replicate. More importantly, however, the type of thinking that inspired this research doesn’t seem to make much sense in the first place, though that part doesn’t seem to get discussed at all. There are good reasons not to let the beliefs of others affect your performance; an argument needs to be made as to why we would be sensitive to such things, especially when they’re hypothesized to make us worse off, and no such argument is present. To make that point crystal clear, try to apply stereotype threat thinking to any non-human species and see how plausible it sounds. By contrast, a real theory, like kin selection, applies with just as much force to humans as it does to other mammals, birds, insects, and even single-celled organisms. If there is no solid (and plausible) adaptive reasoning in which to ground one’s work – as there isn’t with stereotype threat – it should come as no surprise that effects flicker in and out of existence.

References: Ganley, C., Mingle, L., Ryan, A., Ryan, K., Vasilyeva, M., & Perry, M. (2013). An examination of stereotype threat effects on girls’ mathematical performance. Developmental Psychology, 49, 1886-1897.

Replicating Failures To Replicate

There are moments from my education that have stuck with me over time. One such moment involved a professor teaching his class about what might be considered a “classic” paper in social psychology. I happened to be aware of this particular paper for two reasons: first, it was a consistent feature in many of my previous psychology classes and, second, the news had recently broken that when people tried to replicate the effect, they failed to find it. Now a failure to replicate does not necessarily mean that the findings of the original study were a fluke or the result of experimental demand characteristics (I happen to think they are), but that’s not even why this moment in my education stood out to me. What made this moment stand out is that when I emailed the professor after class to let him know the finding had recently failed to replicate, his response was that he was already aware of the failure. This seemed somewhat peculiar to me; if he knew the study had failed to replicate, why didn’t he at least mention that to his students? It seems like rather important information for the students to have and, frankly, something the person teaching the material has a responsibility to share, since ignorance was no excuse in this case.

“It was true when I was an undergrad, and that’s how it will remain in my class”

Stories of failures to replicate have been making the rounds again lately, thanks to a massive effort on the part of hundreds of researchers to try to replicate 100 published effects from three psychology journals. These researchers worked with the original authors, used the original materials, were open about their methods, pre-registered their analyses, and archived all their data. Of the 100 published papers, 97 reported their effect as statistically significant, with the other 3 right on the borderline of significance and interpreted as positive effects. Now there is debate over the value of using these kinds of statistical tests in the first place but, when the researchers tried to replicate these 100 effects using the statistical significance criterion, only 37 even managed to cross the bar (given that 89 were expected to replicate if the effects were real, 37 falls quite short of that goal).
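To get a rough sense of just how short that falls, here is a minimal sketch (my own arithmetic, treating each replication as an independent draw with the reported 89% expected success rate, which is a simplification) of how improbable 37 or fewer successes out of 100 would be if the original effects were all real:

```python
from scipy.stats import binom

expected_rate = 0.89   # proportion expected to replicate if the original effects were real
observed = 37          # replications that actually crossed the significance bar
trials = 100

# Probability of seeing 37 or fewer successes if each had an 89% chance of succeeding
p = binom.cdf(observed, trials, expected_rate)
print(f"P(37 or fewer successes out of 100) = {p:.2e}")  # effectively zero
```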

There are other ways to assess these replications, though. One method is to examine the differences in effect size. The 100 original papers reported an average effect size of about 0.4; the attempted replications saw this average drop to about 0.2. A full 82% of the original papers showed a stronger effect size than the attempted replications. While there was a positive correlation (about r = 0.5) between the two – the stronger the original effect, the stronger the replication effect tended to be – this still represents an important decrease in the estimated size of these effects, in addition to their statistical existence. Another method of measuring replication success – unreliable as it might be – is to get the researchers’ subjective opinions about whether the results seemed to replicate. On that front, the researchers felt that about 39 of the original 100 findings replicated; quite in line with the statistical data above. Finally, and perhaps worth noting, social psychology research tended to replicate less often than cognitive research (25% and 50%, respectively), and interaction effects replicated less often than simple effects (22% and 47%, respectively).

The scope of the problem may be a bit larger than that, however. In this case, the 100 papers upon which replication efforts were undertaken were drawn from three of the top journals in psychology. Assuming a positive correlation exists between journal quality (as measured by impact factor) and the quality of research they publish, the failures to replicate here should, in fact, be an underestimate of the actual replication issue across the whole field. If over 60% of papers failing to replicate is putting the problem a bit mildly, there’s likely quite a bit to be concerned about when it comes to psychology research. Noting the problem is only one step in the process towards correction, though; if we want to do something about it, we’re going to need to know why it happens.

So come join me in my armchair for some speculation

There are some problems people already suspect as being important culprits. First, there are biases in the publication process itself. One such problem is that journals seem to overwhelmingly prefer to publish positive findings; very few people want to read about an experiment that didn’t find anything. A related problem, however, is that many journals like to publish surprising, or counter-intuitive, findings. Again, this can be attributed to the idea that people don’t want to read about things they already believe are true: most people perceive the sky as blue, and research confirming this intuition won’t make many waves. However, I would also reckon that counter-intuitive findings are surprising to people precisely because they are more likely to be inaccurate descriptions of reality. If that’s the case, then a preference on the part of journal editors for publishing positive, counter-intuitive findings might set them up to publish a lot of statistical flukes.
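
To make that last bit of reasoning concrete, here’s a minimal Python sketch of the logic. Every number below is an assumption of mine for illustration, not a figure from the replication project: if surprising hypotheses are true less often before anyone runs the study, then a larger share of the significant results they produce will be flukes, even when the statistical test behaves exactly as advertised.

# Illustrative only: how prior plausibility changes the share of
# "significant" findings that reflect real effects. All numbers are
# assumptions, not data from the replication project.
def ppv(prior_true, power=0.8, alpha=0.05):
    true_hits = prior_true * power          # real effects that get detected
    false_hits = (1 - prior_true) * alpha   # null effects that sneak past p < .05
    return true_hits / (true_hits + false_hits)

print(ppv(0.50))  # plausible hypotheses: ~94% of significant results are real
print(ppv(0.10))  # surprising hypotheses: ~64%
print(ppv(0.02))  # very counter-intuitive hypotheses: ~25%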

There’s also the problem I’ve written about before, concerning what are known as “researcher degrees of freedom”; more colloquially, we might consider this a form of data manipulation. In cases like these, researchers are looking for positive effects, so they test 20 people in each group and peek at the data. If they find an effect, they stop and publish it; if they don’t, they add a few more people and peek again, continuing until they find what they want or run out of resources. They might also split the data up into various groups and permutations until they find a set of data that “works”, so to speak (break it down by male/female, or high/medium/low, etc). While they are not directly faking the data (though some researchers do that as well), they are being rather selective about how they analyze it. Such methods inflate the probability of finding an effect through statistical brute force, even if the effect doesn’t actually exist.
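
If you want to see just how much that kind of peeking matters, here is a rough simulation of the procedure described above; the group sizes, number of peeks, and stopping rule are all made-up assumptions rather than anyone’s actual protocol. Even when both groups are drawn from the same distribution, repeatedly testing and topping up the sample pushes the false positive rate well above the nominal 5%.

# Rough simulation of the "test 20, peek, add a few more, peek again"
# procedure; every parameter here is an illustrative assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_study(start_n=20, add_n=10, max_n=60, alpha=0.05):
    # Both groups come from the SAME distribution, so any "effect" is a false positive.
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))
    while True:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True                      # stop and "publish"
        if len(a) >= max_n:
            return False                     # ran out of resources
        a += list(rng.normal(size=add_n))    # add a few more participants...
        b += list(rng.normal(size=add_n))    # ...and peek at the data again

false_positive_rate = sum(peeking_study() for _ in range(2000)) / 2000
print(false_positive_rate)  # lands noticeably above the nominal 0.05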

This problem is not unique to psychology, either. A recent paper by Kaplan & Irvin (2015) examined research from 1970-2012 looking at the effectiveness of various drugs and dietary supplements for preventing or treating cardiovascular disease. There were 55 trials that met the authors’ inclusion criteria. What’s important to note about these trials is that, prior to the year 2000, none of them were pre-registered with respect to which variables they were interested in assessing; after 2000, every such study was pre-registered. Registering this research matters, as it prevents researchers from then conducting a selective set of analyses on their data. Sure enough, prior to 2000, 57% of trials reported statistically significant effects; after 2000, that number dropped to 8%. Indeed, about half the papers published after 2000 did report some statistically significant effects, but only for variables other than the primary outcomes they had registered. While this finding is not a failure to replicate per se, it certainly does make one wonder about the reliability of those non-registered findings.

And some of those trials were studying death as an outcome, so that’s not good…

There is one last problem I would like to mention; one whose drum I’ve been beating for the past several years. Assuming that pre-registering research in psychology would help weed out false positives (it likely would), we would still be faced with the problem that most psychology research would not find anything of value, if the above data are any indication. In the most polite way possible, this would lead me to ask a question along the lines of, “why are so many psychology researchers bad at generating good hypotheses?” Pre-registering a bad idea does not suddenly make it a good one, even if it makes the data analysis a little less problematic. This leads me to my suggestion for improving research in psychology: the requirement of actual theory for guiding research. In psychology, most “theories” are not really theories, but rather restatements of a finding. However, when psychologists begin to take an evolutionary approach to their work, the quality of research (in my obviously-biased mind) tends to improve dramatically. Even if the theory is wrong, making it explicit allows problems to be more easily discussed, discovered, and corrected (provided, of course, that one understands how to evaluate and test such theories, which many people unfortunately do not). Without guiding, foundational theories, the only thing you’re left with when it comes to generating hypotheses are the existing data and your intuitions which, again, don’t seem to be good guides for conducting quality research.

References: Kaplan, R. & Irvin, V. (2015). Likelihood of null effects of large NHLBI clinical trials has increased over time. PLoS One, 10, e0132382, doi:10.1371/journal.pone.0132382

Why Do We Torture Ourselves With Spicy Foods?

As I write this, my mouth is a bit aflame, owing to a side of beans which had been spiced with a hot pepper (a serrano, to be precise). Across the world (and across YouTube), people partake in the consumption of spicy – and spiced – foods. On the surface, this behavior seems rather strange, owing to the pain and other unpleasant feelings induced by such foods. To get a quick picture of how unpleasant these food additives can be, you could always try to eat a whole raw onion or a spicy pepper, though just imagining the experience is likely enough (and just in case it isn’t, YouTube will again be helpful). While this taste for spices might be taken for granted – it just seems normal that different people like different amounts of spicy food – such an ostensibly strange taste warrants a deeper analysis. Why do people love/hate the experience of eating spicy foods?

Word of caution: don’t touch your genitals afterwards. Trust me.

Food preferences do not just exist in a vacuum; the cognitive mechanisms which generate such preferences need to have evolved owing to some adaptive benefits inherent in seeking out or avoiding certain potential food sources. Some of these preferences are easier to understand than others: for example, our taste for foods we perceive as sweet – sugars – likely owes its existence to the high caloric density that such foods historically provided us (which used to be quite valuable when they were relatively rare; as they now exist in much higher concentrations in the first world – largely due to our preferences leading us to cultivate and refine them – these benefits can tip over into costs associated with overconsumption and obesity). By contrast, our aversion to foods which appear spoiled or rotten helps us avoid potentially harmful pathogens which might reside in them; pathogens which we would rather not purposefully introduce into our bodies. Similar arguments can be made for avoiding foods which contain toxic compounds and taste correspondingly unpleasant. When such toxins are introduced into our bodies, the typical physiological response is nausea and vomiting; responses which help remove the offending material as best we can.

So where do spicy foods fall with respect to what costs they avoid or benefits they provide? As many such foods do indeed taste unpleasant, it is unlikely that they are providing us with direct nutritional benefits the way that more pleasant-tasting foods do. That is to say we don’t like spicy foods because they are rich sources of calories or vital nutrients. Indeed, the spiciness associated with such foods represents chemical weaponry evolved on the part of the plants. As it turns out, these plants have their own set of adaptive best interests, which often include not being eaten at certain times or by certain species. Accordingly, they develop chemical weapons that dissuade would-be predators from chowing down (this is the reason that selectively breeding plants for natural insect resistance ends up making them more toxic for humans to eat as well; just because pesticides aren’t being used doesn’t mean you’re avoiding toxic compounds). Provided this analysis is correct, the natural question arises of why people would have a taste for plants that possess certain types and amounts of chemical weaponry designed to prevent their being eaten. On a hedonic level, growing a crop of jalapenos seems as peculiar as growing a crop of edible razor blades.

The most likely answer to this mystery comes from understanding what these chemical weapons do, not to humans, but to the pathogens that tend to accompany our other foods. If these chemical weapons are damaging to our bodies – as evidenced by the painful or unpleasant tastes that accompany them – it stands to reason that they are also damaging to some of the pathogens which might reside in our food. Provided our bodies are better able to withstand certain doses of these harmful chemicals, relative to the microbes in our food, then eating spicy foods could represent a trade-off between killing food-borne pathogens and the risk of poisoning ourselves. Provided the harm done to our bodies by the chemicals is less than the expected damage done by the pathogens, a certain perverse taste for spicy foods could evolve.
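
That trade-off can be written down as a simple expected-value comparison. Here’s a toy Python sketch of the logic; every figure in it is a placeholder I made up to make the argument explicit, not an estimate from any study.

# Toy expected-value framing of the spice trade-off; all figures are
# placeholders for illustration, not measured quantities.
def worth_spicing(p_contaminated, harm_from_pathogens, kill_rate, harm_from_spice):
    # Expected harm avoided by killing off food-borne microbes...
    expected_benefit = p_contaminated * harm_from_pathogens * kill_rate
    # ...weighed against the direct cost of eating the plant's chemical weaponry.
    return expected_benefit > harm_from_spice

print(worth_spicing(0.30, 10.0, 0.8, 1.0))  # warm climate, meat dish: True
print(worth_spicing(0.05, 10.0, 0.8, 1.0))  # cool climate, lower spoilage risk: False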

As before, you should still be wary of genital contact with such perverse tastes

A healthy degree of empirical evidence from around the world is consistent with such an adaptive hypothesis. One of the most extensive data sets focuses on recipes found in 93 traditional cookbooks from 36 different countries (Sherman & Billing, 1999). The recipes in these cookbooks were examined for which of 43 spices were added to meat dishes. Of the approximately 4,500 different meat dishes present in these books, the average number of spices called for by the recipes was 4, with 93% of recipes calling for at least one. Importantly, the distribution of these spices was anything but random. Recipes coming from warmer climates tended to call for a much greater use of spices. The probable reason for this finding is that, in warmer climates, food – especially meat – which would have gone unrefrigerated for most of human history (alien as that idea sounds today) tends to spoil more quickly than it does in cooler climates. Accordingly, as the degree and speed of spoilage increase in warmer climates, more anti-microbial spices can be added to dishes to help combat food-borne illness. To use one of their examples, the typical Norwegian recipe called for 1.6 spices per dish and the recipes only mentioned 10 different spices; in Hungary, the average number of spices per dish was 3, and up to 21 different spices were referenced. It is not too far-fetched to go one step further and suggest that people indigenous to such regions might also have evolved slightly different tolerances for spices in their meals.

Even more interestingly, those spices with the strongest anti-microbial effects (such as garlic and onions) also tended to be the ones used more often in warmer climates, relative to cooler ones. Among the spices which had weaker effects, the correlation between temperature and spice use ceased to exist. Nevertheless, the most inhibitory spices were also the ones that people tended to use most regularly across the globe. Further, the authors also discuss the trade-off between balancing the fighting of pathogens against the possible toxicity of such spices when consumed in large quantities. A very interesting point bearing on that matter concerns the dietary preferences of pregnant women. While an adult female’s body might be able to tolerate the toxicity inherent in such compounds fairly well, the developing fetus might be poorly equipped for the task. Accordingly, women in their first trimester tend to show a shift in food preferences towards avoiding a variety of spices, just as they also tend to avoid meat dishes. This shift in taste preferences could well reflect the new variable of the fetus being introduced to the usual cost/benefit analysis of adding spices to foods.

An interesting question related to this analysis was also posed by Sherman & Billing (1999): do carnivorous animals ingest similar kinds of spices? After all, if these chemical compounds are effective at fighting food-borne pathogens, carnivores – especially scavengers – might have an interest in using such dietary tricks as well (provided they did not stumble upon a different adaptive solution). While animals do not appear to spice their foods the way humans do, the authors do note that vegetation makes up a small portion of many carnivores’ diets. Having owned cats my whole life, I confess I have always found their habit of eating the grass outside to be quite a bit odd: not only does the grass not seem to be a major part of a cat’s diet, but it often seems to make them vomit with some regularity. While they present no data bearing on this point, Sherman & Billing (1999) do float the possibility that supplementing their diet with vegetation might be a variant of that same kind of spicing behavior: carnivores eat vegetation not necessarily for its nutritional value, but rather for its possible anti-microbial benefits. It’s certainly an idea worth examining further, though I know of no research at present to have tackled the matter. (As a follow-up, it seems that ants engage in this kind of behavior as well)

It’s a point I’ll bear in mind next time she’s vomiting outside my window.

I find this kind of analysis fascinating, frankly, and would like to take this moment to mention that these fascinating ideas would have been quite unlikely to be stumbled upon without the use of evolutionary theory as a guide. The explanation you might get when asking people why we spice food typically sounds like “because we like the taste the spice adds”; a response as uninformative as it is incorrect, which is to say “mostly” (and if you don’t believe that last part, go ahead and enjoy your mouthfuls of raw onion and garlic). The proximate taste explanation would fail to predict the regional differences in spice use, the aversion to eating large quantities of them (though this is a comparative “large”, as a slice of jalapeno can be more than some people can handle), and the maternal data concerning aversions to spices during critical fetal developmental windows. Taste preferences – like any psychological preferences – are things which require deeper explanations. There’s a big difference between knowing that people tend to add spices to food and knowing why people tend to do so. I would think that findings like these would help psychology researchers understand the importance of adaptive thinking. At the very least, I hope they serve as food for thought.

References: Sherman, P. & Billing, J. (1999). Darwinian gastronomy: Why we use spices. Bioscience, 49, 453–463.

The Altruism Of The Rich And The Poor

Altruistic behavior is a fascinating topic. On the one hand, it’s something of an evolutionary puzzle as to why an organism would provide benefits to others at an expense to itself. A healthy portion of this giving has already been explained via kin selection (providing resources to those who share an appreciable portion of your genes) and reciprocal altruism (giving to you today increases the odds of you giving to me in the future). As these phenomena have, in a manner of speaking, been studied to death, they’re a bit less interesting; all the academic glory goes to people who tackle new and exciting ideas. One such new and exciting realm of inquiry (new at least as far as I’m aware, anyway) concerns the social regulations and sanctions surrounding altruism. A particularly interesting case I came across some time ago concerned people actually condemning Kim Kardashian for giving to charity; specifically, for not giving enough. Another case involved the turning away of a sizable charitable donation from Tucker Max so as to avoid a social association with him.

*Unless I disagree with your personality; in that case, I’ll just starve

Just as it’s curious that people are altruistic towards others at all, then, it is, perhaps, more curious that people would ever turn down altruism or condemn others for giving it. To examine one more example that crossed my screen today, I wanted to consider two related articles. The first of the articles concerns charitable giving in the US. The point I wanted to highlight from that piece is that, as a percentage of their income, the richest section of the population tends to give the largest portion to charity. While one could argue that this is obviously the case because the rich have more money available which they don’t need to survive, that idea would fail to explain the point that charitable giving appears to evidence a U-shaped distribution, in which the richest and poorest sections of the population contribute a greater percentage of their income than those in the middle (though how to categorize the taxes paid by each group is another matter). The second article I wanted to bring up condemned the richer section of the population for giving less than they used to, compared to the poor, who had apparently increased the percentage they give. What’s notable about that analysis of the issue is that the former fact – that the rich still tended to donate a higher percentage of their income overall – is not mentioned at all. I imagine that such an omission was intentional.

Taken together, all these pieces of information are consistent with the idea that there’s a relatively opaque strategic element which surrounds altruistic behavior. While it’s one people might unconsciously navigate with relative automaticity, it’s worthwhile to take a step back and consider just how strange this behavior is. After all, if we saw this behavior in any other species, we would be very curious indeed as to what led them to do what they did; perhaps we would even forgo the usual moralization that accompanies and clouds these issues while we examined them. So, on the subject of rich people and strategic altruism, I wanted to review a unique data set from Smeets, Bauer, & Gneezy (2015) concerning the behavior of millionaires in two standard economic games: the dictator and ultimatum games. In the former, the participant unilaterally decides how €100 will be divided between themselves and another participant, and that split is final; in the latter, the participant proposes how €100 will be split between themselves and a receiver. If the receiver accepts the offer, both players get paid according to the proposed division; if the receiver rejects it, both players get nothing.
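
For readers unfamiliar with these games, the payoff rules can be sketched in a few lines of Python. This is just a bare-bones description of the standard games to make the structure clear; it is not the authors’ experimental software, and the example splits are arbitrary.

# Bare-bones payoff rules for the two games (pot of 100 euros); the
# example splits below are arbitrary, not results from the study.
def dictator_game(amount_given, pot=100):
    # The dictator's split is final; the receiver has no say.
    return pot - amount_given, amount_given      # (dictator, receiver)

def ultimatum_game(amount_offered, receiver_accepts, pot=100):
    # A rejected offer leaves both players with nothing.
    if receiver_accepts:
        return pot - amount_offered, amount_offered
    return 0, 0

print(dictator_game(30))           # (70, 30)
print(ultimatum_game(45, True))    # (55, 45)
print(ultimatum_game(45, False))   # (0, 0)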

In the dictator game, approximately 200 Dutch millionaires (those with over €1,000,000 in their bank accounts) were told they were either playing the game with another millionaire or with a low-income receiver. According to data from the existing literature on these games, the average amount given to the receiver in a dictator game is a little shy of 30%, with only about 5% of dictators allocating all the money to the recipient. In stark contrast, when paired with a low-income individual, millionaire dictators gave an average of 71% of the money to the other player, with 45% of dictators giving the full €100. When paired with another millionaire recipient, however, the millionaire dictators only gave away approximately 50% of the €100 sum which, while still substantially more generous than the literature average, is less generous than their giving towards the poor.

The rich; maybe not as evil and cold as they’re imagined to be

Turning to the data from the ultimatum games, we find that people are often more generous in their offers to receivers in such circumstances, owing to the real possibility that a rejected offer can leave the proposer with nothing. Indeed, the reported percentage of the offers in ultimatum games from the wider literature is close to 45% of the total sum (as compared with 30% in dictator games). In the ultimatum game, the millionaires were actually less generous towards the low-income recipients than in the dictator game – bucking the overall trend – but were still quite generous overall, giving an average of 64% of the total sum, with 30% of proposers giving away the full €100 to the other person (as compared with 71% and 45% from above). Interestingly, when paired with other millionaires in the ultimatum game, millionaire proposers gave precisely the same amounts they tended to give in the dictator games. In that case, the strategic context had no effect on their giving.

In sum, millionaires tended to evidence quite a bit more generosity in giving contexts than previous, lower-income samples had. However, this generosity was largely confined to instances of giving to those in greater need, relative to a more general kind of altruism. In fact, if you were in need and interested in receiving donations from rich targets, it would seem to serve your goal better not to frame the request as some kind of exchange relationship through which the rich person will eventually receive some monetary benefit, as that kind of strategic element appears to result in less giving.

Why should this be the case, though? One possible explanation that comes to mind builds upon the ostensibly obvious explanation for rich people giving more that I mentioned initially: the rich already possess a great number of resources they don’t require. In economic terms, the marginal value of additional money for them is lower than it is for the poor. When the giving is economically strategic, then, the benefit to be received is more money which, as I just suggested, has a relatively low marginal value to the rich recipient. By contrast, when the giving is driven more by altruism, the benefits to be received are predominantly social in nature: the gratitude of the recipients, possible social status from observers, esteem from peers, and so on. The other side of this giving coin, as I also mentioned at the beginning, is that there can also be social costs associated with not giving enough for the rich. As building social alliances and avoiding condemnation might have different marginal values than additional units of money, the rich could perceive greater benefits from giving in certain contexts, relative to exchange relationships.
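
To put a rough number on that “marginal value” idea, here’s a back-of-the-envelope sketch using a concave (log) utility function, which is one common, though by no means unique, way of modeling diminishing returns; the wealth figures are arbitrary stand-ins, not data from the study.

# Diminishing marginal value of money under a concave (log) utility
# curve; the wealth levels are arbitrary illustrations.
import math

def marginal_value(extra, current_wealth):
    return math.log(current_wealth + extra) - math.log(current_wealth)

print(marginal_value(100, 2_000))      # ~0.049: 100 euros matters when money is tight
print(marginal_value(100, 1_000_000))  # ~0.0001: barely registers for a millionaire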

Threats – implicit or explicit – do tend to be effective motivators for giving

Such an explanation could also, at least in principle, help explain why the poorest section of the population tends to be relatively charitable, compared to the middle: the poorest individuals face a greater need for social alliances, owing to the relatively volatile nature of their position in life. As economic resources might not be stable, poorer individuals might be better served by using more of them to build stronger social networks when money is available. Such spending would allow the poor to hedge and defend against the possibility of future bad luck; that friend you helped out today might be able to give you a place to sleep next month if you lose your job and can’t make rent. By contrast, those in the middle of the economic world are not facing the same degree of social need as the lower classes while, at the same time, not having as much disposable income as the upper classes (and, accordingly, might also be facing less social pressure to be generous with what they do have), leading to them giving less. Considerations of social need guiding altruism also fit nicely with the moral aspect of altruism, which is just one more reason for me to like the idea.

References: Smeets, P., Bauer, R., & Gneezy, U. (2015). Giving behavior of millionaires. Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1507949112

Examining The Performance-Gender Link In Video Games

Like many people around my age or younger, I’m a big fan of video games. I’ve been interested in these kinds of games for as long as I can remember, and they’ve been the most consistent form of entertainment in my life, often winning out over the company of other people and, occasionally, food. As I – or pretty much anyone who has spent time within the gaming community – can attest, the experience of playing these games with others can frequently lead to, shall we say, less-than-pleasant interactions with those who are upset by losses. Whether being derided for your own poor performance, good performance, good luck, or tactics of choice, negative comments are a frequent occurrence in the competitive online gaming environment. There are some people, however, who believe that simply being a woman in such environments yields a negative reception from a predominantly male community. Indeed, some evidence consistent with this possibility was recently published by Kasumovic & Kuznekoff (2015) but, as you will soon see, the picture of hostile behavior towards women that emerges is much more nuanced than it is often credited as being.

Aggression, video games, and gender relations; what more could you want to read about?

As an aside, it is worth mentioning that some topics – sexism being among them – tend to evade clear thinking because people have some kind of vested social interest in what they have to say about the association value of particular groups. If, for instance, people who play video games are perceived negatively, I would likely suffer socially by extension, since I enjoy video games myself (so there’s my bias). Accordingly, people might report or interpret evidence in ways that aren’t quite accurate so as to paint certain pictures. This issue seems to rear its head in the current paper on more than one occasion. For example, one claim made by Kasumovic & Kuznekoff (2015) is that “…men and women are equally likely to play competitive video games”. The citation for this claim is listed as “Essential facts about the computer and video game industry (2014)”. However, in that document, the word “competitive” does not appear at all, let alone a gender breakdown of competitive game play. Confusingly, the authors subsequently claim that competitive games are frequently dominated by males in terms of who plays them, directly contradicting the former idea. Another claim made by Kasumovic & Kuznekoff (2015) is that women are “more often depicted as damsels in distress”, though the paper they cite in support of that claim does not appear to contain any breakdown of women’s actual representation as video game characters, instead measuring people’s perceptions of women’s representation in games. While such a claim may indeed be true – women may be depicted as in need of rescue more often than they’re depicted in other roles and/or relative to men’s depictions – it’s worth noting that the citation they use does not contain the data they imply it does.

Despite these inaccuracies, Kasumovic & Kuznekoff (2015) take a step in the right direction by considering how the reproductive benefits of competition have shaped male and female psychologies when approaching the women-in-competitive-video-games question. For men, one’s place in a dominance hierarchy was quite relevant in determining one’s eventual reproductive success, leading to more overt strategies of social hierarchy navigation. These overt strategies include the development of larger, more muscular upper bodies in men, suited for direct physical contests. By contrast, women’s reproductive fitness was often less affected by their status within the social hierarchy, especially with respect to direct physical competitions. As men and women begin to compete in the same venues where differences in physical strength no longer determine the winner – as is the case in online video games – this could lead to some unpleasant situations for particular men who have the most to lose by having their status threatened by female competition.

In the interests of being more explicit about why female involvement in typically male-style competitions might be a problem for some men, let’s employ some Bayesian reasoning. In terms of physical contests, larger men tend to dominate smaller ones; this is why most fighting sports are separated into different classes based on the weight of the combatants. So what are we to infer when a smaller fighter consistently beats a larger one? Though these aren’t mutually exclusive, we could infer either that the smaller fighter is very skilled or that the larger fighter is particularly unskilled. Indeed, if the larger fighter is losing both to people of his own weight class and to those of a weight class below him, the latter interpretation becomes more likely. It doesn’t take much of a jump to replace size with sex in this example: because men tend to be stronger than women, our Bayesian priors should lead us to expect that men will win in direct physical competition over women, on average. A man who performs poorly against both men and women in physical competition is going to suffer a major blow to his social status and reputation as a fighter.
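
For those who like their Bayesian reasoning in numbers, here’s a toy version of that update in Python. Every probability in it is an assumption I’ve made up for illustration (and it treats each loss as an independent piece of evidence), but it shows how quickly repeated losses to smaller opponents should shift our belief that the larger fighter is simply unskilled.

# Toy Bayesian update for the fighter example; all probabilities are
# made-up assumptions, and losses are treated as independent given skill.
def posterior_unskilled(prior, p_loss_if_unskilled=0.7, p_loss_if_skilled=0.1):
    # P(unskilled | he just lost to a smaller opponent), via Bayes' rule.
    numerator = prior * p_loss_if_unskilled
    denominator = numerator + (1 - prior) * p_loss_if_skilled
    return numerator / denominator

belief = 0.2                       # prior: most fighters at his size are competent
for loss in range(3):              # he keeps losing to smaller opponents
    belief = posterior_unskilled(belief)
    print(round(belief, 2))        # ~0.64, then ~0.92, then ~0.99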

It’ll be embarrassing for him to see that replayed five times from three angles.

While winning in competitive video games does not rely on physical strength, a similar type of logic applies there as well: if men tend to be the ones overwhelmingly dominating a video game in terms of their performance, then a man who performs poorly has the most to lose from women becoming involved in the game, as he now might compare poorly both to the standard reference group and to the disfavored minority group. By contrast, men who are high performers in these games would not be bothered by women joining in, as they aren’t terribly concerned about losing to them and having their status threatened. This yields some interesting predictions about what kind of men are going to become hostile towards women. By comparison, other social and lay theories (which are often hard to separate) do not tend to yield such predictions, instead suggesting that both high- and low-performing men might be hostile towards women in order to remove them from a type of male-only space; what one might consider a more general form of sexist discrimination.

To test these hypotheses, Kasumovic & Kuznekoff (2015) reported on some data collected while they were playing Halo 3, during which time all matches and conversations within the game were recorded. During these games, the authors had approximately a dozen neutral phrases prerecorded in either a male or female voice, which they would play at appropriate times in the match. These phrases served to cue the other players as to the ostensible gender of the researcher. The matches themselves were 4 vs 4 games in which each team’s objective was to kill more members of the enemy team than it lost. All in-game conversations were transcribed, with two coders examining the transcripts for comments directed towards the researcher playing the game and classifying them as positive, negative, or neutral. The performance of the players making these comments was also recorded with respect to whether the game was won or lost, that player’s overall skill level, and the number of their kills and deaths in the match, so as to get a sense of the type of player making them.

The data represented 163 games of Halo, during which 189 players directed comments towards the researcher across 102 of the games. Of those 189 players who made comments, all of them were male. Only the 147 commenters who were teammates of the researcher were retained for analysis. In total, then, 82 players directed comments towards the female-voiced player, whereas 65 directed comments towards the male-voiced player.

A few interesting findings emerged with respect to the gender manipulation. While I won’t mention all of them, I wanted to highlight a few. First, when the researcher used the female voice, higher-skill male players tended to direct significantly more positive comments towards them, relative to low-skill players (β = -.31); no such trend was observed for the male-voiced character. Additionally, as the difference between the female-voiced researcher and the commenting player grew larger (specifically, as the person making the comment was of a progressively higher rank than the female-voiced player), the number of positive comments tended to increase. Similarly, high-skill male players tended to direct fewer negative comments towards the female-voiced researcher as well (β = -.18). Finally, in terms of their kills during the match, poor-performing males directed more negative comments towards female-voiced characters, relative to high-performing men (β = .35); no such trend was evident for the male-voiced condition.

“I’m bad at this game and it’s your fault people know it!”

Taken together, the results seem to point in a pretty consistent direction: low-performing men tended to be less welcoming of women in their competitive game of choice, perhaps because it highlighted their poor performance to a greater degree. By contrast, high-performing males were relatively less troubled by the ostensible presence of women, dipping over into being quite welcoming of them. After all, a man being good at the game might well be an attractive quality to women who also enjoy the world of Esports, and what better way to kick off a potential relationship than with a shared hobby? As a final point, it is worth noting that the truly sexist types might present a different pattern of data, relative to people who were just making positive or negative comments: only 11 of the players (out of 83 who made negative comments and 189 who made any comments) were classified as making comments considered to be “hostile sexism”, which did not yield a large enough sample for a proper analysis. The good news, then, seems to be that such comments are at least relatively rare.

References: Kasumovic, M. & Kuznekoff, J. (2015). Insights into sexism: Male status and performance moderates female-directed hostile and amicable behavior. PLoS One, 10: e0131613. doi:10.1371/journal.pone.0131613

Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the art of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on an engagement ring, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such outrageous behavior is, “why do you people seem to waste money like this?” A second, related question that might be asked once we have an answer to the first question (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is how we can understand who is most likely to spend money in a conspicuous fashion. Alternatively, this question could be framed by asking about what contexts tend to favor conspicuous consumption. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders or, if you’re a bit strange, to anyone trying to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth instead?

The first question – why do people conspicuously consume – is perhaps the easier one to answer initially, as it’s been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing the tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there were some kind of survival benefit to those long, colorful tail feathers, we would expect both sexes to develop them, not just the males.

However, it is because these feathers are costly that they are useful signals, since males in relatively poor condition could not shoulder their costs effectively. It takes a healthy, well-developed male to be able to survive and thrive in spite of carrying these trains of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues for such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to just say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.
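
The logic of why cost keeps the signal honest can be sketched with a few invented payoff numbers; this is a deliberately cartoonish version of handicap-style signaling models, not anything taken from the papers discussed here. The point is simply that the same ornament is cheaper to bear for a high-quality male, so only high-quality males come out ahead by growing it.

# Cartoon handicap-style payoffs; every number is an invented assumption.
def net_payoff(quality, grows_ornament, mating_benefit=10.0, base_cost=20.0):
    # The same ornament costs less to bear the higher the male's quality.
    cost = base_cost * (1 - quality) if grows_ornament else 0.0
    benefit = mating_benefit if grows_ornament else 0.0
    return benefit - cost

for quality in (0.9, 0.3):
    print(quality, net_payoff(quality, True), net_payoff(quality, False))
# High quality (0.9): roughly +8 with the ornament vs 0 without, so it pays.
# Low quality (0.3): roughly -4 with the ornament, so faking doesn't pay,
# which is what keeps the signal honest.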

Costly displays, then, owe their existence to the honesty they impart to a signal. Human consumption patterns should be expected to follow a similar pattern: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we would need to begin by thinking about matters such as what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Towards that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups. Specifically, the paper examined racial patterns of spending on what were dubbed visible goods: objects which are conspicuous in anonymous interactions and portable, such as jewelry, clothing, and cars. These are goods designed to be luxury items which others will frequently see, relative to other, less-visible luxury items, such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al (2009), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than Whites (about 40% and 25% less, respectively), but this difference needs to be viewed with respect to each group’s relative income. After all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated from their overall reported spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of their relative spending on a number of different categories emerged. Specifically, it was found that Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size as well. The average White household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household would spend approximately $9,000 on such purchases. These purchases come at the expense of all other categories (which should be expected, as the money has to come from somewhere), meaning that more money spent on visible goods often means less spent on education, health care, and entertainment.

There are some other interesting findings to mention. One – which I find rather notable, but the authors don’t seem to spend any time discussing – is that racial differences in consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible consumption gap appears to decline as one goes from single to married. The numbers Charles et al (2009) mention estimate that the average percentage of budgets used on visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that they represent both. Getting older and getting into relationships tended to reduce the racial divide in visible goods consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al (2009) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing. People are interested in signaling their economic standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than Whites, then people might tend to assume that a Black person has a lower economic standing. To overcome this assumption, then, Black individuals should be particularly motivated to signal that they do not, in fact, have a lower economic standing more typical of their group. In brief: as the average income of a group drops, those with money should be particularly inclined to signal that they are not as poor as other people below them in their group.

In support of this idea, Charles et al (2009) further analyzed their data, finding that the average spending on visible luxury goods declined in states with higher average incomes, just as it also declined among racial groups with higher average incomes. In other words, a higher average income for a racial group within a state tended to strongly reduce the percentage of that group’s consumption that was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first of which being that it’s incomplete as it stands. From my reading of it, it’s a bit unclear to me how the explanation works for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one is a member of a lower-income group, one’s own income is higher than that group’s average. However, that explanation would not account for the age and marital status information I mentioned before without adding other assumptions, nor would it directly explain the benefits which arise from signaling one’s economic status in the first place. Moreover, if I’m understanding the results properly, it wouldn’t directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing the aggregate wealth shouldn’t have much of an impact, as “rich” and “poor” are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer to this question hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, being in command of resources also tends to make one appear to be a more desirable mate as well. A healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your sexual preferences.

However, recognition of that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To place it in an easy example, there’s a big difference between having access to no food and some food; there’s less of a difference between having access to some food and good food; there’s less of a difference still between good food and great food. The same holds for all manner of other resources. As the marginal value of resources decreases as overall access to resources increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there’s less value in signaling that one is wealthier than another if that wealth difference isn’t going to amount to the same degree of marginal benefit.
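
Here is the same point in rough numbers, using a concave (log) curve as a stand-in for how much an ally or mate benefits from your resources; the baselines and the size of the wealth advantage are arbitrary choices of mine, not figures from Charles et al. A fixed wealth advantage is worth a lot where everyone is poor and very little where everyone is already comfortable, so the incentive to advertise it shrinks accordingly.

# Value of the same wealth advantage at different community baselines,
# under a concave (log) stand-in for "benefit to an ally"; all figures
# are arbitrary illustrations.
import math

def value_of_advantage(community_baseline, advantage=20_000):
    return math.log(community_baseline + advantage) - math.log(community_baseline)

print(value_of_advantage(10_000))   # ~1.10: a big deal in a poor community
print(value_of_advantage(50_000))   # ~0.34: less impressive
print(value_of_advantage(200_000))  # ~0.10: barely worth signaling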

So, provided that wealth has a higher marginal value in poorer communities – like Black and Hispanic ones, relative to Whites – we should expect more signaling of it in those contexts. This logic could explain the racial gap on spending patterns. It’s not that people are trying to avoid a negative association with a poor reference group as much as they’re only engaging in signaling to the extent that signaling holds value to others. In other words, it’s not about my signaling to avoid being thought of as poor; it’s about my signaling to demonstrate that I hold a high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purposes of attracting a mate, as they already have one. They might engage in such purchases for the purposes of retaining that mate, though such purchases should involve spending money on visible items for other people, rather than for themselves. Further, as people age, their competition in the mating market tends to decline for a number of reasons, such as existing children, an inability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off, again, because the marginal value of sending such signals has surely declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors which might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual’s life history. To the extent that one is following a faster life history strategy – reproducing earlier, taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in such visible consumption, as the marginal value of signaling that you have resources now is higher when the stability of those resources (or of your future) is called into question. The current data do not speak to this possibility, however. Additionally, one’s sexual strategy might also be a valuable piece of information, given the links we saw with age and marital status. As these ornaments are predominantly used to attract the attention of prospective mates in nonhuman species, it seems likely that individuals with a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly. More attention is important if you’re looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the “signaling to not seem as poor as others” hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2009). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.