Sinking Costs

My cat displays a downright irrational behavior: she enjoys stalking and attacking pieces of string. I would actually say that this behavior extends beyond enjoyment to the point of active craving. It’s fairly common for her to meow at me until she gets my attention before running over to her string and sitting by it, repeating this process until I play with her. At that point, she will chase it, claw at it, and bite it as if it were a living thing she could catch. This is irrational behavior for the obvious reason that the string isn’t prey; it’s not the type of thing it is appropriate to chase. Moreover, despite numerous opportunities to learn this, she never seems to cease the behavior, continuing to treat the string like a living thing. What could possibly explain this mystery?

If you’re anything like me, you might find that entire premise rather silly. My cat’s behavior only looks irrational when compared against an arguably-incorrect frame of reference; one in which my cat ought to only chase things that are alive and capable of being killed/eaten. There are other ways of looking at the behavior which make it understandable. Let’s examine two such perspectives briefly. The first of these is that my cat is – in some sense – interested in practicing for future hunting. In much the same way that people might practice in advance of a real event to ensure success, my cat may enjoy chasing the string because of the practice it affords her for achieving successful future hunts. Another perspective (which is not mutually exclusive) is that the string might give off proximate cues that resemble those of prey (such as ostensibly self-directed movement) which in turn activate other cognitive programs in my cat’s brain associated with hunting. In much the same way that people watch cartoons and perceive characters on the screen, rather than collections of pixels or drawings, my cat may be responding to proximate facsimiles of cues that signaled something important over evolutionary time when she sees strings moving.

The point of this example is that if you want to understand behavior – especially behavior that seems strange – you need to place it within its proper adaptive context. Simply calling something irrational is usually a bad strategy for figuring out what is going on, as no species has evolved cognitive mechanisms that exist because they encouraged that organism to behave in irrational, maladaptive, or otherwise pointless ways. Any such mechanism would represent a metabolic cost endured for no benefit – or worse, for a net cost – and it would quickly disappear from the population, outcompeted by organisms that didn’t make such silly mistakes.

For instance, burying one’s head in the proverbial sand doesn’t help avoid predators

Today I wanted to examine one such behavior that gets talked about fairly regularly: what is referred to as the sunk-cost fallacy (the term implying a mistake is occurring). It refers to cases where people make decisions based on previous investments, rather than future expected benefits. For instance, if you happened to have a Master’s degree in a field that isn’t likely to present you with a job opportunity, the smart thing to do (according to most people, I imagine) would be to cut your losses and retrain in a field that is likely to offer work. The sunk-cost fallacy here might represent saying to yourself, “Well, I’ve already put so much time into this program that I might as well put in more and get that PhD,” even though committing further resources is more than likely going to be a waste. In another case, someone might continue to pour money into a failing business venture because they had already invested most of their life savings in it. In fact, the tendency to keep investing in such projects is usually predictable from how much was invested in the past: the more you have already put in, the more likely you are to see it through to its conclusion. I’m sure you can come up with your own examples of this from things you’ve either seen or done in the past.

On the face of it, this behavior looks irrational. You cannot get your previous investments back, so why should they have any sway over future decision making? If you end up concluding that such behavior couldn’t possibly be useful – that it’s a fallacious way of thinking – there’s a good chance you haven’t thought about it enough yet. To begin understanding why sunk costs might factor into decision making, it’s helpful to start with a basic premise: humans did not evolve in a world where financial decisions – such as business investments – were regularly made (if they were made at all). Accordingly, whatever cognitive mechanisms underlie sunk-cost thinking likely have nothing at all to do with money (or the pursuit of degrees, or other such endeavors). If we are using cognitive mechanisms to manage tasks they did not evolve to solve, it shouldn’t be surprising that we see some strange decisions cropping up from time to time. In much the same way, cats are not adapted to worlds with toys and strings. Whatever cognitive mechanism impels my cat to chase them, it is not adapted for that function.

So – when it comes to sunk costs – what might the cognitive mechanisms leading us to make these choices be designed to do? While humans might not have done a lot of financial investing over our evolutionary history, we sure did a lot of social investing. This includes protecting, provisioning, and caring for family members, friends, and romantic partners who in turn do the same for you. Such relationships need to be managed and broken off from time to time. In that regard, sunk costs begin to look a bit different.  

“Well, this one is a dud. Better to cut our losses and try again”

On the empirical end, it has been reported that people respond to social investments in a different way than they do financial ones. In a recent study by Hrgović & Hromatko (2017), 112 students were asked to respond to a stock market task and a social task. In the financial task, they read about a hypothetical investment they had made in their own business, but they had been losing value. The social tasks were similar: participants were told they had invested in a romantic partner, a sibling, and a friend. All were suffering financial difficulties, and the participant had been trying to help. Unfortunately, the target of this investment hadn’t been pulling themselves back up, even turning down job offers, so the investments were not currently paying off. In both the financial and social tasks, participants were then given the option to (a) stop investing in them now, (b) keep investing for another year only, or (c) keep investing indefinitely until the issue was resolved. The responses and time to response were recorded.

When it came to the business investment, about 40% of participants terminated future investments immediately; in the social contexts, these numbers were about 35% in the romantic partner scenario, 25% in the sibling scenario, and about 5% in the friend scenario. The numbers for investing another year were about 35% in the business context, 50% in the romantic one, and about 65% in the sibling and friend conditions. Finally, about 25% of participants would invest indefinitely in the business, 10% in the romantic partner, 5% in the sibling, and 30% in the friendship. In general, the picture that emerges is that people were willing to terminate the business investments much more readily than the social ones. Moreover, the time it took to make a decision was also longer in the business context, suggesting that people found the decision to continue investing in social relationships easier. Phrased in terms of sunk costs, people appeared to be more willing to factor those costs into the decision to keep investing in social relationships.

So at least you’ll have company as you sink into financial ruin

The question remains as to why that might be. Part of the answer no doubt involves opportunity costs. In the business world, if you want to invest your money in a new venture, doing so is relatively easy: your money is just as green as the next person’s. It is far more difficult to just go out into the world and get yourself a new friend, sibling, or romantic partner. Lots of people already have friends, family, and romantic partners and aren’t looking to add to that list, as their investment potential in that realm is limited. Even if they are looking to add to it, they might not be looking to add you. Accordingly, the expected value of finding a better relationship needs to be weighed against the time it takes to find it, as well as the degree of improvement it would likely yield. If you cannot just go out into the world and find new relationships with ease, breaking off an existing one could be more costly when weighed against the prospect of waiting it out to see if it improves in the future.

There are other factors to consider as well. For instance, the return on social investment may often not be all that immediate and, in other cases, might come from sources other than the person being invested in. Taking those in order, if you break off social investments with others at the first sign of trouble – especially deeper, longer-lasting relationships – you may develop a reputation as a fair-weather friend. Simply put, people don’t want to invest in and be friends with someone who is liable to abandon them when they need it most. We’d rather have friends who are deeply and honestly committed to our welfare, as those can be relied on. Breaking off social relationships too readily demonstrates to others that one is not all that appealing as a social asset, making you less likely to have a place in their limited social roster.

Further, to invest in one person is also to invest in their social network. If you take care of a sick child, you’re not doing so in the hope that the child will pay you back. Doing so might ingratiate you to their parents, however, and perhaps others as well. This can be contrasted with investing in a business: trying to help a failing business isn’t liable to earn you any brownie points as an attractive social asset to other businesses looking to court your investment, nor is Ford going to make good on the poor investment you made in BP because the two companies are friends with each other.

Whatever the explanation, it seems that the human willingness to succumb to sunk costs in the financial realm may well be a byproduct of an adaptive mechanism in the social domain being co-opted for a task it was not designed to solve. When that happens, you start seeing some weird behavior. The key to understanding that weirdness is to understand the original functionality.

References: Hrgović, J. & Hromatko, I. (2017). The time and social context in sunk-cost effects. Evolutionary Psychological Science, doi: 10.1007/s40806-017-0134-4

Does Diversity Per Se Pay?

In one of the most interesting short reports I have read recently, some research was conducted in Australia examining what the effect of blind review would be on hiring. The premise of the research, as far as I can surmise, was that there existed a fear of conscious or unconscious bias against women and minority groups when it came to getting hired. This bias would naturally make it harder for those groups to find employment, ultimately yielding a less diverse workforce. In the interest of avoiding that bias, the research team compared what happened when candidates were assessed on either standard resumes or de-identified ones. The latter resumes were identical to the former, except that group-relevant information (like gender and race) had been removed. If reviewers don’t have information about race or gender available, then they can’t possibly assess candidates on the basis of it, whether consciously or unconsciously. That seems straightforward enough. The aim was to compare the results from the blind assessments to those of the standard resumes. As it turned out, there were indeed hints of bias; relatively small in size sometimes, but present nonetheless. However, the bias did not run in the direction that had been feared.

Shocking that the headline wasn’t “Blind review processes are biased”

Specifically, when the participants assessing the resumes had information about gender, they were about 3% more likely to select women, and 3% less likely to select men. Further, minorities were more likely to be selected as well when the information was available (about 6% for males and 9% for females). While there’s more to the picture than that, the primary result seemed to be that, when given the option, these reviewers discriminated in favor of women and minority groups simply because of their group membership. If these results had run in the opposite direction (against women and minorities) there would have no doubt been calls for increasing blind reviews. However, because blind reviews seemed to disfavor women and minorities, the authors had a different suggestion:

Overall, the results indicate the need for caution when moving towards ’blind’ recruitment processes in the Australian Public Service, as de-identification may frustrate efforts aimed at promoting diversity

It’s hard to interpret that statement as anything other than “we should hire more women and minorities, regardless of qualifications.” Even if sex and race ought to be irrelevant to the demands of the job and candidates should be assessed on their merit, people should also apparently be cautious when removing those irrelevant pieces from the application process. The authors seemed to favor discrimination based on sex or race so long as it benefited the right groups. Such discriminatory practices have led to negative reactions on the part of others, as one might expect.

This brings me to another question: why should we value diversity when it comes to hiring decisions? To be clear, the diversity being sought is often strictly demographic in nature (many organizations tout diversity in race, for instance, but not in perspective; I don’t recall the draw of many positions being that you will meet a variety of people who hold fundamental disagreements with your view of the world). It’s also usually the kind of diversity that benefits women and minorities (I’ve never come across calls to get more white males into certain fields dominated by women or other races. Perhaps they exist; I just haven’t seen them). But are there real economic benefits to increasing diversity per se? Could it be the case that more diverse organizations just do better? On the face of it, I would assume the answer is “no” if the diversity in question is simply demographic in nature. What matters when it comes to job performance is not the color of one’s skin or what sex chromosomes they possess, but rather the skills and competencies they bring with them. While some of those skills and competencies might be very roughly approximated by race and gender if you have no additional information about your applicants, we thankfully don’t need to rely on those indirect measures. Rather than asking about gender or race, one could just ask directly about skill sets and interests. When you can do that, the additional value of knowing one’s group membership is likely close to nil. Why bother using a predictor of a variable when you can just use the variable itself?

Do you really love roundabouts that much?

Nevertheless, it has apparently been reported before that demographic diversity predicts the relative success of companies (Herring, 2009). A business case was made for diversity, such that diverse companies were found to generally do better than less diverse ones across a number of different metrics. Not that those in favor of increasing diversity really seemed to need a financial justification, but having one certainly wouldn’t hurt their case. As this paper was apparently popular within the literature (for what I assume is that reason), a replication was attempted (Stojmenovska et al., 2017), beginning in a graduate course as an assignment to help students “learn from the best.” Since it seems “psychology research” and “replications” mix about as well as oil and water as of late, the results turned out a bit worse than hoped. The student wasn’t even trying to look for problems; they just stumbled upon them.

In this instance, the replication attempt failed to find the published result, instead catching two primary mistakes in the original paper (as opposed to anything malicious): there were a number of coding mistakes within the data, and the sample data itself was skewed. Without going too deeply into why this is a problem, it should suffice to say that coding mistakes are bad for all the obvious reasons. Fixing the coding mistakes by deleting incomplete cases resulted in a substantial reduction in sample size (25-50% smaller). As for the issue of skew, a skewed sample can result in an underestimation of the relationship between predictors and outcomes; in brief, there were confounding relationships between predictor variables and the outcomes that were not adequately controlled for in the original paper. To correct for the skew, a log transformation was carried out on the data, resulting in a dramatic increase in the relationship between particular variables.

To provide a concrete sense of that increase: in the original report, the correlation between company size and racial diversity was .14; after the log transformation was carried out, that correlation increased to .41. This means that larger companies tended to be more racially diverse than smaller ones, but that relationship was not fully accounted for in the original paper examining how diversity impacted success. The same issue held for gender diversity and establishment size.
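As an illustration of how that can happen, here is a minimal sketch with hypothetical data (the variable names and numbers below are mine, not the paper’s): when a predictor like company size is heavily right-skewed and the outcome tracks its order of magnitude, the raw-scale correlation understates the relationship, and logging the predictor recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical company sizes: heavily right-skewed (many small firms, a few giants)
log_size = rng.normal(loc=4.0, scale=1.5, size=5000)
size = np.exp(log_size)

# Suppose diversity tracks the *order of magnitude* of a company, plus noise
diversity = log_size + rng.normal(scale=1.0, size=5000)

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

r_raw = pearson(size, diversity)           # correlation on the raw, skewed scale
r_log = pearson(np.log(size), diversity)   # correlation after the log transform

print(f"raw r = {r_raw:.2f}, logged r = {r_log:.2f}")
```

The exact values will differ from the paper’s (.14 vs. .41), but the direction of the effect is the same: the logged correlation comes out substantially higher than the raw one.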

Once these two issues – coding errors and skewed data – were addressed, the new results showed that gender and racial diversity were effectively unrelated to company performance. The only remaining relationship was a small one between gender diversity and the logged number of customers. While seven of the original eight hypotheses were supported in the first paper, the replication attempt correcting these errors only found one of the eight to be statistically supported. As most of the effects no longer existed, and the one that did exist was small in size, the business justification for increasing racial and gender diversity failed to receive any real support.

Very colorful, but they ultimately all taste the same

As I initially mentioned, I don’t see a very good reason to expect that a more demographically diverse group of employees should yield better outcomes. They don’t yield worse outcomes either. However, the study from Australia suggests that the benefits of diversity (or the lack thereof) are basically beside the point in many instances. That is, not only do I imagine this failure to replicate won’t have a substantial impact on many people’s views on whether diversity should be increased, but I don’t think it would even if diversity were found to be a bad thing, financially speaking. This is because I suspect few views on whether diversity should be increased rest on the premise that it’s good for people economically in the first place. Increasing diversity isn’t viewed as a tricky empirical matter so much as a moral one; one in which certain groups of people are viewed as owing or deserving various things.

This is only looking at the outcomes of adding diversity, of course. The causes of differing levels of diversity across different walks of life are another beast entirely.

References: Stojmenovska, D., Bol, T., & Leopold, T. (2017). Does diversity pay? A replication of Herring (2009). American Sociological Review, 82, 857-867.

Herring, C. (2009). Does diversity pay? Race, gender, and the business case for diversity. American Sociological Review, 74, 208–224.

Income Inequality Itself Doesn’t Make People Unhappy

There’s an idea floating around the world of psychology referred to as Social Comparison Theory. The basic idea is that people want to know how well they measure up to others in the world and so will compare themselves to others. While there’s obviously more to it than that (including some silly suggestions along the lines of people comparing themselves to others to feel better, rather than to do something adaptive with that information), the principle has been harnessed by researchers examining inequality. Specifically, it has been proposed that inequality itself makes people sad. According to the status-anxiety hypothesis, when it comes to things like money and social status, people make a lot of upwards comparisons between themselves and those doing better. Seeing that other people in the world are doing better than them, they become upset, and this is supposed to be why inequality is bad. I think that’s the idea, anyway. Feel free to add in any additional, more-refined versions of the hypothesis if you’re sitting on them.

“People are richer than me,” now warrants a Xanax prescription

As it turns out, that idea seems to be a little less than true. Before getting to that, though, I wanted to make a few general points (or warnings) about the research I’ve encountered on inequality aversion: the idea that people dislike inequality itself and seek to reduce it when possible (especially the kind that leaves them with less than others). The important point to make about research on inequality is that, if you are looking to get a solid measure of the effects of inequality itself, you need to get everything that is not inequality out of your measures. That’s a basic point for any research, really.

For instance, when examining research on inequality in the past, I kept noticing that the papers on the topic almost always contained confounding details which impeded the ability of the authors to make the interpretations of the data they were interested in making. Several papers looked at the effects of inequality on punishment in taking games. The basic set up here is that you and I would start with some amount of money. I then take some of that money from you for myself. Because I have taken from you, we either end up with you being better off, me being better off, or both of us being equal. After I take from you, you would be given the option to punish me for my behavior and, as it turns out, people preferentially punish when the taker ends up with more money than them. So if I took money from you, you’d be more likely to punish me if I ended up better off, relative to cases where we were equal or you were still better off. (This happens in research settings with experimental demand characteristics, anyway. In a more naturalistic setting when someone mugs you, I can’t imagine many people’s first thoughts are, “He probably needs the money more than me, so this is acceptable.”)

While such research can tell us about the effects of inequality to some extent, it cannot tell us about the effects of inequality that are distinct from taking. To put that in concrete terms, my subsequent research (Marczyk, 2017) used that same taking game to replicate the results while adding two other inequality-generating conditions: one in which I could increase my payment with no impact on you, and another in which I could decrease your payment at no benefit to myself. In those two conditions, I found that inequality didn’t appear to have any appreciable impact on subsequent punishment: if I wasn’t harming you, then you wouldn’t punish me even if I generated inequality; if I was harming you, you would punish me even if I was worse off. This new piece of information tells us something very important: namely, that people do not consistently want to achieve equality. When we have been harmed, we usually want to punish, even if punishing generates more inequality than originally existed. (That said, there are still demand characteristics in my work worth addressing. Specifically, I’d bet any effects of inequality would be reduced even further when the money the participants get is earned, rather than randomly distributed by an experimenter.)

In terms of the research I want to talk about today, this is relevant because this new – and incredibly large – analysis sought to examine the effects of income inequality on happiness as distinct from the overall economic development of a country (Kelley & Evans, 2017). Apparently lots of previous work had been looking at the relationship between inequality within nations and their happiness without controlling for other important variables. The research question of this new work was, effectively, all else being equal, does inequality itself tend to make people unhappy? The simple example of this question they put forth was to imagine twins: John and James. John lives in a country with relatively low income-inequality and makes $20,000 a year. James lives in a country with relatively high income-inequality and makes $20,000 a year. Will John or James be happier? They sought to examine, on the national level, this connection between inequality and life satisfaction/happiness. 

“At least we’re all poor together”

In order to get that all else to be equal, there are a number of things you need to control for that might be expected to affect life satisfaction. The first of these is GDP per capita: how much a nation tends to produce per person. This is important because equality might mean a lot less for your happiness if that equality means everyone lives in extreme poverty. If that happens to be the case, then increasing industrialization of a nation can actually increase opportunities for economic advancement while also increasing inequality (as the rewards of such a process aren’t shared equally among the population, at least initially; after a time, a greater percentage of the population will begin to share in those rewards and the relationship between inequality and economic development decreases).

The other factors you need to control for are individual ones. Just because a society might be affluent, that does not mean that the person answering your survey happens to be. This means controlling for personal income as well, as making more money tends to make for happier people. The authors also controlled for known correlates of happiness including sex, age, marriage status, education, and religious attendance. It’s only once all these factors have been controlled for that you can begin to consider the effect of national inequality (as measured by the Gini coefficient) on life satisfaction ratings. That’s not to say these are all the relevant controls, but they’re a pretty good start. 
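For readers unfamiliar with the measure, the Gini coefficient can be computed in a few lines. Here is a minimal sketch (the function name and toy income lists are my own, for illustration; they are not from the paper):

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: 0 = perfect equality; values near 1 = extreme inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # Closed form using 1-based ranks: G = sum((2i - n - 1) * x_i) / (n * sum(x))
    ranks = np.arange(1, n + 1)
    return np.sum((2 * ranks - n - 1) * x) / (n * x.sum())

print(gini([1, 1, 1, 1]))    # → 0.0  (everyone earns the same)
print(gini([0, 0, 0, 100]))  # → 0.75 (one person holds everything)
```

In the study discussed below, this coefficient is computed at the national level from income data and then related to average life satisfaction, with the individual and national controls above already accounted for.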

Enacting these controls is exactly what the researchers did, pooling data from 169 surveys in 68 societies, representing over 200,000 individuals. If there’s a connection between inequality and life satisfaction to be found, it should be evident here. Countries were categorized as either developing nations (those below 30% of the US per-capita GDP) or advanced ones (those above the 30% mark), and the same analyses were run on each. The general findings of the research are easy to summarize: in developing nations, inequality was predictive of an increase in societal happiness (about 8 points on a 1-100 scale); among the advanced nations, there was no relationship between inequality and happiness. This largely appeared to be the case because, as previously outlined, the onset of development in poorer countries generated initial periods of greater inequality. As development advances, however, this relationship disappears.

A separate analysis was also run on families in the bottom 10% of a nation in terms of income, compared with the families in the top 10% since much of the focus on inequality has discussed the divide between the poor and the rich. As expected, rich people tended to be happier than poor ones, but the presence of inequality was, as before, a boon for happiness and life satisfaction in both groups. It was not that inequality made the poor feel bad while the rich felt good. Whatever the reason for this, it does not seem like poor people were looking up at rich people and feeling like their life was terrible because others had more.

“Some day, all this could be yours…”

All this is not to say that inequality itself is going to make people happy as much as the things that inequality represents can. Inequality can signal the onset of industrialization and development, or it can signal there is hope of improving one’s lot in life through hard work. These are positives for life satisfaction. Inequality might also represent that the warlord in the next town over is very good at stealing resources. This would be bad. However, whatever the reason for these correlations, it does not seem to be the case that inequality per se is what makes people unhappy with life (though living in nations with high GDP and earning good salaries seem to put a smile on some faces).

I like this interpretation of the data, unsurprisingly, because it happens to fit well with my own. In my experiments, people didn’t seem to be punishing inequality itself; they were punishing particular types of behaviors – like the stealing or destruction of resources – that just so happened to generate inequality at times. In other words, people are responding primarily to the means through which inequality arises, rather than the inequality itself. This appears to be the case in the present paper as well. Most telling for this interpretation, I feel, is a point mentioned within the paper without much discussion (as it’s the topic of a separate one): the national data was collected from non-communist nations. Things are a little different in the communist countries. For those cohorts who lived their formative years in communist nations, inequality appears to have a negative relationship with happiness, though that dissipates in new, post-communist generations. From that finding, it seems plausible to speculate that communists might have different ideas about the means through which inequality arises (mostly negative) which they push rather aggressively, relative to non-communists. That said, those attitudes do not seem to persist without consistent training.

References: Kelley, J. & Evans, M. (2017). Societal inequality and individual subjective well-being: Results from 68 societies and over 200,000 individuals, 1981-2008. Social Science Research, 62, 1-23.

Marczyk, J. (2017). Human punishment is not primarily motivated by inequality. PLOS One, https://doi.org/10.1371/journal.pone.0171298

On The Need To Evolutionize Memory Research

This semester I happen to be teaching a course on human learning and memory. Part of the territory that comes with designing and teaching any class is educating yourself on the subject: brushing up on what you do know and learning about what you do not. For this course, much of my preparation falls into the latter category. Memory isn’t my main specialty, so I’ve been spending a lot of time reading up on it. Wandering into a relatively new field is always an interesting experience, and on that front I consider myself fortunate: I have a theoretical guide to help me think about and understand the research I’m encountering – evolution. Rather than viewing the field of memory as a disparate collection of facts and findings, evolutionary theory allows me to better synthesize and explain, in a satisfying way, all these novel (to me) findings and tie them to one another. It strikes me as unfortunate that, as with much of psychology, there appears to be a distinct lack of evolutionary theorizing on matters of learning and memory, at least as far as the materials I’ve come across would suggest. That’s not to say there has been none (indeed, I’ve written about some before), but rather that there certainly doesn’t seem to have been enough. It’s not the foundation of the field, as it should be.

“How important could a solid foundation really be?”

To demonstrate what I’m talking about, I wanted to consider an effect I came across during my reading: the generation effect in memory. In this case, generation refers not to a particular age group (e.g., people in my generation), but rather to the creation of information, as in to generate. The finding itself – which appears to replicate well – is that, if you give people a memory task, they tend to be better at remembering information they generated themselves, relative to remembering information that was generated for them. To run through a simple example, imagine I was trying to get you to remember the word “bat.” On the one hand, I could just have the word pop up on a screen and tell you to read and remember it. On the other hand, I could give you a different word, say, “cat” and ask you to come up with a word that rhymes with “cat” that can complete the blanks in “B _ _.” Rather than my telling you the word “bat,” then, you would generate the word on your own (even if the task nudges you towards generating it rather strongly). As it turns out, you should have a slight memory advantage for the words you generated, relative to the words you were just given.

Now that’s a neat finding and all – likely one that people would read about and thoughtfully nod their heads in agreement with – but we want to explain it: why is memory better for words you generate? On that front, the textbook I was using was of no use, offering nothing beyond the name of the effect and a handful of examples. If you’re trying to understand the finding – much less explain it to a class full of students – you’ll be on your own. Textbooks are always incomplete, though, so I turned to some of the referenced source material to see how researchers in the field were thinking about it. These papers seemed to focus predominantly on how information was being processed, but not necessarily on why it was being processed that way. As such, I wanted to advance a little bit of speculation on how an evolutionary approach could help inform our understanding of the finding. (I say could because this is not the only possible answer one could derive from evolutionary theory; what I hope to focus on is the approach to answering the question, rather than the specific answer I will float. Too often people treat an evolutionary hypothesis that was wrong as a reflection on the field, neglecting that how an issue was thought through is a somewhat separate matter from the answer that eventually got produced.)

To explain the generation effect I want to first take it out of an experimental setting and into a more naturalistic one. That is, rather than figuring out why people can remember arbitrary words they generated better than ones they just read, let’s think about why people might have a better memory for information they’ve created in general, relative to information they heard. The initial point to make on that front is that our memory systems will only retain a (very) limited amount of the information we encounter. The reason for this, I suspect, is that if we retained too much information, cognitively sorting through it for the most useful pieces of information would be less efficient, relative to a case where only the most useful information was retained in the first place. You don’t want a memory (which is metabolically costly to maintain) chock-full of pointless information, like what color shirt your friend wore when you hung out 3 years ago. As such, we ought to expect that we have a better memory for events or facts that carry adaptively-relevant consequences.

“Yearbooks; helping you remember pointless things your brain would otherwise forget”

Might information you generate carry different consequences than information you just hear about? I think there’s a solid case to be made that, at least socially, this can be true. As a quick example, consider the theory of evolution itself. This idea is generally considered to be one of the better ones people (collectively) have had. Accordingly, it is perhaps unsurprising that most everyone knows the name of the man who generated this idea: Charles Darwin. Contrast Darwin with someone like me: I happen to know a lot about evolutionary theory, and that does grant me some amount of social prestige within some circles. However, knowing a lot about evolutionary theory does not afford me anywhere near the amount of social acclaim that Darwin receives. There are reasons we should expect this state of affairs to hold as well, such as that generating an idea can signal more about one’s cognitive talents than simply memorizing it does. Whatever the reasons for this, however, if ideas you generate carry greater social benefits, our memory systems should attend to them more vigilantly; better to not forget that brilliant idea you had than the one someone else did.

Following this line of reasoning, we could also predict that there would be circumstances in which information you generated is recalled less-readily than if you had just read about it: specifically, in cases when the information would carry social costs for the person who generated it.

Imagine, for instance, that you’re a person trying to think up reasons to support your pet theory (call it theory A). Initially, your memory for that reasoning might be better if you came up with an argument yourself than if you had read about someone else putting forth the same idea. However, suppose it later turns out that a different theory (call it theory B) says your theory is wrong and, worse yet, theory B is better supported and widely accepted. At that point, you might actually observe that memory for the initial information supporting theory A is worse among those who generated those reasons themselves, as being wrong reflects more negatively on them than if they had just read about someone else being wrong (and memory would be worse, in this case, because you don’t want to advertise to others that you were wrong, while you might care less about discussing why someone who wasn’t you was wrong).

In short, people might selectively forget potentially embarrassing information they generated but was wrong, relative to times they read about someone else being wrong. Indeed, this might be why it’s said truth passes through three stages: ridicule, opposition, and acceptance. This can be roughly translated to someone saying of a new idea, “That’s silly,” to, “That’s dangerous,” to, “That’s what I’ve said all along.” This is difficult to test, for sure, but it’s a possibility worth mulling over.

How you should feel reading over old things you forgot you wrote

With the general theory described, we can now try to apply that line of thinking back to the unnatural environment of memory research labs in universities. One study I came across (deWinstanley & Bjork, 1997) found that generating information does not always hold a memory advantage over reading it. In their first experiment, the researchers had conditions where participants would either read cue-word pairs (like “juice” – “orange”, and, “sweet” – “pineapple”) or read a cue and then generate a word (e.g., “juice” – “or_n_ _”). The participants would later be tested on how many of the target words (the second one in the pair) they could recall. When participants were just told there would be a recall task later, but not the nature of that test, the generate group had a memory advantage. However, when both groups were told to focus on the relationship between the targets (such as them all being fruits), the read group’s ability to remember now matched that of the generate group.

In their second experiment, the researchers then changed the nature of the memory task: instead of asking participants to just freely recall the target words, they would be given the cue word and asked to recall the associated target (e.g., they see “juice” and need to remember “orange”). In this case, when participants were instructed to focus on the relationship between the cue and the target, it was the read participants with the memory advantage; not the generate group.

One might explain these findings within the framework I discussed as follows: in the first experiment, participants in the “read” condition were actually also in an implicit generate condition; they were being asked to generate a relationship between the targets to be remembered and, as such, their performance improved on the associated memory task. In the second experiment, participants in the read condition were again in an implicit generate condition: being asked to generate connections between the cues and targets. However, those in the explicit generate condition were only generating the targets; not their cues. As such, it’s possible participants tended to selectively attend to the information they had created over the information they had not. Put simply, the generate participants’ ability to better recall the words they created was interfering with their ability to remember the associations with the words they did not create. Their memory systems were focusing on the former over the latter.

A more memorable meal than one you go out and buy

If one wanted to increase the performance of those in the explicit generate condition for experiment two, then, all a researcher might have to do would be to get their participants to generate both the cue and the target. In that instance, the participants should feel more personally responsible for the connections – it should reflect on them more personally – and, accordingly, remember them better. 

Now whether the answer I put forth gets it all the way (or even partially) right is beside the point. It’s possible that the predictions I’ve made here are completely wrong. It’s just that what I have been noticing is that words like “adaptive” and “relevance” are all but absent from this book (and these papers) on memory. As I hope this post (and my last one) illustrates, evolutionary theory can help guide our thinking to areas it might not otherwise reach, allowing us to more efficiently think up profitable avenues for understanding existing research and creating future projects. It doesn’t hurt that it helps students understand the material better, either.

Reference: deWinstanley, P. & Bjork, E. (1997). Processing instructions and the generation effect: A test of the multifactor transfer-appropriate processing theory. Memory, 5, 401-421.


Benefiting Others: Motives Or Ends?

The world is full of needy people; they need places to live, food to eat, medical care to combat biological threats, and, if you ask certain populations in the first world, a college education. Plenty of ink has been spilled over the matter of how to best meet the needs of others, typically with a focus on uniquely needy populations, such as the homeless, poverty-stricken, sick, and those otherwise severely disadvantaged. In order to make meaningful progress in such discussions, there arises the matter of precisely why – in the functional sense of the word – people are interested in helping others, as I believe the answer(s) to that question will be greatly informative when it comes to determining the most effective strategies for doing so. What is very interesting about these discussions is that the focus is frequently placed on helping others altruistically; delivering benefits to others in ways that are costly for the person doing the helping. The typical example of this involves charitable donations, where I would give up some of my money so that someone else can benefit. What is interesting about this focus is that our altruistic systems often seem to face quite a bit of pushback from other parts of our psychology when it comes to helping others, resulting in fairly poor deliveries of benefits. It represents a focus on the means by which we help others, rather than really serving to improve the ends of effective helping.

For instance, this sign isn’t asking for donations

As a matter of fact, the most common ways of improving the lives of others don’t involve any altruism at all. For an alternative focus, we might consider the classic Adam Smith quote pertaining to butchers and bakers:

But man has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love in his favour, and show them that it is for their own advantage to do for him what he requires of them. Whoever offers to another a bargain of any kind, proposes to do this. Give me that which I want, and you shall have this which you want, is the meaning of every such offer; and it is in this manner that we obtain from one another the far greater part of those good offices which we stand in need of. It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.

In short, Smith appears to recommend that, if we wish to effectively meet the needs of others (or have them meet our needs), we must properly incentivize that other-benefiting behavior instead of just hoping people will be willing to continuously suffer costs. Smith’s system, then, is more mutualistic or reciprocal in nature. There are a lot of benefits to trying to use these mutualistic and reciprocally-altruistic cognitive mechanisms, rather than altruistic ones, some of which I outlined last week. Specifically, altruistic systems typically direct benefits preferentially towards kin and social allies, and such a provincial focus is unlikely to deliver benefits to the needy individuals in the wider world particularly well (e.g., people who aren’t kin or allies). If, however, you get people to behave in a way that benefits themselves and just so happen to benefit others as a result, you’ll often end up with some pretty good benefit delivery. This is because you don’t need to coerce people into helping themselves.  

So let’s say we’re faced with a very real-world problem: there is a general shortage of organs available for people in need of transplants. What cognitive systems do we want to engage to solve that problem? We could, as some might suggest, make people more empathetic to the plight of those suffering in hospitals, dying from organ failure; we might also try to convince people that signing up as an organ donor is the morally-virtuous thing to do. Both of these plans might increase the number of people willing to posthumously donate their organs, but perhaps there are much easier and more effective ways to get people to become organ donors, even if they have no particular interest in helping others. I wanted to review two such candidate methods today, neither of which requires that people’s altruistic cognitive systems be particularly engaged.

The first method comes to us from Johnson & Goldstein (2003), who examine some cross-national data on rates of organ donor status. Specifically, they note an oddity in the data: very large and stable differences exist between nations in organ donor status, even after controlling for a number of potentially-relevant variables. Might these different rates exist because people’s preferences for being an organ donor vary markedly between countries? It seems unlikely, unless people in Germany hold an exceedingly negative view of being an organ donor (14% are donors, from the figures cited), while people in Sweden are particularly keen on it (86%). In fact, in the US, support for organ donation is at near ceiling levels, yet a large gap persists between those who support it (95%) and those who indicated on a driver’s license they were donors (51% in 2005; 60% in 2015) or who had signed a donor card (30%). If it’s not people’s lack of support for such a policy, what is explaining the difference?

A poor national sense for graphic design?

Johnson & Goldstein (2003) float a simple explanation for most of the national differences: whether donor programs were opt-in or opt-out. What that refers to is the matter of, assuming someone has made no explicit decision as to what happens to their organs after they die, what decision would be treated as the default? In opt-in countries (like Germany and the US), non-donor status would be assumed unless someone signs up to be a donor; in opt-out countries, like Sweden, people are assumed to be donors unless they indicate that they do not wish to be one. As the authors report, the opt-in countries have much lower effective consent rates (on average, 60% lower) and the two groups represent non-overlapping populations. That data supplements the other experimental findings from Johnson & Goldstein (2003) as well. The authors had 161 participants take part in an experiment where they were asked to imagine they had moved to a new state. This state either treated organ donation as the default option or non-donation as the default, and participants were asked whether they would like to confirm or change their status. There was also a third condition where no default answer was provided. When no default answer was given, 79% of participants said they would be willing to be an organ donor; a percentage which did not differ from those who confirmed their donor status when it was the default (82%). However, when non-donor status was the default, only 42% of the participants changed their status to donor. 

So defaults seem to matter quite a bit, but let’s assume that a nation isn’t going to change its policy from opt-in to opt-out anytime soon. What else might we do if we wanted to improve the rates of people signing up to be an organ donor in the short term? Eyting et al (2016) tested a rather simple method: paying people €10. The researchers recruited 320 German university students who did not currently have an organ donor card and provided them the opportunity to fill one out. These participants were split into three groups: one in which there was no compensation offered for filling out the card, one in which they would personally receive €10 for filling out a card (regardless of which choice they picked: donor or non-donor), and a final condition in which €10 would be donated to a charitable organization (the Red Cross) if they filled out a card. No differences were observed between the percentage of participants who filled out the card in the control (35%) and charity (36%) conditions. However, in the personal benefit group, there was a spike in the number of people filling out the card (72%). Not all those who filled out the cards opted for donor status, though. Between conditions, the percentages of people who both (a) filled out the card and (b) indicated they wanted to be a donor were about 44% in the personal payment condition, 28% in the control condition, and only 19% in the charity group. Not only did the charity appeal not seem particularly effective, it was even nominally counterproductive.

“I already donated $10 to charity and now they want my organs too?!”

Now, admittedly, helping others because there’s something in it for you isn’t quite as sexy (figuratively speaking) as helping because you’re driven by an overwhelming sense of empathy, conscience, or simply helping for no benefit at all. This is because there’s a lower signal value in that kind of self-beneficial helping; it doesn’t predict future behavior in the absence of those benefits. As such, it’s unlikely to be particularly effective at building meaningful social connections between helpers and others. However, if the current data is any indication, such helping is also likely to be consistently effective. If one’s goal is to increase the benefits being delivered to others (rather than building social connections), that will often involve providing valued incentives for the people doing the helping.

On one final note, it’s worth mentioning that these papers only deal with people becoming a donor after death; not the prospect of donating organs while alive. If one wanted to, say, incentivize someone to donate a kidney while alive, a good way to do so might be to offer them money; that is, allow people to buy and sell organs they are already capable of donating. If people were allowed to engage in mutually-beneficial interactions when it came to selling organs, it is likely we would see certain organ shortages decrease as well. Unfortunately for those in need of organs and/or money, our moral systems often oppose this course of action (Tetlock, 2000), likely contingent on perceptions about which groups would be benefiting the most. I think this serves as yet another demonstration that our moral sense might not be well-suited for maximizing the welfare of people in the wider social world, much like our empathetic systems don’t.

References: Eyting, M., Hosemann, A., & Johannesson, M. (2016). Can monetary incentives increase organ donations? Economics Letters, 142, 56-58.

Johnson, E. & Goldstein, D. (2003). Do defaults save lives? Science, 302, 1338-1339.

Tetlock, P. (2000). Coping with trade-offs: Psychological constraints and political implications. In Elements of Reason: Cognition, Choice, & the Bounds of Rationality. Ed. Lupia, A., McCubbins, M., & Popkin, S. 239-322.  

Is Choice Overload A Real Thing?

Within the world of psychology research, time is often not kind to empirical findings. This unkindness was highlighted recently in the results of the reproducibility project, which found that the majority of psychological findings tested did not appear to replicate particularly well. There are a number of reasons this happens, including that psychological research tends to be conducted rather atheoretically (allowing large numbers of politically-motivated or implausible hypotheses to be successfully floated), and that researchers have the freedom to analyze their data in rather creative ways (allowing them to find evidence of effects where none actually exist). These practices are engaged in because positive findings tend to be published more often than null results. In fact, even if the researchers do everything right, that’s still not a guarantee of repeatable results; sometimes people just get lucky with their data. Accordingly, it is a fairly common occurrence for me to revisit some research I learned about during my early psychology education only to find out that things are not quite as straightforward or sensible as they had been presented to be. I’m happy to report that today is (sort of) one of those days. The topic in question has been called a few different things, but for my present purposes I will be referring to it as choice overload: the idea that having access to too many choices actually results in making decisions more difficult and less satisfying. In fact, if too many options are presented, people might even avoid making a decision altogether. What a fascinating idea.

Here’s to hoping time is kind to it…

The first time I had heard of this phenomenon, it was in the context of exotic jams. The summary of the research goes as follows: Iyengar & Lepper (2000) set up shop in a grocery store, creating a tasting booth for either six or 24 varieties of jams (from which the more-standard flavors, like strawberry, were removed). Shoppers were invited to stop by the booths, try as many of the jams as they wanted, and were given a $1 off coupon for that brand’s jam before leaving. The table with the more extensive variety did attract more customers (60% of those who walked by), relative to the table with fewer selections (40%), suggesting that the availability of more options was, at least initially, appealing to people. Curiously, however, there was no difference in the average number of jams sampled: whether the table had 6 flavors or 24, people only sampled about 1.5 of them, on average, and apparently no one ever sampled more than two flavors (maybe they didn’t want to seem rude or selfish). More interestingly still, because the customers were given coupons, their purchases could be tracked. Of those who stopped at the table with only six flavors, about 30% ended up later purchasing jam; when the table had 24 flavors, a mere 3% of customers ended up buying one.

There are a couple of potential issues with this study, of course, owing to its naturalistic design; issues which were noted by the authors. For instance, it is possible that people who were fairly uninterested in buying jam might have been attracted to the 24-flavor booth nevertheless, simply out of curiosity, whereas those with a greater interest in buying jams would have remained interested in sampling them even when a smaller number of options existed. To try and get around these issues, Iyengar & Lepper (2000) designed another two experiments, one of which I wanted to cover. This other experiment was carried out in a more standard lab setting (to help avoid some of the possible issues with the jam results) and involved tasting chocolate. There were three groups of participants in this case: the first group (n = 33) got to select and sample a chocolate from an array of six possible options, the second group (n = 34) got to select and sample a chocolate from an array of 30 possible options, and a final group (n = 67) was randomly assigned to taste a chocolate they had not selected. In the interests of minimizing the influence of pre-existing preferences, only those who enjoyed chocolate but did not have experience with that particular brand were selected for the study. After filling out a few survey items and completing the sampling task, the participants were presented with their payment options: either $5 in cash, or a box of chocolates from that brand worth $5.

In accordance with the previous findings, participants who selected from 30 different options were somewhat more likely to say they had been presented with “too many” options (M = 4.88) compared with those who only had 6 possible choices (M = 3.61, on a seven-point scale, ranging from “too few” choices at 1, to “too many” choices at 7). Despite the subjects in the extensive-choice group saying that making a decision as to which chocolate to sample was more difficult, however, there was no correlation between how difficult participants found the decision and how much they reported enjoying making it. It seemed people could enjoy making more difficult choices. Additionally, participants in the limited-choice group were more satisfied with their choice (M = 6.28) than those in the extensive-choice group (M = 5.46), who were in turn more satisfied than those in the no-choice group (M = 4.92). Of particular interest are the compensation findings: those in the limited-choice group were more likely to accept a box of chocolate in lieu of cash (48%) than those in either the extensive-choice (12%) or no-choice conditions (10%). It seems that having some options was preferable to having no options, but having too many options seemed to cause people difficulty in making decisions. The researchers concluded that, to use the term, people could be overloaded by choices, hindering their decision-making process.

“If it can’t be settled via coin flip, I’m not interested”

While such findings are indeed quite interesting, there is no guarantee they will hold up over time; as I mentioned initially, lots of research fails to. This is where meta-analyses – research in which the results from many different studies are examined jointly – can help. Scheibehenne et al (2010) set out to conduct one of their own on the research surrounding choice overload, noting that some of the research on the phenomenon does not point in the same direction. They note a few examples, such as field research in which reducing the number of available items resulted in decreases or no changes in sales, rather than the predicted uptick. Indeed, the lead author reports that their own attempt at replicating the jam study for their 2008 dissertation failed, as did the second author’s attempt to replicate the chocolate experiment. These failures to replicate might indicate that the initial choice overload results were something of a fluke, and so a wider swath of research needs to be examined to determine whether that’s the case.

Towards this end, Scheibehenne et al (2010) collected 50 experiments from the literature on the subject, representing about 5,000 participants in 13 published and 16 unpublished papers from 2000-2009. In total, the average estimated effect size for the choice overload effect across all the experiments was a mere D = 0.02; the effect was all but non-existent. Further analysis revealed that the differences in effect sizes between studies did not seem to be randomly distributed; there were likely relevant differences between these papers determining what kind of results they found. To examine this issue further, Scheibehenne et al (2010) began by trimming off the 6 largest effects from both the top and the bottom ends of the reported research. The results showed that, in the trimmed data set, there was little evidence of difference between the remaining research. This suggests that most of the differences between these studies were being driven by unusually large positive and negative effects.
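For a sense of scale, the D reported here is a standardized mean difference (akin to Cohen’s d): the gap between two group means divided by their pooled standard deviation. A minimal sketch of the computation, using made-up satisfaction ratings rather than any actual study data:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(group_a), len(group_b)
    # Pooled variance weights each group's sample variance by its
    # degrees of freedom (n - 1)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical 1-7 satisfaction ratings under few vs. many options
few_options = [6, 5, 7, 6, 5, 6, 7, 5]
many_options = [5, 6, 6, 5, 7, 5, 6, 6]
print(round(cohens_d(few_options, many_options), 2))  # prints 0.16
```

Even these invented numbers produce a d several times larger than the meta-analytic estimate of 0.02, which corresponds to nearly complete overlap between the choice conditions.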

Returning to the complete, untrimmed data set, Scheibehenne et al (2010) started to pick apart how several moderating variables might be affecting the reported results. In line with the intuitions of Iyengar & Lepper (2000), preexisting preferences or expertise did indeed have an effect on the choice overload issue: people with existing preferences were not as troubled by additional items when making a choice, relative to those without such preferences. However, there was also an effect of publication – such that published papers were somewhat more likely to report an effect of choice overload, relative to unpublished ones – as well as a small effect of year – such that papers published more recently were a bit less likely to report choice overloading effects. In sum, the results of the meta-analysis indicated that the average effect size of choice overload was nearly zero, that older studies which saw publication report larger effects than those that came later or were not published, and that well-defined, preexisting preferences likely remove the negative effects of having too many options (to the extent they actually existed in the first place). Crucially, what should have been an important variable – the number of different options participants were presented with on the high end – explained essentially none of the variance. That is to say that 18 items didn’t seem to make any difference, compared to 30 items or more.

“Well, there are too many different chip options; guess I’ll just starve”

While this does not rule out choice overload as being a real thing, it does cast doubt on the phenomenon being as pervasive or important as some might have given it credit for. Instead, it appears probable that such choice effects might be limited to particular contexts, assuming they reliably exist in the first place. Such contexts might include how easily the products can be compared to one another (i.e., it’s harder to decide when faced with two equally attractive, but quite distinct options), or whether people are able to use mental shortcuts (known as heuristics) to rapidly whittle down the number of options they actually consider (so as to avoid spending too much time making fairly unimportant choices). While future examination would be required to test some of these ideas, the larger message here extends beyond the choice overload literature to most of psychology research: it is probably fair to assume that, as things currently stand, the first thing you hear about the existence or importance of an effect will likely not resemble the last thing you do.

References: Iyengar, S. & Lepper, M. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality & Social Psychology, 79, 995-1006.

Scheibehenne, B., Greifeneder, R., & Todd, P. (2010). Can there ever be too many options? A meta-analytic review of choice overload. Journal of Consumer Research, 37, 409-424.

 

Savvy Shoppers Seeking Sex

There exists an idea in the economic field known as revealed preferences theory. People are often said to have preferences for this or that, but preferences are not the kind of thing that can be directly observed (just as much of our psychology cannot). As such, you need to find a way to infer information about these underlying preferences through something observable. In the case of revealed preferences, the general idea is that people’s decisions about what to buy and how much to spend are capable of revealing that information. For instance, if you would rather buy a Honda instead of a Ford for the same price, I have learned that your preferences – at least in the current moment – favor Hondas; if I were interested in determining the degree of that preference, I could see how much more you were willing to pay for the Honda. There are some criticisms of this approach – such as the issue that people sometimes prefer A to B when compared to each other directly, but prefer B to A when presented with a third, irrelevant option – but the general principle behind it seems sound: people’s willingness to purchase goods and services positively correlates with their desires, despite some peculiarities. The more someone is willing to pay for something, the more valuable they perceive it to be.

“Marrying you is worth about $1,500 to me”

Now this is by no means groundbreaking information; it’s a facet of our psychology we are all already intimately familiar with. It does, however, yield an interesting method for examining people’s mating preferences when it’s turned on prostitution. In this case, a new paper by Sohn (2016) sought to examine how well men’s self-reported mating preferences for youthful partners were reflected in the prostitution market, where encounters are often short in duration, fairly anonymous, and people can seek out what they’re interested in, so long as they can afford it. It is worth mentioning at the outset that seeking youth per se is not exactly valuable in the adaptive sense of the word; instead, youth is valued (at least in humans) because of how it relates to both reproductive potential and fertility. Reproductive potential refers to how many expected years of future reproduction a woman has remaining before she reaches menopause and loses that capability. As such, this value is highest around the time she reaches menarche (signaling the onset of her reproductive ability) in her mid-teens and decreases over time until it reaches zero at menopause. Fertility, by contrast, refers to a woman’s likelihood of successful conception following intercourse, and tends to peak around her early twenties, being lower both prior to and after that point.

Since the type of intercourse sought by men visiting prostitutes is usually short-term in nature, we ought to expect the male preference for traits that cue high fertility to be revealed by the relative price they’re willing to pay for sex with women displaying them (since short-term encounters are typically aimed at immediate successful reproduction, rather than monopolizing a woman’s reproductive potential in the future). As such fertility cues tend to peak at the same ages as fertility itself, we would predict that women in their early twenties should command the highest price on the sexual market, and this value should decline as women get older or younger. There are some issues with studying the subject matter, of course: sex with minors – much like prostitution in general – is often subject to social and legal sanctions. While the former issue cannot (and, really, should not) be skirted, the latter issue can be. One way of getting around the legal sanctions of prostitution in general is to study it in areas of the world where it is legal. In this instance, Sohn (2016) reports on a data set derived from approximately 8,600 prostitutes in Indonesia, ranging from ages 17-40, where, we are told, prostitution is quasi-legal.

The variable of interest in this data set concerns how much money the prostitutes received during their last act of commercial sex. This single-act method was employed in the hopes of minimizing any kinds of reporting inaccuracies that might come with trying to estimate how much money is being earned on average over long periods of time. While this choice necessarily limits the scope of the emerging picture concerning the price of sex, I believe it to be a justifiable one. Age was the primary predictor of this sex-related income, but a number of other variables were included in the analysis, such as frequency of condom use, years of schooling, age of first sex, and time selling sex. Overall, these predictor variables were able to account for over half of the variance in the price of sex, which is quite good.

“Priced to move!”

Supporting the hypothesis that men really do value these cues of fertility, the price of sex nominally rose from age 17 until it peaked at 21 (though this rise was not too appreciable), tracking fertility, rather than reproductive potential. Following that peak, the price of sex began to quickly and continuously decline through age 40, though the decline slowed past 30. Descriptively, the price of sex at its minimum value was only about half the price of sex at peak fertility (which is a helpful tip for all you bargain-seekers out there…). Indeed, when age alone was considered, each additional year reduced the price of sex, on average, by about 4.5%; the size of that decrease uniquely attributable to age was reduced to about 2% per year when other factors were added into the equation, but both numbers tell the same story. A more detailed examination of this decrease grouped women into blocks of 5-year age periods. When considering age alone, there was no statistical difference between women in the 17-19 and 20-25 range. After that period, however, differences emerged: those in the 26-30 range earned 22% less, on average; a figure which grew to 42% less in the 30-34 group, and about 53% in the 35-40 group.
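As a quick back-of-the-envelope check (this is my own arithmetic illustration, not a calculation reported by Sohn, 2016), compounding a roughly 4.5% decline for each year past the peak at age 21 does put the price in the neighborhood of half its maximum by the late 30s, which lines up reasonably well with the descriptive figures above:

```python
# Sketch of the compounding arithmetic (my own illustration, not from the
# paper): a ~4.5% per-year decline in price from the peak at age 21.
PEAK_AGE = 21
ANNUAL_DECLINE = 0.045  # approximate decline per additional year of age

def relative_price(age, peak_age=PEAK_AGE, decline=ANNUAL_DECLINE):
    """Price at a given age, expressed as a fraction of the peak price."""
    years_past_peak = max(0, age - peak_age)
    return (1 - decline) ** years_past_peak

for age in (21, 25, 30, 35, 40):
    print(age, round(relative_price(age), 2))
```

By age 40 this toy calculation gives a price around 42% of the peak (a decline of roughly 58%), which is in the same ballpark as the reported ~53% discount for the oldest age group.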

This decrease in the price of sex over a woman’s lifespan is the opposite of how income usually works in non-sexual careers, where income rises with time and experience. It would be quite strange to work at a job where you saw your pay get cut by 2% each year you were with the company. It is likely for this reason that prostitutes in the 20-25 range were the most common (representing 32.6% of the sample), and those in older age groups were represented less heavily (27.6% in the 26-30 group, all the way down to 12% in the 35-40 range). When shopping for sex, then, men were not necessarily seeking the most experienced candidate for the position(s), but rather the most fertile one. As fertility declined, so too did the price. As price declined, women tended to leave the market. 

There were a few other findings of note, though the ‘whys’ explaining them are less straightforward. First, more educated prostitutes commanded a higher average asking price than their less educated peers, to the tune of about a 5% increase in price per extra year of school. As men and women both value intelligence highly in long-term partners, it is possible that cues of intelligence remain attractive, even in short-term contexts. Second, controlling for age, each year of selling sex tended to decrease the average price by about 1.5%. It is possible that the effects of prostitution visibly wear down the cues that men find appealing over time. Third, prostitutes who had ever used drugs or drank alcohol earned 12% more than their peers who abstained. Though I don’t know precisely why, it’s unlikely a coincidence that moral views about recreational drug use happen to be well predicted by views about the acceptability of casual sex (data from OKCupid, for instance, tells us the single best predictor of a woman’s interest in casual sex is whether she enjoys the taste of beer). Finally, prostitutes who proposed using condoms more often earned about 10% more than those who never did. I agree with Sohn’s (2016) assessment that this probably has to do with more desirable prostitutes being attractive enough to effectively bargain for condom use, whereas less attractive women compromise there in order to bring in clients. While men prefer sex without condoms, they appear willing to put that preference aside in the face of an attractive-enough prospect.  

“Disappointment now sold in bulk”

So what has been revealed about men’s preferences for sex with these data? Unfortunately, interpretation of prices is less straightforward than simply examining the raw numbers: their correspondence to other sources of data and theory should be considered. For instance, at least when seeking short-term encounters, men seem to value fertility highly, and are willing to pay a premium to get it. This “real world” data accords well with the self-reports of men in survey and laboratory settings and, as such, seems to be easily interpretable. On the other hand, men usually prefer sex without condoms, so the price premium among prostitutes who always suggest they be used would seem to, at face value, ‘reveal’ the wrong preference. Instead, it is more likely that prostitutes who already command a high price are capable of bargaining effectively for their use. In order to test such an explanation, you would need to pit the prospect of sex with the same prostitute with and without a condom against each other, both at the same price. Further, more educated prostitutes seemed to command a higher price on the sexual market: is this because men value intelligence in short-term encounters, educated women are more effective at bargaining, intelligence correlates with other cues of fertility or developmental stability (and thus attractiveness), or because of some other alternative? While one needs to step outside the raw pricing data obtained from these naturalistic observations to answer such questions effectively, the idea of using price data in general seems like a valuable method of analysis; whether it is more accurate, or a “truer” representation of our preferences than our responses to surveys is debatable but, thankfully, this need not be an either/or type of analysis.

References: Sohn, K. (2016). Men’s revealed preferences regarding women’s ages: Evidence from prostitution. Evolution & Human Behavior, DOI: http://dx.doi.org/10.1016/j.evolhumbehav.2016.01.002 

Preferences For Equality?

People are social creatures. This is a statement that surprises no one, seeming trivial to the same degree it is widely recognized (which is to say, “very”). That many people will recognize such a statement in the abstract and nod their head in agreement when they hear it does not mean they will always apply it to their thinking in particular cases, though. Let’s start with a context in which people will readily apply this idea to their thinking about the world: a video in which pairs of friends watch porn together while being filmed by others who have the intention to put the video online for view by (at the time of writing) about 5,700,000 people worldwide. The video is designed to get people’s reactions to an awkward situation, but what precisely is it about that situation which causes the awkward reactions? As many of you will no doubt agree, I suspect that answer has to do with the aforementioned point that people are social creatures. Because we are social creatures, others in our environment will be relatively inclined (or disinclined) to associate with us contingent on, among other things, our preferences. If some preferences make us seem like a bad associate to others – such as, say, our preferences concerning what kind of pornography arouses us, or our interest in pornography more generally – we might try to conceal those preferences from public view. As people are trying to conceal their preferences, we likely observe a different pattern of reactions to – and searches for – pornography in the linked video, compared to what we might expect if those actors were in the comfort and privacy of their own home.

Or, in a pinch, in the privacy of an Apple store or Public Library 

Basically, we would be wrong to think we get a good sense for these people’s pornography preferences from their viewing habits in the video, as people’s behavior will not necessarily match their desires. With that in mind, we can turn to a rather social human behavior: punishment. Now, punishment might not be the first example of social behavior that pops into people’s heads when they think about social things, but make no mistake about it; punishment is quite social. A healthy degree of human gossip centers around what we believe ought to be and not be punished; a fact which, much to my dismay, seems to take up a majority of my social media feeds at times. More gossip still concerns details of who was punished, how much they were punished, why they were punished, and, sometimes, this information will lead to other people joining in the punishment themselves or trying to defend someone else from it. From this analysis, we can conclude a few things, chief among which are that (a) some portion of our value as an associate to others (what I would call our association value) will be determined by the perception of our punishment preferences, and (b) punishment can be made more or less costly, contingent on the degree of social support our punishment receives from others.

This large social component of punishment means that observing the results of people’s punishment decisions does not necessarily inform you as to their preferences for punishment; sometimes people might punish others more or less than they would prefer to, were it not for these public variables being a factor. With that in mind, I wanted to review two pieces of research to see what we can learn about human punishment preferences from people’s behavior. The first piece claims that human punishment mechanisms have – to some extent – evolved to seek equal outcomes between the punisher and the target of their punishment. In short, if someone does some harm to you, you will only desire to punish them to the extent that it will make you two “even” again. An eye for an eye, as the saying goes; not an eye for a head. The second piece makes a much different claim: that human punishment mechanisms are not designed for fairness at all, seeking instead to inflict large costs on others who harm you, so as to deter future exploitation. Though neither of these papers assesses punishment in a social context, I think they have something to tell us about that all the same. Before getting to that point, though, let’s start by considering the research in question.

The first of these papers is from Bone & Raihani (2015). Without getting too bogged down in the details, the general methods of the paper go as follows: two players enter into a game together. Player A begins the game with $1.10 while player B begins with a payment ranging from $0.60 to $1.10. Player B is then given a chance to “steal” some of player A’s money for himself. The important part about this stealing is that it would either leave player B (a) still worse off than A, (b) with an equal payment to A, or (c) with a better payment than A. After the stealing phase, player A has the chance to respond by “punishing” player B. This punishment was either efficient – where for each cent player A spent, player B would lose three – or inefficient – where for each cent player A spent, player B would only lose one. The results of this study turned up the following findings of interest: first, player As who were stolen from tended to punish the player Bs more, relative to when the As were not stolen from. Second, player As who had access to the more efficient punishment option tended to spend more on punishment than those who had access to the less efficient option. Third, those player As who had access to the efficient punishment option also punished player Bs more in cases where B ended up better off than them. Finally, when participants in that former case were punishing the player Bs, the most common amount of punishment they enacted was the amount which would leave both player A and B with the same payment. From these findings, Bone & Raihani (2015) conclude that:

Although many of our results support the idea that punishment was motivated primarily by a desire for revenge, we report two findings that support the hypothesis that punishment is motivated by a desire for equality (with an associated fitness-leveling function…)

In other words, the authors believe they have observed the output of two distinct preferences: one for punishing those who harm you (revenge), and one for creating equality (fitness leveling). But were people really that concerned with “being even” with their agent of harm? I take issue with that claim, and I don’t believe we can conclude that from the data. 

We’re working on preventing exploitation; not building a frame.

To see why I take issue with that claim, I want to consider an earlier paper by Houser & Xiao (2010). This study involves a slightly different setup. Again, two players are involved in a game: player A begins the game by receiving $8. Player A could then transfer some amount of that money (either $0, $2, $4, $6, or $8) to player B, and then keep whatever remained for himself (another condition existed in which this transfer amount was randomly determined). Following that transfer, both players received $2. Finally, player B was given the following option: to pay $1 for the option to reduce player A’s payment by as much as they wanted. The results showed the following pattern: first, when the allocations were random, player B rarely punished at all (under 20%) and, when they did punish, they tended to punish the other player irrespective of inequality. That is, they were equally likely to deduct at all, no matter the monetary difference, and the amount they deducted did not appear to be aimed at achieving equality. By contrast, of the player Bs that received $0 or $2 intentionally, 54% opted to punish player A and, when they did punish, were most likely to deduct so much from player A that they ended up better off than him (that outcome obtained between 66-73% of the time). When given free rein over the desired punishment amount, then, punishers did not appear to be seeking equality as an outcome. This finding, the authors conclude, is inconsistent with the idea that people are motivated to achieve equality per se.

What both of these studies do, then, is vary the cost of punishment. In the first, punishment is either inefficient (1-to-1 ratio) or quite efficient (3-to-1 ratio); in the second, punishment is unrestricted in its efficiency (X-to-1 ratio). In all cases, as punishment becomes more efficient and less costly, we observe people engaging in more of it. What we learn about people’s preferences for punishment, then, is that they seem to be based, in some part, on how costly punishment is to enact. With those results, I can now turn to the matter of what they tell us about punishment in a social context. As I mentioned before, the costs of engaging in punishment can be augmented or reduced to the extent that other people join in your disputes. If your course of punishment is widely supported by others, this means it’s easier to enact; if your punishment is opposed by others, not only is it costlier to enact, but you might in turn get punished for engaging in your excessive punishment. This idea is fairly easy to wrap one’s mind around: stealing a piece of candy from a corner store does not usually warrant the death penalty, and people would likely oppose (or attack) the store owner or some government agency if they attempted to hand down such a draconian punishment for the offense.

Now many of you might be thinking that third parties were not present in the studies I mentioned, so it would make no sense for people to be thinking about how these non-existent third parties might feel about their punishment decisions. Such an intuition, I feel, would be a mistake. This brings me back to the matter of pornography briefly. As I’ve written before, people’s minds tend to generate physiological arousal to pornography despite there being no current adaptive reason for that arousal. Instead, our minds – or, more precisely, specific cognitive modules – attend to particular proximate cues when generating arousal that historically correlated with opportunities to increase our genetic fitness. In modern environments, where that link between cue and fitness benefit is broken by digital media providing similar proximate cues, the result is maladaptive outputs: people get aroused by an image, which makes about as much adaptive sense as getting aroused by one’s chair.

The same logic can likely be applied to punishment here as well, I feel: the cognitive modules in our mind responsible for punishment decisions evolved in a world of social punishment. Not only would your punishment decisions become known to others, but those others might join in the conflict on your side or opposing you. As such, proximate cues that historically correlated with the degree of third party support are likely still being utilized by our brains in these modern experimental contexts where that link is being intentionally broken and interactions are anonymous and dyadic. What is likely being observed in these studies, then, is not an aversion to inequality as much as an aversion to the costs of punishment or, more specifically, the estimated social and personal costs of engaging in punishment in a world that other people exist in.

“We’re here about our concerns with your harsh punishment lately”

When punishment is rather cheap to enact for the individual in question – as it was in Houser & Xiao (2010) – the social factor probably plays less of a role in determining the amount of punishment enacted. You can think of that condition as one in which a king is punishing a subject who stole from him: while the king is still sensitive to the social costs of punishment (punish too harshly and the rabble will rise up and crush you…probably), he is free to punish someone who wronged him to a much greater degree than your average peasant on the street. By contrast, in Bone & Raihani (2015), the punisher is substantially less powerful and, accordingly, more interested in the (estimated) social support factors. You can think of those conditions as ones in which a knight or a peasant is trying to punish another peasant. This could well yield inequality-seeking punishment in the former study and equality-seeking punishment in the latter, as different groups require different levels of social support, and so scale their punishment accordingly. Now the matter of why third parties might be interested in inequality between the disputants is a different matter entirely, but recognition of the existence of that factor is important for understanding why inequality matters to second parties at all.

References: Bone, J. & Raihani, N. (2015). Human punishment is motivated both by a desire for revenge and a desire for equality. Evolution & Human Behavior, 36, 323-330.

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters, 109, 20-23.

Inequality Aversion, Evolution, And Reproduction

Here’s a scenario that’s not too difficult to imagine: a drug company has recently released a paper claiming that a product they produce is both safe and effective. It would be foolish of any company with such a product to release a report saying their drugs were in any way harmful or defective, as it would likely lead to a reduction in sales and, potentially, a banning or withdrawal of the drugs from the wider market. Now, one day, an outside researcher claims to have some data suggesting that drug company’s interpretation of their data isn’t quite right; once a few other data points are considered, it becomes clear that the drug is only contextually effective and, in other cases, not really effective at all. Naturally, were some representatives of the drug company asked about the quality of this new data, one might expect them to engage in a bit of motivated reasoning: some concerns might be raised about the quality of the new research that otherwise would not be, were its conclusions different. In fact, the drug company would likely wish to see the new research written up to be more supportive of their initial conclusion that the drug works. Because of their conflict of interests, however, it would be unrealistic to expect those representatives to give an unbiased appraisal of research suggesting the drug is actually much less effective than previously stated. For this reason, you probably shouldn’t ask representatives from the drug company to serve as reviewers for the new research, as they’d be assessing both the quality of their own work and the quality of the work of others with factors like ‘money’ and ‘prestige’ on the table.

“It must work, as it’s been successfully making us money; a lot of money”

On an entirely unrelated note, I was the lucky recipient of a few comments about some work of mine concerning inequality aversion: the idea that people dislike inequality per se (or at least when they get the short end of the inequality stick) and are willing to actually punish it. Specifically, I happen to have some data that suggests people do not punish inequality per se: they are much more interested in punishing losses, with inequality only playing a secondary role in – occasionally – increasing the frequency of that punishment. To place this in an easy example, let’s consider TVs. If someone broke into your house and destroyed your TV, you would likely want to see the perpetrator punished, regardless of whether they were richer or poorer than you. Similarly, if someone went out and bought themselves a TV (without having any effect on yours), you wouldn’t really have any urge to punish them at all, whether they were poorer or richer than you. If, however, someone broke into your house and took your TV for themselves, you would likely want to see them punished for their actions. If they were actually poorer than you, though, this might incline you to go after the thief a bit less. This example isn’t perfect, but it basically describes what I found.

Inequality aversion would posit that people show a different pattern of punitive sentiments: that you would want to punish people who end up better off than you, regardless of how they got that way. This means that you’d want to punish the guy who bought the TV for himself if it meant he ended up better off than you, even though he had no effect on your well-being. Alternatively, you wouldn’t be particularly inclined to punish the person who stole/broke your TV either unless they subsequently ended up better off than you. If they were poorer than you to begin with and were still poorer than you after stealing/destroying the TV, you ought not to be particularly interested in seeing them punished.

In case that wasn’t clear, the argument being put forth is that how well you are doing relative to others ought to be used as an input for punishment decisions to a greater extent – a far greater one – than absolute losses or gains are.

Now there’s a lot to say about that argument. The first thing to say is that, empirically, it is not supported by the data I just mentioned: if people were interested in punishing inequality itself, they ought to be willing to punish that inequality regardless of how it came about: stealing a TV, buying a TV, or breaking a TV should be expected to prompt very similar punishment responses; it’s just that they don’t: punishment is almost entirely absent when people create inequality by benefiting themselves at no cost to others. By contrast, punishment is rather common when costs are inflicted on someone, whether those costs involve taking (where one party benefits while the other suffers) or destruction (where one party suffers a loss at no benefit to anyone else). On those grounds alone we can conclude that something is off about the inequality aversion argument: the theory does not match the data. Thankfully – for me, anyway – there are also many good theoretical justifications for rejecting inequality aversion.

“It’s a great home in a good neighborhood; pay no mind to the foundation”

The next thing to say about the inequality argument is that, in one regard, it is true: relative reproduction rates determine how quickly the genes underlying an adaptation spread – or fail to spread – throughout the population. As resources are not unlimited, a gene that reproduces itself 1.1 times for each time an alternative variant reproduces itself once will eventually replace the other in the population entirely, assuming that the reproductive rates stay constant. It’s not enough for genes to reproduce themselves, then, but for them to reproduce themselves more frequently than competitors if they metaphorically hope to stick around in the population over time. That this much is true might lure people into accepting the rest of the line of reasoning, though to do so would be a mistake for a few reasons.

Notable among these reasons is that “relative reproductive advantage” does not have three modes of “equal”, “better”, or “worse”. Instead, relative advantage is a matter of degree: a gene that reproduces itself twice as frequently as other variants is doing better than a gene that does so with 1.5 times the frequency; a gene that reproduces itself three times as frequently will do better still, and so on. As relative reproductive advantages can be large or small, we ought to expect mechanisms that generate larger relative reproductive advantages to be favored over those which generate smaller ones. On that point, it’s worth bearing in mind that the degree of relative reproductive advantage is an abstract quantity composed of absolute differences between variants. This is the same point as noting that, even if the average woman in the US has 2.2 children, no woman actually has two-tenths of a child lying around; they only come in whole numbers. That means, of course, that evolution (metaphorically) must care about absolute advantages to precisely the same degree it cares about relative ones, as maximizing a relative reproductive rate is the same thing as maximizing an absolute reproductive rate.
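To make the replacement dynamic concrete (a toy sketch of my own, not a calculation from the text): a variant that reproduces 1.1 times for each time its rival reproduces once will, holding those rates constant, steadily sweep from rarity toward fixation:

```python
# Toy illustration (my own, not from the text): the frequency of a gene
# variant with a constant 1.1x relative reproductive advantage, iterated
# one generation at a time under simple replicator dynamics.
def next_frequency(p, advantage=1.1):
    """Frequency of the favored variant after one generation."""
    return (advantage * p) / (advantage * p + (1 - p))

p = 0.01  # start rare: 1% of the population
for generation in range(200):
    p = next_frequency(p)
print(round(p, 3))  # the variant is nearly fixed after 200 generations
```

The small per-generation edge compounds, which is the sense in which even a modest relative advantage eventually replaces the alternative variant entirely.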

The question remains, however, as to what kind of cognitive adaptations would arise from that state of affairs. On the one hand, we might expect adaptations that primarily monitor one’s own state of affairs and make decisions based on those calculations. For instance, if a male with two mates has an option to pursue a third and the expected fitness benefits of doing so outweigh the expected costs, then the male in question would likely pursue the opportunity. On the other hand, we might follow the inequality aversion line of thought and say that the primary driver of the decision to pursue this additional mate should be how well the male in question is doing, relative to his competitors. If most (or should it be all?) of his competitors currently have fewer than two mates, then the cognitive mechanisms underlying his decision should generate a “don’t pursue” output, even if the expected fitness costs are smaller than the benefits. It’s hard to imagine how this latter strategy is expected to do better (much less far better) than the former, especially in light of the fact that calculating how everyone else is doing is more costly and prone to errors than calculating how you are doing. It’s similarly hard to imagine how the latter strategy would do better if the state of the world changes: after all, just because someone is not currently doing as well as you, it does not mean they won’t eventually be. If you miss an opportunity to be doing better today, you may end up being relatively disadvantaged in the long run.

“I do see her more than the guy she’s cheating on me with, so I’ll let it slide…”

I’m having a hard time seeing how a mechanism that operates on an expected fitness cost/benefit analysis would get out-competed by a more cognitively-demanding strategy that either ignores such an analysis or takes it and adds something irrelevant into the calculations (e.g., “get that extra benefit, but only so long as other people are currently doing better than you”). As I mentioned initially, the data show that the absolute cost/benefit pattern predominates: people do not punish others primarily on the basis of whether those others are doing better than them; they primarily punish on the basis of whether they themselves experienced losses. Nevertheless, inequality does play a secondary role – sometimes – in the decision regarding whether to punish someone for taking from you. I happen to think I have an explanation as to why that’s the case but, as I’ve also been informed by another helpful comment (which might or might not be related to the first one), speculating about such things is a bit on the taboo side and should be avoided. Unless one is speculating that inequality, and not losses, primarily drives punishment, that is.

Understanding Conspicuous Consumption (Via Race)

Buckle up, everyone; this post is going to be a long one. Today, I wanted to discuss the matter of conspicuous consumption: the art of spending relatively large sums of money on luxury goods. When you see people spending close to $600 on a single button-up shirt, two months’ salary on an engagement ring, or tossing spinning rims on their car, you’re seeing examples of conspicuous consumption. A natural question that many people might (and do) ask when confronted with such outrageous behavior is, “why do people apparently waste money like this?” A second, related question that might be asked once we have an answer to the first (indeed, our examination of this second question should be guided by – and eventually inform – our answer to the first) is: how can we understand who is most likely to spend money in a conspicuous fashion? Alternatively, this question could be framed by asking what contexts tend to favor conspicuous consumption. Such information should be valuable to anyone looking to encourage or target big-ticket spending or spenders; or, if you’re a bit strange, you could also use it to try to create contexts in which people spend their money more responsibly.

But how fun is sustainability when you could be buying expensive teeth instead?

The first question – why do people conspicuously consume – is perhaps the easier one to answer initially, as it’s been discussed for the last several decades. In the biological world, when you observe seemingly gaudy ornaments that are costly to grow and maintain – peacock feathers being the go-to example – the key to understanding their existence is to examine their communicative function (Zahavi, 1975). Such ornaments are typically a detriment to an organism’s survival; peacocks could do much better for themselves if they didn’t have to waste time and energy growing tail feathers which make it harder to maneuver in the world and escape from predators. Indeed, if there were some kind of survival benefit to those long, colorful tail feathers, we would expect both sexes to develop them, not just the males.

However, it is precisely because these feathers are costly that they are useful signals, since males in relatively poor condition cannot shoulder their costs effectively. It takes a healthy, well-developed male to survive and thrive in spite of carrying these trains of feathers. The costs of these feathers, in other words, ensure their honesty, in the biological sense of the word. Accordingly, females who prefer males with these gaudy tails can be more assured that their mate is of good genetic quality, likely leading to offspring well-suited to survive and eventually reproduce themselves. On the other hand, if such tails were free to grow and develop – that is, if they did not reliably carry much cost – they would not make good cues to such underlying qualities. Essentially, a free tail would be a form of biological cheap talk. It’s easy for me to simply say I’m the best boxer in the world, which is why you probably shouldn’t believe such boasts until you’ve actually seen me perform in the ring.

Costly displays, then, owe their existence to the honesty they impart to a signal. Human consumption patterns should be expected to follow a similar logic: if someone is looking to communicate information to others, costlier communications should be viewed as more credible than cheap ones. To understand conspicuous consumption, we would need to begin by thinking about what signal someone is trying to send to others, how that signal is being sent, and what conditions tend to make the sending of particular signals more likely. Toward that end, I was recently sent an interesting paper examining how patterns of conspicuous consumption vary among racial groups. Specifically, the paper examined racial patterns of spending on what were dubbed visible goods: luxury items that are both conspicuous in anonymous interactions and portable, such as jewelry, clothing, and cars. These are goods designed to be frequently seen by others, relative to less-visible luxury items such as hot tubs or fancy bed sheets.

That is, unless you just have to show off your new queen mattress

The paper, by Charles et al. (2009), examined data drawn from approximately 50,000 households across the US, representing about 37,000 White, 7,000 Black, and 5,000 Hispanic households headed by people between the ages of 18 and 50. In absolute dollar amounts, Black and Hispanic households tended to spend less on all manner of things than White ones (about 40% and 25% less, respectively), but this difference needs to be viewed with respect to each group’s relative income. After all, richer people tend to spend more than poorer people. Accordingly, the income of these households was estimated through their overall reported spending on a variety of different goods, such as food, housing, etc. Once a household’s overall income was controlled for, a better picture of its relative spending across a number of different categories emerged. Specifically, it was found that Blacks and Hispanics tended to spend more on visible goods (like clothing, cars, and jewelry) than Whites by about 20-30%, depending on the estimate, while consuming relatively less in other categories like healthcare and education.

This visible consumption is appreciable in absolute size as well. The average White household was spending approximately $7,000 on such purchases each year, which would imply that a comparably-wealthy Black or Hispanic household would spend approximately $9,000. These purchases come at the expense of all other categories (which should be expected, as the money has to come from somewhere), meaning that money spent on visible goods often means less spent on education, health care, and entertainment.

There are some other interesting findings to mention. One – which I find rather notable, but the authors don’t seem to spend any time discussing – is that racial differences in consumption of visible goods decline sharply with age: specifically, the Black-White gap in visible spending was 30% in the 18-34 group, 23% in the 35-49 group, and only 15% in the 50+ group. Another similarly-undiscussed finding is that the visible consumption gap appears to decline as one goes from single to married. The numbers Charles et al. (2009) mention estimate that the average percentage of budget devoted to visible purchases was 32% higher for single Black men, 28% higher for single Black women, and 22% higher for married Black couples, relative to their White counterparts. Whether these declines represent declines in absolute dollar amounts or just declines in racial differences, I can’t say, but my guess is that they represent both. Getting older and getting into relationships tended to reduce the racial divide in visible goods consumption.

Cool really does have a cut-off age…

Noting these findings is one thing; explaining them is another, and arguably the thing we’re more interested in doing. The explanation offered by Charles et al. (2009) goes roughly as follows: people have a certain preference for social status, specifically with respect to their economic standing, and are interested in signaling that standing to others via conspicuous consumption. However, the degree to which you have to signal depends strongly on the reference group to which you belong. For example, if Black people have a lower average income than Whites, then observers might tend to assume that any given Black person has a lower economic standing. To overcome this assumption, Black individuals should be particularly motivated to signal that they do not, in fact, have the lower economic standing more typical of their group. In brief: as the average income of a group drops, those members with money should be particularly inclined to signal that they are not as poor as the others below them in their group.

In support of this idea, Charles et al. (2009) further analyzed their data, finding that average spending on visible luxury goods declined in states with higher average incomes, just as it declined among racial groups with higher average incomes. In other words, raising the average income of a racial group within a state tended to strongly reduce the percentage of that group’s consumption that was visible in nature. Indeed, the size of this effect was such that, controlling for the average income of a race within a state, the racial gaps almost entirely disappeared.

Now there are a few things to say about this explanation, the first of which is that it’s incomplete as it stands. From my reading, it’s a bit unclear how the explanation accounts for the current data. Specifically, it would seem to posit that people are looking to signal that they are wealthier than those immediately below them on the social ladder. This could explain the signaling in general, but not the racial divide. To explain the racial divide, you need to add something else; perhaps that people are trying to signal to members of higher-income groups that, though one belongs to a lower-income group, one’s own income is higher than that group’s average. However, that explanation would not account for the age/marital status findings I mentioned before without adding other assumptions, nor would it directly explain the benefits that arise from signaling one’s economic status in the first place. Moreover, if I’m understanding the results properly, it wouldn’t directly explain why visible consumption drops as the overall level of wealth increases. If people are trying to signal something about their relative wealth, increasing aggregate wealth shouldn’t have much of an impact, as “rich” and “poor” are relative terms.

“Oh sure, he might be rich, but I’m super rich; don’t lump us together”

So how might this explanation be altered to fit the data better? The first step is to be more explicit about why people might want to signal their economic status to others in the first place. Typically, the answer hinges on the fact that being able to command more resources effectively makes one a more valuable associate. The world is full of people who need things – like food and shelter – so being able to provide those things should make one seem like a better ally to have. For much the same reason, being in command of resources also tends to make one appear a more desirable mate. A healthy portion of conspicuous signaling, as I mentioned initially, has to do with attracting sexual partners. If you know that I am capable of providing you with valuable resources you desire, this should, all else being equal, make me look like a more attractive friend or mate, depending on your sexual preferences.

However, recognizing that underlying logic helps make a corollary point: the added value that I can bring you, owing to my command of resources, diminishes as overall wealth increases. To put it in a simple example: there’s a big difference between having access to no food and some food; there’s less of a difference between some food and good food; and less of a difference still between good food and great food. The same holds for all manner of other resources. Because the marginal value of resources decreases as overall access to resources increases, we can explain the finding that increases in average group wealth decrease relative spending on visible goods: there’s less value in signaling that one is wealthier than another if that wealth difference isn’t going to amount to the same degree of marginal benefit.
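That diminishing-returns point can be sketched numerically. Here is a minimal illustration assuming a concave (logarithmic) utility of resources – the specific function and dollar amounts are mine for illustration, not drawn from the paper:

```python
import math

# Minimal sketch of diminishing marginal value, assuming log utility of
# resources (an illustrative assumption, not from Charles et al.).

def marginal_gain(baseline, extra):
    """Utility gained by adding `extra` resources to a `baseline` stock."""
    return math.log(baseline + extra) - math.log(baseline)

# The same $1,000 difference is worth far more against a poor baseline
# than against a rich one -- so it is worth far more signaling effort.
print(round(marginal_gain(1_000, 1_000), 3))    # -> 0.693 (gain when poor)
print(round(marginal_gain(100_000, 1_000), 3))  # -> 0.01  (gain when rich)
```

Any concave utility function gives the same qualitative result: as a group’s baseline wealth rises, the marginal benefit conveyed by a fixed wealth difference shrinks, and with it the value of advertising that difference.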

So, provided that wealth has a higher marginal value in poorer communities – like the Black and Hispanic ones in this sample, relative to the White ones – we should expect more signaling of it in those contexts. This logic can explain the racial gap in spending patterns. It’s not that people are trying to avoid a negative association with a poor reference group so much as that they engage in signaling only to the extent that the signal holds value for others. In other words, it’s not about signaling to avoid being thought of as poor; it’s about signaling to demonstrate that I hold high value as a partner, socially or sexually, relative to my competition.

Similarly, if signaling functions in part to attract sexual partners, we can readily explain the age and marital data as well. Those who are married are relatively less likely to engage in signaling for the purpose of attracting a mate, as they already have one. They might engage in such purchases for the purpose of retaining that mate, though such purchases should then involve spending money on visible items for the other person, rather than for themselves. Further, as people age, their competition in the mating market tends to decline for a number of reasons, such as existing children, reduced ability to compete effectively, and fewer years of reproductive viability ahead of them. Accordingly, we see that visible consumption tends to drop off, again because the marginal value of sending such signals has declined.

“His most attractive quality is his rapidly-approaching demise”

Finally, it is also worth noting other factors that might play an important role in determining the marginal value of this kind of conspicuous signaling. One of these is an individual’s life history. To the extent that one is following a faster life history strategy – reproducing earlier and taking rewards today rather than saving for greater rewards later – one might be more inclined to engage in such visible consumption, as the marginal value of signaling that you have resources now is higher when the stability of those resources (or of your future) is called into question. The current data do not speak to this possibility, however. Additionally, one’s sexual strategy might also be a valuable piece of information, given the links we saw with age and marital status. As these ornaments are predominantly used to attract the attention of prospective mates in nonhuman species, it seems likely that individuals with a more promiscuous mating strategy should see a higher marginal value in advertising their wealth visibly: more attention matters if you’re looking to attract multiple partners. In all cases, I feel these explanations make more textured predictions than the “signaling to not seem as poor as others” hypothesis, as considerations of adaptive function often do.

References: Charles, K., Hurst, E., & Roussanov, N. (2009). Conspicuous consumption and race. The Quarterly Journal of Economics, 124, 425-467.

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.