Maybe It’s Not The Money; Maybe It’s What Money Represents

“Thank you all for being here. Now pay me”

I’m a big believer in the value of education, which is why I’ve spent so much time educating people (in forums other than here and about topics other than psychology as of late, but I’m always scratching that same itch). As anyone who has been through an education system can tell you, however, not all educators provide the same amount of value. Some teachers and professors have inspired me to reach for new heights while others have killed any interest in a subject I might have had. Some taught me valuable and useful information while others have provided active misinformation. Naturally, if we have the option, we’d all prefer the former type of teacher – the good ones. The same holds true for most parents as well: given the option, they’d prefer their children had access to the best teachers over the worst ones, all else being equal. This is all working under the assumption that good teachers provide better opportunities for their students in the future. I don’t think we’re breaking any new ground here with these premises and I think they’re all sound. This drives students and parents to seek out the best teachers they can find.

Quantifying someone’s quality as an educator is difficult, however. This leads people to fall back on the things they can measure more easily as proxies for educator quality, like student outcomes. After all, if a student cannot perform tasks related to what they were just taught, that’s a reasonable indication that the teacher might not be great at their job. If only matters were that simple, we’d have better teachers. They aren’t, though: such a measure conflates student quality with teaching quality. Put the best teacher in a room of students with an IQ below 80 and you’ll see worse outcomes in terms of student performance than a poor teacher instructing a class with an IQ above 120. Teachers can help you reach for the stars; they just can’t bring the stars to you.

Nevertheless, people do use student outcomes as a proxy for education quality and, as it turns out, students at private schools tend to outperform those at public ones. With limited information available, many people might come to believe that private schools give their children a better education and invest large amounts of resources to ensure their children go there. Perhaps we could improve student performance if we could just send more children to private schools. It’s an interesting suggestion.

“No poor people allowed…until now”

Let’s get the most important question out there first: why would we expect that a private education is better than a public education? This question matters because the primary difference between these two sources of education is simply the source of funding: private education is funded privately; public education, publicly. One might wonder what the source of the funding has to do with the quality of education received, and rightly so. As far as I can tell, the answer is that funding source per se is largely irrelevant. If you’re buying a new phone, the quality of phone you receive shouldn’t be expected to change on the basis of whether you’re using your money or the government’s money to make the purchase. The same should hold true of education.

As such, if you’re wondering whether private or public education is better, you’re not really looking at the right variables. Whatever factors are important for a good education – class sizes, instructor quality, instruction method, and so on – should be the same for both domains. So perhaps, then, private educations are better because more money allows people to purchase better teachers with better supplies and better methods. As the old saying goes, “you get what you pay for.” Presumably, this would result in children at private schools achieving more in terms of learning and outperforming their public-schooled peers. It might also mean that if public schools just received more money to purchase more materials, space, or better teachers, you’d see student performance begin to increase.

That said, this logic usually only holds true to a point. There are diminishing returns on the amount of quality you receive per extra dollar spent. A $5 shirt might be of lower quality than a $30 shirt, but is that shirt six times better? Is that $120 designer shirt four times better still? At some point, spending more doesn’t necessarily get you much in the way of a better product.

Hey, have you tried paying even more than that?

This brings us nicely to the present paper by Pianta & Ansari (2018), who examined a sample of approximately 1,100 children’s education-related achievements over time (from birth to age 15). While the paper isn’t experimental in nature, the authors sought to determine to what extent children’s enrollment in private schools affected their performance, as records of their school attendance were available (among other measures). Whether these children attended any private school (yes/no), as well as how much private school they attended, were used to predict their ninth-grade performance on a number of standard metrics. These included cognitive, literacy, and math skills, as well as working memory abilities. Just to be thorough, they also asked these children how competent they felt in a couple of academic domains. The authors also assessed children’s behavioral problems – internal and external – and social skills to see if private school had an impact on those as well. Finally, a number of family variables were collected, including factors like birth weight, maternal employment and vocabulary, and race – in other words, factors unrelated to the public vs. private schooling itself.

Turning to the results, when the authors were just trying to predict cognitive and academic performance from the amount of private school attended, there was a noticeable difference. Children who attended any private school tended to outperform those who only attended public school on most of the measured variables. The authors then conducted the same analysis, adding in some of those pesky family variables – like family income – which ended up reducing just about all of those relationships to non-significance, and this was true regardless of how long the children had attended private institutions. In other words, children who attend private school tended to do better than those who attended public school, but this might have very little to do with the schools per se.
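To make the “controlling for family variables” step concrete, here is a minimal sketch of how that kind of covariate adjustment is typically run, assuming the Python statsmodels library; the data and variable names below are made up for illustration and are not the study’s:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Made-up illustration data; column names are placeholders, not the study's variables.
df = pd.DataFrame({
    "test_score":     [82, 91, 74, 88, 69, 95, 71, 85],
    "private_school": [1,  1,  0,  1,  0,  1,  0,  0],        # any private attendance (yes/no)
    "family_income":  [95, 120, 40, 150, 35, 180, 45, 60],    # in $1000s
})

# Model 1: private schooling alone predicting ninth-grade performance.
m1 = smf.ols("test_score ~ private_school", data=df).fit()

# Model 2: the same model with a family covariate added. If the private_school
# coefficient shrinks toward zero (and loses significance) once family_income is
# included, the apparent "school effect" adds little beyond family background.
m2 = smf.ols("test_score ~ private_school + family_income", data=df).fit()

print(m1.params["private_school"], m2.params["private_school"])
```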

While that finding might be interesting to some for reasons related to their finances, it interests me for a different reason. Specifically, at no point in the paper (or the comments/reactions to it) do the authors mention that maybe the difference in performance has to do with some kind of biologically-inherited potential. The ability to learn, like all things biological, is partially inherited. Smart parents tend to have smart children, just like tall parents tend to have tall children. Instead, the focus of this paper (and the commentary) seems to revolve predominantly around controlling for the monetary factors.

Let’s just print more money until everyone’s a genius

Maybe richer parents are able to provide things that poorer parents cannot, and those things lead to better academic performance. Perhaps that’s true, but it does seem to gloss over a rather important fact: wealth is not distributed randomly. Those who are able to achieve higher incomes tend to do so because they possess certain skills that those who fail to achieve high income lack. These could be related to intelligence (factors like good working memories and high IQ) or personality (higher in agreeableness, conscientiousness, or other important factors). This is a long-winded way of saying that people who can successfully complete complicated jobs and show up consistently probably out-earn those who mess up everything they touch, frequently miss work, or become distracted by other goals regularly. Each group also tends to have children who inherit these tendencies.

We might expect, then, that parents who have lots of money to spend on an expensive private education are higher-performers, on average; that’s why they have so much extra cash and value spending it on what they think is a good education. They’re also the same kind of parents who are likely to have children who are higher performers, because the children genetically resemble them. This would certainly explain the present set of findings.

When people have different biological performance ceilings, the best teachers might help students reach those ceilings without changing where they reside. Past a certain point, then, educator quality may fail to have a noticeable effect. Let’s put that in a sports example: a great coach may make his players as good as they can be at basketball and as a team working together, but he can’t coach them into being taller. No amount of money can buy that ability in a coach. Conversely, some people are likely to succeed despite a poor education simply because they’re capable enough on their own that they don’t need much additional guidance. A poor teacher, to them, is simply white noise in the background they can ignore as they achieve all on their own.

“Can you please shut up so I can get back to being great?”

All of this is not to say that educators don’t vary in quality, but it could be the case that the distribution of that quality is at least partially (perhaps even largely or entirely) independent of money at the moment. Maybe teachers are being hired on the basis of things that have little to do with their ability to provide quality education. In higher education this is most certainly the case, where publications and the ability to bring in grant money are what make a candidate look appealing.

There is also the lurking matter of how peer quality influences the education of other students. A healthy portion of school life for any child involves managing the social world they attend school in. Children transferring into one school from another – private or public – find themselves faced with the prospect of navigating a new social hierarchy, and that goal tends to distract from education. Similarly, children who find themselves in a school where their peers don’t value education may not put learning at the top of their to-do list, as it affords them little social mobility (at least in the short term). It’s also possible that even poor-performing children will find little motivation to improve when you surround them with high-performing children if the gap between them is too wide. Since they can’t improve enough to see social gains from it, they may disengage from education and pursue other goals.

It’s not like the only thing that can change between schools – public or private – is educator quality or the amount of money they have for books. Many other moving parts are at work, so simply shuffling more children into private schools shouldn’t be expected to just improve outcomes.

References: Pianta, R. C. & Ansari, A. (2018). Does attendance in private schools predict student outcomes at age 15? Evidence from a longitudinal study. Educational Researcher, DOI: 10.3102/0013189X18785632

Getting Off Your Phone: Benefits?

The videos will be almost as good as being there in person

If you’ve been out to any sort of live event lately – be it a concert or some other similarly interesting gathering – you’ll often find yourself looking out over a sea of camera phones (perhaps through a camera yourself) in the audience. This has often given me a sense of general unease, for two reasons: first, I’ve taken such pictures in the past and, generally speaking, they come out like garbage. Turns out it’s not the easiest thing in the world to get clear audio in a video at a loud concert, or even a good picture if you’re not right next to the stage. But, more importantly, I’ve found such activities to detract from the experience; either because you’re spending time on your phone instead of just watching what you’re there to see, or because it signals an interest in showing other people what you’re doing rather than just doing it and enjoying yourself. Some might say all those people taking pictures aren’t quite living for the moment, so to speak.

In fact, it has been suggested (Soares & Storm, 2018) that the act of taking a picture can actually make your memory for the event worse at times. Why might this be? There are two candidate explanations that come to mind: first, and perhaps most intuitively, screwing around on your phone is a distraction. When you’re busy trying to work the camera and get the right shot, you’re just not paying attention to what you’re photographing as much. It’s a boring explanation, but perfectly plausible, just like how texting makes people worse drivers; their attention is simply elsewhere.

The other explanation is a bit more involved, but also plausible. The basics go like this: memory is a biologically-costly thing. You need to devote resources to attending to information, creating memories, maintaining them, and calling them to mind when appropriate. If we remembered everything we ever saw, for instance, we would likely be devoting lots of resources to ultimately irrelevant information (no one really cares how many windows each building you pass on your way home from work has, so why remember it?), and finding the relevant memory amidst a sea of irrelevant ones would take more time. Those who store memories efficiently might thus be favored by selection pressures as they can more quickly recall important information with less investment. What does that have to do with taking pictures? If you happen to snap a picture, you now have a resource you could later consult for details. Rather than store this information in your head, you can just store it in the picture and consult the picture when needed. In this sense, the act of taking a picture may serve as a proximate cue to the brain that information needs to be attended to less deeply and committed less firmly to memory.

Too bad it won’t help everyone else forget about your selfies

Worth noting is that these explanations aren’t mutually exclusive: it could both be true that taking a picture is a cue you don’t need to remember information as well and that taking pictures is distracting. Nevertheless, both could explain the same phenomenon, and if you want to test to see if they’re true, you need a way of differentiating them; a context in which the two make opposing predictions about what would happen. As a spoiler warning, the research I wanted to cover today tries to do that, but ultimately fails at the task. Nevertheless, the information is still interesting, and appreciating why the research failed at its goal is useful for future designs, some of which I will list at the end.

Let’s begin with what the researchers did: they followed a classic research paradigm in this realm and had participants take part in a memory task. They were shown a series of images and then given a test about them to see how much they remembered. The key differentiating variable here was that participants would sometimes view the targets without taking pictures, sometimes take a picture of each target before studying it, and sometimes take a picture and then delete it before studying the target. The thinking here was that – if the efficiency explanation were true – participants who took pictures in a way they knew they wouldn’t be able to consult later – such as when they are Snapchatted or deleted – would instead commit more of the information to memory. If you can’t rely on the camera to have the pictures, it’s an unreliable source of memory offloading (the official term), and so we shouldn’t offload. By contrast, if the mere act of taking the picture was distracting and interfered with memory in some way because of that, whether the picture was deleted or not shouldn’t matter. The simple act of taking the picture should be what causes the memory deficits, and similar deficits should be observed regardless of whether the picture was saved or deleted.

Without going too deeply into the specifics, this is basically what the researchers found: when participants had merely taken a picture – regardless of whether it was deleted or stored – the memory deficits were similar. People remembered these images better when they weren’t taking pictures. Does this suggest that taking pictures is simply an attention problem interfering with memory formation, rather than an offloading one?

Maybe the trash can is still a reliable offloading device

Not quite, and here’s why: imagine an experiment where you were measuring how much participants salivated. You think that the mere act of cooking will get people to salivate, and so construct two conditions: one in which hungry people cook and then get to eat the food after, and another in which hungry people cook the food and then throw it away before they get to eat (and they know in advance they will be throwing it away). What you’ll find in both cases is that people will salivate when cooking because the sights and smells of the food are proximate cues of getting to eat. Some part of their brains is responding to those cues that signal food availability, even if those cues do not ultimately correspond to their ability to eat it in the future. The part of the brain that consciously knows it won’t be getting food isn’t the same part responding to those proximate cues. While one part of you understands you’ll be throwing the food away, another part disagrees and thinks, “these cues mean food is coming,” and you start salivating anyway because of it.

This is basically the same problem the present research ran into. Taking a picture may be a proximate cue that information is stored somewhere else and so you don’t need to remember it as well, even if that part of the brain that is instructed to delete the picture believes otherwise. We don’t have one mind, but rather a series of smaller minds that may all be working with different assumptions and sets of information. Like a lot of research, then, the design here focuses too heavily on what people are supposed to consciously understand, rather than on what cues the non-conscious parts of the brain are using to generate behavior.

Indeed, the authors seem to acknowledge as much in their discussion, writing the following:

“Although the present results are inconsistent with an “explicit” form of offloading, they cannot rule out the possibility that through learned experience, people develop a sort of implicit transactive memory system with cameras such that they automatically process information in a way that assumes photographed information is going to be offloaded and available later (even if they consciously know this to be untrue). Indeed, if this sort of automatic offloading does occur then it could be a mechanism by which photo-taking causes attentional disengagement.”

All things considered, that’s a good passage, but one might wonder why that passage was saved for the end of their paper, in the discussion section. Imagine instead that this passage appeared in the introduction:

“While it is possible that operating a camera to take a picture disrupts participants’ attention and results in a momentary encoding deficit, it is also entirely possible that the mere act of taking a picture is a proximate cue used by the brain to determine how thoroughly (largely irrelevant) information needs to be encoded. Thus, our experiment doesn’t actually differentiate between these alternative hypotheses, but here’s what we’re doing anyway…”

Does your interest in the results of the paper go up or down at that point? Because that would effectively be the same thing the discussion section said. As such, it seems probable that the discussion passage may well represent an addition made to the paper after the fact, per a reviewer request. In other words, the researchers probably didn’t think the idea through as fully as they might like.  With that in mind, here are a few other experimental conditions they could have run which would have been better at the task of separating the hypotheses:

  • Have participants do something with a phone that isn’t taking a picture to distract themselves. If this effect isn’t picture specific, but people simply remember less when they’ve been messing around on a phone (like typing out a word, then looking at the picture), then the attention hypothesis would look better, especially if the impairments to memory are effectively identical.
  • Have an experimenter take the pictures instead of the participant. That way participants would not be distracted by using a phone at all, but still have a cue that the information might be retrievable elsewhere. However, the experimenter could also be viewed as a source of information themselves, so there could be another condition where an experimenter is simply present doing something that isn’t taking a picture. If an experimenter taking a picture results in worse memory as well, then it might be something about the knowledge of a picture in general causing the effect.
  • Better yet, if messing around with the phone is only temporarily disrupting encoding, then having participants take a picture of the target briefly and then wait a period (say, a minute) before viewing the target for the 15 seconds proper should help differentiate the two hypotheses. If the mere act of taking a picture in the past (whether deleted or not) causes participants to encode information less thoroughly because of proximate cues for efficient offloading, then this minor time delay shouldn’t alleviate those memory deficits. By contrast, if messing with the phone is just distracting people momentarily, the time delay should help counteract the effect.

These are all productive avenues that could be explored in the future for creating conditions where these hypotheses make different predictions, especially the first and third ones. Again, both could be true, and that could show up in the data, but these designs give the opportunity for that to be observed.

And, until the research is conducted, do yourself a favor and enjoy your concerts instead of viewing them through a small phone screen. (The caveat here is that it’s unclear whether such results would generalize, as in real life people decide what to take pictures of, rather than taking pictures of things they probably don’t really care about).

References: Soares, J. & Storm, B. (2018). Forget in a flash: A further investigation of the photo-taking-impairment effect. Journal of Applied Research in Memory & Cognition, 7, 154-160

The Distrust Of Atheists

Atheists are good friends because they keep it real

There’s an interesting finding rolling around about what kinds of people Americans would vote for as president. When asked:

“If your party nominated a generally well-qualified person for president who happened to be [blank], would you vote for that person?”

Answers varied a bit depending on the blank: 96% of Americans would vote for a Black president (while only 4% would not); 95% would vote for a woman. Characteristics like that don’t really dissuade people, at least in the abstract. Other groups don’t fare as well: only 68% of people said they would vote for a gay/lesbian candidate, and 58% a Muslim. But bottoming out the list? Atheists. A mere 54% of people said they would vote for an atheist. This is also a finding that changes a bit – but not all that much – between political affiliations. At the low point, 48% of Republicans would vote for an atheist, while at its peak, 58% of Democrats would. An appreciable difference, but not night and day (larger differences exist for Mormon, gay/lesbian and Muslim candidates, coming in at 18%, 26%, and 22%, respectively).

At the outset – and this is a point that will become important later – it is worth noting that the answers to these questions might not tell you how people would feel about any particular atheist, woman, Muslim, etc. They are not asking whether people would vote for a specific atheist; they are asking about voting for an atheist in the abstract sense of the word, so they are relying on stereotype information. It is also worth noting that people have become much more tolerant over time: in 1958, only 18% said they would vote for an atheist, so getting up to over half (and up to 70% in the younger generation) is good progress. Of course, only 38% said they would vote for a black person during that same year which, as we just saw, has changed dramatically to near 100% by 2012. Atheists haven’t made similar gains, in terms of degree.

This is a very interesting finding that begs for a proper explanation. What is it about atheists that puts people off so much? While I can’t provide a comprehensive or definitive answer at the moment, there is some research I wanted to discuss today that helps shed some light on the issue.

Spoilers…

The basic premise of this research is effectively that – to some (perhaps large) degree – religion per se isn’t what people are necessarily concerned about when they’re providing their answers to questions like our voting one. Instead, what concerns people are other, more-relevant factors that religiosity just so happens to correlate with. So people are really concerned with trait X in a candidate, but are using religiosity as a means of indirectly assessing the presence of trait X. In case that all sounds a bit too abstract, let’s make it concrete and think about the trait Moon, Krems, and Cohen (2018) examined: trust.

When considering who you’d like to support politically or interact with socially, trust is an important factor. If you know you can trust someone, this increases the types of cooperation you can safely engage in with them. When you cannot trust someone, for instance, interactions with them need to be relatively immediate for the sake of safety: I give you the money now and I get my product now. If they aren’t trustworthy, you should be less inclined to give them money now for the promise of your product in a day, week, month, year, or beyond, as they might just take your money and run. By contrast, someone who is trustworthy can offer cooperation over the longer term. The same logic applies to a leader. If you cannot trust a leader to work in your interests, why follow them and offer your support?

As it turns out, religious people are perceived to be more trustworthy than the nonreligious. Why might this be the case? One ostensibly obvious explanation that might jump out at you is that religious people tend to believe in deities that punish people for misbehavior. If someone believes they will be punished for breaking a promise, they should be less likely to break that promise, all else being equal. This is one explanation for the trust finding, then, but there’s an issue: it’s quite easy to just say you believe in a punishing deity when you actually do not. Since that signal is so cheap to produce, it wouldn’t be trustworthy.

This is where religion in particular might help, as membership in a religious group often involves some degree of costly investment: visits to houses of worship, following rituals that are a real pain to complete, and any other similar behavior. Those who are unwilling to endure those immediate costs for group membership demonstrate that they’re just talk. Their commitment doesn’t run deep enough for them to be willing to suffer for it. When behavior is no longer cheap, you can believe what people are telling you. Now, this might make religious people look more trustworthy because it demonstrates they’re more groupish and – by extension – more cooperative, but this groupishness is a double-edged sword: those who are inclined towards their group are usually less inclined towards others. This might mean that religious people are more trustworthy to their in-group, but not necessarily their out-group.

“Who’s up next to demonstrate their trustworthiness?”

There are other explanations, though. The one the present paper favors is the possibility that religious people tend to follow slower life history strategies. This means possessing traits like sexual restrictiveness (they’re relatively monogamous, or at least less promiscuous), greater investment in family, and a generally more future-oriented than present-focused outlook. This would be what makes them look more cooperative than the non-religious. Fast life history strategies are effectively the opposite: they view life as short and unpredictable and so take benefits today instead of saving for tomorrow, and invest more in mating effort than parental effort. Looking at religious individuals as slow-life strategists fits well with previous research suggesting that religious attitudes correlate better with sexual morality than they do cooperative morality, and that religions might act as support for long-term, monogamous, high-fertility mating strategies.

As with many stereotypes, those about religious individuals possessing these slow-life-history traits to a greater degree seem to be fairly accurate. So, when people are asked to judge an individual and are given no more information about them than their religion, they may tend to default to using those stereotypes to assess other traits of interest, like trust. This should also predict that when people know more about a particular individual’s life history strategy – be it fast or slow – religion per se should cease to be used as a predictor. After all, why bother to use religion to assess someone’s life history strategy when you can just assess that strategy directly? Religion stops adding anything at that point, and so information about it should be largely discarded.

As it turns out, this is basically what the research uncovered. In the first experiment, people (N = 336) were asked to rate targets (dating profiles of religious or non-religious individuals) on traits like aggression, impulsivity, and education, as well as whether they thought the person came from a rough neighborhood and whether they trusted them. As expected, people perceived the religious targets as less aggressive and impulsive, more educated, and more committed in sexual relationships, and – accordingly – trusted them more. These perceptions held even for the non-religious raters on average, who appeared to trust religious people more than those who shared their lack of belief. Experiment three basically replicated these same results, but also found that the effects were partially independent of the specific religion in question. That is, whether the target being judged was Christian or Muslim, they were still both trusted more than the non-religious targets (even if Christians were nominally trusted more than Muslims, likely due to the majority religion of the country in which the research took place).

Mileage may vary on the basis of local religious majorities

Experiment two is where the really interesting finding emerged. The procedure was generally the same as before, but now the dating profiles contained better individuating information about the person’s life history strategy. In this case, the targets described themselves either as looking for “someone special, settling down, and starting a family,” or as someone who “doesn’t see themselves settling down anytime soon, as they enjoy playing the field” (paraphrased slightly). When rating these profiles with better information about the person (beyond simply their religious behavior/belief), the effect of commitment strategy on trust was much larger (ηp² = .197) than the effect of religion per se (ηp² = .008).
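For readers unfamiliar with the effect-size metric reported there, partial eta squared is the proportion of the effect-plus-error variance attributable to a factor:

\[ \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}} \]

By that yardstick, commitment strategy accounted for roughly 20% of the relevant variance in trust ratings, while religion accounted for less than 1%.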

The authors also tried to understand which variables accounted for this relationship between reproductive strategy and trust. Their first model used “belief in god” as a mediator and did indeed find a small but significant path running from reproductive strategy to belief in god, which in turn predicted trust. However, when other life history traits were included as mediator variables (like impulsivity, opportunistic behavior, education, and hopeful ecology – effectively, what kind of neighborhood one comes from), the belief-in-god mediator was no longer significant while three of the life history variables were.
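To make that mediation logic concrete, here is a minimal sketch of a regression-based version of such an analysis using simulated data; the variable names and numbers are hypothetical, not the study’s, and the authors’ actual model tested several mediators simultaneously:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated, hypothetical data: a slower reproductive strategy raises belief in god
# a little and raises trust more directly; none of this is the study's actual data.
strategy = rng.normal(size=n)                       # higher = slower strategy
belief   = 0.3 * strategy + rng.normal(size=n)      # candidate mediator
trust    = 0.5 * strategy + 0.2 * belief + rng.normal(size=n)
df = pd.DataFrame({"strategy": strategy, "belief": belief, "trust": trust})

# Path a: does strategy predict the mediator?
path_a = smf.ols("belief ~ strategy", data=df).fit()

# Paths b and c': does the mediator predict trust with strategy controlled,
# and how much of the strategy -> trust relationship remains?
paths_bc = smf.ols("trust ~ strategy + belief", data=df).fit()

print(path_a.params["strategy"], paths_bc.params["belief"], paths_bc.params["strategy"])
# Adding further candidate mediators (impulsivity, education, and so on) to the second
# model is how one checks whether belief in god still carries the effect alongside them.
```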

In short, this would suggest that belief in god itself is not the thing doing much of the pulling when it comes to understanding why people trust religious people more. Instead, people are using religion as something of a proxy for someone’s likely reproductive strategy and, accordingly, life history traits. As such, when people have information directly bearing on the traits they’re interested in assessing, they largely stop using their stereotypes about religion in general and instead rely on information about the person (which is completely consistent with previous research on how people use stereotype information: when no other information is available, stereotypes are used, but as more individuating information is available, people rely on that more and their stereotypes less).

References: Moon, J., Krems, J., & Cohen, A. (2018). Religious people are trusted because they are viewed as slow life-history strategists. Psychological Science, DOI: 10.1177/0956797617753606

The Beautiful People

Less money than it looks like if that’s all 10s

There’s a perception that exists involving how out of touch rich people can be, summed up well in this popular clip from the show Arrested Development: “It’s one banana, Michael, how much could it cost? Ten dollars?” The idea is that those with piles of money – perhaps especially those who have been born into it – have a distorted sense of the way the world works, as there are parts of it they’ve never had to experience. A similar hypothesis guides the research I wanted to discuss today, which sought to examine people’s beliefs in a just world. I’ve written about this belief-in-a-just-world hypothesis before; the reviews haven’t been positive.

The present research (Westfall, Millar, & Lovitt, 2018) took the following perspectives: first, believing in a just world (roughly, that people get what they deserve and deserve what they get) is a cognitive bias that some people hold to because it makes them feel good. Notwithstanding the fact that “feeling good” isn’t a plausible function, for whatever reason the authors don’t seem to suggest that believing the world to be unfair is a cognitive bias as well, which is worth keeping in the back of your mind. Their next point is that those who believe in a just world are less likely to have experienced injustice themselves. The more personal injustice one experiences (injustices that affect you personally in a negative way), the more likely one is to reject the belief in a just world because, again, rejecting that belief when faced with contradictory evidence should maintain self-esteem. Placed in a simple example: if something bad happened to you and you believe the world is a just place, that would mean you deserved that bad thing because you’re a bad person. So, rather than think you’re a bad person, you reject the idea that the world is fair. It seems the biasing factor there would be the message of “I’m awesome and deserve good things,” as that could explain both believing the world is fair if things are going well and unfair if they aren’t, rather than the just-world belief being the bias, but I don’t want to dwell on that point too much yet.

This is where the thrust of the paper begins to take shape: attractive people are thought to have things easier in life, not unlike being rich. Because being physically attractive means one will be exposed to fewer personally-negative injustices (hot people are more likely to find dates, be treated well in social situations, and so on), they should be more likely to believe the world is a just place. In simple terms, physical attractiveness = better life = more belief in a just world. As the authors put it:

Consistent with this reasoning, people who are societally privileged, such as wealthy, white, and men, tend to be more likely to endorse the just-world hypothesis than those considered underprivileged

The authors also throw some lines into their introduction about how physical attractiveness is “largely beyond one’s personal control,” and how “…many long-held beliefs about relationships, such as an emphasis on personality or values, are little more than folklore,” in the face of people valuing physical attractiveness. Now, these don’t have any relevance to their paper’s theory and aren’t exactly correct, but they should also be kept in the back of your mind to understand the perspective the authors are writing from.

What a waste of time: physical attractiveness is largely beyond his control

In any case, the authors sought to test this connection between greater attractiveness (and societal privilege) and greater belief in a just world across two studies. The first of these involved asking about 200 participants (69 male) about their (a) belief in a just world, (b) perceptions of how attractive they thought they were, (c) self-esteem, (d) financial status, and (e) satisfaction with life. About as simple as things come, but I like simple. In this case, the correlation between how attractive one thought they were and belief in a just world was rather modest (r = .23), but present. Self-esteem was a better predictor of just-world beliefs (r = .34), as was life satisfaction (r = .34). A much larger correlation understandably emerged between life satisfaction and perceptions of one’s own attractiveness (r = .67). Thinking one was attractive tracked being happier with life more closely than it did believing the world is just. Money did much the same: financial status correlated better with life satisfaction (r = .33) than with just-world beliefs (r = .17). Also worth noting is that men and women didn’t differ in their just-world beliefs (Ms of 3.2 and 3.14 on the scale, respectively).

Study 2 did much the same as Study 1 with basically the same sample, but it also included ratings of a participant’s attractiveness supplied by others. This way you aren’t just asking people how attractive they are; you are also asking people less likely to have a vested interest in the answer to the question (for those curious, ratings of self-attractiveness only correlated with other-ratings at r = .21). Now, self-perception of physical attractiveness correlated with belief in a just world (r = .17) less well than independent ratings of attractiveness did (r = .28). Somewhat strangely, being rated as prettier by others wasn’t correlated with self-esteem (r = .07) or life satisfaction (r = .08) – which you might expect it would be if being attractive leads others to treat you better – though self-ratings of attractiveness were correlated with these things (rs = .27 and .53, respectively). As before, men and women failed to differ with respect to their just-world beliefs.

From these findings, the authors conclude that being attractive and rich makes one more likely to believe in a just world under the premise that they experience less injustice. But what about that result where men and women don’t differ with respect to their belief in a just world? Doesn’t that similarly suggest that men and women don’t face different amounts of injustice? While this is one of the last notes the authors make in their paper, they do seem to conclude that – at least around college age – men might not be particularly privileged over women. A rather unusual passage to find, admittedly, but a welcome one. It seems arguments about discrimination and privilege apply less well to college-aged men and women, at least.

While reading this paper, I couldn’t shake the sense that the authors have a rather particular perspective on the nature of fairness and the fairness of the world. Their passages about how belief in a just world is a bias contain no comparable comments about how thinking the world is unjust might be a bias as well; that omission, coupled with comments about how attractiveness is largely outside of one’s own control, and this…

Finally, the modest yet statistically significant relationship between current financial status and just-world beliefs strengthens the case that these beliefs are largely based on viewing the world from a position of privilege.

 …in the face of correlations ranging from about .2 to .3 does likely say something about the biases of the authors. Explaining about 10% or less of the variance in belief in a just world from ratings of attractiveness or financial status does not scream that ‘these beliefs are largely based’ on such things to me. In fact, it seems to suggest beliefs in a just world are largely based on other things. 
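To put rough numbers on that, the proportion of variance a correlation explains is its square:

\[ r = .28 \Rightarrow r^2 \approx .08, \qquad r = .17 \Rightarrow r^2 \approx .03 \]

which is why correlations in the .2 to .3 range leave over 90% of the variance in just-world beliefs to other factors.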

“The room is largely occupied by the ceiling fan”

While there is an interesting debate to have over the concept of fairness in this article, I actually wanted to use this research to discuss a different point about stereotypes. As I have written before, people’s beliefs about the world should tend towards accuracy. That is not to say they will always be accurate, mind you, but rather that we shouldn’t expect there to be specific biases built into the system in many cases. People might be wrong about the world to various degrees, but not because the cognitive system generating those perceptions evolved to be wrong (that is, take accurate information about the world and distort it); they should just be wrong because of imperfect information or environmental noise. The reason for this is that there are costs to being wrong and acting on imperfect information. If I believe there is a monster that lives under my bed, I’m going to behave differently than the person who doesn’t believe in such things. If I’m acting under an incorrect belief, my odds of doing something adaptive go down, all else being equal.

That said, there are some cases where we might expect bias in beliefs: the context of persuasion. If I can convince you to hold an incorrect belief, the costs to me can be substantially reduced or outweighed entirely by the benefits. For instance, if I convince you that my company is doing very well and only going to be doing better in the future, I might attract your investment, regardless of whether the belief you have in me is true. Or, if I had authored the current paper, I might be trying to convince you that attractive/privileged people in the world are biased while the less privileged are grounded realists.

The question arises, then, as to what the current results represent: are the beautiful people more likely to perceive the world as fair and the ugly ones more likely to perceive it as unjust because of random mistakes, persuasion, or something else? Taking persuasion first, if those who aren’t doing as well in life as they might hope because of their looks (or behavior, or something else) are able to convince others they have been treated unjustly and are actually valuable social assets worthy of assistance, they might be able to receive more support than if they are convinced their lot in life has been deserved. Similarly, the attractive folk might see the world as more fair to justify their current status to others and avoid having it threatened by those who might seek to take those benefits for their own. This represents a case of bias: presenting a case to others that serves your own interest, irrespective of the truth.

While that’s an interesting idea – and I think there could be an element of that in these results – there’s another option I wanted to explore as well: it is possible that neither side is actually biased. They might both be acting on information that is accurate as far as they know, but simply be working with different sets of it.

“As far as I can tell, it seems flat”

This is where we return to stereotypes. If person A has had consistently negative interactions with people from group X over their life, I suspect person A would have some negative stereotypes about them. If person B has had consistently positive interactions with people from the same group X over their life, I further suspect person B would have some positive stereotypes about them. While those beliefs shape each person’s expectations of the behavior of unknown members of group X and those beliefs/expectations contrast with each other, both are accurate as far as each person is concerned. Person A and person B are both simply using the best information they have, and their cognitive systems are injecting no bias – no manipulation of this information – when attempting to develop as accurate a picture of the world as possible.

Placed into the context of this particular finding, you might expect that unattractive people are treated differently than attractive ones, the latter offering higher value in the mating market at a minimum (along with other benefits that come with greater developmental stability). Because of this, we might have a naturally-occurring context where people are exposed to two different versions of the same world, both develop different beliefs about it, but neither necessarily doing so because they have any bias. The world doesn’t feel unfair to the attractive person, so they don’t perceive it as such. Similarly, the world doesn’t feel fair to the unattractive person who feels passed over because of their looks. When you ask these people about how fair the world is, you will likely receive contradictory reports that are both accurate as far as the person doing the reporting is aware. They’re not biased; they just receive systematically different sets of information.

Imagine taking that same idea and studying stereotypes on a more local level. What I’ve read about when it comes to stereotype accuracy research has largely been looking at how people’s beliefs about a group compare to that group more broadly; along the lines of asking people, “How violent are men, relative to women,” and then comparing those responses to data collected from all men and women to see how well they match up. While such responses largely tend towards accuracy, I wonder if the degree of accuracy could be improved appreciably by considering what responses any given participant should provide, given the information they have access to. If someone grew up in an area where men are particularly violent, relative to the wider society, we should expect they have different stereotypes about male violence, as those perceptions are accurate as far as they know. Though such research is more tedious and less feasible than using broader measures, I can’t help but wonder what results it might yield. 

References: Westfall, R., Millar, M., & Lovitt, A. (2018). The influence of physical attractiveness on belief in a just world. Psychological Reports, 0, 1-14.

Making A Great Leader

Selfies used to be a bit more hardcore

If you were asked to think about what makes a great leader, there are a number of traits you might call to mind, though what traits those happen to be might depend on what leader you call to mind: Hitler, Gandhi, Bush, Martin Luther King Jr, Mao, Clinton, or Lincoln were all leaders, but seemingly much different people. What kind of thing could possibly tie all these different people and personalities together under the same conceptual umbrella? While their characters may have all differed, there is one thing all these people shared in common and it’s what makes anyone anywhere a leader: they all had followers.

Humans are a social species and, as such, our social alliances have long been key to our ability to survive and reproduce over our evolutionary history (largely based around some variant of the point that two people are better at beating up one person than a single individual is; an idea that works with cooperation as well). While having people around who were willing to do what you wanted has clearly been important, this perspective on what makes a leader – possessing followers – turns the question of what makes a great leader on its head: rather than asking about what characteristics make one a great leader, you might instead ask what characteristics make one an attractive social target for followers. After all, while it might be good to have social support, you need to understand why people are willing to support others in the first place to fully understand the matter. If being a follower were all cost (supporting a leader at your own expense), then no one would be a follower. There must be benefits that flow to followers to make following appealing. Nailing down what those benefits are and why they are appealing should better help us understand how to become a leader, or how to fall from a position of leadership.

With this perspective in mind, our colorful cast of historical leaders suddenly becomes more understandable: they vary in character, personality, intelligence, and political views, but they must have all offered their followers something valuable; it’s just that whatever that something(s) was, it need not be the same something. Defense from rivals, economic benefits, friendship, the withholding of punishment: all of these are valuable resources that followers might receive from an alliance with a leader, even from the position of a subordinate. That something may also vary from time to time: the leader who got his start offering economic benefits might later transition into one who also provides defense from rivals; the leader who is followed out of fear of the costs they can inflict on you may later become a leader who offers you economic benefits. And so on.

“Come for the violence; stay for the money”

The corollary point is that features which fail to make one appealing to followers are unlikely to be the ones that define great leaders. For example – and of relevance to the current research on offer – gender per se is unlikely to define great leaders because being a man or a woman does not necessarily offer much to many followers. Traits associated with them might – like how those who are physically strong can help you fight against rivals better than one who is not, all else being equal – but not the gender itself. To the extent that one gender tends to end up in positions of leadership it is likely because they tend to possess higher levels of those desirable traits (or at least reside predominantly on the upper end of the population distribution of them). Possessing these favorable traits that allow leaders to do useful things is only one part of the equation, however: they must also appear willing to use those traits to provide benefits to their followers. If a leader possesses considerable social resources, they do you little good if said leader couldn’t be any less interested in granting you access to them.

This analysis also provides another point for understanding the leader/follower dynamic: it ought to be context specific, at least to some extent. Followers who are looking for financial security might look for different leaders than those who are seeking protection from outside aggression; those facing personal social difficulties might defer to different leaders still. The match between the talents offered by a leader and the needs of the followers should help determine how appealing some leaders are. Even traits that might seem universally positive on their face – like a large social network – might not be positives to the extent they affect a potential follower’s perception of their likelihood of receiving benefits. For example, leaders with relatively full social rosters might appear less appealing to some followers if that follower is seeking a lot of a leader’s time; since too much of it is already spoken for, the follower might look elsewhere for a more personal leader. This can create ecological leadership niches that can be filled by different people at different times for different contexts.

With all that in mind, there are at least some generalizations we can make about what followers might find appealing in a leader in an “all else being equal…” sense: those with more social support will be selected as leaders more often, as such resources are more capable of resolving disputes in your favor; those with greater physical strength or intelligence might be better leaders for similar reasons. Conversely, one might follow such leaders because of the costs failing to follow would incur, but the logic holds all the same. As such, once these and other important factors are accounted for, you should expect irrelevant factors – like sex – to fall out of the equation. Even if many leaders tend to be men, it’s not their maleness per se that makes them appealing leaders, but rather these valued and useful traits.

Very male, but maybe not CEO material

This is a hypothesis effectively tested in a recent paper by von Rueden et al. (in press). The authors examined the distribution of leadership in a small-scale foraging/farming society in the Amazon, the Tsimane. Within this culture – as in others – men tend to exercise the greater degree of political leadership, relative to women, as measured in domains including speaking more during social meetings, coordinating group efforts, and resolving disputes. The leadership status of members within this group was assessed by ratings from other group members. All adults within the community (male n = 80; female n = 72) were photographed, and these photos were then given to 6 of the men and women in sets of 19. The raters were asked to place the photos in order of whose voice tended to carry the most weight during debates, and then in order of who managed the most community projects. These ratings were then summed (from 1 to 19, depending on a person’s position in the rankings, with 19 being the highest in terms of leadership) to figure out who tended to hold the largest positions of leadership.

As mentioned, men tended to reside in positions of greater leadership both in terms of debates and management (approximate mean male scores = 37; mean female scores = 22), and both men and women agreed on these ratings. A similar pattern was observed in terms of who tended to mediate conflicts within the community: 6 females were named in resolving such conflicts, compared with 17 males. Further, the males who were named as conflict mediators tended to be higher in leadership scores, relative to non-mediating males, while this pattern didn’t hold for the females.

So why did men hold positions of leadership in greater proportions than women? A regression analysis was carried out using sex, height, weight, upper body strength, education, and number of cooperative partners as predictors of leadership scores. In this equation, sex (and height) no longer predicted leadership score, while all the other factors were significant predictors. In other words, it wasn’t that men were preferred as leaders per se, but rather that people with more upper body strength, education, and cooperative partners were favored, whether male or female. These traits were still favored in leaders despite leaders not being particularly likely to use force or violence in their position. Instead, it seems that traits like physical strength were favored because they could potentially be leveraged, if push came to shove.
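As a toy illustration of how a sex difference can vanish once the traits it proxies for are measured, here is a small simulation (assuming Python with numpy, pandas, and statsmodels; none of this is the study’s data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 150

# Simulated toy data: leadership is driven only by strength, education, and allies;
# men merely average higher on strength. Nothing here is the study's actual data.
male      = rng.integers(0, 2, n)
strength  = 0.8 * male + rng.normal(size=n)
education = rng.normal(size=n)
allies    = rng.normal(size=n)
leadership = strength + 0.5 * education + 0.5 * allies + rng.normal(size=n)

df = pd.DataFrame(dict(male=male, strength=strength, education=education,
                       allies=allies, leadership=leadership))

# Sex alone "predicts" leadership because it proxies for strength...
print(smf.ols("leadership ~ male", data=df).fit().params["male"])

# ...but its coefficient collapses toward zero once the actual traits enter the model.
print(smf.ols("leadership ~ male + strength + education + allies", data=df).fit().params["male"])
```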

“A vote for Jeff is a vote for building your community. Literally”

As one might expect, what made followers want to follow a leader wasn’t the leader’s sex, but rather what skills the leader could bring to bear in resolving issues and settling disputes. While the current research is far from a comprehensive examination of all the factors that might tap leadership at different times and in different contexts, it represents a sound approach to understanding the problem of why followers select particular leaders. Thinking about what benefits followers tended to reap from leaders over evolutionary history can help inform our search for – and understanding of – the proximate mechanisms through which leaders end up attracting them.

References: von Rueden, C., Alami, S., Kaplan, H., & Gurven, M. (in press). Sex differences in political leadership in an egalitarian society. Evolution & Human Behavior, DOI: 10.1016/j.evolhumbehav.2018.03.005

Doesn’t Bullying Make You Crazy?

“I just do it for the old fashioned love of killing”

Having had many pet cats, I understand what effective predators they can be. The number of dead mice and birds they have returned over the years is certainly substantial, and the number they didn’t bring back is probably much higher. If you happen to be a mouse living in an area with lots of cats, your life is probably pretty stressful. You’re going to be facing a substantial adaptive challenge when it comes to avoiding detection by these predators and escaping them if you fail at that. As such, you might expect mice to have developed a number of anti-predator strategies (especially since cats aren’t the only thing they’re trying to not get killed by): they might freeze when they detect a cat to avoid being spotted; they might develop a more chronic state of psychological anxiety, as being prepared to fight or run at a moment’s notice is important when your life is often on the line. They might also develop auditory or visual hallucinations that provide them with an incorrect view of the world because…well, I actually can’t think of a good reason for that last one. Hallucinations don’t serve as an adaptive response that helps the mice avoid detection, flee, or otherwise protect themselves against those who would seek to harm them. If anything, hallucinations seem to have the opposite effect, directing resources away from doing something useful as the mice would be responding to non-existent threats.

But when we’re talking about humans and not mice, some people seem to have a different sense for the issue: specifically, that we ought to expect a kind of social predation – bullying – to cause people to develop psychosis. At least that was the hypothesis behind some recent research published by Dantchev, Zammit, and Wolke (2017). This study examined a longitudinal data set of parents and children (N = 3596) at two points during the children’s lives: at 12 years old, children were given a survey asking about sibling bullying, defined as, “…saying nasty and hurtful things, or completely ignores [them] from their group of friends, hits, kicks, pushes or shoves [them] around, tells lies or makes up false rumors about [them].” They were asked how often they had experienced bullying by a sibling and how many times a week they had bullied a sibling in the past 6 months (response options: “Never”, “Once or Twice”, “Two or Three times a month”, “About once a week”, or “Several times a week”). Then, at the age of about 18, these same children were assessed for psychosis-like symptoms, including whether they experienced visual/auditory hallucinations, delusions (like being spied on), or felt they had experienced thought interference by others.

With these two measures in hand (whether children were bullies/bullied/both, and whether they suffered some forms of psychosis), the authors sought to determine whether the sibling bullying at time 1 predicted the psychosis at time 2, controlling for a few other measures I won’t get into here. The following results fell out of the analysis: children who were bullied by their siblings and those who bullied their siblings tended to have lower IQ scores and more conduct disorders early on, and experienced more peer bullying as well. The mothers of these children were also more likely to have experienced depression during pregnancy, and domestic violence was more likely to have been present in the households. Bullying, it would seem, was influenced by the quality of the children and their households (a point we’ll return to later).

“This is for making mom depressed prenatally”

In terms of the psychosis measures, 55 of the children in the sample met the criteria for having a disorder (1.5%). Of those children who bullied their siblings, 11 met these criteria (3%), as did 6 of those who were purely bullied (2.5%) and 11 of those who were both bully and bullied (3%). Children who were regularly bullied (about once a week or more), then, were about twice as likely to report psychosis as those who were bullied less often. In brief, both being bullied by and bullying other siblings seemed to make hallucinations more common. Dantchev, Zammit, and Wolke (2017) took this as evidence suggesting a causal relationship between the two: more bullying causes more psychosis.

There’s a lot to say about this finding, the first thing being this: the vast majority of regularly-bullied children didn’t develop psychosis; almost none of them did, in fact. This tells us quite clearly that the psychosis per se is by no means a usual response to bullying. This is an important point because, as I mentioned initially, some psychological strategies might evolve to help individuals deal with outside threats. Anxiety works because it readies attentional and bodily resources to deal with those challenges effectively. It seems plausible such a response could work well in humans facing aggression from their peers or family. We might thus expect some kinds of anxiety disorders to be more common among those bullied regularly; depression too, since that could well serve to signal that one is in need of social support to others and help recruit it. So long as one can draw a reasonable, adaptive line between psychological discomfort and doing something useful, we might predict a connection between bullying and mental health issues.

But what are we to make of that correlation between being bullied and the development of hallucinations? Psychosis would not seem to help an individual respond in a useful way to the challenges they are facing, as evidenced by nearly all of the bullied children not developing this response. If such a response were useful, we should generally expect much more of it. That point alone seems to put the metaphorical nail in the coffin of two of the three explanations the authors put forth for their finding: that social defeat and negative perceptions of one’s self and the world are causal factors in developing psychosis. These explanations are – on their face – as silly as they are incomplete. The authors draw no plausible adaptive line from thinking negatively about one’s self or the world to the development of hallucinations, much less explain how those hallucinations are supposed to help. I would also add that these explanations are discussed only briefly at the end of the paper, suggesting to me that not enough time or thought went into understanding the reasons these predictions were made before the research was undertaken. That’s a shame, as a better sense for why one would expect to see a result would improve the way research is designed.

“Well, we’re done…so what’s it supposed to be?”

Let’s think in more detail about why we’re seeing what we’re seeing regarding bullying and psychosis. There are a number of explanations one might float, but the most plausible to me goes something like this: these mental health issues are not being caused by the bullying but are, in a sense, actually eliciting the bullying. In other words, causation runs in the opposite direction the authors think it does.

To fully understand this explanation, let’s begin with the basics: kin are usually expected to be predisposed to behave altruistically towards each other because they share genes in common. This means investment in your relatives is less costly than it would be otherwise, as helping them succeed is, in a very real sense, helping yourself succeed. This is how you get adaptations like breastfeeding and brotherly love. However, that cost/benefit ratio does not always lean in the direction of helping. If you have a relative that is particularly unlikely to be successful in the reproductive realm, investment in them can be a poor choice despite their relatedness to you. Even though they share genes with you, you share more genes with yourself (all of them, in fact), so helping yourself do a little better can sometimes be the optimal reproductive strategy over helping them do much better (since they aren’t likely to do anything even with your help). In that regard, relatives suffering from mental health issues are likely worse investments than those not suffering from them, all else being equal. The probability of investment paying off is simply lower.

Now that might end up predicting that people should ignore their siblings suffering from such issues; to get to bullying we need something else, and in this case we certainly have it: competition for the same pool of limited resources, namely parental investment. Brothers and sisters compete for the same resources from their parents – time, protection, provisioning, and so on – and resources invested in one child are not capable of being invested in another much of the time. Since parents don’t have unlimited amounts of these resources, you get competition between siblings for them. This sometimes results in aggressive and vicious competition. As we already saw in the study results, children of lower quality (lower IQ scores and more conduct disorders) coming from homes with fewer resources (likely indexed by more maternal depression and domestic violence) tend to bully and be bullied more. Competition for resources is more acute here and your brother or sister can be your largest source of it.

They’re much happier now that the third one is out of the way

To put this into an extreme example of non-human sibling “bullying”, there are some birds that lay two or three eggs in the same nest a few days apart. What usually happens in these scenarios is that when the older sibling hatches in advance of the younger it gains a size advantage, allowing it to peck the younger one to death or roll it out of the nest to starve in order to monopolize the parental investment for itself. (For those curious why the mother doesn’t just lay a single egg, that likely has something to do with having a backup offspring in case something goes wrong with the first one). As resources become more scarce and sibling quality goes down, competition to monopolize more of those resources should increase as well. That should hold for birds as well as humans.

A similar logic extends into the wider social world outside of the family: those suffering from psychosis (or any other disorders, really) are less valuable social assets to others than those not suffering from them, all else being equal. As such, sufferers receive less social support in the form of friendships or other relationships. Without such social support, this also makes one an easier target for social predators looking to exploit the easiest targets available. What this translates into is children who are less able to defend themselves being bullied by others more often. In the context of the present study, it was also documented that peer bullying tends to increase with psychosis, which would be entirely unsurprising; just not because bullying is causing children to become psychotic.

This brings us to the final causal hypothesis: sometimes bullying is so severe that it actually causes brain damage which, in turn, produces later psychosis. This would require either a noticeable degree of physical head trauma or similarly noticeable changes brought on by the body’s response to chronic stress damaging the brain over time. Neither strikes me as particularly likely in terms of explaining much of what we’re seeing here, given that the scope of sibling bullying is probably not often large enough to pose that much of a physical threat to the brain. I suspect the lion’s share of the connection between bullying and psychosis is simply that psychotic individuals are more likely to be bullied, rather than bullying doing the causing.

References: Dantchev, S., Zammit, S., & Wolke, D. (2017). Sibling bullying in middle childhood and psychotic disorder at 18 years: a prospective cohort study. Psychological Medicine, doi: 10.1017/S0033291717003841

My Father Was A Gambling Man

And if you think I stole that title from a popular song, you’re very wrong

Hawaii recently introduced some bills aimed at prohibiting the sale of games with for-purchase loot boxes to anyone under 21. For those not already in the know concerning the world of gaming, loot boxes are effectively semi-random grab bags of items within video games. These loot boxes are usually received by players either as a reward for achieving something within a game (such as leveling up) and/or can be purchased with currency, be that in-game currency or real world money. Specifically, then, the bills in question are aimed at games that sell loot boxes for real money, attempting to keep them out of the hands of people under 21.

Just like tobacco companies aren’t permitted to advertise to minors out of fear that children will come to find smoking an interesting prospect, the fear here is that children who play games with loot boxes might develop a taste for gambling they otherwise wouldn’t have. At least that’s the most common explicit reason for this proposal. The gaming community seems to be somewhat torn about the issue: some gamers welcome the idea of government regulation of loot boxes while others are skeptical of government involvement in games. In the interest of full disclosure for potential bias – as a long-time gamer and professional loner – I consider myself to be a part of the latter camp.

My hope today is to explore this debate in greater detail. There are lots of questions I’m going to discuss, including (a) whether loot boxes are gambling, (b) why gamers might oppose this legislation, (c) why gamers might support it, (d) what other concerns might be driving the acceptance of regulation within this domain, and (e) whether these kinds of random mechanics actually make for better games.

Let’s begin our investigation in gaming’s seedy underbelly

To set the stage, a loot box is just what it sounds like: a randomized package of in-game items (loot) that is earned by playing the game or purchased. In my opinion, loot boxes are gambling-adjacent types of things, but not bona fide gambling. The prototypical example of gambling is along the lines of a slot machine. You put money into it and have no idea what you’re going to get out. You could get nothing (most of the time), a small prize (a few of the times), or a large prize (almost never). Loot boxes share some of those features – the paying of money for randomized outcomes – but they don’t share others: first, with loot boxes there isn’t a “winning” and “losing” outcome in the same way there is with a slot machine. If you purchase a loot box, you should have some general sense as to what you’re buying; say, 5 items with varying rarities. It’s not like you sometimes open a loot box and there are no items, other times there are 5, and other times there are 20 (though more on that in a moment). The number of items you receive is usually set even if the contents are random. More to the point, the items you “receive” you often don’t even own; not in the true sense. If the game servers get shut down or you violate the terms of service, for instance, your account and its items get deleted, they disappear from existence, and you don’t get to sue anyone for stealing from you. There is also no formal cashing out of many of these games. In that sense, there is less of a gamble in loot boxes than what we traditionally consider gambling.

Importantly, the value of these items is debatable. Usually players really want to open some items and don’t care about others. In that sense, it’s quite possible to open a loot box and get nothing of value, as far as you’re concerned, while hitting jackpots in others. However, if that valuation is almost entirely subjective in nature, then it’s hard to say that not getting what you want is losing while getting what you do is winning, as that’s going to vary from person to person. What you are buying with loot boxes isn’t a chance at a specific item you want; it is a set number of random items from a pool of options. To put that into an incomplete but simple example, if you put money into a gumball machine and get a gumball, that’s not really a gamble and you didn’t really lose. It doesn’t become gambling, nor do you lose, if the gumballs are different colors/flavors and you wanted a blue one but got a green one.

One potential exception to this equal-value argument arises when the items opened aren’t bound to the opener; that is, they can be traded or sold to other players. You don’t like your gumball flavor? Well, now you can trade your friend your gumball for theirs, or even buy their gumball from them. When this possibility exists, secondary markets pop up for the digital items where some can be sold for lots of real money while others are effectively worthless. Now, as far as the developers are concerned, all the items can have the same value, which makes it look less like gambling; it’s the secondary market that makes it look more like gambling, but the game developers aren’t in control of that.

Kind of like these old things

An almost-perfect metaphor for this can be found in the sale of baseball cards (which I bought when I was younger, though I don’t remember what the appeal was): packs containing a set number of cards – let’s say 10 – are purchased for a set price – say $5 – but the contents of those packs are randomized. The value of any single card, from the perspective of the company making them, is 1/10 the cost of the pack. However, some people value specific cards more than others; a rookie card of a great player is more desired than the card of a veteran who never achieved anything. In such cases, a secondary market crops up among those who collect the cards, and those collectors are willing to pay a premium for the desired items. One card might sell for $50 (worth 10 times the price of a pack), while another might be unable to find a buyer at all, effectively worth $0.
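
To put rough numbers on that divergence between a pack’s sticker price and its secondary-market value, here is a small back-of-envelope sketch; the pull rates and resale prices are made up purely for illustration:

```python
# Back-of-envelope: what a $5, 10-card pack is "worth" from two perspectives.
PACK_PRICE = 5.00
CARDS_PER_PACK = 10

# Manufacturer's perspective: every card is interchangeable.
face_value_per_card = PACK_PRICE / CARDS_PER_PACK  # $0.50

# Secondary-market perspective: value concentrates in rare pulls.
# (probability of pulling the card, resale price) -- hypothetical numbers
market = [
    (0.01, 50.00),  # hot rookie card
    (0.09, 2.00),   # mildly desirable card
    (0.90, 0.00),   # effectively no buyer
]
expected_value_per_card = sum(p * price for p, price in market)
expected_value_per_pack = expected_value_per_card * CARDS_PER_PACK

print(f"Face value per card:            ${face_value_per_card:.2f}")
print(f"Expected market value per card: ${expected_value_per_card:.2f}")
print(f"Expected market value per pack: ${expected_value_per_pack:.2f}")
```

Under those made-up numbers the average pack is worth a bit more than its price on paper, but nearly all of that value sits in a rare pull most buyers never see – which is exactly the structure that makes the secondary market feel gambling-like.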

This analogy, of course, raises other questions about the potential legality of existing physical items, like sports cards, or those belonging to any trading card game (like Magic: The Gathering, Pokemon, or Yugioh). If digital loot boxes are considered a form of gambling and might have effects worth protecting children from, then their physical counterparts likely pose the same risks. If anything, the physical versions look more like gambling because at least some digital items cannot be traded or sold between players, while all physical items pose that risk of developing real value on a secondary market. Imagine putting money into a slot machine, hitting the jackpot, and then getting nothing out of it. That’s what many virtual items amount to.

Banning the sale of loot boxes in games to people under the age of 21 likely also entails banning the sale of card packs to them as well. While the words “slippery slope” are usually used together with the word “fallacy,” there does seem to be a very legitimate slope here worth appreciating. The parallels between loot boxes and physical packs of cards are almost perfect (and, where they differ, card packs look more like gambling; not less). Strangely, I’ve seen very few voices in the gaming community suggesting that the sale of packs of cards to minors should be banned; some do (mostly for consistency’s sake; as far as I’ve seen, they almost never raise the issue independently of the digital loot box debate), but most don’t seem concerned with the matter. The bill being introduced in Hawaii doesn’t seem to mention baseball or trading cards anywhere either (unless I missed it), which would be a strange omission. I’ll return to this point later when we get to talking about the motives behind gamers’ approval of government regulation in the digital realm.

The first step towards addiction to that sweet cardboard crack

But, while we’re on the topic of slippery slopes, let’s also consider another popular game mechanic that might be worth examination: randomized item drops from in-game enemies. These aren’t items you purchase with money (at least not in game), but rather ones you purchase with time and effort. Let’s consider one of the more well-known games to use this: WoW (World of Warcraft). In WoW, when you kill enemies with your character, you may receive valued items from their corpses as you loot the bodies. The items are not found in a uniform fashion: some are very common and others quite rare. I’ve watched a streamer kill the same boss dozens of times over the course of several weeks hoping to finally get a particular item to drop. There are many moments of disappointment and discouragement, complete with feelings of wasted time, after many attempts are met with no reward. But when the item finally does drop? There is a moment of elation and celebration, complete with a chatroom full of cheering viewers. If you could only see the emotional reaction of the people getting their reward and not their surroundings, my guess is that you’d have a hard time differentiating a gamer getting a rare drop they wanted from someone opening the desired item out of a loot box for which they paid money.

What I’m not saying is that I feel random loot drops in World of Warcraft are gambling; what I am saying is that if one is concerned about the effects loot boxes might have on people when it comes to gambling, they share enough in common with randomized loot drops that the latter are worth examining seriously as well. Perhaps the item a player is after has a fundamentally different psychological effect on them depending on whether chances at obtaining it are purchased with real money, in-game currency, or play time. Then again, perhaps there is no meaningful difference; it’s not hard to find stories of gamers who spent more time than is reasonable trying to obtain rare in-game items, to the point that it could easily be labeled an addiction. Whether buying items with money or with time has different effects is a matter that would need to be settled empirically. But what if they were fundamentally similar in terms of their effects on the players? If you’re going to ban loot boxes sold with cash out of fear of the impact they have on children’s propensity to gamble or develop a problem, you might also end up with a good justification for banning randomized loot drops in games like World of Warcraft as well, since both resemble pulling the lever of a slot machine in enough meaningful ways.

Despite that, I’ve seen very few people in the pro-regulation camp raise the concern about the effects that World of Warcraft loot tables are having on children. Maybe it’s because they haven’t thought about it yet, but that seems doubtful, as the matter has been brought up and hasn’t been met with any concern. Maybe it’s because they view the costs of paying real money for items as more damaging than paying with time. Either way, it seems that even after thinking about it, those who favor regulation of loot boxes largely don’t seem to care as much about card games, and even less about randomized loot tables. This suggests there are other variables beyond the presence of gambling-like mechanics underlying their views.

“Alright; children can buy some lottery tickets, but only the cheap ones”

But let’s talk a little more about the fear of harming children in general. Not that long ago there were examinations of another aspect of video games: specifically, the violence so often depicted within them. Indeed, research into the topic is still a thing today. The fear sounded like a plausible one to many: if violence is depicted within these games – especially within the context of achieving something positive, like winning by killing the opposing team’s characters – those who play the games might become desensitized to violence or come to think it acceptable. In turn, they would behave more violently themselves and be less interested in alleviating violence directed against others. This fear was especially pronounced when it came to children, who were still developing psychologically and potentially more influenced by the depictions of violence.

Now, as it turns out, those fears appear to be largely unfounded. Violence has not been increasing as younger children have been playing increasingly violent video games more frequently. The apparent risk factor for increasing aggressive behavior (at least temporarily; not chronically) was losing at the game or finding it frustrating to play (such as when the controls feel difficult to use). The violent content per se didn’t seem to be doing much causing when it came to later violence. While players who are more habitually aggressive might prefer somewhat different games than those who are not, that doesn’t mean the games are causing them to be violent.

This gives us something of a precedent for worrying about the face validity of the claims that loot boxes are liable to make gambling seem more appealing over the long term. It is possible that the concern over loot boxes represents more of a moral panic on the part of legislatures than a real issue having a harmful impact. Children who are OK with ripping an opponent’s head off in a video game are unlikely to be OK with killing someone for real, and violence in video games doesn’t seem to make the killing seem more appealing. It might similarly be the case that opening loot boxes makes people no more likely to want to gamble in other domains. Again, this is an empirical matter that requires good evidence to prove the connection (and I emphasize the word good because there exists plenty of low-quality evidence that has been used to support the claim that violence in video games causes it in real life).

Video games inspire cosplay; not violence

If it’s not clear at this point, I believe the reasons some portion of the gaming community supports this type of regulation have little to nothing to do with concerns about children gambling. For the most part, children do not have access to credit cards and so cannot themselves buy lots of loot boxes, nor do they have access to lots of cash they can funnel into online gift cards. As such, I suspect that very few children do serious harm to themselves or their financial future when it comes to buying loot boxes. The ostensible concern for children is more of a plausible-sounding justification than the one actually doing most of the metaphorical cart-pulling. Instead, I believe the concern over loot boxes (at least among gamers) is driven by two more mundane concerns.

The first of these is simply the perceived cost of a “full” game. There has long been a growing discontent in the gaming community over DLC (downloadable content), where new pieces of content are added to a game after release for a fee. While that might seem like the simple purchase of an expansion pack (which is not a big deal), the discontent arises when a developer is perceived to have made a “full” game already, but then cut sections out of it purposefully to sell later as “additional” content. To place that into an example, you could have a fighting game that was released with 8 characters. However, the game became wildly popular, resulting in the developers later putting together 4 new characters and selling them because demand was that high. Alternatively, you could have a developer that created 12 characters up front, but only made 8 available in the game to begin with, knowingly saving the other 4 to sell later when they could have just as easily been released in the original. In that case, intent matters.

Loot boxes do something similar psychologically at times. When people go to the store and pay $60 for a game, then take it home to find out the game wants them to pay $10 or more (sometimes a lot more) to unlock parts of the game that already exist on the disk, that feels very dishonest. You thought you were purchasing a full game, but you didn’t exactly get it. What you got was more of an incomplete version. As games become increasingly likely to use these loot boxes (as they seem to be profitable), the true cost of games (having access to all the content) will go up.

Just kidding! It’s actually 20-times more expensive

Here is where the distinction between cosmetic and functional (pay-to-win) loot boxes arises. For those not in the know about this, the loot boxes that games sell vary in terms of their content. In some games, these items are nothing more than additional colorful outfits for your characters that have no effect on game play. In others, you can buy items that actually increase your odds of winning a game (items that make your character do more damage or automatically improve their aim). Many people who dislike loot boxes seem to be more OK (or even perfectly happy) with them so long as the items are only cosmetic. So long as they can win the game as effectively spending $0 as they could spending $1000, they feel that they own the full version. When it feels like the game you bought gives an advantage to players who spent more money on it, it again feels like the copy of the game you bought isn’t the same version as theirs; that it’s not as complete an experience.

Another distinction arises here in that I’ve noticed gamers seem more OK with loot boxes in games that are free-to-play. These are games that cost nothing to download, but much of their content is locked up front. To unlock content, you usually invest time or money. In such cases, the feeling of being lied to about the cost of the game doesn’t really exist. Even if such free games are ultimately more expensive than traditional ones if you want to unlock everything (often much more expensive if you want to do so quickly), the actual cost of the game was $0. You were not lied to about that much, and anything else you spent afterwards was completely voluntary. Here the loot boxes look more like a part of the game than an add-on to it. Now this isn’t to say that some people don’t dislike loot boxes even in free-to-play games; just that they mind them less.

“Comparatively, it’s not that bad”

The second, related concern, then, is that developers might be making design decisions that ultimately make games worse in order to sell more loot boxes. To put that in perspective, there are some cases of win/win scenarios, like when a developer tries to sell loot boxes by making a game that’s so good people enjoy spending money on additional content to show off how much they like it. Effectively, people are OK with paying for quality. Here, the developer gets more money and the players get a great game. But what happens when there is a conflict – when a decision will either (a) make the gameplay experience better but sell fewer loot boxes, or (b) make the gameplay experience worse but sell more loot boxes? However frequently these decisions need to be made, they assuredly are made at some points.

To use a recent example, many of the rare items in the game Destiny 2 were found within an in-game store called Eververse. Rather than unlocking rare items through months of completing game content over and over again (like in Destiny 1), many of these rare, cosmetic items were found only within Eververse. You could unlock them with time, in theory, but only at very slow rates (which were found to actually be intentionally slowed down by the developers if a player put too much time into the game). In practice, the only way to unlock these rare items was through spending money. So, rather than put interesting and desirable content into the game as a reward for being good at it or committed to it, it was largely walled off behind a store. This was a major problem for people’s motivation to continue playing the game, but it traded off against people’s willingness to spend money on the game. These conflicts created a worse experience for a great many players. It also yielded the term “spend-game content” to replace “end-game content.” More loot boxes in games potentially means more decisions like that will be made where reasons to play the game are replaced with reasons to spend money.

Another such system was discussed in regards to a potential patent by Electronic Arts (EA), though as far as I’m aware it has not made its way into a real game yet. This system revolved around online, multiplayer games with items available for purchase. The system would be designed such that players who spent money on some particular item would be intentionally matched against players of lower skill. As the lower-skill players would be easier for the buyer to beat with their new items, it would make the purchaser feel like their decision to buy was worth it. By contrast, the lower-skill player might be impressed by how well the player with the purchased item performed and feel they would become better at the game if they too purchased it. While this might encourage players to buy in-game items, it would yield an ultimately less competitive and less interesting matchmaking system. While such systems are indeed bad for the gameplay experience, it is at least worth noting that such a system would work whether the items being sold came from loot boxes or were purchased directly.

“Buy the golden king now to get matched against total scrubs!”

If I’m right and the reasons gamers favor regulation center on the cost and design direction of games, why not just say that instead of talking about children and gambling? Because, frankly, it’s not very persuasive. It’s too selfish of a concern to rally much social support. It would be silly for me to say, “I want to see loot boxes regulated out of games because I don’t want to spend money on them and think they make for worse gaming experiences for me.” People would just tell me to either not buy loot boxes or not buy games with loot boxes. Since both suggestions are reasonable and I can do them already, the need for regulation isn’t there.

Now if I decide to vote with my wallet and not buy games with loot boxes, that won’t have any impact on the industry. My personal impact is too small. So long as enough other people buy those games, they will continue to be produced and my enjoyment of the games will be decreased because of the aforementioned cost and design issues. What I need to do, then, is convince enough people to follow my lead and not buy these games either. It wouldn’t be until enough gamers stopped buying the games that developers would have an incentive to abandon that model. One reason to talk about children, then, is because you don’t trust that the market will swing in your favor. Rather than allow the market to decide freely, you can say that children are incapable of making good choices and are being actively harmed. This will rally more support to tip the scales of that market in your favor by forcing government intervention. If you don’t trust that enough people will vote with their wallet like you do, make it illegal for younger gamers to be allowed to vote in any other way.

A real concern about children, then, might not be that they will come to view gambling as normal, but rather that they will come to view loot boxes (or other forms of added content, like dishonest DLC) in games as normal. They will accept that games often have loot boxes and they will not be deterred from buying titles that include them. That means more consumers now and in the future who are willing to tolerate or purchase loot boxes/DLC. That means fewer games without them which, in turn, means fewer options available to those voting with their wallets and not buying them. Children and gambling are brought up not because they are the gamer’s primary target of concern, but rather because they’re useful for a strategic end.

Of course, there are real issues when it comes to children and these microtransactions: they don’t tend to make great decisions, and sometimes they get access to their parents’ credit card information and then go on insane spending sprees in their games. This type of family fraud has been the subject of previous legal disputes, but it is important to note that this is not a loot box issue per se. Children will just as happily waste their parents’ money on known quantities of in-game resources as they would on loot boxes. It’s also more a matter of parental responsibility and purchase verification than the heart of the matter at hand. Even if children do occasionally make lots of unauthorized purchases, I don’t think major game companies are counting on that as an intended source of vital revenue.

They start ballin’ out so young these days

For what it’s worth, I think loot boxes do run certain risks for the industry, as outlined above. They can make games costlier than they need to be and they can result in design decisions I find unpleasant. In many regards I’m not a fan of them. I just happen to think that (a) they aren’t gambling and (b) they don’t require government intervention to remove on the grounds that they are harming children, persuading them that gambling is fun, and leading to more of it in the future. I think any kind of microtransaction – whether random or not – can result in the same kinds of harms, addiction, and reckless spending. However, when it comes to human psychology, I think loot boxes are designed more as a tool to fit our psychology than one that shapes it, not unlike how water takes the shape of the container it is in and not the other way around. As such, it is possible that some facets of loot boxes and other random item generation mechanics make players engage with the game in a way that yields more positive experiences, in addition to the costs they carry. If these gambling-like mechanics weren’t, in some sense, fun, people would simply avoid games with them.

For instance, having content that one is aiming to unlock can provide a very important motivation to continue playing a game, which is a big deal if you want your game to last and be interesting for a long time. My most recent example of this is Destiny 2 again. Though I didn’t play the first Destiny, I have a friend who did and told me about it. In that game, items dropped randomly, and they dropped with random perks. This means you could get several versions of the same item, but have them all be different. It gave you a reason and a motivation to be excited about getting the same item for the 100th time. This wasn’t the case in Destiny 2. In that game, when you got a gun, you got the gun. There was no need to try and get another version of it because that didn’t exist. So what happened when Destiny 2 removed the random rolls from items? The motivation for hardcore players to keep playing long-term largely dropped off a cliff. At least that’s what happened to me. The moment I got the last piece of gear I was after, a sense of, “why am I playing?” washed over me almost instantly and I shut the game off. I haven’t touched it since. The same thing happened to me in Overwatch when I unlocked the last skin I was interested in at the time. Had all that content been available from the start, the turning-off point likely would have come much sooner.

As another example, imagine a game like World of Warcraft, where a boss has a random chance to drop an amazing item. Say this chance is 1 in 500. Now imagine an alternative reality where this practice is banned because it’s deemed to be too much like gambling (not saying it will be; just imagine that it was). Now the item is obtained in the following way: whenever the boss is killed, it is guaranteed to drop a token. After you collect 500 of those tokens, you can hand them in and get the item as a reward. Do you think players would have a better time under the gambling-like system, where each boss kill represents the metaphorical pull of a slot machine lever, or under the consistent token system? I don’t know the answer to that question offhand, but what I do know is that collecting 500 tokens sure sounds boring, and that’s coming from a person who values consistency and saving and doesn’t enjoy traditional gambling. No one is going to make a compilation video of people reacting to finally collecting 500 tokens, because all you’d have is another moment just like the last 499 moments where the same thing happened. People would – and do – make compilation videos of streamers finally getting valuable or rare items, as such moments are more entertaining for viewers and players alike.
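
For the curious, here is a small simulation sketch contrasting the two systems; the 1-in-500 drop rate comes from the hypothetical above, and everything else is illustrative:

```python
# Compare a random 1-in-500 drop with a guaranteed token per kill (500 needed).
import random

DROP_CHANCE = 1 / 500
TOKENS_NEEDED = 500
N_PLAYERS = 10_000

def kills_until_drop() -> int:
    """Boss kills until the item drops under the random system."""
    kills = 0
    while True:
        kills += 1
        if random.random() < DROP_CHANCE:
            return kills

results = sorted(kills_until_drop() for _ in range(N_PLAYERS))

print("Random-drop system:")
print(f"  average kills:  {sum(results) / N_PLAYERS:.0f}")                    # ~500
print(f"  luckiest 10%:   done within {results[N_PLAYERS // 10]} kills")
print(f"  unluckiest 10%: still grinding after {results[9 * N_PLAYERS // 10]} kills")
print("Token system: every player needs exactly", TOKENS_NEEDED, "kills")
```

The average cost is the same either way; what the random version adds is variance and an unpredictable moment of payoff, which is where both the excitement and the slot-machine comparison come from.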

Sinking Costs

My cat displays a downright irrational behavior: she enjoys stalking and attacking pieces of string. I would actually say that this behavior extends beyond enjoyment to the point of active craving. It’s fairly common for her to meow at me until she gets my attention before running over to her string and sitting by it, repeating this process until I play with her. At that point, she will chase it, claw at it, and bite it as if it were a living thing she could catch. This is irrational behavior for the obvious reason that the string isn’t prey; it’s not the type of thing it is appropriate to chase. Moreover, despite numerous opportunities to learn this, she never seems to cease this behavior, continuing to treat the string like a living thing. What could possibly explain this mystery?

If you’re anything like me, you might find that entire premise rather silly. My cat’s behavior only looks irrational when compared against an arguably-incorrect frame of reference; one in which my cat ought to only chase things that are alive and capable of being killed/eaten. There are other ways of looking at the behavior which make it understandable. Let’s examine two such perspectives briefly. The first of these is that my cat is – in some sense – interested in practicing for future hunting. In much the same way that people might practice in advance of a real event to ensure success, my cat may enjoy chasing the string because of the practice it affords her for achieving successful future hunts. Another perspective (which is not mutually exclusive) is that the string might give off proximate cues that resemble those of prey (such as ostensibly self-directed movement) which in turn activate other cognitive programs in my cat’s brain associated with hunting. In much the same way that people watch cartoons and perceive characters on the screen, rather than collections of pixels or drawings, my cat may be responding to proximate facsimiles of cues that signaled something important over evolutionary time when she sees strings moving.

The point of this example is that if you want to understand behavior – especially behavior that seems strange – you need to place it within its proper adaptive context. Simply calling something irrational is usually a bad way of figuring out what is going on, as no species has evolved cognitive mechanisms that exist because they encouraged that organism to behave in irrational, maladaptive, or otherwise pointless ways. Any such mechanism would represent a metabolic cost endured for no benefit, or even at a net cost, and those would quickly disappear from the population, outcompeted by organisms that didn’t make such silly mistakes.

For instance, burying one’s head in the proverbial sand doesn’t help avoid predators

Today I wanted to examine one such behavior that gets talked about fairly regularly: what is referred to as the sunk-cost fallacy (implying a mistake is occurring). It refers to cases where people make decisions based on previous investments, rather than future expected benefits. For instance, if you happened to have a Master’s degree in a field that isn’t likely to present you with a job opportunity, the smart thing to do (according to most people, I imagine) would be to cut your losses and retrain in a field that is likely to offer work. The sunk-cost fallacy here might involve saying to yourself, “Well, I’ve already put so much time into this program that I might as well put in more and get that PhD,” even though committing further resources is more than likely going to be a waste. In another case, someone might continue to pour money into a failing business venture because they have already invested most of their life savings in it. In fact, the tendency to keep investing in such projects is usually predictable from how much was invested in the past. The more you’ve already put in, the more likely you are to see it through to its conclusion. I’m sure you can come up with your own examples of this from things you’ve either seen or done in the past.
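
To see why the prior investment shouldn’t matter to a purely forward-looking decision maker, here is a toy sketch with made-up numbers; the functions are only meant to contrast the two decision rules, not to model any real study:

```python
# Toy contrast between a forward-looking rule and a sunk-cost-weighted rule.

def forward_looking(expected_future_benefit: float, expected_future_cost: float) -> bool:
    """Continue only if the future looks worth it; sunk costs never appear here."""
    return expected_future_benefit > expected_future_cost

def sunk_cost_weighted(benefit: float, cost: float, already_spent: float,
                       weight: float = 0.5) -> bool:
    """The pattern people often show: prior investment leaks into the decision."""
    return benefit + weight * already_spent > cost

# Hypothetical failing venture: $80k already sunk, $30k more needed,
# expected payoff of only $20k if it somehow works out.
print(forward_looking(20_000, 30_000))             # False: cut your losses
print(sunk_cost_weighted(20_000, 30_000, 80_000))  # True: keep pouring money in
```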

On the face of it, this behavior looks irrational. You cannot get your previous investments back, so why should they have any sway over future decision making? If you end up concluding that such behavior couldn’t possibly be useful – that it’s a fallacious way of thinking – there’s a good chance you haven’t thought about it enough yet. To begin understanding why sunk costs might factor into decision making, it’s helpful to start with a basic premise: humans did not evolve in a world where financial decisions – such as business investments – were regularly made (if they were made at all). Accordingly, whatever cognitive mechanisms underlie sunk-cost thinking likely have nothing at all to do with money (or the pursuit of degrees, or other such endeavors). If we are using cognitive mechanisms to manage tasks they did not evolve to solve, it shouldn’t be surprising that we see some strange decisions cropping up from time to time. In much the same way, cats are not adapted to worlds with toys and strings. Whatever cognitive mechanism impels my cat to chase them, it is not adapted for that function.

So – when it comes to sunk costs – what might the cognitive mechanisms leading us to make these choices be designed to do? While humans might not have done a lot of financial investing over our evolutionary history, we sure did a lot of social investing. This includes protecting, provisioning, and caring for family members, friends, and romantic partners who in turn do the same for you. Such relationships need to be managed and broken off from time to time. In that regard, sunk costs begin to look a bit different.  

“Well, this one is a dud. Better to cut our losses and try again”

On the empirical end, it has been reported that people respond to social investments in a different way than they do to financial ones. In a recent study by Hrgović & Hromatko (2017), 112 students were asked to respond to a stock market task and a social task. In the financial task, they read about a hypothetical investment they had made in their own business, but one that had been losing value. The social tasks were similar: participants were told they had invested in a romantic partner, a sibling, and a friend. All were suffering financial difficulties, and the participant had been trying to help. Unfortunately, the target of this investment hadn’t been pulling themselves back up, even turning down job offers, so the investments were not currently paying off. In both the financial and social tasks, participants were then given the option to (a) stop investing now, (b) keep investing for another year only, or (c) keep investing indefinitely until the issue was resolved. The responses and time to respond were recorded.

When it came to the business investment, about 40% of participants terminated future investments immediately; in the social contexts, these numbers were about 35% in the romantic partner scenario, 25% in the sibling context, and about 5% in the friend context. The numbers for investing another year were about 35% in the business context, 50% in the romantic, and about 65% in both the sibling and friend conditions. Finally, about 25% of participants would invest indefinitely in the business, 10% in the romantic partner, 5% in the sibling, and 30% in the friendship. In general, the picture that emerges is that people were willing to terminate the business investments much more readily than the social ones. Moreover, the time it took to make a decision was also longer in the business context, suggesting that people found the decision to continue investing in social relationships easier. Phrased in terms of sunk costs, people appeared to be more willing to factor those into the decision to keep investing in social relationships.

So at least you’ll have company as you sink into financial ruin

The question remains as to why that might be. Part of the answer no doubt involves opportunity costs. In the business world, if you want to invest your money into a new venture, doing so is relatively easy. Your money is just as green as the next person’s. It is far more difficult to just go out into the world and get yourself a new friend, sibling, or romantic partner. Lots of people already have friends, family, and romantic partners and aren’t looking to add to that list, as their investment potential in that realm is limited. Even if they are looking to add to it, they might not be looking to add you. Accordingly, the expected value of finding a better relationship needs to be weighed against the time it takes to find it, as well as the degree of improvement it would likely yield. If you cannot just go out into the world and find new relationships with ease, breaking off an existing one could be more costly than waiting it out to see if it improves in the future.

There are other factors to consider as well. For instance, the return on social investment may often not be all that immediate and, in other cases, might come from sources other than the person being invested in. Taking those in order, if you break off social investments with others at the first sign of trouble – especially deeper, longer-lasting relationships – you may develop a reputation as a fair-weather friend. Simply put, people don’t want to invest in and be friends with someone who is liable to abandon them when they need it most. We’d rather have friends who are deeply and honestly committed to our welfare, as those can be relied upon. Breaking off social relationships too readily demonstrates to others that one is not that appealing as a social asset, making you less likely to have a place in their limited social roster.

Further, investing in one person is also to invest in their social network. If you take care of a sick child, you’re not going to hope that the child will pay you back. Doing so might ingratiate you to their parents, however, and perhaps others as well. This can be contrasted with investing in a business: trying to help a failing business isn’t liable to earn you any brownie points as an attractive social asset to other businesses looking to court your investment, nor is Ford going to return the poor investment you made in BP because they’re friends with each other.

Whatever the explanation, it seems that the human willingness to succumb to sunk costs in the financial realm may well be a byproduct of an adaptive mechanism in the social domain being co-opted for a task it was not designed to solve. When that happens, you start seeing some weird behavior. The key to understanding that weirdness is to understand the original functionality.

References: Hrgović, J. & Hromatko, I. (2017). The time and social context in sunk-cost effects. Evolutionary Psychological Science, doi: 10.1007/s40806-017-0134-4

Predicting The Future With Faces

“Your future will be horrible, but at least it will be short. So there’s that”

The future is always uncertain, at least as far as human (and non-human) knowledge is concerned. This is one reason why some people have difficulty saving or investing money for the future: if you give up rewards today for the promise of rewards tomorrow, that might end up being a bad idea if tomorrow doesn’t come for you (or a different tomorrow than the one you envisioned does). Better to spend that money immediately when it can more reliably bring rewards. The same logic extends to other domains of life, including the social. If you’re going to invest time and energy into a friendship or sexual relationship, you will always run the risk of that investment being misplaced. Friends or partners who betray you or don’t reciprocate your efforts are not usually the ones you wanted to be investing in to begin with. You’d much rather invest that effort into the people who will give you a better return.

Consider a specific problem, to help make this clear: human males face a problem when it comes to long-term sexual relationships, which is that female reproductive potential is limited. Not only can women only manage one pregnancy at a time, but they also enter menopause later in life, reducing their subsequent reproductive output to zero. One solution to this problem is to only seek short-term encounters but, if you happen to be a man looking for a long-term relationship, you’d be doing something adaptive by selecting a mate with the greatest number of years of reproductive potential ahead of her. This could mean selecting a partner who is younger (and thus has the greatest number of likely fertile years ahead of her) and/or selecting one who is liable to enter menopause later.

Solving the first problem – age – is easy enough due to the presence of visual cues associated with development. Women who are too young and do not possess these cues are not viewed as attractive mates (as they are not currently fertile), become more attractive as they mature and enter their fertile years, and then become less attractive over time as fertility (both present and future) declines. Solving the second problem – future years of reproductive potential, or figuring out the age at which a woman will enter menopause – is trickier. It’s not like men have some kind of magic crystal ball they can look into to predict a woman’s future expected age at menopause to maximize their reproductive output. However, women do have faces and, as it turns out, those might actually be the next best tool for the job.

Fred knew it wouldn’t be long before he hit menopause

A recent study by Bovet et al (2017) sought to test whether men might be able to predict a woman’s age at menopause in advance of that event by only seeing her face. One obvious complicating factor with such research is that if you want to assess the extent to which attractiveness around, say, age 25 predicts menopause in the same sample of women, you’re going to have to wait a few decades for them to hit menopause. Thankfully, a work-around exists in that menopause – like most other traits – is partially heritable. Children resemble their parents in many regards, and age of menopause is one of them. This allowed the researchers to use a woman’s mother’s age of menopause as a reasonable proxy for when the daughter would be expected to reach menopause, saving them a lot of waiting.

Once the participating women’s mothers’ ages of menopause were assessed, the rest of the study involved taking pictures of the women’s faces (N = 68; average age = 28.4) without any makeup and with as neutral an expression as possible. These faces were then presented in pairs to male raters (N = 156) who selected which of the two was more attractive (completing that task a total of 30 times each). The likelihood of being selected was regressed against the difference in the mothers’ ages of menopause within each pair, controlling for facial femininity, age, voice pitch, waist-to-hip ratio, and a value representing the difference between a woman’s actual and perceived age (to ensure that women who looked younger/older than they actually were didn’t throw things off).
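
The analysis described above is a paired-comparison design, which can be approximated as a logistic regression of each binary choice on the within-pair differences in the predictors. Here is a rough sketch, assuming hypothetical column names and data; it is not the authors’ actual model (which, among other things, would also need to account for repeated choices by the same rater):

```python
# Rough sketch of a paired-comparison analysis (hypothetical data and columns).
import pandas as pd
import statsmodels.api as sm

# One row per presented pair: chose_A is 1 if the rater picked woman A, 0 otherwise;
# the d_* columns are (woman A minus woman B) differences on each predictor.
pairs = pd.read_csv("face_pairs.csv")  # hypothetical file

predictors = ["d_mother_menopause_age", "d_facial_femininity", "d_age",
              "d_voice_pitch", "d_whr", "d_perceived_minus_actual_age"]
X = sm.add_constant(pairs[predictors])
fit = sm.Logit(pairs["chose_A"], X).fit()
print(fit.summary())
# A positive coefficient on d_mother_menopause_age would correspond to the
# reported result: the woman whose mother hit menopause later gets chosen more.
```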

A number of expected results showed up, with more feminine faces (β = 0.4) and women with more feminine vocal pitch (β = 0.2) being preferred (despite the latter trait not being assessed by the raters). Women who looked older were also less likely to be selected (β = -0.56). Contrary to predictions, women with more masculine WHRs were preferred (β = 0.13), even though these were not visible in the photos, suggesting WHR may cue different traits than facial ones. The main effect of interest, however, concerned the menopausal variable. These results showed that as the difference between the pair of women’s mothers’ ages of menopause increased (i.e., one woman was expected to go through menopause later than the other), so too did the probability of the later-menopausal woman getting selected (β = 0.24). Crucially, there was no correlation between a woman’s expected age of menopause and any of the more-immediate fertility cues, like age, WHR, or facial or vocal femininity. Women’s faces seemed to be capturing something unique about expected age at menopause that made them more attractive.

Trading off hot daughters for hot flashes

Now precisely which features were being assessed as more attractive, and the nature of their connection to age of menopause, is unknown. It is possible – perhaps even likely – that men were assessing some feature like symmetry that primarily signals developmental stability and health, but that variable just so happens to correlate with age at menopause as well (e.g., healthier women go through menopause later, as they can more effectively bear the costs of childbearing into later years). Whatever systems were predicting age at menopause might not specifically be designed to do so. While it is possible that some feature of a woman’s face cues people into expected age at menopause more directly, without primarily cuing some other trait, that remains to be demonstrated. Nevertheless, the results are an interesting first step in that direction worth thinking about.

References: Bovet, J., Barkat-Defradas, M., Durand, V., Faurie, C., & Raymond, M. (2017). Women’s attractiveness is linked to expected age at menopause. Journal of Evolutionary Biology, doi: 10.1111/jeb.13214

What Can Chimps Teach Us About Strength?

You better not be aping me…

There was a recent happening in the primatology literature that caught my eye. Three researchers were studying patterns of mating in captive chimpanzees. They were interested in finding out what physical cues female chimps tended to prefer in a mate. This might come as no surprise to you – it certainly didn’t to me – but female chimps seemed to prefer physically strong males. Stronger males were universally preferred by the females, garnering more attention and ultimately more sexual partners. Moreover, strength was not only the single best predictor of attractiveness, but there was no upper-limit on this effect: the stronger the male, the more he was preferred by the females. This finding makes perfect sense in its proper evolutionary context, given chimps’ penchant for getting into physical conflicts. Strength is a key variable for males in dominating others, whether this is in the context of conflicts over resources, social status, or even inter-group attacks. Males who were better able to win these contests were not only likely to do well for themselves in life, but their offspring would likely be the kind of males who would do likewise. That makes them attractive mating prospects, at least if having children likely to survive and mate is adaptive, which it seems to be.

What interested me so much was not this finding – I think it’s painfully obvious – but rather the reaction of some other academics to it. These critics claimed that the primatologists were too quick to place their results in that evolutionary context. Specifically, it was claimed that these preferences might not be universal and that a cultural explanation makes more sense (as if the two were competing types of explanations). This cultural explanation, I’m told, goes something like, “chimpanzee females are simply most attracted to male bodies that are the most difficult to obtain because that’s how chimps in this time and place do things,” and “if this research was conducted 100 years ago, you’d have observed a totally different pattern of results.”

Now, why difficulty in achieving a body is supposed to be the key variable isn’t spelled out, as far as I can tell. Presumably it, too, should have some kind of evolutionary explanation that makes a different set of predictions, but none is offered. This point seems scarcely to have been realized by the critics. Moreover, the idea that these findings would not have obtained 100 years ago is tossed out with absolutely no supporting evidence and little hope of being tested. It seems unlikely that physical strength yielding adaptive benefits is some kind of evolutionary novelty, or that males did not differ in that regard as recently as a hundred years ago, especially given plenty of contemporary variance.

One more thing: the study I’m talking about wasn’t actually conducted on chimps. It was a pattern observed in humans. The underlying logic and the reactions, however, are pretty much spot on.

Not unlike this man’s posing game

It’s long been understood that strong men are more attractive than weak ones, all else being equal. The present research by Sell et al. (2017) was an attempt to (a) quantify approximately how much of a man’s bodily attractiveness is driven by his physical strength, (b) determine the nature of this relationship (whether it is more of a straight line or an inverted-U shape, in which very strong men become less attractive), and (c) test whether some women find weaker men more attractive than stronger ones. The paper also quantified the effects of height and weight.

To answer those questions, shirtless and semi-shirtless men were photographed from the front and side, with their heads blocked out so only their bodies remained. These pictures were then assessed by different groups of raters for either strength or attractiveness (measures of the men’s actual strength were also collected by the researchers). The quick rundown of the results is that perceived strength did track actual strength, and perceptions of strength accounted for about 60-70% of the variance in bodily attractiveness (which is a lot). As men got stronger, they got more attractive, and this trend was linear (meaning that, within the sample, there was no such thing as “too strong,” after which men got less attractive). The pattern was also universal: not a single woman (out of 160) rated the weaker men as more attractive than the stronger ones. After accounting for strength, height explained a bit more of the variance in attractiveness, and weight was negatively related to it. Women liked strong men, not fat ones.
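
To make claims like “accounts for 60-70% of the variance” and “the trend was linear rather than an inverted U” concrete, the sketch below shows one standard way such questions get tested: fit attractiveness on perceived strength with and without a squared term and compare the variance explained. The data are simulated and the variable names are mine; this is not the authors’ analysis code.

```python
# Illustrative sketch of the kind of model comparison behind these claims.
# Toy data only; not Sell et al.'s dataset or analysis code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_targets = 150

# Standardized perceived strength and attractiveness ratings for male bodies,
# simulated so that strength explains a large share of attractiveness
strength = rng.normal(size=n_targets)
attractiveness = 0.8 * strength + rng.normal(scale=0.5, size=n_targets)

# Linear model: attractiveness ~ strength
linear = sm.OLS(attractiveness, sm.add_constant(strength)).fit()

# Curvilinear model: add a squared term to test for an inverted-U
X_quad = sm.add_constant(np.column_stack([strength, strength ** 2]))
quadratic = sm.OLS(attractiveness, X_quad).fit()

print(f"Variance explained by strength alone: {linear.rsquared:.2f}")
print(f"R^2 with quadratic term added:        {quadratic.rsquared:.2f}")
print(f"Quadratic term p-value:               {quadratic.pvalues[2]:.3f}")

# If the squared term adds essentially nothing (and isn't significantly negative),
# the "too strong to be attractive" idea gets no support.
```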

While it’s nice to put something of a number on just how much strength matters in determining male bodily attractiveness (most of it), these findings are all mundane to anyone with eyes. I suspect they cut across many species; I don’t think you’re going to find many species, if any, in which females prefer to mate with physically weaker males. The explanation for these preferences for strength – the evolutionary framework into which they fit – should apply just as well to most of those species. While I made up the chimp framing at the outset, I’d wager you would find a similar set of results if such work were actually conducted.

Also, the winner – not the loser – of this contest will go on to mate

Enter the strange comments I mentioned initially:

“It’s my opinion that the authors are too quick to ascribe a causal role to evolution,” said Lisa Wade…“We know what kind of bodies are valorized and idealized,” Wade said. “It tends to be the bodies that are the most difficult to obtain.”

Try reading that criticism of the study and imagining it applied to any other sexually-reproducing species on the planet. What adaptive benefits is “difficulty in obtaining” supposed to bring, and what kind of predictions does that idea make? It would be difficult, for instance, to achieve a very thin body, the type usually seen in anorexic people. It’s hard for people to ignore their desires to eat certain foods in certain quantities, especially to the point that they begin to physically waste away. Despite the difficulty of achieving that starved look, such bodies are not idealized as attractive. “Difficult to obtain” does not necessarily translate into anything adaptively useful.

And, more to the point, even if a preference for difficult-to-obtain bodies per se existed, where would Lisa suggest it came from? Surely it didn’t fall from the sky. The explanation for a preference for difficult bodies would, at some point, have to reference some kind of evolutionary history. It’s not even close to sufficient to explain a preference by saying, “culture, not evolution, did it,” as if the capacity for developing a culture itself – and any given instantiation of it – exists free from evolution. Despite her claims to the contrary, thinking about evolutionary function when developing theories of psychological form is a theoretical benefit, not a methodological problem. The only problem I see is that she seems to prefer worse, less complete explanations to better ones. But, to use her own words, this is “…nothing unique to [her]. Much of this type of [criticism] has the same methodological problems.”

If your explanation for a particular piece of human psychology doesn’t work for just about any other species, there’s a very good chance it is, at the very least, incomplete when it comes to explaining the behavior. For instance, I don’t think anyone would seriously suggest that chimp females entering their reproductive years “might not have much experience with what attractiveness means” if they favored physically strong males. Such explanations often aren’t even pointing in the right direction, and they are more likely to mislead researchers and students than to inform them.

References: Sell, A., Lukaszewski, A., & Townsley, M. (2017). Cues of upper body strength account for most of the variance in men’s bodily attractiveness. Proceedings of the Royal Society B, 284, doi: 10.1098/rspb.2017.1819