The Beautiful People

Less money than it looks like if that’s all 10’s

There’s a perception that exists involving how out of touch rich people can be, summed up well in this popular clip from the show Arrested Development: “It’s one banana, Michael, how much could it cost? Ten dollars?” The idea is that those with piles of money – perhaps especially those who have been born into it – have a distorted sense for the way the world works, as there are parts of it they’ve never had to experience. A similar hypothesis guides the research I wanted to discuss today, which sought to examine people’s beliefs in a just world. I’ve written about this belief-in-a-just-world hypothesis before; the reviews haven’t been positive.

The present research (Westfall, Millar, & Lovitt, 2018) took the following perspectives: first, believing in a just world (roughly, that people get what they deserve and deserve what they get) is a cognitive bias that some people hold because it makes them feel good. Notwithstanding the fact that “feeling good” isn’t a plausible function, for whatever reason the authors don’t seem to suggest that believing the world to be unfair is a cognitive bias as well, which is worth keeping in the back of your mind. Their next point is that those who believe in a just world are less likely to have experienced injustice themselves. The more personal injustices one experiences (ones that affect you personally in a negative way), the more likely one is to reject the belief in a just world because, again, rejecting that belief when faced with contradictory evidence should maintain self-esteem. Put into a simple example, if something bad happened to you and you believe the world is a just place, that would mean you deserved that bad thing because you’re a bad person. So, rather than think you’re a bad person, you reject the idea that the world is fair. It seems the biasing factor there would be the message of, “I’m awesome and deserve good things,” as that could explain both believing the world is fair when things are going well and unfair when they aren’t, rather than the just-world belief itself being the bias, but I don’t want to dwell on that point too much yet.

This is where the thrust of the paper begins to take shape: attractive people are thought to have things easier in life, not unlike being rich. Because being physically attractive means one will be exposed to fewer personally-negative injustices (hot people are more likely to find dates, be treated well in social situations, and so on), they should be more likely to believe the world is a just place. In simple terms, physical attractiveness = better life = more belief in a just world. As the authors put it:

Consistent with this reasoning, people who are societally privileged, such as wealthy, white, and men, tend to be more likely to endorse the just-world hypothesis than those considered underprivileged

The authors also throw some lines into their introduction about how physical attractiveness is “largely beyond one’s personal control,” and how “…many long-held beliefs about relationships, such as an emphasis on personality or values, are little more than folklore,” in the face of people valuing physical attractiveness. Now these don’t have any relevance to their paper’s theory and aren’t exactly correct, but they should also be kept in the back of your mind to understand the perspective the authors are writing from.

What a waste of time: physical attractiveness is largely beyond his control

In any case, the authors sought to test this connection between greater attractiveness (and societal privilege) and greater belief in a just world across two studies. The first of these involved asking about 200 participants (69 male) about their (a) belief in a just world, (b) perceptions of how attractive they thought they were, (c) self-esteem, (d) financial status, and (e) satisfaction with life. About as simple as things come, but I like simple. In this case, the correlation between how attractive one thought they were and belief in a just world was rather modest (r = .23), but present. Self-esteem was a better predictor of just-world beliefs (r = .34), as was life satisfaction (r = .34). A much larger correlation understandably emerged between life satisfaction and perceptions of one’s own attractiveness (r = .67). Thinking one was attractive was more strongly tied to being happier with life than to believing the world is just. Money did much the same: financial status correlated better with life satisfaction (r = .33) than it did with just-world beliefs (r = .17). Also worth noting is that men and women didn’t differ in their just-world beliefs (Ms of 3.2 and 3.14 on the scale, respectively).

Study 2 did much the same as Study 1 with basically the same sample, but it also included ratings of a participant’s attractiveness supplied by others. This way you aren’t just asking people how attractive they are; you are also asking people less likely to have a vested interest in the answer to the question (for those curious, ratings of self-attractiveness only correlated with other-ratings at r = .21). Now, self-perception of physical attractiveness correlated with belief in a just world (r = .17) less well than independent ratings of attractiveness did (r = .28). Somewhat strangely, being rated as prettier by others wasn’t correlated with self-esteem (r = .07) or life satisfaction (r = .08) – which you might expect it would be if being attractive leads others to treat you better – though self-ratings of attractiveness were correlated with these things (rs = .27 and .53, respectively). As before, men and women also failed to differ with respect to their just-world beliefs.

From these findings, the authors conclude that being attractive and rich makes one more likely to believe in a just world under the premise that they experience less injustice. But what about that result where men and women don’t differ with respect to their belief in a just world? Doesn’t that similarly suggest that men and women don’t face different amounts of injustice? While this is one of the last notes the authors make in their paper, they do seem to conclude that – at least around college age – men might not be particularly privileged over women. A rather unusual passage to find, admittedly, but a welcome one. Guess arguments about discrimination and privilege apply less to at least college-aged men and women.

While reading this paper, I couldn’t shake the sense that the authors have a rather particular perspective about the nature of fairness and the fairness of the world. Their passages about how belief in a just world is a bias contain no comparable comments about how thinking the world is unjust would also be a bias, and those passages are coupled with comments about how attractiveness is largely outside of one’s own control and this…

Finally, the modest yet statistically significant relationship between current financial status and just-world beliefs strengthens the case that these beliefs are largely based on viewing the world from a position of privilege.

…in the face of correlations ranging from about .2 to .3 does likely say something about the biases of the authors. Explaining about 10% or less of the variance in belief in a just world from ratings of attractiveness or financial status does not scream that ‘these beliefs are largely based’ on such things to me. In fact, it seems to suggest beliefs in a just world are largely based on other things.
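
To make that point concrete, squaring a correlation gives the proportion of variance it accounts for. A quick sketch, using only the correlations reported above:

```python
# Variance explained is r squared; every key predictor here accounts
# for well under 10% of the variance in just-world beliefs.
for label, r in [("self-rated attractiveness (Study 1)", 0.23),
                 ("other-rated attractiveness (Study 2)", 0.28),
                 ("financial status (Study 1)", 0.17)]:
    print(f"{label}: r = {r:.2f} -> r^2 = {r * r:.1%} of the variance")
```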

“The room is largely occupied by the ceiling fan”

While there is an interesting debate to have over the concept of fairness in this article, I actually wanted to use this research to discuss a different point about stereotypes. As I have written before (LINK), people’s beliefs about the world should tend towards accuracy. That is not to say they will always be accurate, mind you, but rather that we shouldn’t expect there to be specific biases built into the system in many cases. People might be wrong about the world to various degrees, but not because the cognitive systems generating those perceptions evolved to be wrong (that is, to take accurate information about the world and distort it); they should just be wrong because of imperfect information or environmental noise. The reason for this is that there are costs to being wrong and acting on imperfect information. If I believe there is a monster that lives under my bed, I’m going to behave differently than the person who doesn’t believe in such things. If I’m acting under an incorrect belief, my odds of doing something adaptive go down, all else being equal.

That said, there are some cases where we might expect bias in beliefs: the context of persuasion. If I can convince you to hold an incorrect belief, the costs to me can be substantially reduced or outweighed entirely by the benefits. For instance, if I convince you that my company is doing very well and only going to be doing better in the future, I might attract your investment, regardless of whether that belief is true. Or, if I had authored the current paper, I might be trying to convince you that attractive/privileged people in the world are biased while the less privileged are grounded realists.

The question arises, then, as to what the current results represent: are the beautiful people more likely to perceive the world as fair and the ugly ones more likely to perceive it as unjust because of random mistakes, persuasion, or something else? Taking persuasion first, if those who aren’t doing as well in life as they might hope because of their looks (or behavior, or something else) are able to convince others they have been treated unjustly and are actually valuable social assets worthy of assistance, they might be able to receive more support than if they are convinced their lot in life has been deserved. Similarly, the attractive folk might see the world as more fair to justify their current status to others and avoid having it threatened by those who might seek to take those benefits for their own. This represents a case of bias: presenting a case to others that serves your own interest, irrespective of the truth.

While that’s an interesting idea – and I think there could be an element of that in these results – there’s another option I wanted to explore as well: it is possible that neither side is actually biased. They might both be acting off information that is accurate as far as they know, but simply be working under different sets of it.

“As far as I can tell, it seems flat”

This is where we return to stereotypes. If person A has had consistently negative interactions with people from group X over their life, I suspect person A would have some negative stereotypes about them. If person B has had consistently positive interactions with people from the same group X over their life, I further suspect person B would have some positive stereotypes about them. While those beliefs shape each person’s expectations of the behavior of unknown members of group X and contrast with each other, both are accurate as far as each person is concerned. Persons A and B are both simply using the best information they have, and their cognitive systems are injecting no bias – no manipulation of this information – when attempting to develop as accurate a picture of the world as possible.

Placed into the context of this particular finding, you might expect that unattractive people are treated differently than attractive ones, the latter offering higher value in the mating market at a minimum (along with other benefits that come with greater developmental stability). Because of this, we might have a naturally-occurring context where people are exposed to two different versions of the same world and develop different beliefs about it, but where neither group necessarily does so because of any bias. The world doesn’t feel unfair to the attractive person, so they don’t perceive it as such. Similarly, the world doesn’t feel fair to the unattractive person who feels passed over because of their looks. When you ask these people about how fair the world is, you will likely receive contradictory reports that are both accurate as far as the person doing the reporting is aware. They’re not biased; they just receive systematically different sets of information.

Imagine taking that same idea and studying stereotypes on a more local level. What I’ve read about when it comes to stereotype accuracy research has largely been looking at how people’s beliefs about a group compare to that group more broadly; along the lines of asking people, “How violent are men, relative to women,” and then comparing those responses to data collected from all men and women to see how well they match up. While such responses largely tend towards accuracy, I wonder if the degree of accuracy could be improved appreciably by considering what responses any given participant should provide, given the information they have access to. If someone grew up in an area where men are particularly violent, relative to the wider society, we should expect they have different stereotypes about male violence, as those perceptions are accurate as far as they know. Though such research is more tedious and less feasible than using broader measures, I can’t help but wonder what results it might yield. 

References: Westfall, R., Millar, M., & Lovitt, A. (2018). The influence of physical attractiveness on belief in a just world. Psychological Reports, 0, 1-14.

Making A Great Leader

Selfies used to be a bit more hardcore

If you were asked to think about what makes a great leader, there are a number of traits you might call to mind, though which traits those are might depend on which leader you think of: Hitler, Gandhi, Bush, Martin Luther King Jr, Mao, Clinton, and Lincoln were all leaders, but seemingly very different people. What kind of thing could possibly tie all these different people and personalities together under the same conceptual umbrella? While their characters may have all differed, there is one thing all these people shared in common, and it’s what makes anyone anywhere a leader: they all had followers.

Humans are a social species and, as such, our social alliances have long been key to our ability to survive and reproduce over our evolutionary history (largely based around some variant of the point that two people are better at beating up one person than a single individual is; an idea that works with cooperation as well). While having people around who are willing to do what you want has clearly been important, this perspective on what makes a leader – possessing followers – turns the question of what makes a great leader on its head: rather than asking about what characteristics make one a great leader, you might instead ask what characteristics make one an attractive social target for followers. After all, while it might be good to have social support, you need to understand why people are willing to support others in the first place to fully understand the matter. If there were only costs to being a follower (supporting a leader at your own expense), then no one would be a follower. There must be benefits that flow to followers to make following appealing. Nailing down what those benefits are and why they are appealing should better help us understand how one becomes a leader, or how one falls from a position of leadership.

With this perspective in mind, our colorful cast of historical leaders suddenly becomes more understandable: they vary in character, personality, intelligence, and political views, but they must have all offered their followers something valuable; it’s just that whatever that something(s) was, it need not be the same something. Defense from rivals, economic benefits, friendship, the withholding of punishment: all of these are valuable resources that followers might receive from an alliance with a leader, even from the position of a subordinate. That something may also vary from time to time: the leader who got his start offering economic benefits might later transition into one who also provides defense from rivals; the leader who is followed out of fear of the costs they can inflict on you may later become a leader who offers you economic benefits. And so on.

“Come for the violence; stay for the money”

The corollary point is that features which fail to make one appealing to followers are unlikely to be the ones that define great leaders. For example – and of relevance to the current research on offer – gender per se is unlikely to define great leaders because being a man or a woman does not necessarily offer much to many followers. Traits associated with gender might – like how those who are physically strong can help you fight against rivals better than those who are not, all else being equal – but not the gender itself. To the extent that one gender tends to end up in positions of leadership, it is likely because its members tend to possess higher levels of those desirable traits (or at least reside predominantly on the upper end of the population distribution of them). Possessing favorable traits that allow leaders to do useful things is only one part of the equation, however: leaders must also appear willing to use those traits to provide benefits to their followers. If a leader possesses considerable social resources, they do you little good if said leader couldn’t be less interested in granting you access to them.

This analysis also provides another point for understanding the leader/follower dynamic: it ought to be context-specific, at least to some extent. Followers who are looking for financial security might look for different leaders than those who are seeking protection from outside aggression; those facing personal social difficulties might defer to different leaders still. The match between the talents offered by a leader and the needs of the followers should help determine how appealing certain leaders are. Even traits that might seem universally positive on their face – like a large social network – might not be positives to the extent they affect a potential follower’s perception of their likelihood of receiving benefits. For example, leaders with relatively full social rosters might appear less appealing to some followers if those followers are seeking a lot of a leader’s time; since too much of it is already spoken for, the follower might look elsewhere for a more personal leader. This can create ecological leadership niches that can be filled by different people at different times for different contexts.

With all that in mind, there are at least some generalizations we can make about what followers might find appealing in a leader in an “all else being equal…” sense: those with more social support will be selected as leaders more often, as such resources are more capable of resolving disputes in your favor; those with greater physical strength or intelligence might be better leaders for similar reasons. Conversely, one might follow such leaders because of the costs failing to follow would incur, but the logic holds all the same. As such, once these and other important factors are accounted for, you should expect irrelevant factors – like sex – to fall out of the equation. Even if many leaders tend to be men, it’s not their maleness per se that makes them appealing leaders, but rather these valued and useful traits.

Very male, but maybe not CEO material

This is a hypothesis effectively tested in a recent paper by von Rueden et al (in press). The authors examined the distribution of leadership in a small-scale foraging/farming society in the Amazon, the Tsimane. Within this culture – as in others – men tend to exercise the greater degree of political leadership, relative to women, as measured by domains including speaking more during social meetings, coordinating group efforts, and resolving disputes. The leadership status of members within this group was assessed by ratings from other group members. All adults within the community (male n = 80; female n = 72) were photographed, and these photos were then given to 6 of the men and women in sets of 19. The raters were asked to place the photos in order in terms of whose voice tended to carry the most weight during debates, and then in terms of who managed the most community projects. These ratings were then summed (from 1 to 19, depending on position in the rankings, with 19 being the highest in terms of leadership) to figure out who tended to hold the largest positions of leadership.

As mentioned, men tended to reside in positions of greater leadership both in terms of debates and management (approximate mean male scores = 37; mean female scores = 22), and both men and women agreed on these ratings. A similar pattern was observed in terms of who tended to mediate conflicts within the community: 6 females were named as resolving such conflicts, compared with 17 males. Further, the males who were named as conflict mediators tended to be higher in leadership scores, relative to non-mediating males, while this pattern didn’t hold for the females.

So why were men in positions of leadership in greater percentages than females? A regression analysis was carried out using sex, height, weight, upper-body strength, education, and number of cooperative partners to predict leadership scores. In this equation, sex (and height) no longer predicted leadership score, while all the other factors were significant predictors. In other words, it wasn’t that men were preferred as leaders per se, but rather that people with more upper-body strength, education, and cooperative partners were favored, whether male or female. These traits were still favored in leaders despite leaders not being particularly likely to use force or violence in their position. Instead, it seems that traits like physical strength were favored because they could potentially be leveraged, if push came to shove.
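
To illustrate the statistical logic at work here – this is a toy simulation with invented numbers, not the authors’ data or analysis – consider how a raw sex difference can vanish once a regression includes the traits doing the real predictive work:

```python
# Toy simulation: sex "predicts" leadership only because it proxies for
# strength. All parameters below are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 152  # roughly the paper's sample (80 men + 72 women)

male = rng.integers(0, 2, n).astype(float)   # 1 = male, 0 = female
strength = 2.0 * male + rng.normal(0, 1, n)  # men stronger on average
allies = rng.poisson(5, n).astype(float)     # cooperative partners
# Leadership depends on strength and allies, not on sex itself:
leadership = 1.5 * strength + 0.8 * allies + rng.normal(0, 2, n)

# Sex alone looks like a strong predictor...
print(sm.OLS(leadership, sm.add_constant(male)).fit().params)

# ...but its coefficient collapses toward zero once strength and
# number of allies enter the equation:
X = sm.add_constant(np.column_stack([male, strength, allies]))
print(sm.OLS(leadership, X).fit().params)
```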

“A vote for Jeff is a vote for building your community. Literally”

As one might expect, what made followers want to follow a leader wasn’t the leader’s sex, but rather what skills the leader could bring to bear in resolving issues and settling disputes. While the current research is far from a comprehensive examination of all the factors that might shape leadership at different times and in different contexts, it represents a sound approach to understanding the problem of why followers select particular leaders. Thinking about what benefits followers tended to reap from leaders over evolutionary history can help inform our search for – and understanding of – the proximate mechanisms through which leaders end up attracting them.

References: von Rueden, C., Alami, S., Kaplan, H., & Gurven, M. (in press). Sex differences in political leadership in an egalitarian society. Evolution & Human Behavior, doi:10.1016/j.evolhumbehav.2018.03.005

Doesn’t Bullying Make You Crazy?

“I just do it for the old fashioned love of killing”

Having had many pet cats, I understand what effective predators they can be. The number of dead mice and birds they have returned over the years is certainly substantial, and the number they didn’t bring back is probably much higher. If you happen to be a mouse living in an area with lots of cats, your life is probably pretty stressful. You’re going to be facing a substantial adaptive challenge when it comes to avoiding detection by these predators and escaping them if you fail at that. As such, you might expect mice to have developed a number of anti-predator strategies (especially since cats aren’t the only thing they’re trying not to get killed by): they might freeze when they detect a cat to avoid being spotted; they might develop a more chronic state of psychological anxiety, as being prepared to fight or run at a moment’s notice is important when your life is often on the line. They might also develop auditory or visual hallucinations that provide them with an incorrect view of the world because…well, I actually can’t think of a good reason for that last one. Hallucinations don’t serve as an adaptive response that helps the mice avoid detection, flee, or otherwise protect themselves against those who would seek to harm them. If anything, hallucinations seem to have the opposite effect, directing resources away from doing something useful as the mice would be responding to non-existent threats.

But when we’re talking about humans and not mice, some people seem to have a different sense for the issue: specifically, that we ought to expect a form of social predation – bullying – to cause people to develop psychosis. At least that was the hypothesis behind some recent research published by Dantchev, Zammit, and Wolke (2017). This study examined a longitudinal data set of parents and children (N = 3596) at two primary points in their lives: at 12 years old, children were given a survey asking about sibling bullying, defined as, “…saying nasty and hurtful things, or completely ignores [them] from their group of friends, hits, kicks, pushes or shoves [them] around, tells lies or makes up false rumors about [them].” They were asked how often they experienced bullying by a sibling and how many times a week they bullied a sibling in the past 6 months (“Never,” “Once or twice,” “Two or three times a month,” “About once a week,” or “Several times a week”). Then, at the age of about 18, these same children were assessed for psychosis-like symptoms, including whether they experienced visual/auditory hallucinations, delusions (like being spied on), or felt they had experienced thought interference by others.

With these two measures in hand (whether children were bullies/bullied/both, and whether they suffered some forms of psychosis), the authors sought to determine whether the sibling bullying at time 1 predicted the psychosis at time 2, controlling for a few other measures I won’t get into here. The following results fell out of the analysis: children bullied by their siblings and who bullied their siblings tended to have lower IQ scores, more conduct disorders early on, and experienced more peer bullying as well. The mothers of these children were also more likely to experience depression during pregnancy and domestic violence was more likely to have been present in the households. Bullying, it would seem, was influenced by the quality of the children and their households (a point we’ll return to later).

“This is for making mom depressed prenatally”

In terms of the psychosis measures, 55 of the children in the sample met the criteria for having a disorder (1.5%). Of those children who bullied their siblings, 11 met these criteria (3%), as did 6 of those who were purely bullied (2.5%) and 11 of those who were both bully and bullied (3%). Children who were regularly bullied (about once a week or more), then, were about twice as likely to report psychosis as those who were bullied less often. In brief, both being bullied by and bullying other siblings seemed to make hallucinations more common. Dantchev, Zammit, and Wolke (2017) took this as evidence suggesting a causal relationship between the two: more bullying causes more psychosis.

There’s a lot to say about this finding, the first thing being this: the vast majority of regularly-bullied children didn’t develop psychosis; almost none of them did, in fact. This tells us quite clearly that psychosis per se is by no means a usual response to bullying. This is an important point because, as I mentioned initially, some psychological strategies might evolve to help individuals deal with outside threats. Anxiety works because it readies attentional and bodily resources to deal with those challenges effectively. It seems plausible such a response could work well in humans facing aggression from their peers or family. We might thus expect some kinds of anxiety disorders to be more common among those bullied regularly; depression too, since that could well serve to signal to others that one is in need of social support and help recruit it. So long as one can draw a reasonable, adaptive line between psychological discomfort and doing something useful, we might predict a connection between bullying and mental health issues.

But what are we to make of that correlation between being bullied and the development of hallucinations? Psychosis would not seem to help an individual respond in a useful way to the challenges they are facing, as evidenced by nearly all of the bullied children not developing this response. If such a response were useful, we should generally expect much more of it. That point alone seems to put the metaphorical nail in the coffin of two of the three explanations the authors put forth for their finding: that social defeat and negative perceptions of one’s self and the world are causal factors in developing psychosis. These explanations are – on their face – as silly as they are incomplete. There is no plausible adaptive line the authors attempt to draw from thinking negatively about one’s self or the world to the development of hallucinations, much less how those hallucinations are supposed to help. I would also add that these explanations are discussed only briefly at the end of the paper, suggesting to me that not enough time or thought went into trying to understand the reasons these predictions were made before the research was undertaken. That’s a shame, as a better sense for why one would expect to see a result would affect the way research is designed for the better.

“Well, we’re done…so what’s it supposed to be?”

Let’s think in more detail about why we’re seeing what we’re seeing regarding bullying and psychosis. There are a number of explanations one might float, but the most plausible to me goes something like this: these mental health issues are not being caused by the bullying but are, in a sense, actually eliciting the bullying. In other words, causation runs in the opposite direction the authors think it does.

To fully understand this explanation, let’s begin with the basics: kin are usually expected to be predisposed to behave altruistically towards each other because they share genes in common. This means investment in your relatives is less costly than it would be otherwise, as helping them succeed is, in a very real sense, helping yourself succeed. This is how you get adaptations like breastfeeding and brotherly love. However, that cost/benefit ratio does not always lean in the direction of helping. If you have a relative that is particularly unlikely to be successful in the reproductive realm, investment in them can be a poor choice despite their relatedness to you. Even though they share genes with you, you share more genes with yourself (all of them, in fact), so helping yourself do a little better can sometimes be the optimal reproductive strategy over helping them do much better (since they aren’t likely to do anything even with your help). In that regard, relatives suffering from mental health issues are likely worse investments than those not suffering from them, all else being equal. The probability of investment paying off is simply lower.
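
For those who want the standard formalization of that cost/benefit logic (my addition here; neither the study nor the discussion above spells it out), this is Hamilton’s rule:

```latex
% Altruism toward a relative is favored when
\[
  r \, b > c
\]
% where r is genetic relatedness (about 0.5 for full siblings),
% b is the reproductive benefit to the recipient, and
% c is the reproductive cost to the actor. A sibling unlikely to
% reproduce shrinks b, so the inequality fails more often and
% investment in them becomes a worse bet.
```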

Now that might end up predicting that people should ignore their siblings suffering from such issues; to get to bullying we need something else, and in this case we certainly have it: competition for the same pool of limited resources, namely parental investment. Brothers and sisters compete for the same resources from their parents – time, protection, provisioning, and so on – and resources invested in one child are not capable of being invested in another much of the time. Since parents don’t have unlimited amounts of these resources, you get competition between siblings for them. This sometimes results in aggressive and vicious competition. As we already saw in the study results, children of lower quality (lower IQ scores and more conduct disorders) coming from homes with fewer resources (likely indexed by more maternal depression and domestic violence) tend to bully and be bullied more. Competition for resources is more acute here and your brother or sister can be your largest source of it.

They’re much happier now that the third one is out of the way

To put this into an extreme example of non-human sibling “bullying”, there are some birds that lay two or three eggs in the same nest a few days apart. What usually happens in these scenarios is that when the older sibling hatches in advance of the younger it gains a size advantage, allowing it to peck the younger one to death or roll it out of the nest to starve in order to monopolize the parental investment for itself. (For those curious why the mother doesn’t just lay a single egg, that likely has something to do with having a backup offspring in case something goes wrong with the first one). As resources become more scarce and sibling quality goes down, competition to monopolize more of those resources should increase as well. That should hold for birds as well as humans.

A similar logic extends into the wider social world outside of the family: those suffering from psychosis (or any other disorders, really) are less valuable social assets to others than those not suffering from them, all else being equal. As such, sufferers receive less social support in the form of friendships or other relationships. Without such social support, this also makes one an easier target for social predators looking to exploit the easiest targets available. What this translates into is children who are less able to defend themselves being bullied by others more often. In the context of the present study, it was also documented that peer bullying tends to increase with psychosis, which would be entirely unsurprising; just not because bullying is causing children to become psychotic.

This brings us to the final causal hypothesis: sometimes bullying is so severe that it causes brain damage which, in turn, produces later psychosis. This would involve either a noticeable degree of physical head trauma or similarly noticeable changes brought on by the body’s response to stress causing brain damage over time. Neither possibility strikes me as particularly likely in terms of explaining much of what we’re seeing here, given that the scope of sibling bullying is probably not often large enough to pose that much of a physical threat to the brain. I suspect the lion’s share of the connection between bullying and psychosis is simply that psychotic individuals are more likely to be bullied, rather than because bullying is doing the causing.

References: Dantchev, S., Zammit, S., & Wolke, D. (2017). Sibling bullying in middle childhood and psychotic disorder at 18 years: a prospective cohort study. Psychological Medicine, https://doi.org/10.1017/S0033291717003841

My Father Was A Gambling Man

And if you think I stole that title from a popular song, you’re very wrong

Hawaii recently introduced some bills aimed at prohibiting the sale of games with for-purchase loot boxes to anyone under 21. For those not already in the know concerning the world of gaming, loot boxes are effectively semi-random grab bags of items within video games. These loot boxes are usually received by players as a reward for achieving something within a game (such as leveling up), purchased with currency (be that in-game currency or real-world money), or both. Specifically, then, the bills in question are aimed at games that sell loot boxes for real money, attempting to keep them out of the hands of people under 21.

Just like tobacco companies aren’t permitted to advertise to minors out of fear that children will come to find smoking an interesting prospect, the fear here is that children who play games with loot boxes might develop a taste for gambling they otherwise wouldn’t have. At least that’s the most common explicit reason for this proposal. The gaming community seems to be somewhat torn about the issue: some gamers welcome the idea of government regulation of loot boxes while others are skeptical of government involvement in games. In the interest of full disclosure for potential bias – as a long-time gamer and professional loner – I consider myself to be a part of the latter camp.

My hope today is to explore this debate in greater detail. There are lots of questions I’m going to discuss, including (a) whether loot boxes are gambling, (b) why gamers might oppose this legislation, (c) why gamers might support it, (d) what other concerns might be driving the acceptance of regulation within this domain, and (e) whether these kinds of random mechanics actually make for better games.

Let’s begin our investigation in gaming’s seedy underbelly

To set the stage, a loot box is just what it sounds like: a randomized package of in-game items (loot) which are earned by playing the game or purchased. In my opinion, loot boxes are gambling-adjacent types of things, but not bona fide gambling. The prototypical example of gambling is along the lines of a slot machine. You put money into it and have no idea what you’re going to get out. You could get nothing (most of the time), a small prize (a few of the times), or a large prize (almost never). Loot boxes share some of those features – the paying money for randomized outcomes – but they don’t share others: first, with loot boxes there isn’t a “winning” and “losing” outcome in the same way there is with a slot machine. If you purchase a loot box, you should have some general sense as to what you’re buying; say, 5 items with varying rarities. It’s not like you sometimes open a loot box and there are no items, other times there are 5, and other times there are 20 (though more on that in a moment). The number of items you receive is usually set even if the contents are random. More to the point, the items you “receive” you often don’t even own; not in the true sense. If the game servers get shut down or you violate the terms of service, for instance, your account and its items get deleted, they disappear from existence, and you don’t get to sue anyone for stealing from you. There is also no formal cashing out of many of these games. In that sense, there is less of a gamble in loot boxes than what we traditionally consider gambling.
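
To make that structure concrete, here’s a minimal sketch of the kind of loot-box roll described above (the rarity tiers and drop weights are invented for illustration):

```python
# A bare-bones loot box: the item COUNT is fixed; only the rarities
# are random. There is no empty, "losing" outcome as with a slot machine.
import random

RARITY_WEIGHTS = {"common": 80, "rare": 15, "epic": 4, "legendary": 1}

def open_loot_box(items_per_box=5):
    rarities = list(RARITY_WEIGHTS)
    weights = list(RARITY_WEIGHTS.values())
    # random.choices samples with replacement, weighted by rarity.
    return random.choices(rarities, weights=weights, k=items_per_box)

print(open_loot_box())  # e.g. ['common', 'common', 'rare', 'common', 'epic']
```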

Importantly, the value of these items is debatable. Usually players really want to open some items and don’t care about others. In that sense, it’s quite possible to open a loot box and get nothing of value, as far as you’re concerned, while hitting jackpots in others. However, if that valuation is almost entirely subjective in nature, then it’s hard to say that not getting what you want is losing while getting what you do is winning, as that’s going to vary from person to person. What you are buying with loot boxes isn’t a chance at a specific item you want; it is a set number of random items from a pool of options. To put that into an incomplete but simple example, if you put money into a gumball machine and get a gumball, that’s not really a gamble and you didn’t really lose. It doesn’t become gambling, nor do you lose, if the gumballs are different colors/flavors and you wanted a blue one but got a green one.

One potential exception to this equal-value argument is when the items opened aren’t bound to the opener; that is, they can be traded or sold to other players. You don’t like your gumball flavor? Well, now you can trade your friend your gumball for theirs, or even buy their gumball from them. When this possibility exists, secondary markets pop up for the digital items, where some can be sold for lots of real money while others are effectively worthless. Now, as far as the developers are concerned, all the items can have the same value, which makes it look less like gambling; it’s the secondary market that makes it look more like gambling, but the game developers aren’t in control of that.

Kind of like these old things

An almost-perfect metaphor for this can be found in the sale of baseball cards (which I bought when I was younger, though I don’t remember what the appeal was): packs containing a set number of cards – let’s say 10 – are purchased for a set price – say $5 – but the contents of those packs are randomized. The value of any single card, from the perspective of the company making them, is 1/10 the cost of the pack. However, some people value specific cards more than others; a rookie card of a great player is more desired than the card of a veteran who never achieved anything. In such cases, a secondary market crops up among those who collect the cards, and those collectors are willing to pay a premium for the desired items. One card might sell for $50 (worth 10 times the price of a pack), while another might be unable to find a buyer at all, effectively worth $0.
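
The pack math is easy to sketch out. A toy expected-value calculation (every number below is invented) shows how a few chase cards can dominate the secondary-market value of a pack:

```python
# Expected resale value of a hypothetical card pack. The tiers,
# pull probabilities, and resale prices are all made up.
PACK_PRICE = 5.00
CARDS_PER_PACK = 10

# (tier name, probability a given card is this tier, resale value)
TIERS = [
    ("common",   0.90,  0.05),
    ("uncommon", 0.09,  0.50),
    ("chase",    0.01, 50.00),  # the sought-after rookie card
]

ev_per_card = sum(p * value for _, p, value in TIERS)
print(f"Expected resale value per pack: ${CARDS_PER_PACK * ev_per_card:.2f} "
      f"(vs. the ${PACK_PRICE:.2f} sticker price)")
```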

This analogy, of course, raises other questions about the potential legality of existing physical items, like sports cards, or those belonging to any trading card game (like Magic: The Gathering, Pokemon, or Yugioh). If digital loot boxes are considered a form of gambling and might have effects worth protecting children from, then their physical counterparts likely pose the same risks. If anything, the physical versions look more like gambling because at least some digital items cannot be traded or sold between players, while all physical items pose that risk of developing real value on a secondary market. Imagine putting money into a slot machine, hitting the jackpot, and then getting nothing out of it. That’s what many virtual items amount to.

Banning the sale of loot boxes in gaming to people under the age of 21 likely also entails banning the sale of card packs to them as well. While the words “slippery slope” are usually used together with the word “fallacy,” there does seem to be a very legitimate slope here worth appreciating. The parallels between loot boxes and physical packs of cards are almost perfect (and, where they differ, card packs look more like gambling; not less). Strangely, I’ve seen very few voices in the gaming community suggesting that the sale of packs of cards should be banned from minors; some do (mostly for consistency’s sake; as far as I’ve seen, they almost never raise the issue independently of the digital loot box issue), but most don’t seem concerned with the matter. The bill being introduced in Hawaii doesn’t seem to mention baseball or trading cards anywhere either (unless I missed it), which would be a strange omission. I’ll return to this point later when we get to talking about the motives behind the approval of government regulation in the digital realm coming from gamers.

The first step towards addiction to that sweet cardboard crack

But, while we’re on the topic of slippery slopes, let’s also consider another popular game mechanic that might also be worth examining: randomized item drops from in-game enemies. These aren’t items you purchase with money (at least not in game), but rather ones you purchase with time and effort. Let’s consider one of the more well-known games to use this: WoW (World of Warcraft). In WoW, when you kill enemies with your character, you may receive valued items from their corpses as you loot the bodies. The items are not found in a uniform fashion: some are very common and others quite rare. I’ve watched a streamer kill the same boss dozens of times over the course of several weeks hoping to finally get a particular item to drop. There are many moments of disappointment and discouragement, complete with feelings of wasted time, after many attempts are met with no reward. But when the item finally does drop? There is a moment of elation and celebration, complete with a chatroom full of cheering viewers. If you could only see the emotional reaction of the people to getting their reward and not their surroundings, my guess is that you’d have a hard time differentiating a gamer getting a rare drop they wanted from someone opening the desired item out of a loot box for which they paid money.
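
For a sense of why those dry spells happen, the math of independent drops is simple: with a per-kill drop chance of p, the odds of seeing the item at least once in n kills are 1 - (1 - p)^n. A quick sketch (the 5% drop rate is a number I made up):

```python
# Why rare loot takes dozens of attempts: independent 5% rolls still
# leave a meaningful chance of nothing after many kills.
p = 0.05  # hypothetical per-kill drop chance
for n in (10, 20, 50, 100):
    print(f"{n:3d} kills: {1 - (1 - p) ** n:.0%} chance of at least one drop")
```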

What I’m not saying is that I feel random loot drops in World of Warcraft are gambling; what I am saying is that, if one is concerned about the effects loot boxes might have on people when it comes to gambling, they share enough in common with randomized loot drops that the latter are worth examining seriously as well. Perhaps it is the case that the item a player is after has a fundamentally different psychological effect on them if chances at obtaining it are purchased with real money, in-game currency, or play time. Then again, perhaps there is no meaningful difference; it’s not hard to find stories of gamers who spent more time than is reasonable trying to obtain rare in-game items, to the point that it could easily be labeled an addiction. Whether buying items with money or time has different effects is a matter that would need to be settled empirically. But what if they were fundamentally similar in terms of their effects on the players? If you’re going to ban loot boxes sold for cash out of fear of the impact they have on children’s propensity to gamble or develop a problem, you might also end up with a good justification for banning randomized loot drops in games like World of Warcraft as well, since both resemble pulling the lever of a slot machine in enough meaningful ways.

Despite that, I’ve seen very few people in the pro-regulation camp raise the concern about the effects that World of Warcraft loot tables are having on children. Maybe it’s because they haven’t thought about it yet, but that seems doubtful, as the matter has been brought up and hasn’t been met with any concern. Maybe it’s because they view the costs of paying real money for items as more damaging than paying with time. Either way, it seems that even after thinking about it, those who favor regulation of loot boxes largely don’t seem to care as much about card games, and even less about randomized loot tables. This suggests there are other variables beyond the presence of gambling-like mechanics underlying their views.

“Alright; children can buy some lottery tickets, but only the cheap ones”

But let’s talk a little more about the fear of harming children in general. Not that long ago there were examinations of other aspects of video games: specifically, the violence often depicted within them. Indeed, research into the topic is still a thing today. The fear sounded like a plausible one to many: if violence is depicted within these games – especially within the context of achieving something positive, like winning by killing the opposing team’s characters – those who play the games might become desensitized to violence or come to think it acceptable. In turn, they would behave more violently themselves and be less interested in alleviating violence directed against others. This fear was especially pronounced when it came to children, who were still developing psychologically and potentially more influenced by the depictions of violence.

Now, as it turns out, those fears appear to be largely unfounded. Violence has not been increasing as younger children have been playing increasingly violent video games more frequently. The apparent risk factor for increasing aggressive behavior (at least temporarily; not chronically) was losing at the game or finding it frustrating to play (such as when the controls feel difficult to use). The violent content per se didn’t seem to be doing much causing when it came to later violence. While players who are more habitually aggressive might prefer somewhat different games than those who are not, that doesn’t mean the games are causing them to be violent.

This gives us something of a precedent for questioning the face validity of the claims that loot boxes are liable to make gambling seem more appealing on a long-term scale. It is possible that the concern over loot boxes represents more of a moral panic on the part of the legislators, rather than a real issue having a harmful impact. Children who are OK with ripping an opponent’s head off in a video game are unlikely to be OK with killing someone for real, and violence in video games doesn’t seem to make killing seem more appealing. It might similarly be the case that opening loot boxes makes people no more likely to want to gamble in other domains. Again, this is an empirical matter that requires good evidence to prove the connection (and I emphasize the word good because there exists plenty of low-quality evidence that has been used to support the claim that violence in video games causes it in real life).

Video games inspire cosplay; not violence

If it’s not clear at this point, I believe the reasons that some portion of the gaming community supports this type of regulation has little to nothing to do with their concerns about children gambling. For the most part, children do not have access to credit cards and so cannot themselves buy lots of loot boxes, nor do they have access to lots of cash they can funnel into online gift cards. As such, I suspect that very few children do serious harm to themselves or their financial future when it comes to buying loot boxes. The ostensible concern for children is more of a plausible-sounding justification than one actually doing most of the metaphorical cart-pulling. Instead, I believe the concern over loot boxes (at least among gamers) is driven by two more mundane concerns.

The first of these is simply the perceived cost of a “full” game. There has long been a growing discontent in the gaming community over DLC (downloadable content), where new pieces of content are added to a game after release for a fee. While that might seem like the simple purchase of an expansion pack (which is not a big deal), the discontent arises when a developer is perceived to have made a “full” game already, but then cut sections out of it purposefully to sell later as “additional” content. To place that into an example, you could have a fighting game that was released with 8 characters. However, the game became wildly popular, resulting in the developers later putting together 4 new characters and selling them because demand was that high. Alternatively, you could have a developer that created 12 characters up front, but only made 8 available in the game to begin with, knowingly saving the other 4 to sell later when they could have just as easily been released in the original. In that case, intent matters.

Loot boxes do something similar psychologically at times. When people go to the store and pay $60 for a game, then take it home to find out the game wants them to pay $10 or more (sometimes a lot more) to unlock parts of the game that already exist on the disk, that feels very dishonest. You thought you were purchasing a full game, but you didn’t exactly get it. What you got was more of an incomplete version. As games become increasingly likely to use these loot boxes (as they seem to be profitable), the true cost of games (having access to all the content) will go up.

Just kidding! It’s actually 20-times more expensive

Here is where the distinction between cosmetic and functional (pay-to-win) loot boxes arises. For those not in the know about this, the loot boxes that games sell vary in terms of their content. In some games, these items are nothing more than additional colorful outfits for your characters that have no effect on game play. In others, you can buy items that actually increase your odds of winning a game (items that make your character do more damage or automatically improve their aim). Many people who dislike loot boxes seem to be more OK (or even perfectly happy) with them so long as the items are only cosmetic. So long as they can win the game as effectively spending $0 as they could spending $1000, they feel that they own the full version. When it feels like the game you bought gives an advantage to players who spent more money on it, it again feels like the copy of the game you bought isn’t the same version as theirs; that it’s not as complete an experience.

Another distinction arises here in that I’ve noticed gamers seem more OK with loot boxes in games that are free-to-play. These are games that cost nothing to download, but much of their content is locked up front. To unlock content, you usually invest time or money. In such cases, the feeling of being lied to about the cost of the game doesn’t really exist. Even if such free games are ultimately more expensive than traditional ones if you want to unlock everything (often much more expensive if you want to do so quickly), the actual cost of the game was $0. You were not lied to about that much, and anything else you spent afterwards was completely voluntary. Here the loot boxes look more like a part of the game than an add-on to it. Now this isn’t to say that some people don’t dislike loot boxes even in free-to-play games; just that they mind them less.

“Comparatively, it’s not that bad”

The second, related concern, then, is that developers might be making design decisions that ultimately make games worse in order to sell more loot boxes. To put that in perspective, there are some cases of win/win scenarios, like when a developer tries to sell loot boxes by making a game that’s so good people enjoy spending money on additional content to show off how much they like it. Effectively, people are OK with paying for quality. Here, the developer gets more money and the players get a great game. But what happens when there is a conflict? A decision needs to be made that will either (a) make the game play experience better but sell fewer loot boxes, or (b) make the game play experience worse but sell more loot boxes. However frequently these decisions need to be made, they assuredly are made at some points.

To use a recent example, many of the rare items in the game Destiny 2 were found within an in-game store called Eververse. Rather than unlocking rare items through months of completing game content over and over again (like in Destiny 1), many of these rare, cosmetic items were found only within Eververse. You could unlock them with time, in theory, but only at very slow rates (which were found to actually be intentionally slowed down by the developers if a player put too much time into the game). In practice, the only way to unlock these rare items was through spending money. So, rather than put interesting and desirable content into the game as a reward for being good at it or committed to it, it was largely walled off behind a store. This was a major problem for people’s motivation to continue playing the game, but it traded off against people’s willingness to spend money on the game. These conflicts created a worse experience for a great many players. It also yielded the term “spend-game content” to replace “end-game content.” More loot boxes in games potentially means more decisions like that will be made where reasons to play the game are replaced with reasons to spend money.

Another such system was discussed in regards to a potential patent by Electronic Arts (EA), though as far as I’m aware it has not made its way into a real game yet. This system revolved around online, multiplayer games with items available for purchase. The system would be designed such that players who spent money on some particular item would be intentionally matched against players of lower skill. As the lower-skill players would be easier for the buyer to beat with their new items, it would make the purchaser feel like their decision to buy was worth it. By contrast, the lower-skill player might become impressed by how well the player with the purchased item performed and feel they would become better at the game if they too purchased it. While this might encourage players to buy in-game items, it would yield an ultimately less competitive and less interesting matchmaking system. While such systems are indeed bad for the game play experience, it is at least worth noting that such a system would work whether the items being sold came from loot boxes or were purchased directly.
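
My reading of that patent’s core idea, sketched as code (a simplification of the described system, not EA’s actual implementation; the names and handicap value are invented):

```python
# Engagement-optimized matchmaking: recent buyers get paired against
# weaker opponents so the purchase "feels" effective.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    skill: float        # an Elo-style rating
    recent_buyer: bool  # bought a promoted item recently?

def pick_opponent(player, pool, handicap=200):
    # Ordinary matchmaking targets the closest rating; for recent
    # buyers, target a rating below theirs by a fixed handicap.
    target = player.skill - handicap if player.recent_buyer else player.skill
    return min(pool, key=lambda p: abs(p.skill - target))

pool = [Player("A", 1500, False), Player("B", 1300, False),
        Player("C", 1480, False)]
print(pick_opponent(Player("Buyer", 1500, True), pool).name)      # -> B
print(pick_opponent(Player("NonBuyer", 1500, False), pool).name)  # -> A
```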

“Buy the golden king now to get matched against total scrubs!”

If I’m right and the reasons gamers favor regulation center on the cost and design direction of games, why not just say that instead of talking about children and gambling? Because, frankly, it’s not very persuasive. It’s too selfish of a concern to rally much social support. It would be silly for me to say, “I want to see loot boxes regulated out of games because I don’t want to spend money on them and think they make for worse gaming experiences for me.” People would just tell me to either not buy loot boxes or not buy games with loot boxes. Since both suggestions are reasonable and I can do them already, the need for regulation isn’t there.

Now if I decide to vote with my wallet and not buy games with loot boxes, that won’t have any impact on the industry; my personal impact is too small. So long as enough other people buy those games, they will continue to be produced and my enjoyment of them will be decreased because of the aforementioned cost and design issues. What I need to do, then, is convince enough people to follow my lead and not buy these games either. Only once enough gamers stopped buying the games would developers have an incentive to abandon that model. One reason to talk about children, then, is that you don’t trust the market will swing in your favor. Rather than allow the market to decide freely, you can say that children are incapable of making good choices and are being actively harmed. This will rally more support to tip the scales of that market in your favor by forcing government intervention. If you don’t trust that enough people will vote with their wallets the way you do, make it illegal for younger gamers to be allowed to vote any other way.

A real concern about children, then, might not be that they will come to view gambling as normal, but rather that they will come to view loot boxes (or other forms of added content, like dishonest DLC) in games as normal. They will accept that games often have loot boxes and will not be deterred from buying titles that include them. That means more consumers, now and in the future, who are willing to tolerate or purchase loot boxes and DLC, which means fewer games without them, which, in turn, means fewer options available to those voting with their wallets. Children and gambling are brought up not because they are gamers’ primary concern, but because they’re useful for a strategic end.

Of course, there are real issues when it comes to children and these microtransactions: they don’t tend to make great decisions and sometimes get access to their parents’ credit card information, going on insane spending sprees in their games. This type of family fraud has been the subject of previous legal disputes, but it is important to note that it is not a loot box issue per se. Children will just as happily waste their parents’ money on known quantities of in-game resources as they will on loot boxes. It’s also more a matter of parental responsibility and purchase verification than the heart of the matter at hand. Even if children do occasionally make lots of unauthorized purchases, I don’t think major game companies are counting on that as a vital, intended source of revenue.

They start ballin’ out so young these days

For what it’s worth, I think loot boxes do run certain risks for the industry, as outlined above. They can make games costlier than they need to be, and they can result in design decisions I find unpleasant. In many regards I’m not a fan of them. I just happen to think that (a) they aren’t gambling and (b) they don’t require government intervention on the grounds that they are harming children, persuading them that gambling is fun, and leading to more of it in the future. I think any kind of microtransaction – random or not – can result in the same kinds of harms, addiction, and reckless spending. When it comes to human psychology, though, I think loot boxes are designed more as a tool to fit our psychology than one that shapes it, not unlike how water takes the shape of its container and not the other way around. As such, it is possible that some facets of loot boxes and other random item generation mechanics make players engage with a game in ways that yield more positive experiences, in addition to the costs they carry. If these gambling-like mechanics weren’t, in some sense, fun, people would simply avoid games that have them.

For instance, having content one is aiming to unlock can provide a very important motivation to continue playing a game, which is a big deal if you want your game to stay interesting for a long time. My most recent example of this is, again, Destiny 2. Though I didn’t play the first Destiny, a friend who did told me about it. In that game, items dropped randomly, and they dropped with random perks. This meant you could get several versions of the same item that were all different, which gave you a reason to be excited about getting the same item for the 100th time. This wasn’t the case in Destiny 2: when you got a gun, you got the gun. There was no reason to try to get another version of it, because other versions didn’t exist. So what happened when Destiny 2 removed the random rolls from items? The motivation for hardcore players to keep playing long-term largely dropped off a cliff. At least that’s what happened to me. The moment I got the last piece of gear I was after, a sense of “why am I playing?” washed over me almost instantly and I shut the game off. I haven’t touched it since. The same thing happened to me in Overwatch when I unlocked the last skin I was interested in at the time. Had all that content been available from the start, the turning-off point likely would have come much sooner.

As another example, imagine a game like World of Warcraft, where a boss has a random chance to drop an amazing item. Say this chance is 1 in 500. Now imagine an alternative reality where this practice is banned because it’s deemed too much like gambling (not saying it will be; just imagine it was). Now the item is obtained in the following way: whenever the boss is killed, it is guaranteed to drop a token, and after you collect 500 of those tokens, you can hand them in and get the item as a reward. Do you think players would have a better time under the gambling-like system, where each boss kill represents the metaphorical pull of a slot machine lever, or in the consistent condition? I don’t know the answer offhand, but what I do know is that collecting 500 tokens sure sounds boring, and that’s coming from a person who values consistency and saving and doesn’t enjoy traditional gambling. No one is going to make a compilation video of people reacting to finally collecting 500 items, because all you’d have is another moment just like the last 499 in which the same thing happened. People would – and do – make compilation videos of streamers finally getting valuable or rare items, as such moments are more entertaining for viewers and players alike.
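
For the curious, the two systems are easy to compare with a quick simulation. This is a minimal sketch using the hypothetical numbers above (a 1-in-500 drop versus 500 guaranteed tokens), not data from any real game. Both systems cost 500 kills on average, but the random one spreads that cost unevenly: roughly 63% of players finish at or before kill 500, a lucky few finish on kill one, and an unlucky tail grinds far longer.

```python
import random

DROP_RATE = 1 / 500     # hypothetical per-kill drop chance
TOKENS_NEEDED = 500     # hypothetical token requirement

def kills_until_drop(rng):
    """Simulate boss kills until the random item drops (geometric)."""
    kills = 1
    while rng.random() >= DROP_RATE:
        kills += 1
    return kills

rng = random.Random(42)
results = [kills_until_drop(rng) for _ in range(100_000)]

print(f"Random system, average kills: {sum(results) / len(results):.0f}")
print(f"Token system, kills: always exactly {TOKENS_NEEDED}")
print(f"Players done by kill {TOKENS_NEEDED}: "
      f"{sum(k <= TOKENS_NEEDED for k in results) / len(results):.1%}")
# The random system trades predictability for variance -- and it's the
# variance that produces the memorable, reaction-video moments.
```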

Sinking Costs

My cat displays a downright irrational behavior: she enjoys stalking and attacking pieces of string. I would actually say this behavior extends beyond enjoyment to the point of active craving. It’s fairly common for her to meow at me until she gets my attention, then run over to her string and sit by it, repeating this process until I play with her. At that point, she will chase it, claw at it, and bite it as if it were a living thing she could catch. This is irrational behavior for the obvious reason that the string isn’t prey; it’s not the type of thing it is appropriate to chase. Moreover, despite numerous opportunities to learn this, she never ceases the behavior, continuing to treat the string like a living thing. What could possibly explain this mystery?

If you’re anything like me, you might find that entire premise rather silly. My cat’s behavior only looks irrational when compared against an arguably-incorrect frame of reference; one in which my cat ought to only chase things that are alive and capable of being killed/eaten. There are other ways of looking at the behavior which make it understandable. Let’s examine two such perspectives briefly. The first of these is that my cat is – in some sense – interested in practicing for future hunting. In much the same way that people might practice in advance of a real event to ensure success, my cat may enjoy chasing the string because of the practice it affords her for achieving successful future hunts. Another perspective (which is not mutually exclusive) is that the string might give off proximate cues that resemble those of prey (such as ostensibly self-directed movement) which in turn activate other cognitive programs in my cat’s brain associated with hunting. In much the same way that people watch cartoons and perceive characters on the screen, rather than collections of pixels or drawings, my cat may be responding to proximate facsimiles of cues that signaled something important over evolutionary time when she sees strings moving.

The point of this example is that if you want to understand behavior – especially behavior that seems strange – you need to place it within its proper adaptive context. Simply calling something irrational is usually a bad way to figure out what is going on, as no species has evolved cognitive mechanisms that exist because they encouraged the organism to behave in irrational, maladaptive, or otherwise pointless ways. Any such mechanism would represent a metabolic cost endured for no benefit, or for an outright cost, and it would quickly disappear from the population, outcompeted by organisms that didn’t make such silly mistakes.

For instance, burying one’s head in the proverbial sand doesn’t help avoid predators

Today I wanted to examine one such behavior that gets talked about fairly regularly: what is referred to as the sunk-cost fallacy (the name implying a mistake is occurring). It refers to cases where people make decisions based on previous investments rather than future expected benefits. For instance, if you happened to have a Master’s degree in a field unlikely to present you with a job opportunity, the smart thing to do (according to most people, I imagine) would be to cut your losses and find a new field likely to offer work. The sunk-cost fallacy here might mean saying to yourself, “Well, I’ve already put so much time into this program that I might as well put in more and get that PhD,” even though committing further resources is more than likely going to be a waste. In another case, someone might continue pouring money into a failing business venture because they had already invested most of their life savings in it. In fact, the tendency to keep investing in such projects is usually predictable from how much was invested in the past: the more you have already put in, the more likely you are to see it through to its conclusion. I’m sure you can come up with your own examples from things you’ve either seen or done.

On the face of it, this behavior looks irrational. You cannot get your previous investments back, so why should they have any sway over future decision making? If you end up concluding that such behavior couldn’t possibly be useful – that it’s a fallacious way of thinking – there’s a good chance you haven’t thought about it enough yet. To begin understanding why sunk costs might factor into decision making, it’s helpful to start with a basic premise: humans did not evolve in a world where financial decisions – such as business investments – were regularly made (if they were made at all). Accordingly, whatever cognitive mechanisms underlie sunk-cost thinking likely have nothing at all to do with money (or the pursuit of degrees, or other such endeavors). If we are using cognitive mechanisms to manage tasks they did not evolve to solve, it shouldn’t be surprising that we see some strange decisions cropping up from time to time. In much the same way, cats are not adapted to worlds with toys and strings; whatever cognitive mechanism impels my cat to chase them is not adapted for that function.

So – when it comes to sunk costs – what might the cognitive mechanisms leading us to make these choices be designed to do? While humans might not have done a lot of financial investing over our evolutionary history, we sure did a lot of social investing. This includes protecting, provisioning, and caring for family members, friends, and romantic partners who in turn do the same for you. Such relationships need to be managed and broken off from time to time. In that regard, sunk costs begin to look a bit different.  

“Well, this one is a dud. Better to cut our losses and try again”

On the empirical end, it has been reported that people respond to social investments differently than they do to financial ones. In a recent study by Hrgović & Hromatko (2017), 112 students were asked to respond to a stock market task and a social task. In the financial task, they read about a hypothetical investment they had made in their own business that had been losing value. The social tasks were similar: participants were told they had invested in a romantic partner, a sibling, and a friend. All were suffering financial difficulties, and the participant had been trying to help. Unfortunately, the target of this investment hadn’t been pulling themselves back up – even turning down job offers – so the investments were not currently paying off. In both the financial and social tasks, participants were then given the option to (a) stop investing now, (b) keep investing for another year only, or (c) keep investing indefinitely until the issue was resolved. The responses and time to respond were recorded.

When it came to the business investment, about 40% of participants terminated future investments immediately; in the social contexts, the corresponding numbers were about 35% in the romantic partner scenario, 25% in the sibling scenario, and 5% in the friend scenario. The numbers for investing another year were about 35% in the business context, 50% in the romantic context, and about 65% in the sibling and friend conditions. Finally, about 25% of participants would invest indefinitely in the business, 10% in the romantic partner, 5% in the sibling, and 30% in the friendship. In general, the picture that emerges is that people were willing to terminate business investments much more readily than social ones. Moreover, decisions took longer in the business context, suggesting that people found the decision to continue investing in social relationships easier. Phrased in terms of sunk costs, people appeared more willing to factor those costs into the decision to keep investing in social relationships.

So at least you’ll have company as you sink into financial ruin

The question remains as to why that might be. Part of the answer no doubt involves opportunity costs. In the business world, if you want to invest your money in a new venture, doing so is relatively easy; your money is just as green as the next person’s. It is far more difficult to go out into the world and get yourself a new friend, sibling, or romantic partner. Lots of people already have friends, family, and romantic partners and aren’t looking to add to that list, as their investment potential in that realm is limited. Even if they are looking to add to it, they might not be looking to add you. Accordingly, the expected value of finding a better relationship needs to be weighed against the time it takes to find it, as well as the degree of improvement it would likely yield. If you cannot simply go out into the world and find new relationships with ease, breaking off an existing one could be more costly than waiting it out to see if it improves.

There are other factors to consider as well. For instance, the return on social investment may often not be immediate and, in some cases, might come from sources other than the person being invested in. Taking those in order: if you break off social investments with others at the first sign of trouble – especially deeper, longer-lasting relationships – you may develop a reputation as a fair-weather friend. Simply put, people don’t want to invest in and befriend someone who is liable to abandon them when they need them most. We’d rather have friends who are deeply and honestly committed to our welfare, as those can be relied on. Breaking off social relationships too readily demonstrates to others that one is not terribly appealing as a social asset, making you less likely to earn a place in their limited social roster.

Further, investing in one person is also to invest in their social network. If you take care of a sick child, you’re not hoping the child itself will pay you back; doing so might ingratiate you with their parents, however, and perhaps others as well. This can be contrasted with investing in a business: trying to help a failing business isn’t liable to earn you any brownie points as an attractive investor with other businesses looking to court your money, nor is Ford going to repay you for the poor investment you made in BP because the two companies are friends.

Whatever the explanation, it seems that the human willingness to succumb to sunk costs in the financial realm may well be a byproduct of an adaptive mechanism in the social domain being co-opted for a task it was not designed to solve. When that happens, you start seeing some weird behavior. The key to understanding that weirdness is to understand the original functionality.

References: Hrgović, J. & Hromatko, I. (2017). The time and social context in sunk-cost effects. Evolutionary Psychological Science, doi: 10.1007/s40806-017-0134-4

Predicting The Future With Faces

“Your future will be horrible, but at least it will be short. So there’s that”

The future is always uncertain, at least as far as human (and non-human) knowledge is concerned. This is one reason some people have difficulty saving or investing money for the future: if you give up rewards today for the promise of rewards tomorrow, that might end up a bad idea if tomorrow doesn’t come for you (or if a different tomorrow than the one you envisioned does). Better to spend that money immediately, when it can more reliably bring rewards. The same logic extends to other domains of life, including the social. If you’re going to invest time and energy in a friendship or sexual relationship, you always run the risk of that investment being misplaced. Friends or partners who betray you or don’t reciprocate your efforts are not the ones you want to be investing in; you’d much rather direct that effort toward people who will give you a better return.

Consider a specific problem to help make this clear: human males face a problem when it comes to long-term sexual relationships, which is that female reproductive potential is limited. Not only can women manage just one pregnancy at a time, but they also enter menopause later in life, reducing their subsequent reproductive output to zero. One solution to this problem is to seek only short-term encounters but, if you happen to be a man looking for a long-term relationship, you’d be doing something adaptive by selecting a mate with the greatest number of years of reproductive potential ahead of her. That could mean selecting a partner who is younger (and thus has the greatest number of likely fertile years ahead of her) and/or one who is liable to enter menopause later.

Solving the first problem – age – is easy enough due to the presence of visual cues associated with development. Women who are too young and do not possess these cues are not viewed as attractive mates (as they are not currently fertile), become more attractive as they mature and enter their fertile years, and then become less attractive over time as fertility (both present and future) declines. Solving the second problem – future years of reproductive potential, or figuring out the age at which a woman will enter menopause – is trickier. It’s not like men have some kind of magic crystal ball they can look into to predict a woman’s future expected age at menopause to maximize their reproductive output. However, women do have faces and, as it turns out, those might actually be the next best tool for the job.

Fred knew it wouldn’t be long before he hit menopause

A recent study by Bovet et al (2017) sought to test whether men might be able to predict a woman’s age at menopause in advance of that event from her face alone. One obvious complicating factor with such research is that if you want to assess the extent to which attractiveness around, say, age 25 predicts menopause in the same sample of women, you’re going to have to wait a few decades for them to reach it. Thankfully, a work-around exists: menopause – like most other traits – is partially heritable. Children resemble their parents in many regards, and age of menopause is one of them. This allowed the researchers to use a woman’s mother’s age of menopause as a reasonable proxy for when the daughter would be expected to reach menopause, saving them a lot of waiting.

Once the participating women’s mothers’ ages of menopause were assessed, the rest of the study involved taking pictures of the women’s faces (N = 68; average age = 28.4) without any makeup and with as neutral an expression as possible. These faces were then presented in pairs to male raters (N = 156), who selected which of the two was more attractive (completing that task 30 times each). The likelihood of being selected was regressed on the within-pair difference in mothers’ age of menopause, controlling for facial femininity, age, voice pitch, waist-to-hip ratio, and a value representing the difference between a woman’s actual and perceived age (to ensure that women who looked younger or older than they actually were didn’t throw things off).
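
For readers who want a feel for how a pairwise analysis like this works, here is a minimal sketch on synthetic data. This is my own illustration of the general approach – regressing choices on within-pair differences – not the authors’ actual model, and every number in it is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 5000  # each row is one "which face is more attractive?" choice

# Within-pair DIFFERENCES on the predictors (all values invented):
d_menopause = rng.normal(0, 5, n_trials)   # gap in mothers' age at menopause
d_femininity = rng.normal(0, 1, n_trials)  # gap in facial femininity (z-scored)

# Generate choices from an assumed preference: mostly femininity,
# plus a smaller independent effect of the menopause proxy.
logits = 0.4 * d_femininity + 0.24 * (d_menopause / 5)
chose_first = rng.random(n_trials) < 1 / (1 + np.exp(-logits))

X = np.column_stack([d_menopause, d_femininity])
model = LogisticRegression().fit(X, chose_first)
print(dict(zip(["menopause", "femininity"], model.coef_[0].round(3))))
# A reliably positive "menopause" coefficient is the signature result:
# the woman expected to reach menopause later is chosen more often.
```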

A number of expected results showed up, with more feminine faces (ß = 0.4) and women with more feminine vocal pitch (ß = 0.2) being preferred (despite the latter trait not being available to the raters). Women who looked older were also less likely to be selected (ß = -0.56). Contrary to predictions, women with more masculine WHRs were preferred (ß = 0.13), even though their bodies were not visible in the photos, suggesting WHR may cue different traits than facial ones. The main effect of interest, however, concerned the menopausal variable: as the difference between the women’s mothers’ ages of menopause increased (i.e., one woman was expected to go through menopause later than the other), so too did the probability of the later-menopause woman being selected (ß = 0.24). Crucially, there was no correlation between a woman’s expected age of menopause and any of the more immediate fertility cues, like age, WHR, or facial and vocal femininity. Women’s faces seemed to be capturing something unique about expected age at menopause that made them more attractive.

Trading off hot daughters for hot flashes

Now precisely what features were being assessed as more attractive, and the nature of their connection to age of menopause, is unknown. It is possible – perhaps even likely – that men were assessing some feature like symmetry that primarily signals developmental stability and health, but that this variable just so happens to correlate with age at menopause as well (e.g., healthier women go through menopause later, as they can more effectively bear the costs of childbearing into later years). Whatever systems were predicting age at menopause might not be specifically designed to do so. While it is possible that some feature of a woman’s face cues people into expected age at menopause more directly, without primarily cuing some other trait, that remains to be demonstrated. Nevertheless, the results are an interesting first step in that direction and worth thinking about.

References: Bovet, J., Barkat-Defradas, M., Durand, V., Faurie, C., & Raymond, M. (2017). Women’s attractiveness is linked to expected age at menopause. Journal of Evolutionary Biology, doi: 10.1111/jeb.13214

What Can Chimps Teach Us About Strength?

You better not be aping me…

There was a recent happening in the primatology literature that caught my eye. Three researchers were studying patterns of mating in captive chimpanzees. They were interested in finding out what physical cues female chimps tended to prefer in a mate. This might come as no surprise to you – it certainly didn’t to me – but female chimps seemed to prefer physically strong males. Stronger males were universally preferred by the females, garnering more attention and ultimately more sexual partners. Moreover, strength was not only the single best predictor of attractiveness, but there was no upper-limit on this effect: the stronger the male, the more he was preferred by the females. This finding makes perfect sense in its proper evolutionary context, given chimps’ penchant for getting into physical conflicts. Strength is a key variable for males in dominating others, whether this is in the context of conflicts over resources, social status, or even inter-group attacks. Males who were better able to win these contests were not only likely to do well for themselves in life, but their offspring would likely be the kind of males who would do likewise. That makes them attractive mating prospects, at least if having children likely to survive and mate is adaptive, which it seems to be.

What interested me so much was not this finding – I think it’s painfully obvious – but rather the reaction of some other academics to it. These opposing reactions claimed that the primatologists were too quick to place their results in that evolutionary context. Specifically, it was claimed that these preferences might not be universal, and that a cultural explanation makes more sense (as if the two are competing types of explanations). This cultural explanation, I’m told, goes something like, “chimpanzee females are simply most attracted to male bodies that are the most difficult to obtain because that’s how chimps in this time and place do things,” and “if this research was conducted 100 years ago, you’d have observed a totally different pattern of results.”

Now why the difficulty of achieving a body is supposed to be the key variable isn’t outlined, as far as I can tell. Presumably it too should have some kind of evolutionary explanation that would make a different set of predictions, but none are offered. This point seems scarcely realized by the critics. Moreover, the idea that these findings would not obtain 100 years ago is tossed out with absolutely no supporting evidence and little hope of being tested. It seems unlikely that physical strength yielding adaptive benefits is some kind of evolutionary novelty, or that males did not differ in that regard as recently as a hundred years ago, given plenty of contemporary variance.

One more thing: the study I’m talking about didn’t take place on chimps. It was a pattern observed in humans. The underlying logic and reactions, however, are pretty much spot on.  

Not unlike this man’s posing game

It’s long been understood that strong men are more attractive than weak ones, all else being equal. The present research by Sell et al (2017) was an attempt to (a) quantify approximately how much of a man’s bodily attractiveness is driven by his physical strength, (b) determine the nature of this relationship (whether it is more of a straight line or an inverted “U” shape, where very strong men become less attractive), and (c) test whether some women find weaker men more attractive than stronger ones. There was also a section quantifying the effects of height and weight.

To answer those questions, semi-to-fully-shirtless men were photographed from the front and side, with their heads blocked out so only their bodies remained. These pictures were then assessed by different groups of raters for either strength or attractiveness (actual strength measures were collected by the researchers). The quick rundown of the results is that perceived strength did track actual strength, and perceptions of strength accounted for about 60-70% of the variance in bodily attractiveness (which is a lot). As men got stronger, they got more attractive, and this trend was linear (meaning that, within the sample, there was no such thing as “too strong,” after which men got less attractive). The pattern was also universal: not a single woman (out of 160) rated the weaker men as more attractive than the stronger ones. Accounting for strength, height explained a bit more of the attractiveness ratings, while weight was negatively related to attractiveness. Women liked strong men, not fat ones.

While it’s nice to put a number on just how much strength matters in determining male bodily attractiveness (most of it), these findings are all mundane to anyone with eyes. I suspect they cut across multiple species; I don’t think you’re going to find many species in which females prefer to mate with physically weaker males. The explanation for these preferences for strength – the evolutionary framework into which they fit – should apply just about anywhere. And while I made up the framing that this study was about chimps, I’d say you’re likely to find a similar set of results if you did conduct such work.

Also, the winner – not the loser – of this contest will go on to mate

Enter the strange comments I mentioned initially:

“It’s my opinion that the authors are too quick to ascribe a causal role to evolution,” said Lisa Wade…“We know what kind of bodies are valorized and idealized,” Wade said. “It tends to be the bodies that are the most difficult to obtain.”

Try reading that criticism of the study and imagine it applied to any other sexually-reproducing species on the planet. What adaptive benefits is “difficulty in obtaining” supposed to bring, and what kind of predictions does that idea make? It would be difficult, for instance, to achieve a very thin body, the type usually seen in anorexic people. It’s hard to ignore the desire to eat certain foods in certain quantities, especially to the point that you begin to physically waste away. Despite the difficulty of achieving the starved look, such bodies are not idealized as attractive. “Difficult to obtain” does not necessarily translate into anything adaptively useful.

And, more to the point, even if a preference for difficult-to-obtain bodies per se existed, where would Lisa suggest it came from? Surely, it didn’t fall from the sky. The explanation for a preference for difficult bodies would, at some point, have to reference some kind of evolutionary history. It’s not even close to sufficient to explain a preference by saying, “culture, not evolution, did it,” as if the capacity for developing a culture itself – and any given instantiation of it – exists free from evolution. Despite her claims to the contrary, it is a theoretical benefit to think about evolutionary function when developing theories of psychological form, not a methodological problem. The only problem I see is that she seems to prefer worse, less-complete explanations to better ones. But, to use her own words, this is “…nothing unique to [her]. Much of this type of [criticism] has the same methodological problems.”

If your explanation for a particular type of psychological happening in humans doesn’t work for just about any other species, there’s a very good chance it is incomplete when it comes to explaining the behavior, at the very least. For instance, I don’t think anyone would seriously suggest that chimp females entering their reproductive years “might not have much experience with what attractiveness means” if they favored physically strong males. I’d say such explanations often aren’t even pointing in the right direction, and are more likely to mislead researchers and students than to inform them.

References: Sell, A., Lukazsweki, A., & Townsley, M. (2017). Cues of upper body strength account for most of the variance in men’s bodily attractiveness. Proceedings of the Royal Society B, 284. http://dx.doi.org/10.1098/rspb.2017.1819

Online Games, Harassment, and Sexism

Gamers are no strangers to the anger that can accompany competition. As a timely for-instance, before I sat down to start writing this post I was playing my usual online game to relax after work. As I began my first game of the afternoon, I saw a message pop up from someone who had sent me a friend request a few days back after I had won a match (you need to accept these friend requests before messages can be sent). Despite the lag between when that request was sent and when I accepted it, the message I was greeted with called me a cunt and informed me that I have no life, before the person removed themselves from my friend list to avoid any kind of response. However accurately they may have described me, that is the most typical reason friend requests get sent in that game: to insult. Many people – myself included – usually don’t accept them from strangers for that reason and, if you do, it is advisable to wait a few days for the sender to cool off a bit and hopefully forget they added you. Even then, that’s no guarantee of a friendly response.

Now my game happens to be more of a single-player experience. In team-based player-vs-player games, communication between strangers can be vital for winning, meaning there is usually less of a buffer between players and the nasty comments of their teammates. This might not draw much social attention on its own, but the players being insulted are sometimes women, which brings us nicely to some research on sexism.

Gone are the simpler days of yelling at your friends in person

A 2015 paper by Kasumovic & Kuznekoff examined how players in the online first-person shooter Halo 3 responded to the presence of a male or female voice in the team voice chat, specifically in terms of the positive and negative comments directed at that voice. What drew me to this paper is two-fold: I’m a gamer myself but, more importantly, the authors constructed their hypotheses from evolutionary theory, which is unusual for papers on sexism. The heart of the paper revolves around the following idea: common theories of sexist behavior towards women suggest that men behave aggressively towards them to try to remove them from male-dominated arenas. Women get nasty comments because men want them gone from male spaces. The researchers took a different perspective, predicting instead that male performance within the game would be a key variable in understanding the responses players have.

As men rely heavily on their social status for access to mating opportunities, the authors predicted that they should respond more aggressively to newcomers to a status hierarchy who displace them. Put into practice, this means a low-performing male should be threatened by the entry of a higher-performing woman into his game, as it pushes him down the status hierarchy, resulting in aggression directed at the newcomer. By contrast, males who perform better should be less concerned by women in the game, as their status is not undercut. Instead of being aggressive, higher-performing men might give female players more positive comments in the interest of attracting them as possible mates. Putting that together, we end up with the predictions that women should receive more negative comments than men from men who are performing worse, and more positive comments from men who are performing better.

To test this idea, the researchers played the game with 7 other random players (two teams of 4 players) while playing either male or female voice lines at various intervals during the game (all of which were pretty neutral-to-positive in terms of content, such as, “I like this map” played at the beginning of a game). The recordings of what the other players (who did not know they were being monitored in this way, making their behavior more natural) said were then transcribed and coded for whether they were saying something positive, negative, or neutral directed at the experimenter playing the game. The coders also checked to see whether the comments contained hostile sexist language to look for something specifically anti-woman, rather than just negativity or anger in general.

Nothing like some wholesome, gender-blind rage

Across 163 games, other players spoke at all in 102 of them. In those 102 games, 189 players spoke in total, 100% of whom were male. This suggests that Halo 3, unsurprisingly, is a game that women aren’t playing as much as men. Only those players who said something and were on the experimenter’s team (147 of them) were retained for analysis. About 57% of those comments came in the female-voiced condition, while about 44% came in the male-voiced condition. In general, then, the presence of a female voice led to more comments from other male players.

In terms of positive comments, the predicted difference appeared: the higher the skill level of the player talking to the experimenter, the more positive comments they made when a woman’s voice was heard; the worse the player, the fewer positive comments they made. This interaction was marginally significant when considering the relative skill difference rather than the absolute skill rating (i.e., whether the player talking did worse or better than the experimenter). By contrast, the number of positive comments directed at the male-voiced player was unrelated to the skill of the speaker.

Turning to the negative comments, these were negatively correlated with player skill in general: the higher the skill of the player, the fewer negative comments they made (and the lower the skill, the more negative they got; as the old saying goes, “Mad because bad”). The interaction with gender was less clear, however. In general, the teammates of the female-voiced experimenter made more negative comments than those in the male condition. When considering the impact of how many deaths a speaking player had, players were more negative towards the woman when dying less, but they were also more negative towards the man when dying extremely often (which seems to run counter to the initial predictions). Players were also more negative towards the woman when they weren’t getting very many kills (with negativity towards the woman declining as their personal kills increased), but that relationship was not observed when they had heard a male voice (which is in line with the initial predictions).

Finally, only a few players (13%) made sexist statements, so the results couldn’t be analyzed particularly well. Statistically, these comments were unrelated to any performance metrics. Not much more to say about that beyond small sample size.  

Team red is much more supportive of women in gaming

Overall, the response that speaking players had to the gender of their teammate depended, to some extent, on their personal performance. Those men who were doing better at the game were more positive towards the women, while those who were doing worse were more negative towards them, generally speaking.

While there are a number of details and statements within the paper I could nitpick, I suspect that Kasumovic & Kuznekoff (2015) are on the right track with their thinking. I would add some additional points, though. The first is rather core to their hypothesis: if men are threatened by status losses brought on by their relatively poor performance, these threats should occur regardless of the sex of the person they’re playing with. Whether a man performs poorly relative to a woman or to another man, he will still be losing relative status. So why is there less negativity directed at men (sometimes), relative to women? The authors mention one possibility that I wish they had expanded upon, which is that men might be responding not to the women per se as much as to the pitch of the speaker’s voice, which tends to track dominance: deeper voices signal more of it.

What I wish they had added more explicitly is that aggression should not be deployed indiscriminately. Being aggressive towards people who are liable to beat you in a physical contest isn’t a brilliant strategy. Since men tend to be stronger than women, behaving aggressively towards other men – especially those outperforming you – should be expected to have carried different sets of immediate consequences, historically-speaking (though there aren’t many costs in modern online environments, which is why people behave more aggressively there than in person). It might not be that the men are any less upset about losing when other men are on their team, but that they might not be equally aggressive (in all cases) to them due to potential physical retribution (again, historically).

There are other points I would consider beyond that. The first is the nature of insults in general. If you recall the interaction I had with an angry opponent at the start, the goal of their message was to insult me: they were trying to make me feel bad or in some way drag me down. If you want to make someone feel bad, you would do well to focus on their flaws and on things about them that make you look better by comparison. In that respect, insulting someone by calling attention to something you share in common, like your gender, is a very weak insult. On those grounds we might expect more gendered insults against women, given that men are by far the majority in these games. Because lots of hostile sexist insults weren’t observed in the present work, the point might not be terribly applicable here. It does, however, bring me to my next point: you don’t insult people by calling attention to things that reflect positively on them.

“Ha! That loser can only afford cars much more expensive than I can!”

As women do not play games like Halo nearly as much as men, that corresponds to lower skill in those games at the population level – not because women are inherently worse at the game, but simply because they don’t practice as much (and people who play these games more tend to become better at them). If you look at the top rosters in competitive online games, you’ll notice they are largely, if not exclusively, male (not unlike all the players who spoke in the current paper). Regardless of the causes of that sex difference in performance, the difference exists all the same.

If you knew nothing else about a person beyond their gender, you would predict that a man would perform better at Halo than a woman (at least if you wanted your predictions to be accurate). As such, if you’ve just under-performed at the game and are feeling pretty angry about it, you might be looking to direct blame at the teammates who clearly caused the issue (as it would never be the speaker’s own skill, of course; at least not among people yelling at strangers).

If you wanted to find out who was to blame, you might consult the match scores: factors like kills and deaths. But those aren’t perfect representations of player skill (that nebulous variable which is hard to get at) and they aren’t the only thing you might consult. After all, scores in a singular game are not necessarily indicative of what would happen over a larger number of games. Because of that, the players on these teams still have limited information about the relative skill of their teammates. Given this lack of information, some people may fall back on generally-accurate stereotypes in trying to find a plausible scapegoat for their loss, assigning relatively more blame for the loss to the people who might be expected to be more responsible for it. The result? More blame assigned to women, at least initially, given the population-level knowledge.

“I wouldn’t blame you if I knew you better, so how about we get to know each other over coffee?”

That’s where the final point I would add comes in. If women perform worse at the population level than men, low-performing men suffer something of a double status hit when outperformed by a woman: not only is another player doing better than them, but it is a player one might expect to be doing worse, knowing only her gender. Being outperformed by such a player makes it harder to blame external causes for the outcome. In a sentence, being beaten by someone who isn’t expected to perform well is a more honest signal of poor skill. The result, then, is more anger: either an attempt to persuade others that they’re better than they actually performed, or an attempt to drive out the people making them look even worse. This would fit within the authors’ initial hypothesis as well, and would probably have been worth mentioning.

References: Kasumovic, M. & Kuznekoff, J. (2015). Insights into sexism: Male status and performance moderates female-directed hostile and amicable behavior. PLoS ONE 10(7). doi:10.1371/journal.pone.0131613

Practice, Hard Work, And Giving Up

There’s no getting around it: if you want to get better at something – anything – you need to practice. I’ve spent the last several years writing rather continuously and have noticed that my original posts are of much lower quality when I look back at them. If you want to be the best version of yourself you can be, you’ll need to spend a lot of time working at your skills of choice. Nevertheless, people vary widely in how much practice they are willing to devote to a skill and how readily they abandon their efforts in the face of challenges, or simply with time. Some musicians will wake up and practice several hours a day, some only a few days a week, some a few times throughout the year, and some will stop playing entirely (in spite of almost none of them making anything resembling money from it). In a word, some musicians possess more grit than others.

Those of us who spend too much time at a computer acquire a different kind of grit

To give you a sense for what is meant by grit, consider the following description offered by Duckworth et al (2007):

The gritty individual approaches achievement as a marathon; his or her advantage is stamina. Whereas disappointment or boredom signals to others that it is time to change trajectory and cut losses, the gritty individual stays the course.

Grit, in this context, refers to those who continue to pursue their goals when faced with obstacles, major or minor. According to Duckworth et al (2007), this trait of grit is referenced regularly by people discussing the top performers in their field about as often as talent, even if they might not refer to it by that name.

The aim of the Duckworth et al (2007) paper, broadly speaking, was two-fold: to create a scale to measure grit (as none existed at the time), and then to use that scale to see how well grit predicted subsequent achievement. Without going too deep into the details of the project, the grit scale eventually landed on 12 questions. Six of those dealt with how consistent one’s interests are (like, “my interests change from year to year”) and the other six with perseverance of effort (like, “I have overcome setbacks to conquer an important challenge”). While this measure of grit was highly correlated with the personality trait of conscientiousness (r = .77), the two were apparently different enough to warrant separate categorization, as the grit score still predicted some outcomes after controlling for personality.

When the new scale was directed at student populations, grit was also found to relate to educational achievement while controlling for measures of general intelligence: in this case, college GPA controlling for SAT scores in a sample of about 1,400 UPenn undergraduates. The relationship between grit and GPA was modest (r = .25), though it got somewhat larger after controlling for SAT scores (r = .34). In a follow-up study, the grit scale was also used to predict which cadets at a military academy completed their summer training. Though about 94% of the cadets completed this training, the grittiest individuals were the least likely to drop out, as one might expect. However, unlike in the UPenn sample, grit was not a good predictor of subsequent cadet GPA (r = .06), raising some questions about the previous result (which I’ll get to in a minute).

This is time not spent studying for that engineering test

With that brief summary of grit in mind – hopefully enough to give you a general sense for the term – I wanted to discuss some of the theoretical aspects of the idea. Specifically, I want to consider when grit might be a good thing and when it might be better to persevere a little less or find new interests.

One big complication stopping people from being gritty is the simple matter of opportunity costs. For every task I decide to invest years of dedicated, consistent practice in, there are other tasks I do not get to accomplish. Time spent writing this post is time I don’t get to spend pursuing other hobbies (which I have been taking intermittent breaks to pursue, for the record). This is, in fact, why I have begun writing a post every two weeks or so, down from every week: there are simply other things in life I want to spend my time on. Being gritty about writing means I don’t get to be equally gritty about other things. In fact, if I were particularly gritty about writing I might not get to be gritty about anything else at all – not unless I wanted to stop being gritty about sleep and, even then, I could just devote that sleeping time to writing as well.

This is a problem for grit being useful because of a second issue: diminishing returns on practice. The first week, month, or year you spend learning a skill typically yields a more appreciable return than the second, third, and so on. Putting that into a quick example: if I started studying chess (a game I almost never play), I would see substantial improvements to my win rate in the first month. Let’s say 10%, to put a number on it. The next month of practice still increases my win rate, but not by quite as much, as there are fewer obvious mistakes left to fix; I go up another 5%. As this process continues, I might eventually spend a month of practice to increase my win rate by a mere fraction of a percent. While this dedicated practice does, on paper, make me better, the size of those rewards relative to the time investment needed to get them gets progressively smaller. At a certain point, it doesn’t make much sense to commit that time to chess when I could be learning Spanish or just spending the time with friends.
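
If you like, the same logic can be put into a toy model. Every number below is invented for illustration; the only assumption is that each month of practice closes a fixed fraction of the gap between your current win rate and some personal ceiling.

```python
# Toy model of diminishing returns on practice. All numbers invented:
# assume each month of study closes half of the remaining gap between
# the current win rate and a personal ceiling.
CEILING = 0.40      # hypothetical best win rate ever attainable
LEARN_RATE = 0.50   # fraction of the remaining gap closed per month

win_rate = 0.20     # hypothetical starting win rate
for month in range(1, 13):
    gain = (CEILING - win_rate) * LEARN_RATE
    win_rate += gain
    print(f"Month {month:2d}: win rate {win_rate:7.3%} (gained {gain:.3%})")
# Month 1 adds 10 points and month 2 adds 5, as in the example above;
# by month 12 a full month of effort adds about 0.005 of a point.
```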

This brings us nicely to the next point: the rate of improvement, both in terms of how quickly you learn and how far additional practice can push you, ought to depend on one’s biological potential (for lack of a better term). No matter how much time I spend practicing guitar, for instance, there are certain ceilings on performance I will not be able to break: perhaps it becomes physically impossible to play any faster while maintaining accuracy; perhaps some memory constraints come into play and I cannot remember everything I’ve tried to learn. We should expect grit to interact with potential in a certain way: if you don’t have the ability to achieve a particular task, being gritty about pursuing it is going to be time spent effectively banging your head against a brick wall. By contrast, the individual who possesses a greater potential for the task in question has a much higher chance of grit paying off. They can simply get more from practice.

Some people just have nicer ceilings than others

This is, of course, assuming the task is one that can actually be accomplished. If you’re very gritty about finding treasure buried in your backyard that doesn’t exist, you’ll spend a lot of time digging and none getting rich. Being gritty about achieving the impossible is a bad idea. But who’s to say what’s impossible? We usually don’t have access to enough information to say something cannot (or at least will not) be achieved, but we can often make some fairly educated guesses. Sticking with the music example: say you want to become a world-famous rockstar. You have the potential to perform and you’re very gritty about pursuing it. You spend years practicing, forming bands, writing songs, finding gigs, and so on. One problem you’re liable to encounter is that many other, similarly-qualified people are doing likewise, and there’s only so much room at the top. Even if you are all approximately as talented and gritty, there are ceiling effects at play, where being even grittier and more talented does not, by any means, guarantee more success. As I have mentioned before, the popularity of cultural products can be a fickle thing. It’s not just about the products you produce or what you can do.

We see this playing out in the world of academia today. As many have lamented, there seem to be too few academic jobs for all the PhDs being minted across the country. Being gritty about pursuing that degree – all the time, energy, and money spent earning it – turned out not to be a great idea for many who have done so. Sure, you can bet that just about everyone who achieved their dream job as a professor making a decent salary was pretty gritty about things; you have to be if you’re going to spend 10 or more years in higher education with little payoff and many challenges along the way. It’s just that lots of people who were about as gritty as those who got a job failed to do anything with their degree after they earned it. As this example shows, not only does the task need to be achievable, but the rewards for achieving it need to be both valuable and likely if grit is to pay off. If the rewards aren’t valuable (e.g., a job as an adjunct teaching 5 courses a semester for about as much as you’d make working minimum wage, all things considered), then pursuing them is a bad idea. If the rewards are valuable but unlikely (e.g., becoming a top-selling pop artist), then pursuing them is similarly a bad idea for just about everyone. There are better things to do with your time.

The closest most people will come to being a rockstar

This yields the following summary: for grit to be potentially useful, a task needs to be capable of being accomplished, you need the potential to accomplish it given enough time, the rewards of achieving it need to be large enough relative to the investment you put in, and the probability of achieving those rewards needs to be comparably high. While that leaves many tasks for which passionate persistence and practice might pay off (and many for which it will not), this utility always exists in the context of other people doing likewise. For that reason, beyond a certain ceiling of effort, more is not much of a guarantee of success. You can think of grit as – in many cases – something of a prerequisite for success rather than a great determinant of it. Finally, all of that needs to be weighed against the other things you could be doing with your time. Time spent being gritty about sports is time not spent being gritty about academics, which is time not spent being gritty about music, and so on.

If you want to reach your potential within a domain, there’s really no other option. You’ll need to invest lots of time and effort. Figuring out where that effort should go is the tricky part.

References: Duckworth, A., Peterson, C., Matthews, M., & Kelly, D. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality & Social Psychology, 92, 1087-1101.

Untitled Creativity Post

Creativity – much like intelligence – is a highly-valued trait. It is also – much like intelligence – a term that encompasses multiple abilities applied across a broad number of domains, which can result in some confusion over precisely what one means when the word is used. Since I wanted to think a bit about creativity today, a good starting point for this discussion would be to clarify what creativity refers to in terms of function and form. Being clear about these issues can help us avoid getting mired in topics related to creativity – like intelligence – but which are not creativity themselves. There’s nothing quite like definitional confusion when it comes to stagnating discussions; just ask gender.

These are all bathrooms now, and there still aren’t enough kinds

In terms of a good starting definition for thinking about creativity, I think we are lucky enough to have one available. Paraphrasing a bit from Plucker, Beghetto, and Dow (2004), creativity generally refers to the creation of something novel that manages to do something useful or appropriate. The former point is generally accepted in common usage: products or people viewed as creative are or do something that hasn’t been done quite that way before. Something new is being created (hence the term), rather than something being repeated or copied. The less-appreciated – but equally important – facet of the definition is the latter portion. There are a great many ways of creating something new without it being creative. You might, for instance, write a bunch of nouns on pieces of paper, mix them in a bag, then pull two at random and create a new product from them. Say you pulled out a piece that said “clock” and another that said “fish,” and so designed a clock with a dead fish nailed to the middle of it. While that design would be novel – at least I haven’t seen many clocks with attached fish – it wouldn’t be appropriate or useful in most senses of the word. There’s thus a difference between being creative and just being different, or even random. A quick examination of the lyrics to any Mars Volta song should highlight the importance of appropriateness when considering whether novelty is creative or just nonsense. Anyone can string words together in new ways, but that does not always (or even usually) make for a creative song.

Which brings me to another important point about problems and their solutions more generally: there is no such thing as a general-purpose problem and, accordingly, no such thing as a general-purpose solution. To place that into a quick example, if I asked you to design me a tool that “does useful things,” you would likely find that request a bit underspecified. What kind of useful things should it do? This is an important question to answer because tools designed to do one task well often do others poorly, if at all. A hammer might be good at driving a nail into wood, less good at applying paint to a wall, worse still at holding water, and entirely incapable of transporting you from point A to point B. The shape of a problem determines the shape of its solution, and as all problems have different shapes, so too must their solutions.

There are several implications that flow from this idea as it pertains to creativity. The first is that the difference between novelty and creativity can be more readily understood. If I told you I wanted a device to hold water, there are an infinite number of possible devices you could give me that don’t currently exist. However, very few of that infinite set would do the job well (a hammer or sieve would not) and, of those that do accomplish the task, fewer still would be an improvement on existing solutions. This is why “novel” alone does not translate into “creative.”

As seen on TV, since no store would ever stock it

Yet another implication is that, just as humans (or any other species I’m aware of) don’t appear to possess general-purpose learning mechanisms equally capable of learning anything, we shouldn’t expect creativity to be any singular mechanism within the mind that gets applied equally well to any problem. Those who are considered creative with respect to painting should not be expected to evidence that same degree of creativity when it comes to math or biology. It’s not likely that there’s a way to make people more creative across every domain they might encounter. After all, if creativity refers to the generation of more efficient and appropriate solutions to problems, asking that someone become more creative in general is like asking that they become better at effectively solving all types of problems, or at making connections between all areas of their brains. In keeping with the tool example from above, it would also be like asking that your water-holding device get better at solving all problems related to holding liquid (small and large quantities, of varying types, for varying lengths of time, etc.), which doesn’t work well in practice; if it did, we wouldn’t need oil drums and cooking pots and measuring cups. We could just use one device for all of those tasks. Good luck using a 40-gallon drum to measure out a quarter cup of water effectively, though.

This expectation has been demonstrated empirically as well. Baer (1996) examined what effects training poetry-relevant creativity skills would have on both writing poetry and writing short stories, an ostensibly-related domain. In this case, approximately 75 students were trained on divergent-thinking skills relevant to poetry, including thinking of words that sound the same as a target word, words that work as metaphors, or inventing words that are suggestive of other things. Another 75 students did not receive this training, serving as a control group. All the students then wrote poetry and short stories that were evaluated by independent judges for creativity on a 1-to-5 scale. As it turned out, the poems written by the trained students did end up more creative (3.0 vs 2.2), a gain of about 0.8 points. By contrast, the short stories in the trained group saw a substantially smaller gain of 0.3 points (2.8 vs 2.5). Creativity training did not have an equivalent effect across domains, even though the domains were, in many respects, closely related.

The final implication I wanted to cover when it comes to creativity concerns the purpose of solutions in the first place. We seek solutions to problems, in large part, because solutions are time savers. Once you have learned how to complete a task, you don’t need to relearn how to complete it each time you attempt it. Once I learned how to commute to work, I no longer needed to figure out how to get there each day, which saves me time. Chefs working in kitchens don’t need to relearn how to make dishes (or even what dishes they will be making) every time they come into work, allowing them to complete their tasks with greater ease in shorter amounts of time. By contrast, creativity can be a time-consuming process, where new candidate solutions need to be developed and tested against existing alternatives, then learned and mastered. In other words, creativity is costly both in the time and energy it takes to develop something new and in the sense that all the time you spend creating is time spent not applying existing solutions to a problem. The probability of your creative endeavors paying off at all, as well as the degree to which they would improve upon existing alternatives, needs to be weighed against the time it takes to develop them.
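That weighing can be phrased as a simple explore-versus-exploit calculation. Below is a toy model of it in Python; every number here (the success probability, output rates, and development time) is an assumption chosen for illustration, not anything drawn from the research discussed in this post.

```python
# A toy explore-vs-exploit model of the trade-off described above.
# All numbers are hypothetical illustrations.

def output_if_exploiting(rate_old: float, days: float) -> float:
    """Total output from simply applying the existing solution."""
    return rate_old * days

def output_if_innovating(rate_old: float, rate_new: float,
                         p_success: float, dev_days: float,
                         days: float) -> float:
    """Expected output when the first dev_days are spent creating:
    nothing gets produced while developing; afterward you work at the
    new rate if the attempt succeeded, or the old rate if it failed."""
    remaining_days = days - dev_days
    expected_rate = p_success * rate_new + (1 - p_success) * rate_old
    return expected_rate * remaining_days

DAYS = 100
exploit = output_if_exploiting(rate_old=1.0, days=DAYS)
innovate = output_if_innovating(rate_old=1.0, rate_new=1.5,
                                p_success=0.5, dev_days=20, days=DAYS)

# 100.0 vs 100.0: under these assumptions, a coin-flip shot at a 50%
# better method only just covers the 20 days of lost output.
print(exploit, innovate)
```

Nudge any of those numbers and the answer flips, which is the point: whether being creative beats applying what you already know depends on the size of the payoff, its probability, and what the search itself costs you.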

Thanks for all your hard work and effort. Next!

But what if your creative endeavors are successful? Well, first of all, good for you if they are; achieving that much is no easy task. Assuming they are successful, you now have a new, even more efficient solution to the problem you were facing. What are you going to do now? You could continue your creative search for a solution that’s even better than the one you just came up with, or you could apply your new solution to the problem. Remember: solutions are time savers. If you spend all your time innovating and none of it actually applying what you came up with, then you haven’t really saved time. In fact, if you aren’t going to then apply that solution, searching for it seems rather pointless. The great irony here, then, is that an end goal of creativity is effectively to not have to be creative anymore, at least with respect to that problem.

The more empirical end of this suggestion is represented by the finding that creativity appears to decrease with education, at least among engineering undergraduates. Sola et al (2017) examined a sample of approximately 60 freshman and senior engineering students. Creativity was assessed through a thinking-drawing procedure, in which participants were presented with an incomplete picture and asked to complete it in any manner they wished. These drawings were subsequently assessed across 15 factors, with the freshmen, on average, scoring higher in creativity on several of them.

Nothing quite like the tried and true

To be clear, then, some people will generally be more creative than others, just like some people will generally be more intelligent than others. In that sense, you could consider some people creative. That does not mean their creativity will extend to all domains of life, however, or even that their creativity will extend throughout the same domains across their life. When you have a solution to a problem, the need to seek out a new solution is relatively lower, and so creativity should decline.

An implication of this framework would seem to be that if you want to keep creative output high, you need to constantly be facing problems that are perceived to be notably different from those already encountered (and the solutions to those problems need to be worth finding; people likely won’t be too motivated to be creative if a new solution would only yield minimal benefits). That said, there is also a risk in making a constant stream of problems seem novel: it suggests that the creative solutions you develop today are not liable to serve you well in the future, as the problems you will face tomorrow are not the same ones you are facing now. If solutions are not perceived to be useful in the future, creative efforts may be scaled back accordingly. Striking that balance between novelty and predictability may prove key in determining subsequent creative efforts.

References: Baer, J. (1996). The effect of task-specific divergent thinking training. The Journal of Creative Behavior, 30, 183-187.

Sola, E., Hoekstra, R., Fiore, S., & McCauley, P. (2017). An investigation of the state of creativity and critical thinking in engineering undergraduates. Creative Education, 8, 1495-1522.