Is Makeup A Valid Cue Of Sociosexuality?

Nothing like a good makeup exam

Being wrong is costly. If I think you’re aggressive when you’re not, I will behave inappropriately around you and incur costs I need not face. If I think you can help me when you can’t, I will hinder my own goals and give up my search for valuable assistance. Nevertheless, people are wrong constantly. Being wrong itself isn’t that unusual, as it takes the proper cognitive faculties, time, and energy to be right. The world is just a messy place, and there are opportunity costs to gathering, scrutinizing, and processing information, as well as diminishing returns on that search. Being wrong is costly, but so is being right, and those costs need to be balanced against each other, given limited resources. What is unusual is when people are systematically wrong about something; when they’re wrong in the same particular direction. If, say, 90% of people believe something incorrectly in the same way, that’s certainly a strange state of affairs that requires special kinds of explanations.

As such, if you believe people are systematically wrong about something, there are two things you should probably do: (1) earnestly assess whether your belief about them being wrong is accurate – since it’s often more likely you’re wrong than everyone else is – and then, if they actually are wrong, (2) try to furnish the proper explanation for that state of affairs and test it.

Putting that in an example I’ve discussed before, some literature claims that men over-perceive women’s sexual interest (here). In other words, the belief here is that many men are systematically wrong in the same way; they’re making the same error. One special explanation provided for this was that men over-perceiving sexual interest would lead them to approach more women (they otherwise wouldn’t) and ultimately get more mating opportunities as a result. So men are wrong because being wrong brings more benefits than costs. There are some complications with that explanation, however. First, why wouldn’t we expect men to perceive women’s interest accurately (she’s not interested in me) and approach them anyway (the odds are low, but I might as well go for it)? That would lead to the same end point (approaching lots of women) without the inaccuracy that might have other consequences (like failing to pursue a woman who’s actually interested because you mistakenly believe a different woman is interested when she isn’t). The special explanation also falls apart when you consider that when you ask women about other women’s sexual interest, you get the same result as the men. So either men and women are over-perceiving women’s sexual interest, or perhaps they aren’t wrong. Perhaps individual women are under-reporting their own interest for some social reasons. Maybe the women’s self-reports are inaccurate (consciously or not), rather than everyone else being wrong about them. The explanation that one person is wrong, rather than everyone else, feels more plausible.

Speaking of women’s sexual interest and people being wrong, let’s talk about a new paper touting the idea that everyone is wrong about women’s makeup usage. Specifically, lots of people seem to be using makeup usage as a cue to a woman’s short-term sexual interest, and the researchers believe they’re all wrong to do so. That makeup is an invalid cue of sociosexuality.

Aren’t they all…

This was highlighted in three studies, which I’ll cover quickly. In the first, 69 women were photographed with and without their day-to-day makeup. Raters – 182 of them – judged those pictures in terms of (1) how much makeup they felt the women were wearing, (2) how attractive the faces were, and (3) how much they felt the women pictured would be comfortable with and enjoy having casual sex with different partners; a measure of sociosexuality. The results showed that male (d = 0.64) and female (d = 0.88) raters judged women with makeup as more attractive than the same women without, and also judged the women wearing makeup as more comfortable with casual sex. For those curious, this latter difference was larger for female raters (d = 1.14) than male ones (d = 0.32). Putting that into numbers, men rated women wearing makeup as about 0.2 points more likely to enjoy casual sex on a scale from 1-9; for women, this difference was closer to 0.5 points. Further, men’s perceptions of women’s interest in casual sex seemed to be driven less by makeup per se than by a woman’s perceived attractiveness (and since makeup made women look more attractive, it also made them look more interested in casual sex). The primary finding here, however, is that the perception was demonstrated: people (men and women) use women’s makeup usage as a cue to their sociosexuality.
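As a side note on interpreting effect sizes of this magnitude: assuming roughly normal rating distributions, a Cohen's d can be converted into a common-language effect size, the probability that a randomly chosen rating of a face with makeup exceeds a randomly chosen rating of the same face without it. A quick sketch (my own illustration; the paper doesn't report this statistic):

```python
from scipy.stats import norm

def common_language(d):
    # Probability that a random draw from the "makeup" distribution
    # exceeds a random draw from the "no makeup" distribution,
    # assuming two equal-variance normal distributions separated by d.
    return norm.cdf(d / 2**0.5)

female_raters = common_language(1.14)  # ~79% of the time
male_raters = common_language(0.32)    # ~59% of the time
```

So even the larger d = 1.14 implies a randomly chosen made-up face would be rated as more open to casual sex than a bare one only about four times in five; substantial overlap between the two distributions remains.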

Also, men were worse at figuring out when women weren’t wearing any makeup, compared to women, likely owing to a lack of experience with the topic. Here, being wrong isn’t surprising.

The second study asked the women wearing the makeup themselves to answer questions about their own sociosexuality (using several items, rather than a single question). They were also asked about how much time they spent applying makeup and how much money they spent on it each month. The primary result here was a reported lack of correlation between women’s scores on the sociosexuality questions and the time they spent applying makeup. In other words, people thought makeup was correlated with sexual attitudes and behaviors, but it wasn’t. People were wrong, but in predictable ways. This ought to require a special kind of explanation, and we’ll get to that soon.

The final study examined the relationship between people’s perceptions of a woman’s sociosexuality and her own self-reports of it. Both men and women again seemed to get it wrong, with negative correlations showing up between perceived and self-reported sociosexuality. Both went in a consistent direction, though only the male correlations were significant (male raters about r = -0.33; female raters r = -0.21). Once attractiveness was controlled for, however, the male correlation was similarly non-significant and comparable to women’s ratings (average r = -0.22).
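For readers unfamiliar with "controlling for" a variable in this context: it amounts to a partial correlation, the association between perceived and self-reported sociosexuality after removing the linear influence of rated attractiveness from both. A minimal sketch of the computation, with illustrative numbers of my own (the paper doesn't report the full correlation matrix):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    # First-order partial correlation between x and y controlling for z,
    # computed from the three pairwise Pearson correlations.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical values: x = perceived sociosexuality,
# y = self-reported sociosexuality, z = rated attractiveness.
adjusted = partial_corr(r_xy=-0.33, r_xz=0.60, r_yz=-0.30)
```

With these made-up inputs the raw r = -.33 attenuates to roughly -.20, mirroring the kind of shift described once attractiveness is taken into account.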

The general pattern of results, descriptively, is that men and women seem to perceive women wearing makeup as being more interested in casual sex than women not wearing makeup. However, the women themselves don’t self-report being more interested in casual sex; if anything, they report being less interested in it than people perceive. Isn’t it funny how so many people are consistently and predictably wrong about this? Perhaps. Then again, I think there’s more to say about the matter which isn’t explored in much detail within the paper.

“This paper is an invalid cue of the truth”

The first criticism of this research that jumped out at me is that the researchers only recruited women who used makeup regularly to be photographed, rated, and surveyed. In that context, they report no relationship between makeup use and sociosexuality (which we’ll get to in a minute, as that’s another important matter). Restricting their sample in this way naturally reduces the variance in makeup use, which might make it harder to detect a real relationship that exists in the wider population. For instance, if I were curious whether height is an important factor in basketball skill, I might find different answers to this question if I surveyed the general population (which contains lots of tall and short people) than if I only surveyed professional basketball players (who all tend to be taller than average; often substantially so). To the authors’ credit, they do mention this point…in their discussion, as more of an afterthought. This suggests to me the point was raised by a reviewer and only added to the paper after the fact, as awareness of this sampling issue would usually encourage researchers to address the question in advance, rather than merely note at the end that they failed to do so. So, if a relationship exists between makeup use and interest in casual sex, they might have missed it through selective sampling.
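The range-restriction worry is easy to demonstrate with a simulation: even when a correlation genuinely exists in the full population, sampling only heavy users of the predictor shrinks the observed correlation. A sketch with invented numbers (a true r of about .5, sample restricted to the top ~16% of users):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_r = 0.5

# A population where "makeup use" and "sociosexuality" really do
# correlate at about r = .5.
makeup_use = rng.standard_normal(n)
sociosexuality = true_r * makeup_use + np.sqrt(1 - true_r**2) * rng.standard_normal(n)
full_r = np.corrcoef(makeup_use, sociosexuality)[0, 1]

# Recruit only regular users (top ~16% of makeup use), roughly what
# sampling only women who wear makeup daily amounts to.
heavy = makeup_use > 1.0
restricted_r = np.corrcoef(makeup_use[heavy], sociosexuality[heavy])[0, 1]
```

In runs like this the full-population correlation comes out near .5 while the restricted-sample correlation drops to roughly half that, despite nothing about the underlying relationship changing.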

The second large criticism concerns the actual reported results, and how narrowly a significant finding was missed. I find it noteworthy how the researchers interpret the correlation between women’s self-reported time applying makeup and their self-reported sociosexuality. In the statistical sense, the correlation is about as close to the significance threshold as possible: r = .25, p = 0.051. As the cut-off for significance is 0.05 or lower, this is a relationship that could (and likely would) be interpreted as evidence consistent with the possibility that a link between makeup usage and sociosexuality does exist, if one was looking for a connection; that makeup use is, potentially, a valid cue of sexual interests and behaviors. Nevertheless, the authors interpret it as “not significant” and title their paper accordingly (“Makeup is a FALSE signal of sociosexuality”, emphasis mine). That’s not wrong, in the statistical sense. It also feels like a rather bold description for data that is a hair’s breadth away from supporting the opposite conclusion, and suggests to me the authors had a rather specific hypothesis going into this. Again, to their credit, the authors note there is a “trend” there, but that stands in stark contrast to their rather dramatic title and repeated claims that makeup is an invalid cue. In fact, every instance of them noting there’s a trend between makeup use and sociosexuality seems to be invariably followed by a claim that the results suggest there is no relationship.
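For context on just how borderline r = .25, p = .051 is: under the standard two-tailed t-test of a Pearson correlation, that pair of numbers pins down the sample size fairly tightly. A quick back-calculation (the implied n is my inference, not a figure reported in the paper):

```python
import numpy as np
from scipy import stats

def p_from_r(r, n):
    # Two-tailed p-value for a Pearson correlation r based on n pairs,
    # using the t-transform t = r * sqrt(n - 2) / sqrt(1 - r^2).
    df = n - 2
    t = r * np.sqrt(df) / np.sqrt(1 - r**2)
    return 2 * stats.t.sf(abs(t), df)

# Scan plausible sample sizes for the one that best reproduces
# the reported r = .25, p = .051.
implied_n = min(range(20, 200), key=lambda n: abs(p_from_r(0.25, n) - 0.051))
```

The scan lands on an n in the low 60s, and with a handful more participants (or a trivially larger r) the same correlation would have crossed p < .05; "false signal" is a strong title to hang on that margin.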

Further, there is a point never really discussed at all, which is that women might under-report their own sociosexuality, as per the original research I mentioned, perhaps because they’re wary of incurring social costs from being viewed as promiscuous. In many domains, I would default to the assumption that the self-reports are somewhat inaccurate. For example, when I surveyed women about their self-perceived attractiveness (from 1-10) several years back, not a single one rated herself below a 6 (out of 10), and the average was higher than that. Either I had managed to recruit a sample of particularly beautiful women (possible), or people are interested in you believing they’re better than they actually are (more likely). After all, if you believe something inaccurately flattering about a person, while it may be a cost to you, it’s a benefit to them. So what’s more likely: that everyone believes something that’s wrong about others, or that some people misrepresent themselves in a flattering light?

Doesn’t get much more flattering than that

As a final note on explaining these findings, it is worth exploring the possibility that a woman’s physical attractiveness/makeup use actually is correlated with relatively higher sociosexuality (despite the authors’ claims this isn’t true). In other words, people aren’t making a perceptual mistake – the general correlation holds true – but the current sample missed it for whatever reason (even if just barely). Indeed, there is some evidence that more attractive women score slightly higher on measures of sociosexuality (N = 226; Fisher et al., 2016; ironically, published in the same journal two years prior). While short-term encounters do carry some adaptive costs for women, this small correlation might arise because more physically-attractive women receive offers for short-term encounters that can better offset those costs. At the very least, it could be expected that, because attractive women carry more value in the mating marketplace, these offers are, in principle, more numerous. Increasing numbers of better options should translate into greater comfort and interest.

If that is true – that attractiveness does correlate in some small way with sociosexual orientation – then this could also help explain the (also fairly small) correlation between makeup usage and perceived sociosexuality: people view attractive women as more open to short-term encounters, makeup artificially increases attractiveness, and so people judge women wearing makeup as more open to short-term encounters than they are.

We can even go one layer deeper: women generally understand that makeup makes them look more attractive. They also understand that the more attractive they look, the more positive mating attention they’ll likely receive. Applying makeup, then, can be an effort to attract mating attention, in much the same way that I might wear a nice suit if I was going on a job interview. However, neither the suit nor the makeup is a “smart bomb”, so to speak. I might wear a suit to attract the attention of specific employers, but just because I’m wearing a suit that doesn’t mean I want any job (and if Taco Bell wanted to hire me, I might be choosy and say “No thanks”). Similarly, a woman wearing makeup might be interested in attracting mating attention from specific sources – and be perceived as being more sexually motivated, accordingly – without wishing to send a global signal of sexual interest to all available parties. That latter part just happens as a byproduct. Nevertheless, in this narrower sense, makeup usage could rightly be perceived as a sexual signal; perhaps one that ends up being perceived a bit more broadly than intended.

Or perhaps it’s not even perceived more broadly. The question asked of raters in the study was whether a woman would be comfortable with and enjoy having casual sex with different partners; the nature of those partners is left unspecified. “Different” doesn’t mean “just anyone”. Women who are interested in makeup might be slightly more interested in these pursuits, on average…but only so long as the partners are suitably attractive.

References: Batres, C., Russell, R., Simpson, J., Campbell, L., Hansen, A., & Cronk, L. (2018). Evidence that makeup is a false signal of sociosexuality. Personality & Individual Differences, 122, 148-154.

Fisher, C., Hahn, A., DeBruine, L., & Jones, B. (2016). Is women’s sociosexual orientation related to their physical attractiveness? Personality & Individual Differences, 101, 396-399.

Keep Making That Face And It’ll Freeze That Way

Just keep smiling and scare that depression away

Time to do one of my favorite things today: talk about psychology research that failed to replicate. Before we get into that, though, I want to talk a bit about our emotions to set the stage.

Let’s say we wanted to understand why people found something “funny.” To do so, I would begin in a very general way: some part(s) of your mind functions to detect cues in the environment that are translated into psychological experiences like “humor.” For example, when some part of the brain detects a double meaning in a sentence (“Did you hear about the fire at the circus? It was intense”) the output of detecting that double meaning might be the psychological experience of humor and the physiological display of a chuckle and a grin (and maybe an eye-roll, depending on how you respond to puns). There’s clearly more to humor than that, but just bear with me.

This leaves us with two outputs: the psychological experience of something being funny and the physiological response to those funny inputs. The question of interest here (simplifying a little) is which is causing which: are you smiling because you found something funny, or do you find something funny because you’re smiling?

Intuitively the answer feels obvious: you smile because you found something funny. Indeed, this is what the answer needs to be, theoretically: if some part of your brain didn’t detect the presence of humor, the physiological humor response makes no sense. That said, the brain is not a singular organ, and it is possible, at least in principle, that the part of your brain that outputs the conscious experience of “that was funny” isn’t the same piece that outputs the physiological response of laughing and smiling.

The other part of the brain hasn’t figured out that hurt yet

In other words, there might be two separate parts of your brain that function to detect humor independently. One functions before the other (at least sometimes), and generates the physical response. The second might then use that physiological output (I am smiling) as an input for determining the psychological response (That was funny). In that way, you might indeed find something funny because you were smiling.

This is what the Facial Feedback Hypothesis proposes, effectively: the part of your brain generating these psychological responses (That was funny) uses a specific input, which is the state of your face (Am I already smiling?). That’s not the only input it uses, of course, but it should be one of them. As such, if you make people do something that causes their face to resemble a smile (like holding a pen using only their teeth, not their lips), they might subsequently find jokes funnier. That was just the result reported by Strack, Martin, & Stepper (1988), in fact.

But why should it do that? That’s the part I’m getting stuck on.

Now, as it turns out, your brain might not do that at all. As I mentioned, this is a post about failures to replicate and, recently, the effect just failed to replicate across 17 labs (approximately 1,900 participants) in a pre-registered attempt. You can read more about the details here. You can also read the original author’s response here (with all the standard suggestions of “we shouldn’t rush to judgment about the effect not really replicating because…”), which I’ll get to in a minute.
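As an aside on how a registered replication report turns 17 labs into a single verdict: each lab's effect estimate is typically pooled by inverse-variance weighting, so more precise labs count for more. A minimal fixed-effect sketch with made-up per-lab numbers (not the RRR's actual data):

```python
import numpy as np

# Hypothetical per-lab effects: mean funniness-rating difference
# ("teeth" minus "lips" pen condition) and their standard errors.
effects = np.array([0.05, -0.10, 0.12, 0.01, -0.03])
ses = np.array([0.15, 0.12, 0.18, 0.10, 0.14])

# Fixed-effect (inverse-variance) pooling: weight each lab by 1/SE^2.
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
```

The point of the exercise: pooling shrinks the standard error well below any single lab's, which is why a multi-lab null (a pooled effect near zero) is far harder to wave away than one failed study.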

What I wanted to do first, however, is think about this effect on more of a theoretical level, as the replication article doesn’t do so.

Publish first; add theory later

One major issue with this facial feedback hypothesis is that similar physiological responses can underpin very different psychological ones. My heart races not only when I’m afraid, but also when I’m working out, when I’m excited, or when I’m experiencing love. I smile when I’m happy and when something is funny (even if the two things tend to co-occur). If some part of your brain is looking to use the physiological response (heart rate, smile, etc.) to determine emotional state, then it’s facing an under-determination problem. A hypothetical inner monologue would go something like this: “Oh, I have noticed I am smiling. Smiles tend to mean something is funny, so what is happening now must be funny.” The only problem there is that if I were smiling because I was happy – let’s say I just got a nice piece of cake – experiencing humor and laughing at the cake is not the appropriate response.

Even worse, sometimes physiological responses go the opposite direction from our emotions. Have you ever seen videos of people being proposed to or reuniting with loved ones? In such situations, crying doesn’t appear uncommon at all. Despite this, I don’t think some part of the brain would go, “Huh. I appear to be crying right now. That must mean I am sad. Reuniting with loved ones sure is depressing and I better behave as such.”

Now you might be saying that this under-determination isn’t much of an issue because our brains don’t “rely” on the physiological feedback alone; it’s just one of many sources of inputs being used. But then one might wonder whether the physiological feedback is offering anything at all.

The second issue is one I mentioned initially: this hypothesis effectively requires that at least two different cognitive mechanisms are responding to the same event. One is generating the physiological response and the other the psychological response. This is a requirement of the feedback hypothesis, and it raises additional questions: why are two different mechanisms trying to accomplish what is largely the same task? Why is the emotion-generating system using the output of the physiological-response system rather than the same set of inputs? This seems not only redundant, but prone to additional errors, given the under-determination problem. I understand that evolution doesn’t result in perfection when it comes to cognitive systems, but this one seems remarkably clunky.

Clearly the easiest way to determine emotions. Also, Mousetrap!

There’s also the matter of the original author’s response to the failures to replicate, which only adds more theoretically troublesome questions. The first criticism of the replications is that psychology students may differ from non-psychology students in showing the effect, which might be due to psychology students knowing more about this kind of experiment going into it. In this case, awareness of this effect might make it go away. But why should it? If the configuration of your face is useful information for determining your emotional state, simple awareness of that fact shouldn’t change the information’s value. If one realizes that the information isn’t useful and discards it, then one might wonder when it’s ever useful. I don’t have a good answer for that.

Another criticism focused on the presence of a camera (which was not a part of the initial study). The argument here is that the camera might have suppressed the emotional responses that otherwise would have obtained. This shouldn’t be a groundbreaking suggestion on my part, but smiling is a signal for others; not you. You don’t need to smile to figure out if you’re happy; you smile to show others you are. If that’s true, then claiming that this facial feedback effect goes away in the presence of being observed by others is very strange indeed. Is information about your facial structure suddenly not useful in that context? If the effects go away when being observed, that might demonstrate that not only are such feedback effects not needed, but they’re also potentially not important. After all, if they were important, why ignore them?

In sum, the facial feedback hypothesis should require the following to be generally true:

  • (1) One part of our brain should successfully detect and process humor, generating a behavioral output: a smile.
  • (2) A second part of our brain also tries to detect and process humor, independent of the first, but lacks access to the same input information (why?). As such, it uses the outputs of the initial system to produce subsequent psychological experiences (that then do what? The relevant behavior already seems to be generated so it’s unclear what this secondary output accomplishes. That is, if you’re already laughing, why do you need to then experience something as funny?)
  • (3) This secondary mechanism has the means to differentiate between similar physiological responses in determining its own output (fear/excitement/exercise all create overlapping kinds of physical responses, happiness sometimes makes us cry, etc. If it didn’t differentiate it would make many mistakes, but if it can already differentiate, what does the facial information add?).
  • (4) Finally, that this facial feedback information is more or less ignorable (consciously or not), as such effects may just vanish when people are being observed (which was most of our evolutionary history around things like humor) or if they’re aware of their existence. (This might suggest the value of the facial information is, in a practical sense, low. If so, why use it?)

As we can see, that seems rather overly convoluted and leaves us with more questions than it answers. If nothing else, these questions present a good justification for undertaking deeper theoretical analyses of the “whys” behind a mechanism before jumping into studying it.

References: Strack, F., Martin, L. L., & Stepper, S. (1988). Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology, 54, 768–777.

Wagenmakers, E.-J., et al. (2016). Registered replication report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11, https://doi.org/10.1177/1745691616674458

Getting Off Your Phone: Benefits?

The videos will almost be as good as being there in person

If you’ve been out to any sort of live event lately – be it a concert or other similar gathering; something interesting – you’ll often find yourself looking out over a sea of camera phones (perhaps through a camera yourself) in the audience. This has often given me a sense of unease, for two reasons: first, I’ve taken such pictures in the past and, generally speaking, they come out like garbage. Turns out it’s not the easiest thing in the world to get clear audio in a video at a loud concert, or even a good picture if you’re not right next to the stage. But, more importantly, I’ve found such activities detract from the experience; either because you’re spending time on your phone instead of just watching what you’re there to see, or because it signals an interest in showing other people what you’re doing rather than just doing it and enjoying yourself. Some might say all those people taking pictures aren’t quite living in the moment, so to speak.

In fact, it has been suggested (Soares & Storm, 2018) that the act of taking a picture can actually make your memory for the event worse at times. Why might this be? There are two candidate explanations that come to mind: first, and perhaps most intuitively, screwing around on your phone is a distraction. When you’re busy trying to work the camera and get the right shot, you’re just not paying attention to what you’re photographing as much. It’s a boring explanation, but perfectly plausible, just like how texting makes people worse drivers; their attention is simply elsewhere.

The other explanation is a bit more involved, but also plausible. The basics go like this: memory is a biologically-costly thing. You need to devote resources to attending to information, creating memories, maintaining them, and calling them to mind when appropriate. If we remembered everything we ever saw, for instance, we would likely be devoting lots of resources to ultimately irrelevant information (no one really cares how many windows each building you pass on your way home from work has, so why remember it?), and finding the relevant memory amidst a sea of irrelevant ones would take more time. Those who store memories efficiently might thus be favored by selection pressures as they can more quickly recall important information with less investment. What does that have to do with taking pictures? If you happen to snap a picture, you now have a resource you could later consult for details. Rather than store this information in your head, you can just store it in the picture and consult the picture when needed. In this sense, the act of taking a picture may serve as a proximate cue to the brain that information needs to be attended to less deeply and committed less firmly to memory.

Too bad it won’t help everyone else forget about your selfies

Worth noting is that these explanations aren’t mutually exclusive: it could both be true that taking a picture is a cue you don’t need to remember information as well and that taking pictures is distracting. Nevertheless, both could explain the same phenomenon, and if you want to test to see if they’re true, you need a way of differentiating them; a context in which the two make opposing predictions about what would happen. As a spoiler warning, the research I wanted to cover today tries to do that, but ultimately fails at the task. Nevertheless, the information is still interesting, and appreciating why the research failed at its goal is useful for future designs, some of which I will list at the end.

Let’s begin with what the researchers did: they followed a classic research paradigm in this realm and had participants take part in a memory task. Participants were shown a series of images and then given a test about them to see how much they remembered. The key manipulation was whether participants watched without taking pictures, took a picture of each target before studying it, or took a picture and deleted it before studying the target. The thinking here was that – if the efficiency explanation was true – participants who took pictures in a way they knew they wouldn’t be able to consult later – such as when the pictures are Snapchatted or deleted – would instead commit more of the information to memory. If you can’t rely on the camera to have the pictures, it’s an unreliable source of memory offloading (the official term), and so we shouldn’t offload. By contrast, if the mere act of taking the picture was distracting and interfered with memory because of that, whether the picture was deleted or not shouldn’t matter. The simple act of taking the picture should be what causes the memory deficits, and similar deficits should be observed regardless of whether the picture was saved or deleted.

Without going too deeply into the specifics, this is basically what the researchers found: when participants had merely taken a picture – regardless of whether it was deleted or stored – the memory deficits were similar. People remembered the images better when they weren’t taking pictures. Does this suggest that taking pictures creates an attention problem for forming memories, rather than an offloading one?

Maybe the trash can is still a reliable offloading device

Not quite, and here’s why: imagine an experiment where you were measuring how much participants salivated. You think that the mere act of cooking will get people to salivate, and so construct two conditions: one in which hungry people cook and then get to eat the food after, and another in which hungry people cook the food and then throw it away before they get to eat (and they know in advance they will be throwing it away). What you’ll find in both cases is that people will salivate when cooking because the sights and smells of the food are proximate cues of getting to eat. Some part of their brains is responding to those cues that signal food availability, even if those cues do not ultimately correspond to their ability to eat it in the future. The part of the brain that consciously knows it won’t be getting food isn’t the same part responding to those proximate cues. While one part of you understands you’ll be throwing the food away, another part disagrees and thinks, “these cues mean food is coming,” and you start salivating anyway because of it.

This is basically the same problem the present research ran into. Taking a picture may be a proximate cue that information is stored somewhere else and so you don’t need to remember it as well, even if that part of the brain that is instructed to delete the picture believes otherwise. We don’t have one mind, but rather a series of smaller minds that may all be working with different assumptions and sets of information. Like a lot of research, then, the design here focuses too heavily on what people are supposed to consciously understand, rather than on what cues the non-conscious parts of the brain are using to generate behavior.

Indeed, the authors seem to acknowledge as much in their discussion, writing the following:

“Although the present results are inconsistent with an “explicit” form of offloading, they cannot rule out the possibility that through learned experience, people develop a sort of implicit transactive memory system with cameras such that they automatically process information in a way that assumes photographed information is going to be offloaded and available later (even if they consciously know this to be untrue). Indeed, if this sort of automatic offloading does occur then it could be a mechanism by which photo-taking causes attentional disengagement”

All things considered, that’s a good passage, but one might wonder why that passage was saved for the end of their paper, in the discussion section. Imagine instead that this passage appeared in the introduction:

“While it is possible that operating a camera to take a picture disrupts participants’ attention and results in a momentary encoding deficit, it is also completely possible that the mere act of taking a picture is a proximate cue used by the brain to determine how thoroughly (largely irrelevant) information needs to be encoded. Thus, our experiment doesn’t actually differentiate between these alternative hypotheses, but here’s what we’re doing anyway…”

Does your interest in the results of the paper go up or down at that point? Because that would effectively be the same thing the discussion section said. As such, it seems probable that the discussion passage was added to the paper after the fact, perhaps at a reviewer’s request. In other words, the researchers probably didn’t think the idea through as fully as they might have liked. With that in mind, here are a few other experimental conditions they could have run that would have been better at separating the hypotheses:

  • Have participants do something with a phone that isn’t taking a picture to distract themselves. If this effect isn’t picture-specific, but people simply remember less when they’ve been messing around on a phone (like typing out a word before viewing the target), then the attention hypothesis would look better, especially if the impairments to memory are effectively identical.
  • Have an experimenter take the pictures instead of the participant. That way participants would not be distracted by using a phone at all, but still have a cue that the information might be retrievable elsewhere. However, the experimenter could also be viewed as a source of information themselves, so there could be another condition where an experimenter is simply present doing something that isn’t taking a picture. If an experimenter taking a picture results in worse memory as well, then it might be something about the knowledge of a picture in general causing the effect.
  • Better yet, if messing around with the phone is only temporarily disrupting encoding, then having participants take a picture of the target briefly and then wait a period (say, a minute) before viewing the target for the 15 seconds proper should help differentiate the two hypotheses. If the mere act of taking a picture in the past (whether deleted or not) causes participants to encode information less thoroughly because of proximate cues for efficient offloading, then this minor time delay shouldn’t alleviate those memory deficits. By contrast, if messing with the phone is just distracting people momentarily, the time delay should help counteract the effect.

These are all productive avenues that could be explored in the future for creating conditions where these hypotheses make different predictions, especially the first and third ones. Again, both could be true, and that could show up in the data, but these designs give the opportunity for that to be observed.

And, until the research is conducted, do yourself a favor and enjoy your concerts instead of viewing them through a small phone screen. (The caveat here is that it’s unclear whether such results would generalize, as in real life people decide what to take pictures of, rather than taking pictures of things they probably don’t really care about).

References: Soares, J. & Storm, B. (2018). Forgot in a flash: A further investigation of the photo-taking-impairment effect. Journal of Applied Research in Memory & Cognition, 7, 154-160

Making A Great Leader

Selfies used to be a bit more hardcore

If you were asked to think about what makes a great leader, there are a number of traits you might call to mind, though what traits those happen to be might depend on what leader you call to mind: Hitler, Gandhi, Bush, Martin Luther King Jr, Mao, Clinton, or Lincoln were all leaders, but seemingly very different people. What kind of thing could possibly tie all these different people and personalities together under the same conceptual umbrella? While their characters may have all differed, there is one thing all these people shared in common and it’s what makes anyone anywhere a leader: they all had followers.

Humans are a social species and, as such, our social alliances have long been key to our ability to survive and reproduce over our evolutionary history (largely based around some variant of the point that two people are better at beating up one person than a single individual is; an idea that works with cooperation as well). While having people around who were willing to do what you wanted has clearly been important, this perspective on what makes a leader – possessing followers – turns the question of what makes a great leader on its head: rather than asking about what characteristics make one a great leader, you might instead ask what characteristics make one an attractive social target for followers. After all, while it might be good to have social support, you need to understand why people are willing to support others in the first place to fully understand the matter. If it was all cost to being a follower (supporting a leader at your own expense), then no one would be a follower. There must be benefits that flow to followers to make following appealing. Nailing down what those benefits are and why they are appealing should better help us understand how to become a leader, or how to fall from a position of leadership.

With this perspective in mind, our colorful cast of historical leaders suddenly becomes more understandable: they vary in character, personality, intelligence, and political views, but they must have all offered their followers something valuable; it’s just that whatever that something(s) was, it need not be the same something. Defense from rivals, economic benefits, friendship, the withholding of punishment: all of these are valuable resources that followers might receive from an alliance with a leader, even from the position of a subordinate. That something may also vary from time to time: the leader who got his start offering economic benefits might later transition into one who also provides defense from rivals; the leader who is followed out of fear of the costs they can inflict on you may later become a leader who offers you economic benefits. And so on.

“Come for the violence; stay for the money”

The corollary point is that features which fail to make one appealing to followers are unlikely to be the ones that define great leaders. For example – and of relevance to the current research on offer – gender per se is unlikely to define great leaders because being a man or a woman does not necessarily offer much to many followers. Traits associated with them might – like how those who are physically strong can help you fight against rivals better than someone who is not, all else being equal – but not the gender itself. To the extent that one gender tends to end up in positions of leadership it is likely because they tend to possess higher levels of those desirable traits (or at least reside predominantly on the upper end of the population distribution of them). Possessing these favorable traits that allow leaders to do useful things is only one part of the equation, however: they must also appear willing to use those traits to provide benefits to their followers. If a leader possesses considerable social resources, those resources do you little good if the leader couldn’t be less interested in granting you access to them.

This analysis also provides another important point for understanding the leader/follower dynamic: it ought to be context specific, at least to some extent. Followers who are looking for financial security might look for different leaders than those who are seeking protection from outside aggression; those facing personal social difficulties might defer to different leaders still. The match between the talents offered by a leader and the needs of the followers should help determine how appealing some leaders are. Even traits that might seem universally positive on their face – like a large social network – might not be positives to the extent they affect a potential follower’s perception of their likelihood of receiving benefits. For example, leaders with relatively full social rosters might appear less appealing to some followers if that follower is seeking a lot of a leader’s time; since too much of it is already spoken for, the follower might look elsewhere for a more personal leader. This can create ecological leadership niches that can be filled by different people at different times for different contexts.

With all that in mind, there are at least some generalizations we can make about what followers might find appealing in a leader in an “all else being equal…” sense: those with more social support will be selected as leaders more often, as such resources are more capable of resolving disputes in your favor; those with greater physical strength or intelligence might be better leaders for similar reasons. Conversely, one might follow such leaders because of the costs failing to follow would incur, but the logic holds all the same. As such, once these and other important factors are accounted for, you should expect irrelevant factors – like sex – to fall out of the equation. Even if many leaders tend to be men, it’s not their maleness per se that makes them appealing leaders, but rather these valued and useful traits.

Very male, but maybe not CEO material

This is a hypothesis effectively tested in a recent paper by von Rueden et al (in press). The authors examined the distribution of leadership in a small-scale foraging/farming society in the Amazon, the Tsimane. Within this culture – as in others – men tend to exercise the greater degree of political leadership, relative to women, as measured by domains including speaking more during social meetings, coordinating group efforts, and resolving disputes. The leadership status of members within this group was assessed by ratings of other group members. All adults within the community (male n = 80; female n = 72) were photographed, and these photos were then given to 6 of the men and women in sets of 19. The raters were asked to place the photos in order in terms of whose voice tended to carry the most weight during debates, and then in terms of who managed the most community projects. These ratings were then summed (from 1 to 19, depending on position in the rankings, with 19 being the highest in terms of leadership) to figure out who tended to hold the largest positions of leadership.

As mentioned, men tended to reside in positions of greater leadership both in terms of debates and management (approximate mean male scores = 37; mean female scores = 22), and both men and women agreed on these ratings. A similar pattern was observed in terms of who tended to mediate conflicts within the community: 6 females were named in resolving such conflicts, compared with 17 males. Further, the males who were named as conflict mediators tended to be higher in leadership scores, relative to non-mediating males, while this pattern didn’t hold for the females.

So why were men in positions of leadership in greater percentages than females? A regression analysis was carried out using sex, height, weight, upper body strength, education, and number of cooperative partners predicting leadership scores. In this equation, sex (and height) no longer predicted leadership score, while all the other factors were significant predictors. In other words, it wasn’t that men were preferred as leaders per se, but rather that people with more upper body strength, education, and cooperative partners were favored, whether male or female. These traits were still favored in leaders despite leaders not being particularly likely to use force or violence in their position. Instead, it seems that traits like physical strength were favored because they could potentially be leveraged, if push came to shove.
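The confound logic behind that regression can be illustrated with a toy simulation (all numbers invented for illustration; this is not the Tsimane data): if a trait like upper body strength is correlated with sex and only the trait drives leadership ratings, then sex looks predictive on its own but its coefficient collapses once the trait is entered into the model.

```python
import numpy as np

# Hypothetical simulated data -- not the study's dataset.
rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, n).astype(float)        # 1 = male, 0 = female
strength = sex + rng.normal(0, 1, n)             # strength correlates with sex
leadership = 2 * strength + rng.normal(0, 1, n)  # only strength matters here

def ols(y, *predictors):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    X = np.column_stack((np.ones(len(y)),) + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Sex alone looks like a strong predictor (coefficient near 2)...
b_sex_only = ols(leadership, sex)
# ...but once strength is controlled for, the sex coefficient shrinks toward zero.
b_controlled = ols(leadership, sex, strength)
```

The point of the sketch is only the pattern, not the particular coefficients: a variable can "predict" an outcome in isolation purely by proxying for a trait that does the real work.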

“A vote for Jeff is a vote for building your community. Literally”

As one might expect, what makes followers want to follow a leader isn’t their sex, but rather what skills the leader can bring to bear in resolving issues and settling disputes. While the current research is far from a comprehensive examination of all the factors that might tap leadership at different times and in different contexts, it represents a sound approach to understanding the problem of why followers select particular leaders. Thinking about what benefits followers tended to reap from leaders over evolutionary history can help inform our search for – and understanding of – the proximate mechanisms through which leaders end up attracting them.

References:  von Rueden, C., Alami, S., Kaplan, H., & Gurven, M. (In Press). Sex differences in political leadership in an egalitarian society. Evolution & Human Behavior, doi:10.1016/j.evolhumbehav.2018.03.005

Doesn’t Bullying Make You Crazy?

“I just do it for the old fashioned love of killing”

Having had many pet cats, I understand what effective predators they can be. The number of dead mice and birds they have returned over the years is certainly substantial, and the number they didn’t bring back is probably much higher. If you happen to be a mouse living in an area with lots of cats, your life is probably pretty stressful. You’re going to be facing a substantial adaptive challenge when it comes to avoiding detection by these predators and escaping them if you fail at that. As such, you might expect mice to have developed a number of anti-predator strategies (especially since cats aren’t the only thing they’re trying to not get killed by): they might freeze when they detect a cat to avoid being spotted; they might develop a more chronic state of psychological anxiety, as being prepared to fight or run at a moment’s notice is important when your life is often on the line. They might also develop auditory or visual hallucinations that provide them with an incorrect view of the world because…well, I actually can’t think of a good reason for that last one. Hallucinations don’t serve as an adaptive response that helps the mice avoid detection, flee, or otherwise protect themselves against those who would seek to harm them. If anything, hallucinations seem to have the opposite effect, directing resources away from doing something useful as the mice would be responding to non-existent threats.

But when we’re talking about humans and not mice, some people seem to have a different sense for the issue: specifically, that we ought to expect a form of social predation – bullying – to cause people to develop psychosis. At least that was the hypothesis behind some recent research published by Dantchev, Zammit, and Wolke (2017). This study examined a longitudinal data set of parents and children (N = 3596) at two primary times during their life: at 12 years old, children were given a survey asking about sibling bullying, defined as, “…saying nasty and hurtful things, or completely ignores [them] from their group of friends, hits, kicks, pushes or shoves [them] around, tells lies or makes up false rumors about [them].” They were asked how often they experienced bullying by a sibling and how many times a week they bullied a sibling in the past 6 months (“Never”, “Once or twice”, “Two or three times a month”, “About once a week”, or “Several times a week”). Then, at the age of about 18, these same children were assessed for psychosis-like symptoms, including whether they experienced visual/auditory hallucinations, delusions (like being spied on), or felt they had experienced thought interference by others.

With these two measures in hand (whether children were bullies/bullied/both, and whether they suffered some forms of psychosis), the authors sought to determine whether the sibling bullying at time 1 predicted the psychosis at time 2, controlling for a few other measures I won’t get into here. The following results fell out of the analysis: children bullied by their siblings and who bullied their siblings tended to have lower IQ scores, more conduct disorders early on, and experienced more peer bullying as well. The mothers of these children were also more likely to experience depression during pregnancy and domestic violence was more likely to have been present in the households. Bullying, it would seem, was influenced by the quality of the children and their households (a point we’ll return to later).

“This is for making mom depressed prenatally”

In terms of the psychosis measures, 55 of the children in the sample met the criteria for having a disorder (1.5%). Of those children who bullied their siblings, 11 met these criteria (3%), as did 6 of those who were purely bullied (2.5%), and 11 of those who were both bully and bullied (3%). Children who were regularly bullied (about once a week or more), then, were about twice as likely to report psychosis as those who were bullied less often. In brief, both being bullied by siblings and bullying them seemed to make hallucinations more common. Dantchev, Zammit, and Wolke (2017) took this as evidence suggesting a causal relationship between the two: more bullying causes more psychosis.

There’s a lot to say about this finding, the first thing being this: the vast majority of regularly-bullied children didn’t develop psychosis; almost none of them did, in fact. This tells us quite clearly that psychosis per se is by no means a usual response to bullying. This is an important point because, as I mentioned initially, some psychological strategies might evolve to help individuals deal with outside threats. Anxiety works because it readies attentional and bodily resources to deal with those challenges effectively. It seems plausible such a response could work well in humans facing aggression from their peers or family. We might thus expect some kinds of anxiety disorders to be more common among those bullied regularly; depression too, since that could well serve to signal to others that one is in need of social support and help recruit it. So long as one can draw a reasonable, adaptive line between psychological discomfort and doing something useful, we might predict a connection between bullying and mental health issues.

But what are we to make of that correlation between being bullied and the development of hallucinations? Psychosis would not seem to help an individual respond in a useful way to the challenges they are facing, as evidenced by nearly all of the bullied children not developing this response. If such a response were useful, we should generally expect much more of it. That point alone seems to put the metaphorical nail in the coffin of two of the three explanations the authors put forth for their finding: that social defeat and negative perceptions of one’s self and the world are causal factors in developing psychosis. These explanations are – on their face – as silly as they are incomplete. There is no plausible adaptive line the authors attempt to draw from thinking negatively about one’s self or the world to the development of hallucinations, much less how those hallucinations are supposed to help. I would also add that these explanations are discussed only briefly at the end of the paper, suggesting to me not enough time or thought went into trying to understand the reasons these predictions were made before the research was undertaken. That’s a shame, as a better sense for why one would expect to see a result would affect the way research is designed for the better.

“Well, we’re done…so what’s it supposed to be?”

Let’s think in more detail about why we’re seeing what we’re seeing regarding bullying and psychosis. There are a number of explanations one might float, but the most plausible to me goes something like this: these mental health issues are not being caused by the bullying but are, in a sense, actually eliciting the bullying. In other words, causation runs in the opposite direction the authors think it does.

To fully understand this explanation, let’s begin with the basics: kin are usually expected to be predisposed to behave altruistically towards each other because they share genes in common. This means investment in your relatives is less costly than it would be otherwise, as helping them succeed is, in a very real sense, helping yourself succeed. This is how you get adaptations like breastfeeding and brotherly love. However, that cost/benefit ratio does not always lean in the direction of helping. If you have a relative that is particularly unlikely to be successful in the reproductive realm, investment in them can be a poor choice despite their relatedness to you. Even though they share genes with you, you share more genes with yourself (all of them, in fact), so helping yourself do a little better can sometimes be the optimal reproductive strategy over helping them do much better (since they aren’t likely to do anything even with your help). In that regard, relatives suffering from mental health issues are likely worse investments than those not suffering from them, all else being equal. The probability of investment paying off is simply lower.
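That cost/benefit reasoning is Hamilton’s rule: helping kin is favored by selection when r × B > C, where r is genetic relatedness, B is the benefit to the recipient, and C is the cost to the helper. A minimal sketch, with purely hypothetical payoff values:

```python
def helping_favored(r, benefit, cost):
    """Hamilton's rule: altruism toward kin is favored when r * B > C."""
    return r * benefit > cost

# Full siblings share r = 0.5 (payoff values below are hypothetical).
# Helping pays only when the sibling gains more than twice what the helper gives up.
high_quality_sib = helping_favored(0.5, benefit=3.0, cost=1.0)  # True
# A relative unlikely to convert help into reproductive success has a lower
# expected benefit, so the same investment no longer passes the threshold.
low_quality_sib = helping_favored(0.5, benefit=1.5, cost=1.0)   # False
```

The second call is the situation described above: relatedness hasn’t changed, but a low-quality relative shrinks B enough that investing in yourself (or another sibling) beats investing in them.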

Now that might end up predicting that people should ignore their siblings suffering from such issues; to get to bullying we need something else, and in this case we certainly have it: competition for the same pool of limited resources, namely parental investment. Brothers and sisters compete for the same resources from their parents – time, protection, provisioning, and so on – and resources invested in one child are not capable of being invested in another much of the time. Since parents don’t have unlimited amounts of these resources, you get competition between siblings for them. This sometimes results in aggressive and vicious competition. As we already saw in the study results, children of lower quality (lower IQ scores and more conduct disorders) coming from homes with fewer resources (likely indexed by more maternal depression and domestic violence) tend to bully and be bullied more. Competition for resources is more acute here and your brother or sister can be your largest source of it.

They’re much happier now that the third one is out of the way

To put this into an extreme example of non-human sibling “bullying”, there are some birds that lay two or three eggs in the same nest a few days apart. What usually happens in these scenarios is that when the older sibling hatches in advance of the younger it gains a size advantage, allowing it to peck the younger one to death or roll it out of the nest to starve in order to monopolize the parental investment for itself. (For those curious why the mother doesn’t just lay a single egg, that likely has something to do with having a backup offspring in case something goes wrong with the first one). As resources become more scarce and sibling quality goes down, competition to monopolize more of those resources should increase as well. That should hold for birds as well as humans.

A similar logic extends into the wider social world outside of the family: those suffering from psychosis (or any other disorders, really) are less valuable social assets to others than those not suffering from them, all else being equal. As such, sufferers receive less social support in the form of friendships or other relationships. Without such social support, this also makes one an easier target for social predators looking to exploit the easiest targets available. What this translates into is children who are less able to defend themselves being bullied by others more often. In the context of the present study, it was also documented that peer bullying tends to increase with psychosis, which would be entirely unsurprising; just not because bullying is causing children to become psychotic.

This brings us to the final causal hypothesis: sometimes bullying is so severe that it causes brain damage, which in turn produces later psychosis. This would involve either a noticeable degree of physical head trauma or similarly noticeable changes brought on by the body’s response to stress over time. Neither possibility strikes me as particularly likely in terms of explaining much of what we’re seeing here, given that the scope of sibling bullying is probably not often large enough to pose that much of a physical threat to the brain. I suspect the lion’s share of the connection between bullying and psychosis is simply that psychotic individuals are more likely to be bullied, rather than because bullying is doing the causing.

References: Dantchev, S., Zammit S., & Wolke, D. (2017). Sibling bullying in middle childhood and psychotic disorder at 18 years: a prospective cohort study. Psychological Medicine, https://doi.org/10.1017/S0033291717003841.

Sinking Costs

My cat displays a downright irrational behavior: she enjoys stalking and attacking pieces of string. I would actually say that this behavior extends beyond enjoying it to the point of actively craving it. It’s fairly common for her to meow at me until she gets my attention before running over to her string and sitting by it, repeating this process until I play with her. At that point, she will chase it, claw at it, and bite it as if it were a living thing she could catch. This is irrational behavior for the obvious reason that the string isn’t prey; it’s not the type of thing it is appropriate to chase. Moreover, despite numerous opportunities to learn this, she never seems to cease this behavior, continuing to treat the string like a living thing. What could possibly explain this mystery?

If you’re anything like me, you might find that entire premise rather silly. My cat’s behavior only looks irrational when compared against an arguably-incorrect frame of reference; one in which my cat ought to only chase things that are alive and capable of being killed/eaten. There are other ways of looking at the behavior which make it understandable. Let’s examine two such perspectives briefly. The first of these is that my cat is – in some sense – interested in practicing for future hunting. In much the same way that people might practice in advance of a real event to ensure success, my cat may enjoy chasing the string because of the practice it affords her for achieving successful future hunts. Another perspective (which is not mutually exclusive) is that the string might give off proximate cues that resemble those of prey (such as ostensibly self-directed movement) which in turn activate other cognitive programs in my cat’s brain associated with hunting. In much the same way that people watch cartoons and perceive characters on the screen, rather than collections of pixels or drawings, my cat may be responding to proximate facsimiles of cues that signaled something important over evolutionary time when she sees strings moving.

The point of this example is that if you want to understand behavior – especially behavior that seems strange – you need to place it within its proper adaptive context. Simply calling something irrational is usually a bad idea for figuring out what is going on, as no species has evolved cognitive mechanisms that exist because they encouraged that organism to behave in irrational, maladaptive, or otherwise pointless ways. Any such mechanism would represent a metabolic cost endured for either no benefit or a cost, and those would quickly disappear from the population, outcompeted by organisms that didn’t make such silly mistakes.  

For instance, burying one’s head in the proverbial sand doesn’t help avoid predators

Today I wanted to examine one such behavior that gets talked about fairly regularly: what is referred to as the sunk-cost fallacy (implying a mistake is occurring). It refers to cases where people make decisions based on previous investments, rather than future expected benefits. For instance, if you happened to have a Master’s degree in a field that isn’t likely to present you with a job opportunity, the smart thing to do (according to most people, I imagine) would be to cut your losses and move into a field that is likely to offer work. The sunk-cost fallacy here might represent saying to yourself, “Well, I’ve already put so much time into this program that I might as well put in more and get that PhD,” even though committing further resources is more than likely going to be a waste. In another case, someone might continue pouring money into a failing business venture because they had already invested most of their life savings in it. In fact, the tendency to invest in such projects is usually predictable by how much was invested in the past. The more you already put in, the more likely you are to see it through to its conclusion. I’m sure you can come up with your own examples of this from things you’ve either seen or done in the past.

On the face of it, this behavior looks irrational. You cannot get your previous investments back, so why should they have any sway over future decision making? If you end up concluding that such behavior couldn’t possibly be useful – that it’s a fallacious way of thinking – there’s a good chance you haven’t thought about it enough yet. To begin understanding why sunk costs might factor into decision making, it’s helpful to start with a basic premise: humans did not evolve in a world where financial decisions – such as business investments – were regularly made (if they were made at all). Accordingly, whatever cognitive mechanisms underlie sunk-cost thinking likely have nothing at all to do with money (or the pursuit of degrees, or other such endeavors). If we are using cognitive mechanisms to manage tasks they did not evolve to solve, it shouldn’t be surprising that we see some strange decisions cropping up from time to time. In much the same way, cats are not adapted to worlds with toys and strings. Whatever cognitive mechanism impels my cat to chase them, it is not adapted for that function.

So – when it comes to sunk costs – what might the cognitive mechanisms leading us to make these choices be designed to do? While humans might not have done a lot of financial investing over our evolutionary history, we sure did a lot of social investing. This includes protecting, provisioning, and caring for family members, friends, and romantic partners who in turn do the same for you. Such relationships need to be managed and broken off from time to time. In that regard, sunk costs begin to look a bit different.  

“Well, this one is a dud. Better to cut our losses and try again”

On the empirical end, it has been reported that people respond to social investments in a different way than they do financial ones. In a recent study by Hrgović & Hromatko (2017), 112 students were asked to respond to a stock market task and a social task. In the financial task, they read about a hypothetical investment they had made in their own business that had been losing value. The social tasks were similar: participants were told they had invested in a romantic partner, a sibling, and a friend. All were suffering financial difficulties, and the participant had been trying to help. Unfortunately, the target of this investment hadn’t been pulling themselves back up, even turning down job offers, so the investments were not currently paying off. In both the financial and social tasks, participants were then given the option to (a) stop investing in them now, (b) keep investing for another year only, or (c) keep investing indefinitely until the issue was resolved. The responses and the time taken to respond were recorded.

When it came to the business investment, about 40% of participants terminated future investments immediately; in the social contexts, the numbers were about 35% in the romantic partner scenario, 25% in the sibling context, and about 5% in the friend context. The numbers for investing another year were about 35% in the business context, 50% in the romantic, and about 65% in the sibling and friend conditions. Finally, about 25% of participants would invest indefinitely in the business, 10% in the romantic partner, 5% in the sibling, and 30% in the friendship. In general, the picture that emerges is that people were willing to terminate the business investments much more readily than the social ones. Moreover, the time it took to make a decision was also longer in the business context, suggesting that people found the decision to continue investing in social relationships easier. Phrased in terms of sunk costs, people appeared to be more willing to factor those into the decision to keep investing in social relationships.

So at least you’ll have company as you sink into financial ruin

The question remains as to why that might be. Part of the answer no doubt involves opportunity costs. In the business world, if you want to invest your money in a new venture, doing so is relatively easy; your money is just as green as the next person’s. It is far more difficult to go out into the world and get yourself a new friend, sibling, or romantic partner. Lots of people already have friends, family, and romantic partners and aren’t looking to add to that list, as their investment potential in that realm is limited. Even if they are looking to add to it, they might not be looking to add you. Accordingly, the expected value of finding a better relationship needs to be weighed against the time it takes to find it, as well as the degree of improvement it would likely yield. If you cannot just go out into the world and find new relationships with ease, breaking off an existing one could be more costly than waiting it out to see if it improves in the future.

There are other factors to consider as well. For instance, the return on social investment may often not be all that immediate and, in other cases, might come from sources other than the person being invested in. Taking those in order, if you break off social investments with others at the first sign of trouble – especially deeper, longer-lasting relationships – you may develop a reputation as a fair-weather friend. Simply put, people don’t want to invest in and be friends with someone who is liable to abandon them when they need it most. We’d rather have friends who are deeply and honestly committed to our welfare, as those can be relied on. Breaking off social relationships too readily demonstrates to others that you are not that appealing as a social asset, making you less likely to earn a place in their limited social roster.

Further, investing in one person is also to invest in their social network. If you take care of a sick child, you’re not doing so in the hope that the child will pay you back. Doing so might ingratiate you to their parents, however, and perhaps others as well. This can be contrasted with investing in a business: trying to help a failing business isn’t liable to earn you any brownie points as an attractive social asset to other businesses looking to court your investment, nor is Ford going to make good on the poor investment you made in BP because the two companies are friends with each other.

Whatever the explanation, it seems that the human willingness to succumb to sunk costs in the financial realm may well be a byproduct of an adaptive mechanism in the social domain being co-opted for a task it was not designed to solve. When that happens, you start seeing some weird behavior. The key to understanding that weirdness is to understand the original functionality.

References: Hrgović, J. & Hromatko, I. (2017). The time and social context in sunk-cost effects. Evolutionary Psychological Science, doi: 10.1007/s40806-017-0134-4

Predicting The Future With Faces

“Your future will be horrible, but at least it will be short. So there’s that”

The future is always uncertain, at least as far as human (and non-human) knowledge is concerned. This is one reason why some people have difficulty saving or investing money for the future: if you give up rewards today for the promise of rewards tomorrow, that might end up being a bad idea if tomorrow doesn’t come for you (or a different tomorrow arrives than the one you envisioned). Better to spend that money immediately, when it can more reliably bring rewards. The same logic extends to other domains of life, including the social. If you’re going to invest time and energy into a friendship or sexual relationship, you will always run the risk of that investment being misplaced. Friends or partners who betray you or don’t reciprocate your efforts are not the ones you want to be investing in in the first place. You’d much rather invest that effort in the people who will give you a better return.

Consider a specific problem, to help make this clear: human males face a problem when it comes to long-term sexual relationships, which is that female reproductive potential is limited. Not only can women only manage one pregnancy at a time, but they also enter menopause later in life, reducing their subsequent reproductive output to zero. One solution to this problem is to only seek short-term encounters but, if you happen to be a man looking for a long-term relationship, you’d be doing something adaptive by selecting a mate with the greatest number of years of reproductive potential ahead of her. This could mean selecting a partner who is younger (and thus has the greatest number of likely fertile years ahead of her) and/or selecting one who is liable to enter menopause later.

Solving the first problem – age – is easy enough due to the presence of visual cues associated with development. Women who are too young and do not possess these cues are not viewed as attractive mates (as they are not currently fertile), become more attractive as they mature and enter their fertile years, and then become less attractive over time as fertility (both present and future) declines. Solving the second problem – future years of reproductive potential, or figuring out the age at which a woman will enter menopause – is trickier. It’s not like men have some kind of magic crystal ball they can look into to predict a woman’s future expected age at menopause to maximize their reproductive output. However, women do have faces and, as it turns out, those might actually be the next best tool for the job.

Fred knew it wouldn’t be long before he hit menopause

A recent study by Bovet et al (2017) sought to test whether men might be able to predict a woman’s age at menopause in advance of that event by only seeing her face. One obvious complicating factor with such research is that if you want to assess the extent to which attractiveness around, say, age 25 predicts menopause in the same sample of women, you’re going to have to wait a few decades for them to hit menopause. Thankfully, a work-around exists in that menopause – like most other traits – is partially heritable. Children resemble their parents in many regards, and age of menopause is one of them. This allowed the researchers to use a woman’s mother’s age of menopause as a reasonable proxy for when the daughter would be expected to reach menopause, saving them a lot of waiting.

Once the participating women’s mothers’ ages of menopause were assessed, the rest of the study involved taking pictures of the women’s faces (N = 68; average age = 28.4) without any makeup and with as neutral an expression as possible. These faces were then presented in pairs to male raters (N = 156) who selected which of the two was more attractive (completing that task a total of 30 times each). The likelihood of being selected was regressed against the difference between the mothers’ ages of menopause for each pair, controlling for facial femininity, age, voice pitch, waist-to-hip ratio, and a value representing the difference between a woman’s actual and perceived age (to ensure that women who looked younger/older than they actually were didn’t throw things off).

A number of expected results showed up, with more feminine faces (ß = 0.4) and women with more feminine vocal pitch (ß = 0.2) being preferred (despite the latter trait not being assessed by the raters). Women who looked older were also less likely to be selected (ß = -0.56). Contrary to predictions, women with more masculine WHRs were preferred (ß = 0.13), even though these were not visible in the photos, suggesting WHR may cue different traits than facial ones do. The main effect of interest, however, concerned the menopausal variable. These results showed that as the difference between the pair of women’s mothers’ ages of menopause increased (i.e., one woman was expected to go through menopause later than the other), so too did the probability of the later-menopausal woman getting selected (ß = 0.24). Crucially, there was no correlation between a woman’s expected age of menopause and any of the more-immediate fertility cues, like age, WHR, or facial or vocal femininity. Women’s faces seemed to be capturing something unique about expected age at menopause that made them more attractive.

Trading off hot daughters for hot flashes

Now precisely which features were being assessed as more attractive, and the nature of their connection to age of menopause, is unknown. It is possible – perhaps even likely – that men were assessing some feature like symmetry that primarily signals developmental stability and health, but that happens to correlate with age at menopause as well (e.g., healthier women go through menopause later as they can more effectively bear the costs of childbearing into later years). Whatever systems were predicting age at menopause might not be specifically designed to do so. While it is possible that some feature of a woman’s face cues people into expected age at menopause more directly, without primarily cuing some other trait, that remains to be demonstrated. Nevertheless, the results are an interesting first step in that direction worth thinking about.

References: Bovet, J., Barkat-Defradas, M., Durand, V., Faurie, C., & Raymond, M. (2017). Women’s attractiveness is linked to expected age at menopause. Journal of Evolutionary Biology, doi: 10.1111/jeb.13214

What Can Chimps Teach Us About Strength?

You better not be aping me…

There was a recent happening in the primatology literature that caught my eye. Three researchers were studying patterns of mating in captive chimpanzees. They were interested in finding out what physical cues female chimps tended to prefer in a mate. This might come as no surprise to you – it certainly didn’t to me – but female chimps seemed to prefer physically strong males. Stronger males were universally preferred by the females, garnering more attention and ultimately more sexual partners. Moreover, strength was not only the single best predictor of attractiveness, but there was no upper-limit on this effect: the stronger the male, the more he was preferred by the females. This finding makes perfect sense in its proper evolutionary context, given chimps’ penchant for getting into physical conflicts. Strength is a key variable for males in dominating others, whether this is in the context of conflicts over resources, social status, or even inter-group attacks. Males who were better able to win these contests were not only likely to do well for themselves in life, but their offspring would likely be the kind of males who would do likewise. That makes them attractive mating prospects, at least if having children likely to survive and mate is adaptive, which it seems to be.

What interested me so much was not this finding – I think it’s painfully obvious – but rather the reaction of some other academics to it. These opposing reactions claimed that the primatologists were too quick to place their results in that evolutionary context. Specifically, it was claimed that these preferences might not be universal, and that a cultural explanation makes more sense (as if the two are competing types of explanations). This cultural explanation, I’m told, goes something like, “chimpanzee females are simply most attracted to male bodies that are the most difficult to obtain because that’s how chimps in this time and place do things,” and “if this research was conducted 100 years ago, you’d have observed a totally different pattern of results.”

Now why the difficulty in achieving a body is supposed to be the key variable isn’t outlined, as far as I can tell. Presumably it too should have some kind of evolutionary explanation which would make a different set of predictions, but none are outlined. This point seems scarcely realized by the critics. Moreover, the idea that these findings would not obtain 100 years ago is tossed out with absolutely no supporting evidence and little hope of being tested. It seems unlikely that physical strength yielding adaptive benefits is some kind of evolutionary novelty, or that males did not differ in strength as recently as a hundred years ago, given how much variance exists today.

One more thing: the study I’m talking about didn’t take place on chimps. It was a pattern observed in humans. The underlying logic and reactions, however, are pretty much spot on.  

Not unlike this man’s posing game

It’s long been understood that strong men are more attractive than weak ones, all else being equal. The present research by Sell et al (2017) was an attempt to (a) quantify approximately how much of a man’s bodily attractiveness is driven by his physical strength, (b) determine the nature of this relationship (whether it is more of a straight line or an inverted “U” shape, where very strong men become less attractive), and (c) test whether some women find weaker men more attractive than stronger ones. There was also a section quantifying the effects of height and weight.

To answer those questions, semi-to-fully-shirtless men were photographed from the front and side, with their heads blocked out so only their bodies remained. These pictures were then assessed by different groups for either strength or attractiveness (actual strength measures were also collected by the researchers). The quick rundown of the results is that perceived strength did track actual strength, and perceptions of strength accounted for about 60-70% of the variance in bodily attractiveness (which is a lot). As men got stronger, they got more attractive, and this trend was linear (meaning that, within the sample, there was no such thing as “too strong,” after which men got less attractive). This pattern was also universal: there was not a single woman (out of 160) who rated the weaker men as more attractive than the stronger ones. Accounting for strength, height explained a bit more of the variance in attractiveness, and weight was negatively related to attractiveness. Women liked strong men; not fat ones.
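For readers unused to the phrase, “accounting for 60-70% of the variance” is the squared correlation: it corresponds to an r of roughly .78-.84 between perceived strength and attractiveness. A quick illustration with simulated ratings (all numbers invented, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # hypothetical number of rated bodies

# Simulated ratings: attractiveness driven largely by perceived
# strength, plus noise. The 1.4 slope is illustrative only; it is
# chosen so strength explains roughly two-thirds of the variance.
strength = rng.normal(0, 1, n)
attractiveness = 1.4 * strength + rng.normal(0, 1, n)

r = np.corrcoef(strength, attractiveness)[0, 1]
variance_explained = r ** 2  # ~0.6-0.7 here, mirroring the reported range
```

The point of squaring is that variance explained grows quickly with r, which is why a ~.8 correlation already amounts to “most of the variance.”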

While it’s nice to put something of a number on just how much strength matters in determining male bodily attractiveness (most of it), these findings are all mundane to anyone with eyes. I suspect they cut across multiple species, and I don’t think you’re going to find many species in which females prefer to mate with physically weaker males. The explanation for these preferences for strength – the evolutionary framework into which they fit – should apply well to just about any such species. While I made up the fact that this study was about chimps, I’d say you’re likely to find a similar set of results if you did conduct such work.

Also, the winner – not the loser – of this contest will go on to mate

Enter the strange comments I mentioned initially:

“It’s my opinion that the authors are too quick to ascribe a causal role to evolution,” said Lisa Wade…“We know what kind of bodies are valorized and idealized,” Wade said. “It tends to be the bodies that are the most difficult to obtain.”

Try reading that criticism of the study and imagine it was applied to any other sexually-reproducing species on the planet. What adaptive benefits is “difficulty in obtaining” supposed to bring, and what kind of predictions does that idea make? It would be difficult, for instance, to achieve a very thin body; the type usually seen in anorexic people. It’s hard for people to ignore their desires to eat certain foods in certain quantities, especially to the point that you begin to physically waste away. Despite that difficulty in achieving the starved look, such bodies are not idealized as attractive. “Difficult to obtain” does not necessarily translate into anything adaptively useful.

And, more to the point, even if a preference for difficult-to-obtain bodies per se existed, where would Lisa suggest it came from? Surely, it didn’t fall from the sky. The explanation for a preference for difficult bodies would, at some point, have to reference some kind of evolutionary history. It’s not even close to sufficient to explain a preference by saying, “culture, not evolution, did it,” as if the capacity for developing a culture itself – and any given instantiation of it – exists free from evolution. Despite her claims to the contrary, it is a theoretical benefit to think about evolutionary function when developing theories of psychological form; not a methodological problem. The only problem I see is that she seems to prefer worse, less-complete explanations to better ones. But, to use her own words, this is “…nothing unique to [her]. Much of this type of [criticism] has the same methodological problems.”

If your explanation for a particular type of psychological happening in humans doesn’t work for just about any other species, there’s a very good chance it is, at the very least, incomplete when it comes to explaining the behavior. For instance, I don’t think anyone would seriously suggest that chimp females entering their reproductive years “might not have much of an experience with what attractiveness means” if they favored physically strong males. I’d say such explanations often aren’t even pointing in the right direction, and are more likely to mislead researchers and students than to inform them.

References: Sell, A., Lukaszewski, A., & Townsley, M. (2017). Cues of upper body strength account for most of the variance in men’s bodily attractiveness. Proceedings of the Royal Society B, 284. http://dx.doi.org/10.1098/rspb.2017.1819

Online Games, Harassment, and Sexism

Gamers are no strangers to the anger that can accompany competition. As a timely for-instance, before I sat down to start writing this post I was playing my usual online game to relax after work. As I began my first game of the afternoon, I saw a message pop up from someone who had sent me a friend request a few days back after I had won a match (you need to accept these friend requests before messages can be sent). Despite the lag between when that request was sent and when I accepted it, the message I was greeted with called me a cunt and informed me that I have no life, before the person removed themselves from my friend list to avoid any kind of response. However accurately they may have described me, that is the most typical reason friend requests get sent in that game: to insult. Many people – myself included – usually don’t accept them from strangers for that reason and, if you do, it is advisable to wait a few days for the sender to cool off a bit and hopefully forget they added you. Even then, that’s no guarantee of a friendly response.

Now my game happens to be more of a single-player experience. In team-based player-vs-player games, communication between strangers can be vital for winning, meaning there is usually less of a buffer between players and the nasty comments of their teammates. The insults themselves might not draw much social attention, but the players being insulted are sometimes women, which brings us nicely to some research on sexism.

Gone are the simpler days of yelling at your friends in person

A 2015 paper by Kasumovic & Kuznekoff examined how players in the online, first-person shooter game Halo 3 responded to the presence of a male or female voice in the team voice chat, specifically in terms of both positive and negative comments directed at the speaker. What drew me to this paper is two-fold: first, I’m a gamer myself but, more importantly, the authors also constructed their hypotheses based on evolutionary theory, which is unusual for papers on sexism. The heart of the paper revolves around the following idea: common theories of sexist behavior towards women suggest that men behave aggressively towards them to try and remove them from male-dominated arenas. Women get nasty comments because men want them gone from male spaces. The researchers in this case took a different perspective, predicting instead that male performance within the game would be a key variable in understanding the responses players have.

As men heavily rely on social status for access to mating opportunities, the authors predicted that men should respond more aggressively to newcomers who displace them in a status hierarchy. Put into practice, this means that a low-performing male should be threatened by the entry of a higher-performing woman into his game, as it pushes him down the status hierarchy, resulting in aggression directed at the newcomer. By contrast, males who perform better should be less concerned by women in the game, as their entry does not undercut their status. Instead of being aggressive, then, higher-performing men might give female players more positive comments in the interest of attracting them as possible mates. Putting that together, we end up with the predictions that women should receive more negative comments than men from men who are performing worse, while women should receive more positive comments from men who are performing better.

To test this idea, the researchers played the game with 7 other random players (two teams of 4 players) while playing either male or female voice lines at various intervals during the game (all of which were pretty neutral-to-positive in terms of content, such as, “I like this map” played at the beginning of a game). The recordings of what the other players (who did not know they were being monitored in this way, making their behavior more natural) said were then transcribed and coded for whether they were saying something positive, negative, or neutral directed at the experimenter playing the game. The coders also checked to see whether the comments contained hostile sexist language to look for something specifically anti-woman, rather than just negativity or anger in general.

Nothing like some wholesome, gender-blind rage

Across 163 games, other players spoke at all in 102 of them. In those 102 games, 189 players spoke in total, 100% of whom were male. This suggests that Halo 3, unsurprisingly, is a game that women aren’t playing as much as men. Only those players who said something and were on the experimenter’s team (147 of them) were retained for analysis. About 57% of those comments came in the female-voiced condition, while 44% came in the male condition. In general, then, the presence of a female voice led to more comments from other male players.

In terms of positive comments, the predicted difference appeared: the higher the skill level of the player talking at the experimenter, the more positive comments they made when a woman’s voice was heard; the worse the player, the fewer positive comments they made. This interaction was almost significant when considering the relative skill difference, rather than the absolute skill rating (i.e., whether the speaking player did worse or better than the experimenter). By contrast, the number of positive comments directed at the male-voiced player was unrelated to the skill of the speaker.

Turning to the negative comments, these were negatively correlated with player skill in general: the higher the skill of the player, the fewer negative comments they made (and the lower the skill, the more negative they got; as the old saying goes, “Mad because bad”). The interaction with gender was less clear, however. In general, the teammates of the female-voiced experimenter made more negative comments than those in the male condition. When considering the impact of how many deaths a speaking player had, players were more negative towards the woman when dying less often, but they were also more negative towards the man when dying extremely often (which seems to run counter to the initial predictions). The players were also more negative towards the woman when they weren’t getting very many kills (with negativity towards the woman declining as their personal kills increased), but that relationship was not observed when they had heard a male voice (which is in line with the initial predictions).

Finally, only a few players (13%) made sexist statements, so the results couldn’t be analyzed particularly well. Statistically, these comments were unrelated to any performance metrics. Not much more to say about that beyond small sample size.  

Team red is much more supportive of women in gaming

Overall, the response that speaking players had to the gender of their teammate depended, to some extent, on their personal performance. Those men who were doing better at the game were more positive towards the women, while those who were doing worse were more negative towards them, generally speaking.

While there are a number of details and statements within the paper I could nitpick, I suspect that Kasumovic & Kuznekoff (2015) are on the right track with their thinking. I would add some additional points, though. The first of these is rather core to their hypothesis: if men are threatened by status losses brought on by their relatively poor performance, it seems that these threats should occur regardless of the sex of the person they’re playing with: whether a man performs poorly relative to a woman or another man, he will still be losing relative status. So why is there less negativity directed at men (sometimes), relative to women? The authors mention one possibility that I wish they had expanded upon, which is that men might be responding not to the women per se so much as to the pitch of the speaker’s voice. As the authors write, voice pitch tends to correlate with dominance, such that deeper voices signal greater dominance.

What I wish they had added more explicitly is that aggression should not be deployed indiscriminately. Being aggressive towards people who are liable to beat you in a physical contest isn’t a brilliant strategy. Since men tend to be stronger than women, behaving aggressively towards other men – especially those outperforming you – should be expected to have carried different sets of immediate consequences, historically-speaking (though there aren’t many costs in modern online environments, which is why people behave more aggressively there than in person). It might not be that the men are any less upset about losing when other men are on their team, but that they might not be equally aggressive (in all cases) to them due to potential physical retribution (again, historically).

There are other points I would consider beyond that. The first of these is the nature of insults in general. If you remember the interaction I had with an angry opponent initially, you should remember that the goal of their message was to insult me. They were trying to make me feel bad or in some way drag me down. If you want to make someone feel bad, you would do well to focus on their flaws and things about them which make you look better by comparison. In that respect, insulting someone by calling attention to something you share in common, like your gender, is a very weak insult. On those grounds we might expect more gendered insults against women, given that men are by far the majority in these games. Now because lots of hostile sexist insults weren’t observed in the present work, the point might not be terribly applicable here. It does, however, bring me to my next point: you don’t insult people by bringing attention to things that reflect positively on them.

“Ha! That loser can only afford cars much more expensive than I can!”

As women do not play games like Halo nearly as much as men, that corresponds to lower skill in those games on a population level. Not because women are inherently worse at the game, but simply because they don’t practice as much (and people who play those games more tend to become better at them). If you look at the top performers in competitive online games, you’ll notice the rosters are largely, if not exclusively, male (not unlike the players who spoke in the current paper). Regardless of the causes of that sex difference in performance, the difference exists all the same.

If you knew nothing else about a person beyond their gender, you would predict that a man would perform better at Halo than a woman (at least if you wanted your predictions to be accurate). As such, if you’ve just under-performed at this game and are feeling pretty angry about it, you might be looking to direct blame at the teammates who clearly caused the issue (as it would never be the speaker’s own skill in the game, of course; at least not if you’re talking about the people yelling at strangers).

If you wanted to find out who was to blame, you might consult the match scores: factors like kills and deaths. But those aren’t perfect representations of player skill (that nebulous variable which is hard to get at), and they aren’t the only thing you might consult. After all, scores in a single game are not necessarily indicative of what would happen over a larger number of games. Because of that, the players on these teams still have limited information about the relative skill of their teammates. Given this lack of information, some people may fall back on generally accurate stereotypes in trying to find a plausible scapegoat for their loss, assigning relatively more blame to the people who might be expected to be more responsible for it. The result? More blame assigned to women, at least initially, given the population-level knowledge.
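That kind of base-rate reasoning can be made explicit with Bayes’ rule. In this toy sketch every number is invented; the only point it illustrates is that identical updating applied to different priors yields different amounts of blame:

```python
# Toy illustration (all numbers invented). With no individual
# information, a rater falls back on a population-level prior about
# how likely a randomly chosen player of each group is "low skill"
# at this particular game.
p_low_given_male = 0.40
p_low_given_female = 0.60  # assumed higher only via less practice time

# Assumed likelihoods: losses are more common with a low-skill teammate.
p_loss_given_low = 0.7
p_loss_given_high = 0.4

def p_low_given_loss(prior_low):
    """Bayes' rule: P(teammate is low skill | the team just lost)."""
    num = p_loss_given_low * prior_low
    den = num + p_loss_given_high * (1 - prior_low)
    return num / den

blame_male = p_low_given_loss(p_low_given_male)      # ~0.54
blame_female = p_low_given_loss(p_low_given_female)  # ~0.72
```

The asymmetry in the outputs comes entirely from the priors; the evidence (one loss) and the updating rule are the same for both targets.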

“I wouldn’t blame you if I knew you better, so how about we get to know each other over coffee?”

That’s where the final point I would add also comes in. If women perform worse on a population level than men, the low-performing men suffer something of a double status hit when they are outperformed by a woman: not only is there another player who is doing better than them, but one might expect this player to be doing worse, knowing only her gender. As such, being outperformed by such a player makes it more difficult to blame external causes for the outcome. In a sentence, being beaten by someone who isn’t expected to perform well is a more honest signal of poor skill. The result, then, is more anger: either in an attempt to persuade others that they’re better than they actually performed, or in an attempt to drive out the people who are making them look even worse. This would fit within the authors’ initial hypothesis as well, and would probably have been worth mentioning.

References: Kasumovic, M. & Kuznekoff, J. (2015). Insights into sexism: Male status and performance moderates female-directed hostile and amicable behavior. PLoS ONE 10(7). doi:10.1371/journal.pone.0131613

Practice, Hard Work, And Giving Up

There’s no getting around it: if you want to get better at something – anything – you need to practice. I’ve spent the last several years writing rather continuously and have noticed that my original posts are of a much lower quality when I look back at them. If you want to be the best version of yourself that you can be, you’ll need to spend a lot of time working at your skills of choice. Nevertheless, people vary widely in how much practice they are willing to devote to a skill and how readily they abandon their efforts, whether in the face of challenges or simply with time. Some musicians will wake up and practice several hours a day, some only a few days a week, some a few times throughout the year, and some will stop playing entirely (despite almost none of them making anything resembling money from it). In a word, some musicians possess more grit than others.

Those of us who spend too much time at a computer acquire a different kind of grit

To give you a sense for what is meant by grit, consider the following description offered by Duckworth et al (2007):

The gritty individual approaches achievement as a marathon; his or her advantage is stamina. Whereas disappointment or boredom signals to others that it is time to change trajectory and cut losses, the gritty individual stays the course.

Grit, in this context, refers to continuing to pursue one’s goals when faced with obstacles, major or minor. According to Duckworth et al (2007), when people discuss the top performers in a field, they reference grit about as often as they reference talent, even if they might not refer to it by that name.

The aim of the Duckworth et al (2007) paper, broadly speaking, was two-fold: to create a scale to measure grit (as none existed at the time), and then to use that scale to see how well grit predicted subsequent achievements. Without going too deeply into the details of the project, the grit scale eventually landed on 12 questions. Six of those dealt with how consistent one’s interests are (like, “my interests change from year to year”) and the other six with perseverance of effort (like, “I have overcome setbacks to conquer an important challenge”). While this measure of grit was highly correlated with the personality trait of conscientiousness (r = .77), the two were apparently different enough to warrant separate categorization, as the grit score still predicted some outcomes after controlling for personality.

When the new scale was directed at student populations, grit was also found to relate to educational achievement after controlling for a measure of general intelligence: in this case, college GPA controlling for SAT scores in a sample of about 1,400 UPenn undergraduates. The relationship between grit and GPA was modest (r = .25), though it got somewhat larger after controlling for SAT scores (r = .34). In a follow-up study, the grit scale was also used to predict which cadets at a military academy completed their summer training. Though about 94% of the cadets completed this training, the grittiest individuals were the least likely to drop out, as one might expect. However, unlike in the UPenn sample, grit was not a good predictor of subsequent cadet GPA (r = .06), raising some questions about the previous result (which I’ll get to in a minute).
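That jump from r = .25 to r = .34 after controlling for SAT scores follows from the standard first-order partial correlation formula. Here is a minimal sketch of the arithmetic; the two correlations involving SAT scores are hypothetical values I've chosen only to show how controlling for a covariate can raise a raw correlation:

```python
from math import sqrt

def partial_corr(r_xy: float, r_xz: float, r_yz: float) -> float:
    """First-order partial correlation between x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# r_xy is the reported grit-GPA correlation (.25); the grit-SAT and
# GPA-SAT correlations below are hypothetical, picked to illustrate how
# a small negative grit-SAT correlation pushes the partial above the raw r.
r_grit_gpa = 0.25
r_grit_sat = -0.20  # hypothetical
r_gpa_sat = 0.30    # hypothetical

print(round(partial_corr(r_grit_gpa, r_grit_sat, r_gpa_sat), 3))
```

With these made-up inputs the partial correlation lands near the reported .34, which is the pattern you would see if grit and SAT scores were mildly negatively related in that sample.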

This is time not spent studying for that engineering test

With that brief summary of grit in mind – hopefully enough to give you a general sense for the term – I wanted to discuss some of the theoretical aspects of the idea. Specifically, I want to consider when grit might be a good thing and when it might be better to persevere a little less or find new interests.

One big complication stopping people from being gritty is the simple matter of opportunity costs. For every task I decide to invest years of dedicated, consistent practice in, there are other tasks I do not get to accomplish. Time spent writing this post is time I don’t get to spend pursuing other hobbies (which I have been taking intermittent breaks to pursue, for the record). This is, in fact, why I have begun writing a post every two weeks or so, down from every week: there are simply other things in life I want to spend my time on. Being gritty about writing means I don’t get to be equally gritty about other things. In fact, if I were particularly gritty about writing, I might not get to be gritty about anything else at all. Not unless I wanted to stop being gritty about sleep and devote that time to writing as well.

This is a problem when it comes to grit being useful because of a second issue: diminishing returns on practice. The first week, month, or year you spend learning a skill typically yields a more appreciable return than the second, third, or so on. Putting that into a quick example, if I started studying chess (a game I almost never play), I would see substantial improvements to my win rate in the first month. Let’s just say 10% to put a number on it. The next month of practice still increases my win rate, but not by quite as much, as the mistakes I’m still making are less obvious. I go up another 5%. As this process continues, I might eventually spend a month of practice to increase my win rate by mere fractions of a percent. While this dedicated practice does, on paper, make me better, the size of those rewards relative to the time investment needed to get them grows progressively smaller. At a certain point, it doesn’t make much sense to commit that time to chess when I could be learning to speak Spanish or even just spending that time with friends.
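That 10%-then-5% pattern is the shape of an exponential approach to a ceiling. A minimal sketch, with the base win rate, headroom, and learning rate all invented purely to match the numbers above:

```python
from math import exp

def win_rate(months: float, base: float = 0.30,
             headroom: float = 0.20, rate: float = 0.7) -> float:
    """Win rate after `months` of practice: an exponential approach from
    a base rate toward base + headroom. All parameters are invented,
    chosen only to reproduce the 10%-then-5% pattern in the text."""
    return base + headroom * (1 - exp(-rate * months))

# The improvement contributed by each successive month keeps shrinking:
for m in range(6):
    gain = win_rate(m + 1) - win_rate(m)
    print(f"month {m + 1}: +{gain:.1%}")
```

On these made-up parameters, each month of practice buys roughly half the improvement of the month before it, and no amount of practice pushes the win rate past the 50% ceiling: the same hour of effort keeps purchasing less.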

This brings us nicely to the next point: the rate of improvement, both in terms of how quickly you learn and how far additional practice can push you, ought to depend on one’s biological potential (for lack of a better term). No matter how much time I spend practicing guitar, for instance, there are certain ceilings on performance I will not be able to break: perhaps it becomes physically impossible to play any faster while maintaining accuracy; perhaps some memory constraints come into play and I cannot remember everything I’ve tried to learn. We should expect grit to interact with potential in a certain way: if you don’t have the ability to achieve a particular task, being gritty about pursuing it is going to be time spent effectively banging your head against a brick wall. By contrast, the individual who possesses a greater potential for the task in question has a much higher chance of grit paying off. They can simply get more from practice.

Some people just have nicer ceilings than others

This is, of course, assuming the task is actually one that can be accomplished. If you’re very gritty about finding the treasure buried in your backyard that doesn’t actually exist, you’ll spend a lot of time digging and none getting rich. Being gritty about achieving the impossible is a bad idea. But who’s to say what’s impossible? We usually don’t have access to enough information to say something cannot (or at least will not) be achieved, but we can often make some fairly educated guesses. Let’s just stick to the music example for now: say you want to accomplish the task of becoming a world-famous rockstar. You have the potential to perform and you’re very gritty about pursuing it. You spend years practicing, forming bands, writing songs, finding gigs, and so on. One problem you’re liable to encounter in this case is simply that many other people who are similarly qualified are doing likewise, and there’s only so much room at the top. Even if you are all approximately as talented and gritty, there are some ceiling effects at play where being even grittier and more talented does not, by any means, guarantee more success. As I have mentioned before, the popularity of cultural products can be a fickle thing. It’s not just about the products you produce or what you can do.

We see this playing out in the world of academia today. As many have lamented, there seem to be too few academic jobs for all the PhDs getting minted across the country. Being gritty about pursuing that degree – all the time, energy, and money spent earning it – turned out not to be a great idea for many who have done so. Sure, you can bet that just about everyone who achieved their dream job as a professor making a decent salary was pretty gritty about things. You have to be if you’re going to spend 10 or more years invested in higher education with little payoff and many challenges along the way. It’s just that lots of people who were about as gritty as those who got a job failed to do anything with their degree after they earned it. As this example shows, not only does the task need to be achievable, but the rewards for achieving it need to be both valuable and likely if grit is to pay off. If the rewards aren’t valuable (e.g., a job as an adjunct teaching 5 courses a semester for about as much as you’d make working minimum wage, all things considered), then pursuing them is a bad idea. If the rewards are valuable but unlikely (e.g., becoming a top-selling pop artist), then pursuing them is similarly a bad idea for just about everyone. There are better things to do with your time.

The closest most people will come to being a rockstar

This yields the following summary: for grit to be potentially useful, a task needs to be achievable, you need the potential to accomplish it given enough time, the rewards of achieving it need to be large enough relative to the investment you put in, and the probability of actually securing those rewards needs to be comparably high. While that does leave many tasks for which passionate persistence and practice might pay off (and many for which it will not), this utility always exists in the context of other people doing likewise. For that reason, beyond a certain ceiling of effort, more is not much of a guarantee of success. You can think of grit as – in many cases – something of a prerequisite for success rather than a great determinant of it. Finally, all of that needs to be weighed against the other things you could be doing with your time. Time spent being gritty about sports is time not spent being gritty about academics, which is time not spent being gritty about music, and so on.
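That summary can be compressed into a back-of-the-envelope expected-value check: being gritty about a pursuit only makes sense when the probability of success times the reward exceeds what the invested time is worth elsewhere. A toy comparison, with every number invented purely for illustration:

```python
def expected_payoff(p_success: float, reward: float, cost: float) -> float:
    """Expected net payoff of a pursuit: the chance of success times its
    reward, minus what the invested time would be worth elsewhere.
    Units and values are invented, purely illustrative."""
    return p_success * reward - cost

# Hypothetical pursuits: (probability of success, reward, opportunity cost)
pursuits = {
    "famous rockstar": (0.0001, 10_000, 100),  # huge reward, tiny odds
    "adjunct professor": (0.6, 80, 100),       # decent odds, reward < cost
    "skilled trade": (0.7, 300, 100),          # modest reward, good odds
}
for name, (p, reward, cost) in pursuits.items():
    print(f"{name}: {expected_payoff(p, reward, cost):+.1f}")
```

On these made-up numbers, only the modest-but-likely pursuit comes out positive; the valuable-but-unlikely one and the likely-but-low-value one both come out negative, mirroring the two failure modes described in the academia and rockstar examples.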

If you want to reach your potential within a domain, there’s really no other option. You’ll need to invest lots of time and effort. Figuring out where that effort should go is the tricky part.

References: Duckworth, A., Peterson, C., Matthews, M., & Kelly, D. (2007). Grit: Perseverance and passion for long-term goals. Journal of Personality & Social Psychology, 92, 1087-1101.