More About Dunning-Kruger

Several years back I wrote a post about the Dunning-Kruger effect. At the time I was still getting my metaphorical sea legs for writing and, as a result, I don’t think the post turned out as well as it could have. In the interests of holding myself to a higher standard, today I decided to revisit the topic, both to improve upon the original post and to generate a future reference for me (and hopefully you) when discussing it with others. This is something of a time-saver for me because people talk about the effect frequently despite, ironically, not really understanding it all that deeply.

First things first, what is the Dunning-Kruger effect? As you’ll find summarized just about everywhere, it refers to the idea that people who are below-average performers in some domains – like logical reasoning or humor – will tend to judge their performance as being above average. In other words, people are inaccurate at judging how well their skills stack up to their peers or, in some cases, to some objective standard. Moreover, this effect gets larger the more unskilled one happens to be. Not only are the worst performers worse at the task than others, but they’re also worse at understanding they’re bad at the task. This effect was said to obtain because people need to know what good performance is before they can accurately assess their own. So, because below-average performers don’t understand how to perform a task correctly, they also lack the skills to judge their performance accurately, relative to others.

Now available at Ben & Jerry’s: Two Scoops of Failure

As mentioned in my initial post (and by Kruger & Dunning themselves), this type of effect shouldn’t extend to domains where production and judging skills can be uncoupled. Just because you can’t hit a note to save your life on karaoke night, that doesn’t mean you will be unable to figure out which other singers are bad. This effect should also be primarily limited to domains in which the feedback you receive isn’t objective or standards for performance aren’t clear. If you’re asked to re-assemble a car engine, for instance, unskilled people will quickly realize they cannot do this unassisted. That said, to highlight the reason why the original explanation for this finding doesn’t quite work – not even for the domains that were studied in the original paper – I wanted to examine a rather important graph of the effect from Kruger & Dunning (1999) with respect to their humor study:

My crudely-added red arrows demonstrate the issue. On the left-hand side, we see what people refer to as the Dunning-Kruger effect: those who were the worst performers in the humor realm were also the most inaccurate in judging their own performance, compared to others. They were unskilled and unaware of it. However, the right-hand side betrays the real issue that caught my eye: the best performers were also inaccurate. The pattern you should expect, according to the original explanation, is that the higher one’s performance, the more accurately they estimate their relative standing, but what we see is that the best performers aren’t quite as accurate as those who are only modestly above average. At this point, some of you might be thinking that the point I’m raising is basically a non-issue because the best performers were still more accurate than the worst performers, and the right-hand inaccuracy I’m highlighting isn’t appreciable. Let me try to persuade you otherwise.

Assume for a moment that people were just guessing as to how they performed, relative to others. Because having a good sense of humor is a socially-desirable skill, people all tend to rate themselves “modestly above-average” in the domain to try and persuade others they actually are funny (and because, in that moment, there are no consequences to being wrong). Despite these just being guesses, those who actually are modestly above-average will appear to be more accurate in their self-assessment than those who are in the bottom half of the population; that accuracy just doesn’t have anything to do with their true level of insight into their abilities (referred to as their meta-cognitive skills). Likewise, those who are more than modestly above average (i.e. are underestimating their skills) will be less accurate as well; there will just be fewer of them than those who overestimated their abilities.
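To make that scenario concrete, here’s a minimal simulation sketch – my own toy illustration, not anything from the original paper – in which actual percentiles are spread evenly across the population while everyone, regardless of skill, guesses they sit modestly above average:

    import random

    # Toy model: actual percentile is uniform across the population, but
    # everyone guesses they're "modestly above average" (~65th percentile),
    # with some noise and zero genuine insight into their own skill.
    random.seed(1)
    people = [(random.uniform(0, 100),                  # actual percentile
               min(100, max(0, random.gauss(65, 10))))  # guessed percentile
              for _ in range(10_000)]

    # Average guess vs. actual percentile, broken down by performance quartile
    quartiles = [[], [], [], []]
    for actual, guess in people:
        quartiles[min(3, int(actual // 25))].append((actual, guess))

    for i, q in enumerate(quartiles):
        mean_actual = sum(a for a, _ in q) / len(q)
        mean_guess = sum(g for _, g in q) / len(q)
        print(f"Quartile {i + 1}: actual ~{mean_actual:.0f}, "
              f"guess ~{mean_guess:.0f}, error ~{mean_guess - mean_actual:+.0f}")

Run that and the bottom quartile looks wildly overconfident, the third quartile looks almost perfectly calibrated, and the top quartile modestly underestimates itself – qualitatively the same pattern as the original graph – even though nobody in the simulation has any insight whatsoever. The apparent accuracy of the modestly-above-average group falls out of the shared anchor, not meta-cognition.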

Considering the findings of Kruger & Dunning (1999) on the whole, the scenario I outlined above doesn’t reflect reality perfectly. There was a positive correlation between people’s performance and their rating of their relative standing (r = .39), but, for the most part, people’s judgments of their own ability (the black line) appear relatively uniform. Then again, if you consider their results in studies two and three of that same paper (logical reasoning and grammar), the correlations between performance and judgments of relative performance drop, ranging from a low of r = .05 up to a peak of r = .19, only the latter of which was statistically significant. People’s judgments of their relative performance were almost flat across several such tasks. To the extent these meta-cognitive judgments use actual performance as an input for determining relative standings, it’s clearly not the major factor for either low or high performers.

They all shop at the same cognitive store

Indeed, actual performance shouldn’t be expected to be the primary input for these meta-cognitive systems (the ones that generate relative judgments of performance) for two reasons. The first of these is the original performance explanation posited by Kruger & Dunning (1999): if the system generating the performance doesn’t have access to the “correct” answer, then it would seem particularly strange that another system – the meta-cognitive one – would have access to the correct answer, but only use it to judge performance, rather than to help generate it.

To put that in a quick memory example, say you were experiencing a tip-of-the-tongue state, where you are sure you know the right answer to a question, but you can’t quite recall it. In this instance, we have a long-term memory system generating performance (trying to recall an answer) and a meta-cognitive system generating confidence judgments (the tip-of-the-tongue state). If the meta-cognitive system had access to the correct answer, it should just share it with the long-term memory system, rather than using that answer to tell the other system to keep looking for it. The latter path is clearly inefficient and redundant. Instead, the meta-cognitive system should use cues other than direct access to the information in generating its judgments.

The second reason actual performance (relative to others) wouldn’t be an input for these meta-cognitive systems is that people don’t have reliable and accurate access to population-level data. If you’re asking people how funny they are relative to everyone else, they might have some sense for it (how funny are you, relative to some particular people you know), but they certainly don’t have access to how funny everyone is because they don’t know everyone; they don’t even know most people. If you don’t have the relevant information, then it should go without saying that you cannot use it to help inform your responses.

Better start meeting more people to do better in the next experiment

So if these meta-cognitive systems are using inputs other than accurate information in generating their judgments about how we stack up to others, what would those inputs be? One possible input would be task difficulty, not in the sense of how hard the task objectively is for a person to complete, but rather in terms of how difficult a task feels. This means that factors like how quickly an answer can be called to mind likely play a role in these judgments, even if the answer itself is wrong. If judging the humor value of a joke feels easy, people might be inclined to say they are above average in that domain, even if they aren’t.

This yields an important prediction: if you provide people with tasks that feel difficult, you should see them largely begin to guess they are below-average in that domain. If everyone is effectively guessing that they are below average (regardless of their actual performance), this means that those who perform the best will be the most inaccurate in judging their relative ability. In tasks that feel easy, people might be unskilled and unaware; for those that feel hard, people might be skilled but still unaware.

This is precisely what Burson, Larrick, & Klayman (2006) tested, across three studies. While I won’t go into the specifics of all their studies (this is already getting long), I will recreate a graph from one of them that captures their overall pattern of results pretty well:

As we can see, when the domains being tested became harder, the worst performers were now more accurate in estimating their percentile rank than the best ones. On tasks of moderate difficulty, the best and worst performers were equally calibrated. However, it doesn’t seem that this accuracy is primarily due to any real insight into their performance; it just so happened that their guesses landed closer to the truth. When people think, “this task is hard,” they all seem to estimate their performance as being modestly below average; when the task feels easy instead, they all seem to estimate their performance as being modestly above average. The extent to which that matches reality is largely a matter of chance rather than true insight.
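Flipping the anchor in the earlier toy sketch reproduces this pattern (again, my own illustration, not Burson et al.’s data): when a task feels hard and everyone guesses modestly below average, the miscalibration migrates to the top performers.

    import random

    # Same toy model as before, but for a task that *feels* hard: everyone
    # anchors their guess modestly below average (~35th percentile).
    random.seed(1)
    people = [(random.uniform(0, 100),
               min(100, max(0, random.gauss(35, 10))))
              for _ in range(10_000)]

    quartiles = [[], [], [], []]
    for actual, guess in people:
        quartiles[min(3, int(actual // 25))].append(guess - actual)

    for i, errors in enumerate(quartiles):
        print(f"Quartile {i + 1}: mean miscalibration "
              f"~{sum(errors) / len(errors):+.0f} percentile points")

Now the top quartile is off by roughly 50 percentile points while the bottom quartile looks comparatively well-calibrated: skilled, but still unaware.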

Worth noting is that when you ask people to make different kinds of judgments, there is (or at least can be) a modest average advantage for top performers, relative to bottom ones. Specifically, when you ask people to judge their absolute performance (i.e., how many of these questions did you get right?) and compare that to their actual performance, the best performers sometimes had a better grasp on that estimate than the worst ones, but the size of that advantage varied depending on the nature of the task and wasn’t entirely consistent. Averaged across the studies reported by Burson et al. (2006), top-half performers displayed a better correlation between their perceived and actual absolute performance (r = .45), relative to bottom performers (r = .05). The corresponding correlations between perceived and actual relative percentiles were in the same direction, but lower (rs = .23 and .03, respectively). While there might be some truth to the idea that the best performers are more sensitive to their relative rank, the bulk of the miscalibration seems to be driven by other factors.

Driving still feels easy, so I’m still above-average at it

These judgments of one’s relative standing compared to others appear rather difficult for people to get right. As they should be, really; for the most part we lack access to the relevant information/feedback and there are possible social-desirability issues to contend with, coupled with a lack of consequences for being wrong. This is basically a perfect storm for inaccuracy. Perhaps worth noting is that the correlation between one’s estimated relative performance and their actual performance was fairly strong for one domain in particular in Burson et al. (2006): knowledge of pop music trivia (the graph of which can be seen here). As pop music is the kind of thing people have more experience learning and talking about with others, it is a good candidate for a case in which these judgments might be more accurate, because people do have more access to the relevant information.

The important point to take away from this research is that people don’t appear to be particularly good at judging their abilities relative to others, and this obtains regardless of whether the judges are themselves skilled or unskilled. At least for most of the contexts studied, anyway; it’s perfectly plausible that people – again, skilled and unskilled – will be better able to judge their relative (and absolute) performance when they have experience with the domain in question and have received meaningful feedback on their performance. This is why people sometimes drop out of a major or job after receiving consistent negative feedback, opting to believe they aren’t cut out for it rather than persisting in the belief that they are actually above average in that context. You will likely see the least miscalibration in domains where people’s judgments of their ability have to make contact with reality and there are consequences for being wrong.

References: Burson, K., Larrick, R., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it: How perceptions of difficulty drive miscalibration in relative comparisons. Journal of Personality & Social Psychology, 90, 60-77.

Kruger, J. & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality & Social Psychology, 77, 1121-1134.

Why Do We Roast The Ones We Love?

One very interesting behavior that humans tend to engage in is murder. While we’re far from the only species that does this (as there are some very real advantages to killing members of your species – even kin – at times), it does tend to garner quite a bit of attention, and understandably so. One very interesting piece of information about this behavior concerns motives: why people kill. If you were to hazard a guess as to some of the most common motives for murder, what would you suggest? Infidelity is a good one, as is murder resulting from other deliberate crimes, like when a robbery is resisted or witnesses are killed to reduce the probability of detection. Another major factor that many might not guess is minor slights or disagreements, such as one person stepping on another person’s foot by accident, followed by an insult (“watch where you’re going, asshole!”), which is responded to with an additional insult, and things kind of get out of hand until someone is dead (Daly & Wilson, 1988). Understanding why seemingly minor slights get blown so far out of proportion is a worthwhile matter in its own right. The short version of the answer is that one’s social status (especially if you’re a male) can be determined, in large part, by whether other people know they can push you around. If I know you will tolerate negative behavior without fighting back, I might be encouraged to take advantage of you in more extreme ways more often. If others see you tolerating insults, they too may exploit you, knowing you won’t fight back. On the other hand, if I know you will respond to even slight threats with violence, I have a good reason to avoid inflicting costs on you. The more dangerous you are, the more people will avoid harming you.

“Anyone else have something to say about my shirt?! Didn’t think so…”

This is an important foundation for understanding why another facet of human behavior is strange (and, accordingly, interesting): friends frequently insult each other in a manner intended to be cordial. This behavior is exemplified well by the popular Comedy Central Roasts, where a number of comedians will get together to publicly make fun of each other and their guest of honor. If memory serves, the (unofficial?) motto of these events is, “We only roast the ones we love,” which is intended to capture the idea that these insults are not intended to burn bridges or truly cause harm. They are insults born of affection, playful in nature. This is an important distinction because, as the murder statistics help demonstrate, strangers often do not tolerate these kinds of insults. If I were to go up to someone I didn’t know well (or knew well as an enemy) and start insulting their drug habits, dead loved ones, or even something as simple as their choice of dress, I could reasonably expect anything from hurt feelings to murder. This raises an interesting series of mysteries surrounding the matter of why the stranger might want to kill me while my friends will laugh, as well as when my friends might be inclined to kill me, too.

Insults can be spoken in two primary manners: seriously and in jest. In the former case, harm is intended, while in the latter it often isn’t. As many people can attest to, however, the line between serious and jesting insults is not always as clear as we’d like. Despite our best intentions, ill-phrased or poorly-timed jokes can do harm in much the same way that a serious insult can. This suggests that the nature of the insults is similar between the two contexts. As the function of a serious insult between strangers would seem to be to threaten or lower the insulted target’s status, this is likely the same function of an insult made in jest between friends, though the degree of intended threat is lower in those contexts. The closest analogy that comes to mind is the difference between a serious fight and a friendly tussle, where the combatants either are, or are not, trying to inflict serious harm on each other. Just like play fighting, however, things sometimes go too far and people do get hurt. I think joking insults between friends go much the same way.

This raises another worthwhile question: as friends usually have a vested interest in defending each other from outside threats and being helpful, why would they then risk threatening the well-being of their allies through such insults? It would be strange if they were all risk and no reward, so it would be up to us to explain what that reward is. There are a few explanations that come to mind, all of which focus on one crucial facet of friendships: they are dynamic. While friendships can be – and often are – stable over time, who you are friends with, as well as the degree of those friendships, changes over time. Given that friendships are important social resources that do shift, it’s important that people have reliable ways of assessing the strength of these relationships. If you are not assessing these relationships now and again, you might come to believe that your social ties are stronger than they actually are, which can be a problem when you find yourself in need of social support and realize that you don’t have it. Better to assess what kind of support you have before you actually need it so you can tailor your behavior more appropriately.

“You guys got my back, right?….Guys?….”

Insults between friends can help serve this relationship-monitoring function. As insults – even the joking kind – carry the potential to inflict costs on their target, the willingness of an individual to tolerate the insult – to endure those costs – can serve as a credible signal of friendship quality. After all, if I’m willing to endure the costs of being insulted by you without responding aggressively in turn, this likely means I value your friendship more than I dislike the costs being inflicted. Indeed, if these insults did not carry costs, they would not be reliable indications of friendship strength. Anyone could tolerate behavior that didn’t inflict costs to maintain a friendship, but not everyone will tolerate behaviors that do. This yields another prediction: the strength of a friendship can also be assessed by the degree of insult one is willing to tolerate. In other words, the more it takes to “go too far” when it comes to insults, the closer and stronger the friendship between two individuals. Conversely, if you were to make a joke about your friend that they become incredibly incensed over, this might result in your reevaluating the strength of that bond: if you thought the bond was stronger than it was, you might either take steps to remedy the cost you just inflicted and make the friendship stronger (if you value the person highly) or perhaps spend less time investing in the relationship, even to the point of walking away from it entirely (if you do not).

Another possible, related function of these insults could be to ensure that your friends don’t start to think too highly of themselves. As mentioned previously, friendships are dynamic things based, in part, on what each party can offer the other. If one friend begins to see major changes to their life in a positive direction, the other friend may no longer be able to offer the same value they did previously. To put that in a simple example, if two friends have long been poor, but one suddenly gets a new, high-paying job, the new status that job affords will allow that person to make friends they likely could not before. Because the job makes them more valuable to others, others will now be more inclined to befriend them. If the lower-status friend wishes to retain their friendship with the newly-employed one, they might use these insults to subtly undermine the confidence of their friend. It’s an indirect way of trying to ensure the high-status friend doesn’t begin to think he’s too good for his old friends.

Such a strategy could be risky, though. If the lower-status party can no longer offer the same value to the higher-status one, relative to their new options, that might also not be the time to test the willingness of the higher-status one to tolerate insults. At the same time, times of change are also precisely when the value of reassessing relationship strength can be at its highest. There’s less of a risk of a person abandoning a friendship when nothing has changed, relative to when it has. In either case, the assessment and management of social relationships is likely the key for understanding the tolerance of insults from friends and intolerance of them from strangers.

“Enjoy your new job, sellout. You used to be cool”

This analysis can speak to another interesting facet of insults as well: they’re directed towards the speaker at times, referred to as self-deprecating humor when done in jest (and just self-deprecation when not). It might seem strange that people would insult themselves, as doing so would act to directly threaten their own status. That people do so with some regularity suggests there might be some underlying logic to these self-directed insults as well. One possibility is that these insults do what was just discussed: signal that one doesn’t hold themselves in high esteem and, accordingly, that one isn’t “too good” to be your friend. This seems like a profitable place from which to understand self-deprecating jokes. When such insults directed towards the self are not made in jest, they likely carry additional implications, relative to the joking kind, such as that expectations should be set lower (e.g., “I’m really not able to do that”) or that one is in need of additional investment.

References: Daly, M. & Wilson, M. (1988). Homicide. New York: Aldine de Gruyter.

To Meaningfully Talk About Gender

Let’s say I was to tell you I am a human male. While this sentence is short and simple, the amount of information you could glean from it is a potential goldmine, assuming you are starting from a position of near total ignorance about me. First, it provides you with my species identification. In the most general sense, that lets you know what types of organisms in the world I am capable of potentially reproducing with (to produce reproductively-viable offspring in turn). In addition to that rather concrete fact, you also learn about my likely preferences. Just as humans share a great deal of genes in common (which is why we can reproduce with one another), we also share a large number of general preferences and traits in common (as these are determined heavily by our genes). For instance, you likely learn that I enjoy the taste of fruit, that I make my way around the world on two feet, and that hair continuously grows from the top of my head but much more sparingly on the rest of my body, among many other things. While these probable traits might not hold true for me in particular – perhaps I am totally hairless/covered in hair, have no legs, and find fruit vile – they do hold for humans more generally, so you can make some fairly-educated guesses as to what I’m like in many regards even if you know nothing else about me as a person. It’s not a perfect system, but you’ll do better on average with this information than you would if you didn’t have it. To make the point crystal clear, imagine trying to figure out what kind of things I liked if you didn’t even know my species. 

Could be delicious or toxic, depending on my species. Choose carefully.

When you learn that I am a male, you learn something concrete about the sex chromosomes in my body: specifically, that I have an XY configuration and tend to produce particular types of gametes. In addition to that concrete fact, you also learn about my likely traits and preferences. Just as humans share a lot of traits in common, males tend to share more traits in common with each other than they do with females (and vice versa). For instance, you likely learn that the distribution of muscle mass in my upper body is more substantial than it is in females, that I have a general willingness to relax my standards when it comes to casual sex, that I have a penis, and that I’m statistically more likely to murder you than a female is (I’m also more likely to be murdered myself, for the record). Again, while these might not all hold true for me specifically, if you knew nothing else about me, you could still make some educated guesses as to what I enjoy and my probable behavior because of my group membership.

One general point I hope these examples illuminate is that, to talk meaningfully about a topic, we need to have a clear sense for our terms. Once we know what the terms “human” and “male” mean, we can begin to learn a lot about what membership in those groups entails. We can learn quite a bit about deviations from those general commonalities as well. For instance, some people might have an XY set of chromosomes and no penis. This would pose a biological mystery to us, while someone having an XX set and no penis would pose much less of one. The ability to consistently apply a definition – even an arbitrary one – is the first step in being able to say something useful about a topic. Without clear boundary conditions on what we’re talking about, you can end up with people talking about entirely different concepts using the same term. This yields unproductive discussions and is something to be avoided if you’re looking to cut down on wasted time.

Speaking of unproductive discussions, I’ve seen a lot of metaphorical ink spilled over the concept of gender; a term that is supposed to be distinct from sex, yet is highly related to it. According to many of the sources one might consult, sex is supposed to refer to biological features (as above), while gender is supposed to refer, “…to either social roles based on the sex of the person (gender role) or personal identification of one’s own gender based on an internal awareness (gender identity).” I wanted to discuss the latter portion of that gender definition today: the one referring to people’s feelings about their gender. Specifically, I’ve been getting the growing sense that this definition is not particularly useful. In essence, I’m not sure it really refers to anything in particular and, accordingly, doesn’t help advance our understanding of much in the world. To understand why, let’s take a quick trip through some interesting current events. 

Some very colorful, current events…

In this recent controversy, a woman called Rachel Dolezal claimed her racial identity was black. The one complicating factor in her story is that she was born to white parents. Again, there’s been a lot of metaphorical ink spilled over the issue (including the recent mudslinging directed at Rebecca Tuvel, who published a paper on the matter), with most of the discussions seemingly unproductive and, from what I can gather, mean-spirited. What struck me when I was reading about the issue is how little of those discussions explicitly focused on what should have been the most important, first point: how are we defining our terms when it comes to race? Those who opposed Rachel’s claims to be black appear to fall back on some kind of implicit hereditary definition: that one or more of one’s parents need to be black in order to consider oneself a member of that group. That’s not a perfect definition, as we then need to determine what makes a parent black, but it’s a start. Like the definition of sex I gave above, this concept of race references some specific feature of the world that determines one’s racial identity, and I imagine it makes intuitive sense to most people. Crucially, this definition is immune to feelings. It doesn’t matter if one is happy, sad, indifferent, or anything else with respect to their ethnic heritage; it simply is what it is regardless of those feelings. In this line of thinking, Rachel is white regardless of how she feels about it, how she wears her hair, dresses, acts, or even whether we want to accept her identification as black and treat her accordingly (whatever that is supposed to entail). What she – or we – feel about her racial identity is a different matter than her heritage.

On the other side of the issue, there are people (notably Rachel herself) who think that what matters is how you feel when it comes to determining identity. If you feel black (i.e., your internal awareness tells you that you’re black), then you are black, regardless of biological factors or external appearances. This idea runs into some hard definitional issues, as above: what does it mean to feel black, and how is it distinguished from other ethnic feelings? In other words, when you tell me that you feel black, what am I supposed to learn about you? Currently, that’s a big blank in my mind. This definitional issue is doubly troubling in this case, however, because if one wants to say they are black because they feel black, then it seems one first needs to identify a preexisting group of black people to have any sense at all for what those group members feel like. However, if you can already identify who is and is not black from some other criteria, then it seems the feeling definition is out of place as you’d already have another definition for your term. In that case, one could just say they are white but feel like they’re black (again, whatever “feeling black” is supposed to mean). I suppose they could also say they are white and feel unusual for that group, too, without needing to claim they are a member of a different ethnic group.

The same problems, I feel, apply to the gender issue despite the differences between gender and race. Beginning with the feeling definition, the parallels are clear. If someone told me they feel like a woman, a few things have to be made clear for that statement to mean anything. First, I’d need to know what being a woman feels like. In order to know what being a woman feels like, I’d need to already have identified a group of women so the information could be gathered. This means I’d need to know who was a woman and who was not in advance of learning about their specific feelings. However, if I can do that – if I can already determine who is and is not a woman – then it seems I don’t need to identify them on the basis of their feelings; I would be doing so with some other criteria. Presumably, the most common criterion leveraged in such a situation would be sex: you’d go out and find a bunch of females and ask them about what it was like to be a woman. If those responses are to be meaningful, though, you need to consider “female” to equate to “woman” which, according to the definitions I listed above, it does not. This leaves us in a bit of a catch-22: we need to identify women by how they feel, but we can’t say how they feel until we identify them. Tricky business indeed (even forgoing the matter of claims that there are other genders).

Just keep piling the issues on top of each other and hope that sorts it out

On the other hand, let’s say gender is defined by some objective criteria and is distinct from sex. So, someone might be a male because of their genetic makeup but fall under the category of “woman” because, say, their psychology has developed in a female-typical pattern for enough key traits. Perhaps enough of their metaphorical developmental dials have been turned towards the female portion. Now that’s just a hypothetical example, but it should demonstrate the following point well enough: regardless of whether the male in question wants to be identified as a female or not, it wouldn’t matter in terms of this definition. It might matter a whole bunch if you want to be polite and nice to them, but not for our definition. Once we had a sense for what dials – or how many of them – needed to be flipped to “female” for a male to be considered a woman, and had a way of measuring that, one’s internal awareness would seem to be beside the point.

While this definition helps us talk more meaningfully about gender, at least in principle, it also seems like the gender term is a little unnecessary. If we’re just using “man” as a synonym for “male” and “woman” as one for “female”, then the entire sex/gender distinction kind of falls apart, which defeats the whole purpose. You wouldn’t feel like a man; you’d feel like a male (whatever that feels like, and I say that as a male myself). Rather than calling our female-typical male a woman, we could also call him an atypical man.

The second issue nagging at me about this idea is that almost all traits do not run on a spectrum from male to female. Let’s consider traits with psychological sex differences, like depression or aggression. Since females are more likely to experience depression than males, we could consider experiencing depression as something that pushes one towards the “woman” end of the gender spectrum. However, when one feels depressed, they don’t feel like a woman; they feel sad and hopeless. When someone feels aggressive, they don’t feel like a man; they feel angry and violent. The same kind of logic can be applied to most other traits as well, including components of personality, risk-seeking, and so on. These don’t run on a spectrum between male/masculine and female/feminine, just as it would make no sense to say that one has a feminine height.

If this all still sounds very confusing to you, then you’re on the same page as me. As far as I’ve seen, it is incredibly difficult for people to verbalize anything like a formal definition or set of standards that tells us who falls into one category or the other when it comes to gender. In the absence of such a standard, it seems profitable to just discard the terms and find something better – something more precise – to use instead.

Unusual Names In Learning Research

Learning new skills and bodies of knowledge takes time, repetition, and sustained effort. It’s a rare thing indeed for people to learn even simple skills or bodies of knowledge fluently with only a single exposure, even if they’re properly motivated. Given the importance of learning for success in life, a healthy body of literature in psychology examines people’s ability to learn and remember information. This literature extends both to how we learn successfully and the contexts in which we fail. Good research in this realm will often leverage something in the way of adaptive function for understanding why we learn what we do. It is unfortunate that this theoretical foundation appears to be lacking in much of the research in psychology in general, with learning and memory research being no exception. In the course I taught on the topic last semester, for instance, I’m not entirely sure the word “relevance” appeared once in the textbook I was using to help the reader understand our memory mechanisms. There were, however, a number of parts of that book which caught my attention, though not for the best reasons.

You have my attention, but no longer have a working car.

Recently, for instance, I came upon a reference to a phenomenon called the labor-in-vain effect through this textbook. In it, the effect was summarized as follows:

Here’s the basic methodology. Nelson and Leonesio (1988) asked participants to study words paired with nonsense syllables (e.g., monkey–DAX). Participants made judgments of learning in an initial stage. Then, when given a chance to study the items again, each participant could choose the amount of time to study for each item. Finally, in a cued recall test, participants were given the English word and asked to recall the nonsense syllable….Even though they spent most of their time studying the difficult items, they were still better at remembering the easy ones. For this reason, Nelson and Leonesio labeled the effect labor in vain because their experiment showed that participants were unable to compensate for the difficulty of those items

As I like to be thorough when preparing the materials for my course, I did what every self-respecting teacher should do (even though not all of them will): I went to track down and read the primary literature upon which this passage was based. Professors (or anyone who wants to talk about these findings) ought to go read the source material themselves for two reasons: first, because you want to be an expert in the material you’re teaching your students about (why else would they be listening to you?) and, second, because textbooks – really, secondary sources in general – have a bad habit of getting details wrong. What I found in this case was not only that the textbook mischaracterized the effect and failed to provide crucial details about the research, but also that the original study itself was a bit ambitious in its naming and assessment of the phenomenon. Let’s take those points in order.

First, to see why the textbook’s description wasn’t on point, let’s consider the research itself (Nelson & Leonesio, 1988). The general procedure in their experiments was as follows: participants (i.e., undergraduate students looking for extra credit) were given lists to study. In the first experiment these were trigrams (like BUG or DAX), in the second they were words paired with trigrams (like Monkey-DAX), and in the third they were tested on general-information questions they had failed to answer correctly (like, “what is the capital of Chile?”). During each experiment, the participants would be broken up into groups that either emphasized speed or accuracy in learning. Both groups were told they could study the target information at their own pace and that the goal was to remember as much of the information as possible, but the speed groups were told their study time would count against their eventual score. Following that study phase, participants were then given a recall task after a brief delay to see how successful their study time had been. 

As one might expect, the speed-emphasis groups studied the information for less time than the accuracy-emphasis groups. Crucially, the extra study time invested by the participants did not yield statistically significant gains in their ability to subsequently recall the information in 2 of the 3 experiments (in experiment three, the difference was significant). This was dubbed the labor-in-vain effect because participants were putting in extra labor for effectively little to no gain.

We can see from this summary that the textbook’s description of the labor-in-vain effect isn’t quite accurate. The labor-in-vain effect does not refer to the fact that participants were unable to make up the difference between the easy and hard items (which they actually did in one of the three studies); instead, it refers to the idea that the participants were not gaining anything at all from their extra study time. To quote the original paper:

We refer to this finding of substantial extra study time yielding little or no gain in recall as the labor-in-vain effect. Although we had anticipated that extra study time might yield diminishing (i.e., negatively accelerated) gains in recall, the present findings are quite extreme in showing not even a reliable gain in recall after more than twice as much extra study time.

This mischaracterization might seem like a minor error – one speaking only to the meticulousness of the textbook’s author – but that’s not the only problem with the book’s presentation of the information. Specifically, the textbook provided no sense of the exact methodological details, the associated data, or whether the interpretation of these findings was accurate. So let’s turn to those now.

If the labor will all be in vain, why bother laboring at all?

The general summary of the research I just provided is broadly true, but very important details are missing that help contextualize it. The first of these involves how the study phases of the experiments took place. Let’s just consider the first experiment, as the methods are broadly similar across the three. In the study phase, the participants had 27 trigrams to commit to memory. The participants were seated at a computer, and one of these trigrams would appear on the screen at a time. After the participants felt they had studied it enough, they would hit the enter key to advance to the next item, but they could not go back to previous items once they did. This meant there was no ability to restudy or practice-test oneself in advance of the formal test. To be frank, this method of study resembles no kind that I know humans to naturally engage in. Since the context of studying in the experiment is so strange, I would be hesitant to say that it tells us much about how learning occurs in the real world, but the problems get worse than that.

As I mentioned before, these are undergraduate participants trying to earn extra credit. With that mental picture of the samples in mind, we might come to expect that the participants are a little less than motivated to deliver a flawless performance. If they’re anything like the undergraduates I’ve known, they likely just want to get the experiment over and done with so they can go back to doing things they actually want to do. In terms of the interests of college students, learning nonsense syllables isn’t high on that list; in fact, I don’t think that task is high on anybody’s list. The practical information value of what they’re learning is nonexistent, and very little is riding on their success. It might come as no surprise, then, that the participants dedicated effectively no time to studying these items. Bear in mind, there were 27 of these trigrams to learn. In the speed group, the average number of seconds devoted to study was 1.9 per trigram. Two whole seconds of learning per bit of nonsense. In the accuracy group, this study time skyrocketed to a substantial…5.4 seconds.

An increase of 3.5 seconds per item does not strike me as anything I’d refer to as labor, even if the amount of study time was nominally over twice as long. A similar pattern emerged in the other two experiments. The speed/accuracy study times were 4.8 and 15.2 seconds in the second study, and 1.2 and 8.4 seconds in the third. Putting this together up to this point, we have (likely unmotivated, undergraduate) participants studying useless information in unnatural ways for very brief periods of time. Given that, why on Earth would anyone expect to find large differences in later recall performance?

Speaking of eventual performance, though, let’s finally consider how well each group performed during the recall task; how much of that laboring was being done in vain. In the first experiment, the speed group recalled 43% of the trigrams; the accuracy group got 49% correct. That extra study time of about 3.5 seconds per item yielded a 6% improvement in performance. The difference wasn’t statistically significant but, again, exactly how large of an improvement should have been expected, given the context? In the second study, these percentages were 49% and 57%, respectively (a gain of 8%); in the third, they were 75% and 83% (another 8% difference that actually was statistically significant, given the larger sample size of experiment 3). So, across three studies, we do not see evidence of people laboring in vain; not really. Instead, what we see is that very small amounts of extra time devoted to studying nonsense in unusual ways by people who want to be doing other things yield correspondingly small – but consistent – gains in recall performance. It’s not that this labor was in vain; it’s that not much labor was invested in the first place, so the gains were minimal.
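To pull the arithmetic from all three experiments into one place, here’s a quick summary (the study times and recall rates are the figures quoted above; the per-second “return on study” is just my own back-of-the-envelope calculation):

    # Study times (seconds/item) and recall rates from Nelson & Leonesio (1988),
    # as quoted above; the gain-per-extra-second column is my own rough math.
    experiments = [
        # (speed time, accuracy time, speed recall %, accuracy recall %)
        ("Trigrams (exp. 1)",     1.9,  5.4, 43, 49),
        ("Word-trigram (exp. 2)", 4.8, 15.2, 49, 57),
        ("Trivia (exp. 3)",       1.2,  8.4, 75, 83),
    ]

    for name, t_speed, t_accuracy, r_speed, r_accuracy in experiments:
        extra_time = t_accuracy - t_speed
        gain = r_accuracy - r_speed
        print(f"{name}: +{extra_time:.1f}s/item of study -> +{gain}% recall "
              f"(~{gain / extra_time:.1f}% per extra second)")

Tiny investments of extra study time, small but consistent returns – which is rather different from labor yielding nothing at all.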

If you want to make serious gains, you’ll need more than baby weight

On a theoretical level, it sure would be strange if people spent substantial extra time laboring in study to make effectively no gains. Why waste all that valuable time and energy doing something that has no probability of paying off? That’s not something anyone should posit a brain would do if they were using evolutionary theory to guide their thinking. It would be strange to truly observe a labor-in-vain effect in the biological sense of the word. However, given a fuller picture of the methods of the research and the data it uncovered, the name of that effect doesn’t seem particularly apt. The authors of the original paper seem to have tried to make these results sound more exciting than they are (through their naming of the effect and the use of phrases like “…substantial extra study time” and differences in study time that are “highly significant,” as well as an exclamation point here and there). That the primary literature is a little ambitious is one thing, but we also saw that the secondary summary of the research by my textbook was less than thorough or accurate. Anyone reading the textbook would not leave with a good sense for what this research found. It’s not hard to imagine how this example could extend further, to a student summarizing the summary they read to someone else, at which point all the information to be gained from the original study is effectively gone.

The key point to take away from this is that textbooks (indeed, secondhand sources in general) should certainly not be used as an end-point for research; they should be used as a tentative beginning to help track down primary literature. However, that primary literature is not always to be taken at face value. Even assuming the original study was well-designed and interpreted properly, it would still only represent a single island of information in the academic ocean. Obtaining true and useful information from that ocean takes time and effort which, unfortunately, you often cannot trust others to do on your behalf. To truly understand the literature, you need to dive into it yourself.

References: Nelson, T. & Leonesio, R. (1988). Allocation of self-paced study time and the “Labor-in-Vain Effect”. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 676-686.