Practice Makes Better, But Not Necessarily Much Better

“‘But I’m not good at anything!’ Well, I have good news — throw enough hours of repetition at it and you can get sort of good at anything… It took 13 years for me to get good enough to make the New York Times best-seller list. It took me probably 20,000 hours of practice to sand the edges off my sucking.” – David Wong

That quote is from one of my favorite short pieces of writing, entitled “6 Harsh Truths That Will Make You a Better Person”, which you can find linked above. The gist of the article is simple: the world (or, more precisely, the people in the world) only cares about the valuable things you provide them; what is on the inside, so to speak, only matters to the extent that it makes you do useful things for others. This captures nicely some of the logic of evolutionary theory – a point that many people seem not to appreciate – namely that evolution cannot “see” what you feel; it can only “see” what organisms do (“seeing”, in this sense, referring to selecting for variants that do reproductively useful things). No matter how happy you are in life, if you aren’t reproducing, whatever genes contributed to that happiness will not see the next generation. Given that your competence at performing a task is often directly related to the value it could potentially provide for others, the natural question many people begin to consider is: how can I get better at doing things?

Step 1: Print fake diploma for the illusion of achievement

The typical answer to that question, as David mentions, is practice: throw enough hours of practice at something and people tend to get sort of good at it. The “sort of” in that last sentence is rather important, according to a recent meta-analysis by Macnamara et al. (2014). The paper examines the extent of that “sort of” across a variety of studies tracking different domains of skill one might practice, as well as across different styles of reporting that practice. The result that will probably come as little surprise to anyone is that – as intuition might suggest – the amount of time one spends practicing does, on the whole, show a positive correlation with performance; the result that probably will come as a surprise is that practice explains a relatively small percentage of the variance in eventual performance between people.

Before getting into the specific results of the paper, it’s worth noting that, as a theoretical matter, there are reasons we might expect practicing a task to correlate with eventual performance even if the practicing itself has little effect: people might stop practicing things they don’t think they’re very good at doing. Let’s say I wanted to get myself as close as possible to whatever the chess equivalent of a rockstar happens to be. After playing the game for around a month, I find that, despite my best efforts, I seem to be losing. A lot. While it is true that more practice playing chess might indeed improve my performance to some degree, I might rightly conclude that investing the required time really won’t end up being worth the payoff. Spending 10,000 hours of practice to go from a 15% win rate to a 25% win rate won’t net me any chess groupies. If my time spent practicing chess is, all things considered, a bad investment, not investing any more time in it than I already had would be a rather useful thing to do, even if more practice might lead to some gains. The idea that one ought to persist at a task despite the improbable nature of a positive outcome (“if at first you don’t succeed, try, try again”) is as optimistic as it is wasteful. That’s not a call to give up on doing things altogether, of course; just a recognition that my time might be more profitably invested in domains with better payoffs. Then again, I majored in psychology instead of computer science or finance, so maybe I’m not the best person to be telling anyone else about profitable payoffs…

In any case, turning to the meat of the paper, the authors began by locating around 9,300 articles that might have been relevant to their analysis. As it turns out, only 88 of them met the relevant inclusion criteria: (1) an estimate of the number of hours spent practicing, (2) a measure of performance level, (3) an effect size for the relationship between those first two things, (4) being written in English, and (5) being conducted on humans. These 88 studies contained 111 samples, 157 effect sizes, and approximately 11,000 participants. Of those 157 correlations, 147 were positive: in the overwhelming majority of the papers, more hours of practice went along with better performance. The average correlation between hours of practice and performance was 0.35, which means that, overall, deliberate practice explained around 12% of the variance in performance. Throw enough hours of repetition at something and you can get sort of better at it. Well, slightly better at it, anyway…sometimes…
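For anyone curious where that 12% figure comes from, it is just the standard conversion of a correlation into variance explained by squaring it:

$$ \text{variance explained} = r^{2} = (0.35)^{2} \approx 0.12 $$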

At least it only took a few decades of practice to realize that mediocrity

The average correlation doesn’t give a full sense of the picture, as averages tend not to. Macnamara et al. (2014) first broke the analysis down by domain, as practicing certain tasks might yield greater improvement than others. The largest gains were seen in the realm of games, where hours of practice could explain around a fourth of the variance in performance. From there, the percentages decreased to 21% in music, 18% in sports, 4% in education, and less than 1% in professions. Further, as one might also expect, practice showed the greatest effect when the tasks were classified as highly predictable (24% of the variance explained), followed by moderately predictable (12%) and poorly predictable (4%) tasks. If you don’t know what to expect, it’s awfully difficult to know what or how to practice to achieve a good outcome. Then again, even if you do know what to expect, it still seems hard to achieve those outcomes.

Somewhat troubling, however, was that the type of reporting about practice seemed to have a sizable effect as well: reports that relied on retrospective interviews (i.e., “how often would you say you have practiced over the last X weeks/months/years?”) tended to show larger effects, with around 20% of the variance explained. When the method was a retrospective questionnaire, rather than an interview, this dropped to 12%. For the studies that actually involved keeping daily logs of practice, the percentage dropped precipitously to a mere 5%. So it seems at least plausible that people might over-report how much time they spend practicing, especially in a face-to-face context. Further still, the extent of the relationship between practice and product depended heavily on the way performance was measured. For the studies simply using group membership as the measure of skill, around 26% of the variance was explained. This fell to 14% when laboratory studies alone were considered, and fell further when expert ratings (9%) or standardized measures of performance (8%) were used.

Not only might people be overestimating how much time they spend practicing a skill, then, but the increase in ability attributable to that practice appears to shrink the more fine-grained or specific the analysis gets. It’s worth mentioning that this analysis cannot answer the question of how much improvement in performance is attributable to practice in some kind of absolute sense; it just deals with how much of the existing differences between people’s abilities might be attributable to differences in practice. To make that point clear, imagine a population of people who were never allowed to practice basketball at all, but were asked to play the game anyway. Some people will likely be better than others owing to a variety of factors (like height, speed, fine motor control, etc.), but none of that variance would be attributable to practice. It doesn’t mean that people wouldn’t get better if they were allowed to practice, of course; just that none of the current variation could be chalked up to it.
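If that thought experiment feels abstract, here is a minimal sketch of it in code (the factor weights, sample size, and variable names are all invented for illustration, not taken from the paper): every simulated player has logged zero hours of practice, so practice has no variance and, by definition, cannot account for any of the differences in performance that nonetheless exist.

```python
# A toy version of the basketball thought experiment: nobody practices,
# yet performance still varies, so none of that variance can be "explained"
# by practice. All numbers here are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_players = 1000

height = rng.normal(0, 1, n_players)         # standardized height
motor_control = rng.normal(0, 1, n_players)  # standardized fine motor control
everything_else = rng.normal(0, 1, n_players)
practice_hours = np.zeros(n_players)         # nobody has ever practiced

performance = 0.5 * height + 0.5 * motor_control + everything_else

print("variance in performance:", round(float(performance.var()), 2))  # clearly > 0
print("variance in practice hours:", float(practice_hours.var()))      # exactly 0.0

# With zero variance in practice, differences in practice can't account for
# any of the differences in performance; the r-squared here is effectively 0.
# That says nothing about whether practicing *would* help, only that it can't
# explain the variation we currently see.
```

Giving the simulated players different amounts of practice (and letting practice actually feed into performance) would, of course, move that figure above zero; the meta-analysis is asking how far above zero it sits in the real world.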

And life has a habit of not being equitable

As per the initial quote, this paper suggests that deliberate practice, at least past a certain point, might have more to do with sanding the harsh edges off one’s ability than with actually carving it out. The extent of that sanding likely depends on a lot of things: interests, existing ability, working memory, general cognitive functioning, the kind of skill in question, and so on. In short, it’s probably not simply a degree of practice that separates a non-musician from Mozart. What extensive practice seems to help with is pushing the good towards the great. As nice as it sounds to tell people that they can achieve anything they put their mind to, nice does not equal true. That said, if you have a passion for something and just wish to get better at it (and the task at hand lends itself to improvement via practice), the ability to improve performance by a few percentage points is perfectly respectable. Being slightly better at something can, on the margins, mean the difference between winning and losing (in whatever form that takes); it’s just that all the optimism and training montages in the world probably won’t take you from the middle to the top.

References: Macnamara, B., Hambrick, D., & Oswald, F. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science. DOI: 10.1177/0956797614535810
