Imagine, for a moment, that someone you know tells you that they hate tomatoes, and that people who like tomatoes are seriously intellectually misguided. Later, you find that same person enjoying a BLT, going on about how much they love the tasty red fruit between the lettuce and the bread. You might be rightly confused, and, if you happen to like tomatoes, perhaps a bit put off by their attitude about the whole thing.
On a related note, there are many critics of evolutionary psychology out there. Well, I say critics of evolutionary psychology, but what they’re actually critical of has little or nothing to do with anything found here. They just don’t know what they’re talking about. In fact, some of the ostensible critics of evolutionary psychology actually agree with many (or all) of the theoretical assumptions of the field without even knowing it. Not only are these critics not experts in the topic they’re talking about (the politest possible way of putting it), but they are completely unaware of that fact. They display the same quality of self-assessment found in the recently iconic “I’m sexy and I know it”.
Not so recently, Kruger and Dunning (1999) examined self-generated judgments, asking people to rate their performance on various tasks of logic, humor, and grammar, and then compared those judgments to people’s actual performance. At this point, I doubt that I’m spoiling anything by telling you that people aren’t very good at delivering accurate self-assessments in certain contexts, humor, logic, and grammar being three of them.
The first of the four studies in this paper concerned humor. Thirty ostensible jokes were rated by a panel of professional comedians on a scale of 1 to 11, yielding an average humor score for each item. These 30 items were then rated on the same scale by participants; those whose assessments of the jokes were close to the comedians’ were said to have performed well, and the further the scores deviated, the worse their performance was rated as being. Subjects then rated their performance in assessing the quality of jokes relative to their peers. On average, people in the sample rated their performance as falling in the 66th percentile: moderately above average. Even those who scored in the lowest quarter of the distribution when it came to assessment – the worst performers – rated themselves, on average, to be in the 58th percentile: mildly above average in their ability.
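To make the scoring scheme concrete, here is a small sketch in Python. The function names, numbers, and the three-joke sample are my own illustrations, not taken from the paper; the idea is simply that a participant’s “performance” is how closely their ratings track the comedians’ averages, and their rank is then computed relative to the rest of the sample.

```python
# Hypothetical sketch of the humor-study scoring (illustrative names and
# numbers, not the paper's actual data or procedure).

def agreement_score(participant_ratings, expert_means):
    """Negative mean absolute deviation from the comedians' average ratings.
    Higher (closer to 0) means better agreement with the experts."""
    assert len(participant_ratings) == len(expert_means)
    total = sum(abs(p - e) for p, e in zip(participant_ratings, expert_means))
    return -total / len(expert_means)

def percentile_rank(score, all_scores):
    """Percent of the sample scoring strictly below this participant."""
    return 100.0 * sum(s < score for s in all_scores) / len(all_scores)

# Toy example: three jokes, with the experts' averages on the 1-11 scale.
experts = [8.0, 3.0, 6.0]
sample = {
    "close":  [7.0, 3.5, 6.0],   # tracks the experts well
    "middle": [5.0, 5.0, 5.0],   # hedges everything toward the midpoint
    "far":    [1.0, 10.0, 1.0],  # disagrees sharply
}
scores = {name: agreement_score(r, experts) for name, r in sample.items()}
ranks = {name: percentile_rank(s, list(scores.values()))
         for name, s in scores.items()}
```

The interesting comparison in the study is then between each participant’s actual rank and where they place themselves, which is where the gap opens up.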
The second study looked at performance on tests of logical reasoning. The logical domain has an advantage over humor in that there are objective answers to the questions being examined, rather than subjective ones. Forty-five subjects completed a 20-item test of logical reasoning and were then asked to estimate both their logical reasoning abilities, relative to their peers, and how many of the 20 questions they got right. Again, on average, the subjects placed their performance in the 66th percentile. In this case, perceived performance on the task was not significantly correlated with actual performance. Also again, the worst performers – the ones in the bottom fourth of the sample – rated their logical reasoning ability in the 68th percentile and their performance in the 62nd. Whereas they thought they had answered 14.2 questions correctly, they had in fact only answered 9.5 right, on average. Lest I bore you with more repetition, a set of nearly identical results was also obtained for measures of grammatical ability in the third study.
“So many people wasted their time getting a degree they won’t use; completely unlike me!”
The fourth study found that training can, to some extent, help participants become a bit more realistic in their self-assessments. In fact, those subjects who actually fell into the bottom quarter of all subjects had initially rated their performance as being in the 55th percentile. After training, when they realized some of the mistakes they had made, those estimates were revised…to the 44th percentile. So they went from rating themselves as slightly above average to slightly below average. They were now off by a mere 30 points, rather than 40, but that’s progress, I suppose.
Kruger and Dunning (1999) attempt to explain this pattern of results by suggesting that those who lack the skills to perform in certain domains tend to, as a result, lack the ability to distinguish competent from incompetent performance. The two, as the authors suggest, “are often the very same skills” (p.1121 – my emphasis). While that may sound like a plausible explanation at first glance, I find it lacking. As the authors note, this effect is not found universally across domains: my ability to recognize some badly performed karaoke does not depend on my ability to produce good karaoke, and I won’t be attempting to dunk on Michael Jordan anytime soon, despite my not being good at basketball. As I’ve written before, these positively biased self-assessments (known as the ‘above-average effect’) appear to be more prevalent in fuzzy domains. When skills (like whether you can successfully skate down three flights of stairs) or physical traits (like your height or eye color) are readily observable, we shouldn’t expect to see much in the way of biased self-assessment. This is because, outside of certain social contexts involving persuasion, being wrong about them tends to carry costs that aren’t going to be reliably offset by the benefits.
Further, the accuracy of self-ratings seemed to have more to do with chance than with assessment proficiency (Burson et al., 2006). Almost everyone seems to hover around a self-assessment set point of mildly-to-moderately above average. For those who perform at or around that level, those kinds of assessments will tend to be accurate; for anyone performing above or below that point, self-assessments get less and less accurate. Since most people, by definition, perform below that point, you’ll find the worst misestimates there, which we do. However, it also works in the opposite direction: the best performers, for instance, consistently underestimated how competent they were relative to others. If assessment abilities are supposed to be tied to production abilities, it’s unclear why that gap exists. Kruger and Dunning (1999) suggest that this is because of a false consensus effect among the highly knowledgeable (i.e. “I assume other people know what I do”), which is more of a restatement of the original finding than an explanation of it.
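The set-point idea can be illustrated with a trivial simulation. This is my own sketch, not anything from Burson et al. (2006): assume everyone, regardless of skill, places themselves at roughly the 66th percentile, and see what the signed error looks like across the distribution of actual performance.

```python
# Illustrative simulation of the set-point account (my own assumption,
# not data from Burson et al., 2006): everyone self-rates near the
# 66th percentile regardless of actual standing.

SET_POINT = 66.0  # assumed near-universal self-rating

def self_assessment_error(actual_percentile, set_point=SET_POINT):
    """Signed error: positive = overestimate, negative = underestimate."""
    return set_point - actual_percentile

# A sample spanning the full range of actual performance.
actual_ranks = [5, 25, 50, 66, 85, 95]
errors = {rank: self_assessment_error(rank) for rank in actual_ranks}
# The worst performers overestimate the most, the best underestimate,
# and only those who happen to sit near the set point look "calibrated" -
# with no assessment skill involved anywhere in the model.
```

The point of the toy model is that both halves of the Kruger and Dunning pattern – gross overestimation at the bottom and underestimation at the top – fall out of a single fixed self-rating, without positing any metacognitive deficit.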
Thanks again, field of psychology.
This non-explanation is similar to Kruger and Dunning’s (1999) suggestion that a lack of metacognitive abilities among the worst performers is responsible for their very poor assessments. They are bad at assessing their performance because they lack metacognitive abilities. What are metacognitive abilities, you ask? They’re the abilities required to accurately assess performance. So the fact that people are bad at assessing their performance is explained by the fact that those people are bad at assessing their performance. All this theoretical spinning is making me dizzy.
I feel these results can be better explained (by which I mean actually explained) by considering a persuasion framework. There are certain things that might make me better off if others believe them (i.e. that I’m a desirable mate or a reliable friend). In the service of persuading others about them, it helps me to be strategically wrong about them myself (Kurzban, 2010). However, when it comes to things that aren’t subject to persuasion (like gravity) or cases where persuasion seems implausible (like your ability to speak a foreign language you can’t actually speak), self-assessments should tend toward accuracy, provided relevant feedback information is present. The important thing to bear in mind here is that it’s not the accuracy of these self-assessments, per se, that matters; what matters is what those self-assessments ultimately lead an organism to do. Evolution is blind to what you feel but not blind to what you do. These self-assessments should be considered in light of their consequences, not their accuracy.
References: Burson, K. A., Larrick, R. P., & Klayman, J. (2006). Skilled or unskilled, but still unaware of it: How perceptions of difficulty drive miscalibration in relative comparisons. Journal of Personality and Social Psychology, 90(1), 60-77.
Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.