Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often finds its way into lay discussions of psychology, and has been the subject of many popular books, such as the Predictably Irrational series. Part of the reason people can offer such conflicting views of human intelligence – whether in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb that attend to only a limited set of information when making decisions. A simple, perhaps hypothetical, example might be a “beauty heuristic”: when deciding whom to enter a relationship with, pick the most physically attractive available option; other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one’s goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. Past a certain point, the benefits of collecting additional bits of information are outweighed by the costs of doing so, and there are many, many potential sources of information to choose from. So even when additional information would help one make a better choice, gathering enough of it to make the best objective choice is often a practical impossibility. In this view, heuristics trade accuracy for effort, leading to ‘good-enough’ decisions. A related, but somewhat more nuanced, benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it’s drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you measure 20 random men and women from your block, your estimate of that difference will likely not be precisely 5 inches; it might be lower or higher, and the error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you’re only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) than when you’re sampling 20, which in turn is worse than 50, or a million. Importantly, the issue of sampling error crops up for each source of information you’re using. So unless you’re sampling quantities of information large enough to balance that error out across all the sources you’re using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by a heuristic might well be less troublesome, predictively, than the error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
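To make the sampling-error point concrete, here is a minimal simulation sketch in Python. All of the numbers are assumptions for illustration (a true 5-inch difference and a 3-inch standard deviation for both sexes); the point is just how the typical miss shrinks as samples grow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration: men average 70 inches, women 65,
# both with a standard deviation of 3 inches, so the true difference is 5.
TRUE_DIFF, SD, REPS = 5.0, 3.0, 10_000

for n in (2, 20, 50, 1000):
    # Draw REPS independent samples of n men and n women each.
    men = rng.normal(70.0, SD, size=(REPS, n))
    women = rng.normal(65.0, SD, size=(REPS, n))
    estimates = men.mean(axis=1) - women.mean(axis=1)
    typical_miss = np.abs(estimates - TRUE_DIFF).mean()
    print(f"n = {n:>4}: average miss from the true 5-inch gap = {typical_miss:.2f} inches")
```

With 2 people of each sex, the estimate is routinely off by a couple of inches in one direction or the other; by 1,000 of each, the misses are nearly negligible.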

While that all seems well and good, the acute reader will have noticed the boundary conditions required for heuristics to be of value: the user needs to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you employ an optimization strategy and have sufficient amounts of information about each source, you’ll make the best possible prediction. In the face of limited information, a heuristic strategy can do better, provided you know you don’t have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically attend to at random, though, you’d end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn’t have sufficient amounts of information when you actually did, you’d also make a worse prediction than the optimizer 100% of the time (a simulation below makes these regimes concrete).

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”
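To see all three of these regimes in one place, here is a minimal simulation sketch of the five-cue scenario. Everything in it is an assumption for illustration: a linear world where one cue carries a weight of 1.0 and the other four carry 0.1 each, an ‘optimizer’ that is just ordinary least squares over all five cues, and heuristics that fit a slope for a single cue while ignoring the rest:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one strongly predictive cue (weight 1.0) and four
# weakly predictive cues (weight 0.1 each), plus unit-variance noise.
WEIGHTS = np.array([1.0, 0.1, 0.1, 0.1, 0.1])
N_TEST, REPS = 1000, 500

def make_data(n):
    """Generate n observations of the five cues and the outcome."""
    X = rng.normal(size=(n, 5))
    y = X @ WEIGHTS + rng.normal(size=n)
    return X, y

def one_cue_predictions(X, y, X_test, k):
    """A heuristic: fit a slope using only cue k, ignoring the others."""
    slope = (X[:, k] @ y) / (X[:, k] @ X[:, k])
    return X_test[:, k] * slope

for n_train in (10, 40, 5000):
    errs = {"optimizer": [], "strong-cue heuristic": [], "random-cue heuristic": []}
    for _ in range(REPS):
        X, y = make_data(n_train)
        X_test, y_test = make_data(N_TEST)
        # "Optimizer": ordinary least squares over all five cues at once.
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        errs["optimizer"].append(np.mean((X_test @ beta - y_test) ** 2))
        # Heuristic that knows which cue is the strong one (cue 0).
        pred = one_cue_predictions(X, y, X_test, k=0)
        errs["strong-cue heuristic"].append(np.mean((pred - y_test) ** 2))
        # Heuristic that guesses which cue to attend to at random.
        pred = one_cue_predictions(X, y, X_test, k=rng.integers(5))
        errs["random-cue heuristic"].append(np.mean((pred - y_test) ** 2))
    summary = " | ".join(f"{name}: {np.mean(v):.3f}" for name, v in errs.items())
    print(f"training n = {n_train:>4} -> mean squared test error: {summary}")
```

With small training samples, the strong-cue heuristic out-predicts the least-squares optimizer, whose five estimated weights are swamped by sampling error; the random-cue heuristic does worse, because four times out of five it bets on a weak cue; and once data are plentiful, the optimizer pulls ahead of even the well-aimed heuristic.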

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast-and-frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you’re trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of the heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of organ-donor status between countries: while only 4.3% of Danes are donors, nearly everyone in Sweden is (approximately 85%). Since explicit attitudes about willingness to be a donor don’t seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an ‘opt in’ policy for becoming a donor, whereas the Swedes have an ‘opt out’ one. The default option appears to be responsible for driving most of the variance in rates of organ-donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) than an explanation of it (why does it matter, and under what circumstances might we expect it not to?). Though I have no data on this, I imagine that if you brought subjects into the lab and presented them with the option of giving the experimenter $5 or having the experimenter give them $5, but highlighted the first option as the default, you would probably find very few people who actually stuck with that default. Why, then, might the default heuristic be so persuasive at getting people to be (or fail to be) organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer’s hypothesized function for the default heuristic – group coordination – doesn’t help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven’t explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other “one-word explanations” (like ‘culture’, ‘norms’, ‘learning’, ‘the situation’, or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (which Gigerenzer suggested form the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do), without telling us anything more. Such descriptions, it seems, could even drop the word ‘heuristic’ altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it’s unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each of which dabbles in a different source of information that the others ignore, seem to resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove to be a fast and frugal way of stating results, it ends up being a poor method of explaining them, and yields little in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2, 528-554. DOI: 10.1111/j.1756-8765.2010.01094.x
