More About Memes

Some time ago, I briefly touched on why I felt the concept of a meme didn't help us understand some apparent human (and nonhuman) predispositions for violence. I don't think my concerns about the idea that memes are analogs to genes – both being replicators that undergo a selective process, resulting in what one might call evolution by natural selection – were done full justice there. Specifically, I only scratched the surface of one issue, without explicitly getting down to the deeper, theoretical concerns with the 'memes-as-replicators' idea. As far as I can see at the moment, memetics proves to be too underspecified in many key regards to profitably help us understand human cognition and behavior. By extension, the concept of cultural group selection faces many of the same challenges. None of what I'm about to say discredits the notion that people can often end up with similar ideas in their heads: I didn't think up the concepts of evolution by natural selection, genes, or memes, yet here I am discussing them (with people who will presumably understand them to some degree as well). The point is that those ideas probably didn't end up in our heads because the ideas themselves were good replicators.

Good luck drawing the meme version of the tree of life.

The first of these conceptual issues concerns the problem of discreteness: what, exactly, are the particulate units of inheritance that are being replicated? Let's use the example provided by Wikipedia:

A meme has no given size. Susan Blackmore writes that melodies from Beethoven’s symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven’s Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.

So, are those first four notes to be regarded as an independent meme or as part of a larger meme? The answer, unhelpfully, seems to be, "yes". To see why this answer is unhelpful, consider a biological context: organisms are collections of traits, traits are collections of proteins, proteins are coded for by genes, and genes are made up of alleles. By contrast, this post (a meme) is made up of paragraphs (memes), which are made up of sentences (memes), which are made up of words (memes), which are made up of letters (memes), all of which are intended to express abstract ideas (also memes). In the biological sense, then, the units of heredity (alleles/genes) can be conceived of and spoken about in a distinct manner from their products (proteins, traits, and organisms). The memetic sense blurs this distinction: the hypothetical units of heredity (memes) are the same as their products (memes), and can be broken down into effectively limitless combinations (words, letters, notes, songs, speeches, cultures, etc.). If the definition of a meme can be stretched to accommodate almost anything, it adds nothing to our understanding of ideas.

This definitional obscurity has other conceptual downsides as well, ones that begin to tip the idea that 'memes replicate' into the realm of unfalsifiability. Let's return to the biological domain: here, two organisms can have identical sets of genes yet display different phenotypes, as their genetic relatedness is a separate concept from their phenotypic relatedness. The reverse can also hold: two organisms can have phenotypically similar traits – like wings – despite not inheriting that trait from a genetic common ancestor (think bats and pigeons). What these examples tell us is that phenotypic resemblance – or lack thereof – is not necessarily a good cue for determining biological relatedness. In the case of memes, there is no such conceptual dividing line to be drawn with parallel concepts: the phenotype of a meme is its genotype. This makes it very difficult to do things like measure relatedness between memes or determine whether they share a common ancestor. To make this point more concrete, imagine you have come up with a great idea (or a truly terrible one; the example works regardless of quality). When you share this idea with your friend, your friend appears stunned, for just the other day they had precisely the same idea.

Assuming both of you have identical copies of this idea in your respective heads, does it make sense to call one idea a replication of the other? It would seem not. Though they might resemble one another in every regard, one is not the offspring of the other. To shift the example back to biology, were a scientist to create a perfect clone of you, that clone would not be a copy of you by descent; you would not share any common ancestors, despite your similarities. The conceptualization of memes appears to blur this distinction, as there is currently no way of separating descent from a common ancestor from separate creation events where ideas are concerned. Without this distinction, the potential application of natural selection to memes is weakened substantially. One could make the argument that memes, like adaptations, are too improbably organized to arise spontaneously, which would imply they represent replications with mutations/modifications, rather than independent creation events. That argument would be deficient on at least two counts.

One case in which there is a real controversy.

The first problem with that potential counterargument is that there are two competing accounts for special design: evolution and creationism. In the case of biology, that debate is (or at least ought to be) largely over. In the case of memes, however, the creationism side has a lot going for it; not in the supernatural sense, mind you, but rather in the information-processing sense. Our minds are not passive receptors for sensory information, attempting to bring perceptions from 'out there' inside; they actively process incoming information, structuring it in predictable ways to create our subjective experience of the world (Michael Mills has an excellent post on that point). Brains are designed to organize and represent incoming information in particular ways and, importantly, this organization is often not recoverable from the information itself. There is nothing about certain wavelengths of light that would lead to their automatic perception as "green" or "red", and nothing intrinsic about speech that makes it grammatical. This would imply that at least some memes (like grammatical rules) need to be created in a more or less de novo fashion; others need to be given meaning not found in the information itself: while a parrot can be taught to repeat certain phrases, it is unlikely that the phrases trigger the same set of representations inside the parrot's head as they do in ours.

The second response to the potential rebuttal concerns the design features of memes more generally, and again returns us to their definitional obscurity. Biological replicators which create more copies of themselves become more numerous, relative to replicators that do a worse job; that much is a tautology. The question of interest is how they manage to do so. There are scores of adaptive problems that need to be successfully solved for biological organisms to reproduce. When we look for evidence of special design, we are looking for evidence of adaptations designed to solve those kinds of problems. Doing so requires (a) the identification of an adaptive problem, (b) a trait that solves the problem, and (c) an account of how it does so. As the basic structure of memes has not been formally laid out, it becomes impossible to pick out evidence of memetic design features that came to be because they solved particular adaptive problems. I'm not even sure whether adaptive problems faced by memes specifically – as opposed to adaptive problems faced by their host organisms – have ever been articulated.

One final fanciful example that highlights both these points is the human ability to (occasionally) comprehend scrambled words with ease:

I cdn’uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae.

In the above passage, what is causing some particular meme (the word 'taht') to be transformed into a different meme (the word 'that')? Is there some design feature of the word "that" which is particularly good at modifying other memes to make copies of itself? Probably not, since no one read "cluod" in the above passage as "that". Perhaps the meme 'taht' is actually composed of four different memes, 't', 'a', 'h', and 't', which have some affinity for each other. Then again, probably not, since I doubt non-English speakers would spontaneously turn the four into the word 'that'. The larger points here are that (a) our minds are not passive recipients of information, but rather actively represent and create it, and (b) if one cannot speak meaningfully about different features of memes (like design features, or heritable units) beyond, "I know it when I see it", the enterprise of discussing memes seems to more closely resemble a post hoc fitting of any observed set of data to the theory, rather than the theory driving predictions about unknown data.

“Oh, it’ll fit alright…”
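As an aside, the scrambling rule quoted above is simple enough to state procedurally. Here is a throwaway sketch of it (my own illustration with an arbitrary example sentence; nothing about it comes from the quoted passage): shuffle only the interior letters of each word, leaving the first and last letters in place.

```python
import random

def scramble_word(word):
    """Shuffle a word's interior letters, keeping the first and last letters fixed."""
    if len(word) <= 3:
        return word  # nothing to shuffle in words this short
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

# An arbitrary, punctuation-free example sentence to keep the sketch simple.
sentence = "according to a research team it does not matter in what order the letters appear"
print(" ".join(scramble_word(word) for word in sentence.split()))
```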

All of this isn't to say that memetics will forever be useless in furthering our understanding of how ideas are shaped and spread, but in order to be useful, a number of key concepts would, at a minimum, need to be substantially clarified. A similar analysis applies to other types of explanations, such as cultural ones: it's beyond doubt that local conditions – like cultures and ideas – can shape behavior. The key issue, however, is not noting that these things can have effects, but rather developing theories that deliver testable predictions about the ways in which those effects are realized. Adaptationism and evolution by natural selection fit the bill well, and in that respect I would praise memetics: it recognizes the potential power of such an approach. The problem lies in the execution. If these biological theories are used loosely to the point of metaphor, their conceptual power to guide research wanes substantially.

Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often penetrates into lay discussions of psychology, and has been the subject of many popular books, such as the Predictably Irrational series. Part of the reason that people might give these conflicting views of human intelligence – either in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb which focus only on limited sets of information when making decisions. A simple, perhaps hypothetical, example might be a "beauty heuristic", which would go something along the lines of: when deciding whom to get into a relationship with, pick the most physically attractive available option. Other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one's goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. Past a certain point, the benefits of collecting additional bits of information are outweighed by the costs of doing so, and there are many, many potential sources of information to choose from. Even granting that additional information would help one make a better choice, making the best objective choice is often a practical impossibility. In this view, heuristics trade off accuracy against effort, leading to 'good-enough' decisions. A related, but somewhat more nuanced, benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it's drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you select 20 random men and women from your block to measure, your estimate of that difference will likely not be precisely 5 inches; it might be lower or higher, and the degree of that error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you're only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) than when you're sampling 20, or 50, or a million. Importantly, the issue of sampling error crops up for each source of information you're using. So unless you're sampling large enough quantities of information to balance that error out across all the information sources you're using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by the heuristics might well be less predictively troublesome than the error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
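To make the sampling-error point concrete, here is a minimal simulation sketch. It is only my own illustration of the logic above: the 5-inch difference is the hypothetical figure from the example, and the particular means and standard deviation are assumptions chosen for the demonstration, not real data. The estimate of the difference gets noisier as the per-group sample size shrinks.

```python
import random

TRUE_DIFFERENCE = 5.0  # hypothetical average male-female height difference, in inches

def estimate_difference(n_per_group):
    """Estimate the height difference from n men and n women sampled at random."""
    men = [random.gauss(70.0, 3.0) for _ in range(n_per_group)]    # assumed mean/sd
    women = [random.gauss(65.0, 3.0) for _ in range(n_per_group)]  # assumed mean/sd
    return sum(men) / n_per_group - sum(women) / n_per_group

def average_error(n_per_group, trials=10_000):
    """Average absolute distance between the sample estimate and the true difference."""
    errors = [abs(estimate_difference(n_per_group) - TRUE_DIFFERENCE)
              for _ in range(trials)]
    return sum(errors) / len(errors)

for n in (2, 20, 50, 1000):
    print(f"n = {n:>4} per group: average error = {average_error(n):.2f} inches")
```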

While that all seems well and good, the astute reader will have noticed the boundary conditions required for heuristics to be of value: one needs to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you use an optimization strategy and have sufficient amounts of information about each source, you'll make the best possible prediction. In the face of limited information, a heuristic strategy can do better, provided you know that you don't have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically attend to at random, though, you'd end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn't have sufficient amounts of information when you actually did, you'd also make a worse prediction than the optimizer 100% of the time.

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”
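A toy simulation can illustrate that trade-off. This is only a sketch under assumed numbers (one cue with a weight of 1.0, four with weights of 0.1, and an arbitrary noise level; none of it comes from Gigerenzer, 2010): with a small training sample, a single-cue heuristic that happens to track the strong cue tends to out-predict a least-squares fit over all five cues, while the same heuristic locked onto a weak cue does far worse.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_WEIGHTS = np.array([1.0, 0.1, 0.1, 0.1, 0.1])  # one strong cue, four weak ones
NOISE = 0.5  # assumed noise level

def make_data(n):
    """Generate n cases of five cues plus a noisy outcome they predict."""
    cues = rng.normal(size=(n, len(TRUE_WEIGHTS)))
    outcome = cues @ TRUE_WEIGHTS + rng.normal(scale=NOISE, size=n)
    return cues, outcome

def mse(predicted, actual):
    return float(np.mean((predicted - actual) ** 2))

def one_round(train_n=10, test_n=2000):
    train_x, train_y = make_data(train_n)   # a small, error-prone sample
    test_x, test_y = make_data(test_n)      # the "world" we want to predict

    # "Optimizer": least-squares fit using every cue in the small training sample.
    weights, *_ = np.linalg.lstsq(train_x, train_y, rcond=None)
    full_fit = mse(test_x @ weights, test_y)

    # Single-cue heuristic: fit a slope for one cue and ignore the other four.
    def single_cue(i):
        x = train_x[:, i]
        slope = (x @ train_y) / (x @ x)
        return mse(test_x[:, i] * slope, test_y)

    return full_fit, single_cue(0), single_cue(1)

averages = np.mean([one_round() for _ in range(500)], axis=0)
print(f"all five cues, fit on 10 cases:  MSE = {averages[0]:.2f}")
print(f"heuristic using the strong cue:  MSE = {averages[1]:.2f}")
print(f"heuristic using a weak cue:      MSE = {averages[2]:.2f}")
```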

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast and frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you're trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of that heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of organ donor status between countries: while only 4.3% of Danish people are donors, nearly everyone in Sweden is (approximately 85%). Since explicit attitudes about the willingness to be a donor don't seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an 'opt in' policy for becoming a donor, whereas the Swedes have an 'opt out' one. The default option appears to be responsible for driving most of the variance in rates of organ donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) than an explanation of it (why does it matter, and under what circumstances might we expect it not to?). Though I have no data on this, I imagine that if you brought subjects into the lab and presented them with the option either to give the experimenter $5 or to have the experimenter give them $5, but highlighted the first option as the default, you would find very few people who went along with that default. Why, then, might the default heuristic be so persuasive at getting people to be (or fail to be) organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer's hypothesized function for the default heuristic – group coordination – doesn't help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven't explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other "one-word explanations" (like 'culture', 'norms', 'learning', 'the situation', or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (what Gigerenzer suggested formed the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do), without telling us anything more. Such descriptions, it seems, could even drop the word 'heuristic' altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it's unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each of which dabbles in a different source of information that the others ignore, seem to resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove to be a fast and frugal way of stating results, it ends up being a poor method of explaining them or yielding much in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2, 528-554. DOI: 10.1111/j.1756-8765.2010.01094.x

Do People Try To Dishonestly Signal Fairness?

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.” – Louis CK

This quote appeared in a post of mine around the middle of last month, in which I wanted to draw attention to the fact that a great deal of caution is warranted in inferring preferences for fairness per se from the end-states of economic games. Just because people behaved in ways that resulted in inequality being reduced, it does not necessarily follow that people were consciously acting in those ways to reduce inequality, or that humans have cognitive adaptations designed to do so – that is, to achieve fairness. In fact, I have the opposite take on the matter: since achieving equality per se doesn't necessarily do anything useful, we should not expect to find cognitive mechanisms designed for achieving that end state. In this view, concerns for fairness are byproducts of cognitive systems designed to do other useful things. Fairness, after all, would be – and indeed can only be – an additional restriction tacked onto the range of possible, consequentially useful outcomes. As the Louis CK quote makes clear, concerns for fairness might involve doing things which are actively detrimental, like destroying someone else's property to maintain some kind of equal distribution of resources. As his quote also makes clear, people are, in fact, sometimes willing to do just that.

Which brings us nicely to the topic of fairness and children.

There has been some research on children in which an apparent preference for fairness (the Louis CK kind) has been observed. In the first study of a paper by Shaw et al (2013), children ages 6 to 8 were asked a series of questions so as to be rewarded with colorful erasers (a valued resource for children). The experimenter also told the child that another, non-present child had finished a similar task and, as such, had also earned some erasers. Initially, the experimenter divided four erasers equally between the two, then left the room to retrieve a final eraser they had ostensibly forgotten. The experimenter returned to the room and asked the child what they should do with the fifth eraser: should the child themselves get it, should the non-present child get it, or should it be thrown away? A remarkable 80% of children suggested that the eraser should be thrown away, rather than keeping it for themselves or giving it to the other child. The first thing worth noting here is that children appeared willing to achieve equality through welfare destruction; equality made no one better off here, and at least one person worse off. This is what I meant when I said that achieving equality only limits the possible range of behaviors. The more interesting finding, though, is what happened when children had the option to be non-transparently unfair.

The two other conditions in this first study tracked the possibility that children only wanted to appear fair, without actually being fair. In these conditions, the erasers were placed inside envelopes, so as to be concealed from view. In the first of these two conditions, the child was given one eraser while the non-present child was given two. When the experimenter left the room to retrieve the last eraser, a confederate came in, placed an additional eraser inside the child's envelope, and told the child to keep it secret. Then, the experimenter returned with the final eraser and asked the child what they should do with it. In this condition, only 25% of children said the eraser should be thrown away, with the rest opting instead to keep it for themselves (an unfair distribution). The second version of this condition was the same, except it was the non-present child who got the single eraser initially, with the confederate adding the same secret eraser to the non-present child's envelope. In that condition, 60% of children suggested the experimenter should throw away the last eraser, with the remaining 40% keeping it for themselves (making them appear indifferent between a fair distribution and a selfish, unfair one).

So, just to recap: children will publicly attempt to achieve a fair outcome, even though doing so results in worse consequentialist outcomes (there is no material benefit to either child in throwing away an otherwise-valued eraser). Privately, however, children are perfectly content to behave unfairly. The proffered explanation for these findings is that children wanted to send a signal of fairness to others publicly, but actually preferred to behave unfairly, and when they had some way of obscuring that they were doing so, they made use of it. Indeed, findings along these same lines have been demonstrated across a variety of studies in adults as well – appear publicly fair and privately selfish – so the patterns of behavior appear sound. While I think there is certainly something to the signaling model proposed by Shaw et al (2013), I also think the signaling explanation requires some semantic and conceptual tweaking in order to make it work since, as it stands, it doesn't make good sense. These alterations focus on two main areas: first, the nature of communication itself and the necessary conditions for signals to evolve; and second, how to precisely conceptualize what signal is – or rather isn't – being sent, as well as why we ought to expect that state of affairs. Let's begin by talking about honesty.

Liar, Liar, I'm bad at poetry and you have third degree burns now.

The first issue with the signaling explanation involves a basic conceptual point about communication more generally: in order for a receiver to care about a signal from a sender in the first place, the signal needs to (generally) be honest. If I publicly proclaim that I'm a millionaire when I'm actually not, it would behoove listeners to discount what it is I have to say. A dishonest signal is of no value to the receiver. The same logic holds throughout the animal kingdom, which is why ornaments that signal an animal's state – like the classic peacock tail – are generally very costly to grow, maintain, and survive with. These handicaps ensure the signal's honesty and make it worth the peahen's while to respond to. If, on the other hand, peacocks could display such a train without actually being in better condition, the signal value of the trait would be lost, and we should expect peahens to eventually evolve in the direction of no longer caring about the signal. The fairness signaling explanation, then, seems to be phrased rather awkwardly: in essence, it would appear to say that, "though people are not actually fair, they try to signal that they are because other people will believe them". This requires positing both that the signal itself is a dishonest one and that receivers nevertheless care about it. That's a conceptual problem.

The second issue is that, even if one were actually fair in terms of resource distribution both publicly and privately, it seems unclear to me that one would benefit in any way by signaling that fact about oneself. Understanding why should be fairly easy: partial friends – ones who are distinctly and deeply interested in promoting your welfare specifically – are more desirable allies than impartial ones. Someone who would treat all people equally, regardless of preexisting social ties, offers no distinct benefits as an association partner. Imagine, for instance, how desirable a romantic partner would be who is just as interested in taking you out for dinner as they are in taking anyone else out. If they don't treat you as special in any way, investing your time in them would be a waste. Similarly, a best friend who is indifferent between spending time with you and spending it with someone they just met works as well for the purposes of this example. Signaling that you're truly fair, then, is signaling that you're not a good social investment. Further, as this experiment demonstrated, achieving fairness can often mean worse outcomes for many. Since the requirement of fairness is a restriction on the range of possible behaviors one can engage in, fairness per se cannot lead to better utilitarian outcomes. Not only would signaling true fairness make you seem like a poor friend, it would also tell others that you're the type of person who will tend to make worse decisions overall. This doesn't paint a pretty picture of fair individuals.

So what are we to make of the children's curious behavior of throwing out an eraser? My intuition is that children weren't trying to send a signal of fairness so much as they were trying to avoid sending a signal of partiality. This is a valuable distinction to make, as it renders the signaling explanation immediately more plausible: now, instead of a dishonest signal that needs to be believed, we're left with the absence of a distinct signal that need not be considered either honest or dishonest. The signal is still what's important, but the children's goal is to avoid letting signals leak, rather than to actively send them. This raises the somewhat obvious question of why we might expect people to sometimes forgo benefits to themselves or others so as to avoid sending a signal of partiality. This is an especially important consideration, as not throwing away a resource can (potentially) be beneficial no matter where it ends up: either directly beneficial in terms of gaining a resource for yourself, or beneficial in terms of initiating new alliances or maintaining existing ones if the resource is generously given to others. Though I don't have a more definite response to that concern, I do have some tentative suggestions.

Most of which sadly request that I, “Go eat a…”, well, you know.

An alternative possibility is that people might wish, at times, to avoid giving other people information about the extent of their existing partial relationships. If you know, for instance, that I am already deeply invested in friendships with other people, that might make me look like a bad potential investment, as I have, proportionately, fewer available resources to invest in others than if I didn't have those friendships; I would also have less of a need for additional friends (as I discussed previously). Further, social relationships can come with certain costs or obligations, and there are times when initiating a new relationship with someone is not in your best interests: even if that person might treat you well, associating with them might carry costs from others your new partner has mistreated in the past. Though these possibilities might not fully explain why the children are behaving the way they do with respect to erasers, they at least give us a better theoretical grounding from which to start considering the question. What I feel we can be confident about is that the strategy the children are deploying resembles poker players trying to keep other people from seeing what cards they're holding, rather than lying to other people about what those cards are. There is some information that it's not always wise to send out into the world, and some information it's not worth listening to. Since communication is a two-way street, it's important not to think about either side in isolation in these matters.

References: Shaw, A., Montinari, N., Piovesan, M., Olson, K.R., Gino, F., & Norton, M.I. (2013). Children develop a veil of fairness. Journal of Experimental Psychology: General. PMID: 23317084

Why Would Bad Information Lead To Better Results?

There are some truly strange arguments made in the psychological literature from time to time. Some might even be so bold as to call that frequency "often", while others might dismiss the field of psychology as a variety of pseudoscience and call it a day. Now, were I to venture some guesses as to why strange arguments seem so popular, I'd have two main possibilities in mind: first, there's the lack of a well-grounded theoretical framework that most psychologists tend to suffer from; second, there's a certain pressure put on psychologists to find and publish surprising results, surprising in that they document something counterintuitive or some human failing (I blame this one for the lion's share of these strange arguments). These two factors might come together to result in rather nonsensical arguments being put forth fairly regularly and not being spotted for what they are. One of these strange arguments that has come across my field of vision fairly frequently in the past few weeks is the following: that our minds are designed to actively create false information, and that because of that false information we are supposed to be able to make better choices. Though it comes in various guises across different domains, the underlying logic is always the same: false beliefs are good. On the face of it, such an argument seems silly. In all fairness, however, it only seems that way because, well, it is that way.

If only all such papers came with gaudy warning hats…

Given the strangeness of these arguments, it's refreshing to come across papers critical of them that don't pull any rhetorical punches. For that reason, I was immediately drawn towards a recent paper entitled, "How 'paternalistic' is spatial perception? Why wearing a heavy backpack doesn't – and couldn't – make hills look steeper" (Firestone, 2013; emphasis his). The general idea that the paper argues against is the apparently popular suggestion that our perception essentially tells us – the conscious part of us, anyway – many little lies to get us to do or not do certain things. As the title of the paper implies, one argument goes that wearing a heavy backpack will make hills actually look steeper. Not just feel harder to climb, mind you, but actually look visually steeper. The reason some researchers posited this might be the case is that they realized, correctly, that wearing a heavy backpack makes hills harder to climb. In order to dissuade us from climbing them under such conditions, then, our perceptual system is thought to make the hill look harder to climb than it actually is, so we don't try. Additionally, such biases are said to make decisions easier by reducing the cognitive processing required to make them.

Suggestions like these do violence to our intuitive experience of the world. Were you looking down a street unencumbered, for instance, your perception of the street would not visibly lengthen before your eyes were you to put on a heavy backpack, despite the distance now being harder to travel. Sure, you might be less inclined to take that walk down the street with the heavy backpack on, but that's a different matter from whether you would see the world any differently. Those who favor the embodied model might (and did) counter that it's not the distances themselves that change, but rather the units on the ruler used to measure one's position relative to them (Proffitt, 2013). In other words, since our measuring tool looks different, the distances look different. I find such an argument wanting, as it appears to be akin to suggesting that we should come to a different measurement of a 12-foot room contingent on whether we're using foot-long or yard-long measuring sticks, but perhaps I'm missing some crucial detail.

In any case, there are many other problems with the embodied account that Firestone (2013) goes through, such as the magnitude of the effect sizes – which can be quite small – being insufficient to accurately adjust behavior, there being little to no objective way of scaling one's relative abilities to certain kinds of estimates, and, perhaps most damningly, the fact that many of these effects fail to replicate or can be eliminated by altering the demand characteristics of the experiments in which they're found. Apparently, subjects in these experiments made some connection – often explicitly – between having just been asked to put on a heavy backpack and then being asked to estimate the steepness of a hill. They were inferring what the experimenter wanted and adjusting their estimates accordingly.

While Firestone (2013) makes many good points in suggesting why the paternalistic (or embodied) account probably isn’t right, there are some I would like to add to the list. The first of these additions is that, in many cases, the embodied account seems to be useless for discriminating between even directly-comparable actions. Consider the following example in which such biases might come into play: you have a heavy load to transport from point A to point B, and you want to figure out the easiest way of doing so. One route takes you over a steep hill; another route takes you the longer distance around the hill. How should we expect perceptual estimates to be biased in order to help you solve the task? On the one hand, they might bias you to avoid the hill, as the hill now looks steeper; on the other hand, they might bias you to avoid the more circuitous route, as distances now look longer. It would seem the perceptual bias resulting from the added weight wouldn’t help you make a seemingly simple decision. At best, such biases might make you decide to not bother carrying the load in the first place, but the moment you put it down, the perceptions of these distances ought to shrink, making the task seem more manageable. All such a biasing system would seem to do in cases like this, then, is add extra cognitive processing into the mix in the form of whatever mechanisms are required to bias your initial perceptions.

“It’s symbolic; things don’t always have to “do” things. Now help me plug it into the wall”

The next addition I'd like to make also concerns the embodied account not being useful: the embodied account, at least at times, would seem to get causality backwards. Recall that the hypothesized function of these ostensible perceptual distortions is to guide actions. Provided I'm understanding the argument correctly, then, these perceptual distortions ought to occur before one decides what to do, not after the decision has already been made. The problem is that they don't seem to be able to work in that fashion, and here's why: these biasing systems would be unable to know in which direction to bias perceptions prior to a decision being made. If, for instance, some part of your mind is trying to bias your perception of the steepness of a hill so as to dissuade you from climbing it, that would seem to imply that some part of your mind has already decided whether or not to try to make the climb. If the decision hadn't been made, the direction and extent of the bias would remain undetermined. Essentially, these biasing rules are being posited to turn your perceptual systems into superfluous yes-men.

On that point, it's worth noting that we are talking about biasing existing perceptions. The proposition on the table seems to be the following chain of events: first, we perceive the world as it is (or at least as close to that state as possible; what I'll call the true belief). This leaves most of the cognitive work already done, as I mentioned above. Then, from those perceptions, an action is chosen based on some expected cost/benefit analysis (e.g., don't climb the hill because it will be too hard). Following this, our mind takes the true belief on which it already based the decision and turns it into a false one. This false belief then biases our behavior so as to get us to do what we were going to do anyway. Since the decision can be made on the basis of the initially calculated true information, the false belief seems to have no apparent benefit for your immediate decision. The real effects of these false beliefs, then, ought to show up in subsequent decisions. This raises yet another troubling possibility for the model: in the event that some perception – like steepness – is used to generate estimates of multiple variables (such as energy expenditure, risk, and so on), a biased perception will similarly bias all of those estimates.

A quick example should highlight some of the potential problems with this. Let's say you're a camper returning home with a heavy load of gear on your back. Because you're carrying a heavy load, you mistakenly perceive that your camping group is farther away than they actually are. Suddenly, you notice a rather hungry-looking predator approaching you. What do you do? You could try to run back to the safety of your group, or you could try to fight it off (forgoing other behavioral options for the moment). Unfortunately, because you mistakenly believe that your group is farther away than they are, you miscalculate the probability of making it to them before the predator catches up with you, and opt to fight it off instead. Since the basis for that decision is false information, the odds of it being the best choice are diminished. This analysis works in the opposite direction as well. There are two types of errors you might make: thinking you can make the distance when you can't, or thinking you can't make it when you can. Both of these are errors to be avoided, and avoiding errors is awfully hard when you're working with bad information.

Especially when you just disregarded the better information you had

It seems hard to find the silver lining in these false-belief models. They don't seem to save any cognitive load, as they require the initially true beliefs to already be present in the mind somewhere. They don't seem to help us make a decision either. At best, false beliefs lead us to do the same thing we would do in the presence of true beliefs anyway; at worst, false beliefs lead us to make worse decisions than we otherwise would. These models appear to require that our minds take the best possible state of information they have access to and then add something else to it. Despite these (perhaps not-so-)clear shortcomings, false-belief models appear to be remarkably popular, and are used to explain topics from religious beliefs to ostensible misperceptions of sexual interest. Given that people generally seem to understand that it's beneficial to see through the lies of others and not be manipulated with false information, it seems peculiar that they have a harder time recognizing that it's similarly beneficial to avoid lying to themselves.

References: Firestone, C. (2013). How "paternalistic" is spatial perception? Why wearing a heavy backpack doesn't – and couldn't – make hills look steeper. Perspectives on Psychological Science, 8, 455-473.

Proffitt, D. (2013). An Embodied Approach to Perception: By What Units Are Visual Perceptions Scaled? Perspectives on Psychological Science, 8, 474-483.