Why Would Bad Information Lead To Better Results?

There are some truly strange arguments made in the psychological literature from time to time. Some might even be so bold as to call that frequency “often”, while others might dismiss the field of psychology as a variety of pseudoscience and call it a day. Now, were I to venture some guesses as to why strange arguments seem so popular, I’d have two main possibilities in mind: first, most psychologists tend to work without a well-grounded theoretical framework, and, second, there’s a certain pressure on psychologists to find and publish surprising results (surprising in that they document something counter-intuitive or some human failing; I blame this second factor for the lion’s share of these strange arguments). These two factors might come together to result in rather nonsensical arguments being put forth fairly regularly and not being spotted for what they are. One such strange argument has come across my field of vision fairly frequently in the past few weeks: that our minds are designed to actively create false information, and that this false information is supposed to let us make better choices. Though it comes in various guises across different domains, the underlying logic is always the same: false beliefs are good. On the face of it, such an argument seems silly. In all fairness, however, it only seems that way because, well, it is that way.

If only all such papers came with gaudy warning hats…

Given the strangeness of these arguments, it’s refreshing to come across papers critical of them that don’t pull any rhetorical punches. For that reason, I was immediately drawn to a recent paper entitled, “How ‘paternalistic’ is spatial perception? Why wearing a heavy backpack doesn’t – and couldn’t – make hills steeper” (Firestone, 2013; emphasis his). The general idea that the paper argues against is the apparently-popular suggestion that our perception essentially tells us – the conscious part of us, anyway – many little lies to get us to do or not do certain things. As the namesake of the paper implies, one argument goes that wearing a heavy backpack will make hills actually look steeper. Not just feel harder to climb, mind you, but actually look visually steeper. The reason some researchers posited this is that they realized, correctly, that wearing a heavy backpack makes hills harder to climb. In order to dissuade us from climbing them under such conditions, then, our perceptual system is thought to make the hill look harder to climb than it actually is, so we don’t try. Additionally, such biases are said to make decisions easier by reducing the cognitive processing required to make them.

Suggestions like these do violence to our intuitive experience of the world. Were you looking down the street unencumbered, for instance, the street would not visibly lengthen before your eyes were you to put on a heavy backpack, despite the distance now being harder to travel. Sure, you might be less inclined to take that walk down the street with the heavy backpack on, but that’s a separate matter from whether you would see the world any differently. Those who favor the embodied model might (and did) counter that it’s not the distances themselves that change, but rather the units on the ruler used to measure one’s position relative to them (Proffitt, 2013). In other words, since our measuring tool looks different, the distances look different. I find such an argument wanting, as it appears akin to suggesting that we should arrive at a different measurement of a 12-foot room contingent on whether we’re using foot-long or yard-long measuring sticks, but perhaps I’m missing some crucial detail.

In any case, there are many other problems with the embodied account that Firestone (2013) goes through, such as the magnitude of the effect sizes – which can be quite small – being insufficient to accurately adjust behavior, there being little to no objective way of scaling one’s relative abilities to certain kinds of estimates, and, perhaps most damningly, the fact that many of these effects fail to replicate or can be eliminated by altering the demand characteristics of the experiments in which they’re found. Apparently, subjects in these experiments seemed to make some connection – often explicitly – between the fact that they had just been asked to put on a heavy backpack and the subsequent request to estimate the steepness of a hill. They inferred what the experimenter wanted and adjusted their estimates accordingly.

While Firestone (2013) makes many good points in suggesting why the paternalistic (or embodied) account probably isn’t right, there are some I would like to add to the list. The first of these additions is that, in many cases, the embodied account seems to be useless for discriminating between even directly-comparable actions. Consider the following example in which such biases might come into play: you have a heavy load to transport from point A to point B, and you want to figure out the easiest way of doing so. One route takes you over a steep hill; another route takes you the longer distance around the hill. How should we expect perceptual estimates to be biased in order to help you solve the task? On the one hand, they might bias you to avoid the hill, as the hill now looks steeper; on the other hand, they might bias you to avoid the more circuitous route, as distances now look longer. It would seem the perceptual bias resulting from the added weight wouldn’t help you make a seemingly simple decision. At best, such biases might make you decide to not bother carrying the load in the first place, but the moment you put it down, the perceptions of these distances ought to shrink, making the task seem more manageable. All such a biasing system would seem to do in cases like this, then, is add extra cognitive processing into the mix in the form of whatever mechanisms are required to bias your initial perceptions.

“It’s symbolic; things don’t always have to ‘do’ things. Now help me plug it into the wall.”

The next addition I’d like to make also concerns the embodied account not being useful: the embodied account, at least at times, would seem to get causality backwards. Recall that the hypothesized function of these ostensible perceptual distortions is to guide actions. Provided I’m understanding the argument correctly, then, these perceptual distortions ought to occur before one decides what to do, not after the decision has already been made. The problem is that they don’t seem able to work in that fashion, and here’s why: these biasing systems would have no way of knowing in which direction to bias perceptions prior to a decision being made. If, for instance, some part of your mind is trying to bias your perception of the steepness of a hill so as to dissuade you from climbing it, that would seem to imply that some part of your mind has already decided whether or not to attempt the climb. If the decision hadn’t been made, the direction and extent of the bias would remain undetermined. Essentially, these biasing rules are being posited to turn your perceptual systems into superfluous yes-men.

On that point, it’s worth noting that we are talking about biasing existing perceptions. The proposition on the table seems to be the following chain of events: first, we perceive the world as it is (or at least as close to that state as possible; what I’ll call the true belief). This leaves most of the cognitive work already done, as I mentioned above. Then, from those perceptions, an action is chosen based on some expected cost/benefit analysis (i.e. don’t climb the hill because it will be too hard). Following this, our mind takes the true belief it already used to make the decision and turns it into a false one. This false belief then biases our behavior so as to get us to do what we were going to do anyway. Since the decision can be made on the basis of the initially-calculated true information, the false belief seems to have no apparent benefit for the immediate decision. The real effect of these false beliefs, then, ought to show up in subsequent decisions. This raises yet another troubling possibility for the model: in the event that some perception – like steepness – is used to generate estimates of multiple variables (such as energy expenditure, risk, and so on), a biased perception will similarly bias all of those estimates.

A quick example should highlight some of the potential problems with this. Let’s say you’re a camper returning home with a heavy load of gear on your back. Because you’re carrying a heavy load, you mistakenly perceive your camping group to be farther away than they actually are. Suddenly, you notice a rather hungry-looking predator approaching you. What do you do? You could try to run back to the safety of your group, or you could try to fight it off (forgoing other behavioral options for the moment). Unfortunately, because you mistakenly believe that your group is farther away than they are, you miscalculate the probability of making it to them before the predator catches up with you and opt to fight it off instead. Since the basis for that decision is false information, the odds of it being the best choice are diminished. This analysis works in the opposite direction as well. There are two types of errors you might make: thinking you can make the distance when you can’t, or thinking you can’t make it when you can. Both of these are errors to be avoided, and avoiding errors is awfully hard when you’re working with bad information.
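
To make that worry concrete, here’s a minimal sketch of the camper’s decision in code (my own toy numbers; nothing here comes from Firestone or Proffitt). The escape-probability function, the fixed odds of winning a fight, and the 50% inflation of the perceived distance are all invented purely for illustration, but they show how a biased percept can push the camper toward the option with the worse actual odds of survival:

```python
# Toy flee-or-fight model; every number and function is made up for illustration.

def p_escape(distance_m: float) -> float:
    """Assumed chance of reaching the group before the predator does;
    it falls off linearly as the distance grows."""
    return max(0.0, 1.0 - distance_m / 80.0)

P_WIN_FIGHT = 0.4  # assumed fixed chance of fighting the predator off successfully

def best_action(perceived_distance_m: float) -> str:
    """Pick whichever option looks more survivable, given the perceived distance."""
    return "flee" if p_escape(perceived_distance_m) > P_WIN_FIGHT else "fight"

true_distance = 40.0                    # the group really is 40 m away
biased_distance = true_distance * 1.5   # the heavy pack inflates the percept by 50%

print(best_action(true_distance))    # "flee":  p_escape(40) = 0.50 > 0.40
print(best_action(biased_distance))  # "fight": p_escape(60) = 0.25 < 0.40

# The actual outcome depends on the true distance, so the biased camper settles
# for a 40% chance of survival when fleeing would really have worked half the time.
```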

Especially when you just disregarded the better information you had

It seems hard to find the silver lining in these false-belief models. They don’t seem to save any cognitive load, as they require the initially true beliefs to already be present in the mind somewhere. They don’t seem to help us make a decision either: at best, false beliefs lead us to do the same thing we would have done with true beliefs anyway; at worst, false beliefs lead us to make worse decisions than we otherwise would have. These models appear to require that our minds take the best possible state of information they have access to and then distort it. Despite these (perhaps not-so) clear shortcomings, false-belief models appear to be remarkably popular, and are used to explain topics from religious belief to ostensible misperceptions of sexual interest. Given that people generally seem to understand that it’s beneficial to see through the lies of others and not be manipulated with false information, it seems peculiar that they have a harder time recognizing that it’s similarly beneficial to avoid lying to themselves.

References: Firestone, C. (2013). How “Paternalistic” Is Spatial Perception? Why Wearing a Heavy Backpack Doesn’t – and Couldn’t – Make Hills Look Steeper. Perspectives on Psychological Science, 8, 455-473.

Proffitt, D. (2013). An Embodied Approach to Perception: By What Units Are Visual Perceptions Scaled? Perspectives on Psychological Science, 8, 474-483.

7 comments on “Why Would Bad Information Lead To Better Results?”

  1. The idea that the brain constructs false beliefs does seem plausible to me. Evolution theory posits that random mutations will lead to random behavior. Some of this behavior gives some individuals survival advantages. But that does not mean that a random behavior should be advantageous in all possible situations. Imagine that I am a wooden boat. I might have an advantage over a billion dollar stealth bomber in the case of a flood, but not when the environment becomes dry.

    I might think of examples of brain illusions that help us survive:
    • We perceive the things around us as solid, but in fact the atoms they are made of contain far more empty space than matter. The illusion of solidity helps us in many ways, like avoiding head wounds from bumping into walls.
    • We perceive things as objects, not as atoms that happen to be positioned next to each other. This enables us to manipulate objects physically and computationally. For instance, if my brain presents a group of atoms to me as a stone, I don’t have to calculate what every atom will do if I take the stone and throw it at a predator. Computationally, it is much easier for the brain to treat that group of atoms as one object that will stay one object when I manipulate it. But there are no objects in nature, just atoms (or even smaller elementary particles). Despite that, my mind creates the false illusion that objects exist.

    • Jesse Marczyk said:

      With regard to your examples, it’s worth highlighting that organisms can well be wrong about things without having what I’m calling false beliefs. The true/false belief dichotomy comes in simply on the basis of whether our minds are using their best, unbiased estimate of the world, or whether they’re adding bias to the information they’re receiving. Though we may well not expect an organism to get everything right, we should not, generally speaking, expect an organism to be designed to purposefully get things wrong.

      There might be some cases where an organism is designed to get things wrong (or at least to fail to represent them accurately), but doing so would need to offer some compensating benefit, like helping one persuade others of something. Even then, the scope of how wrong one can be is important.

  2. It’s not that the brain constructs *false* information; it’s that the optimal perceptual representation of a given object, scene, etc., needs to be *different* depending on context and task. The purpose of perceptual systems is not to represent things as they “really are” but rather to represent them such that the organism can respond appropriately. Murray et al. put it well in their ’06 paper on how visual cortex codes perceived vs. actual size of an object:

    “The ultimate goal of the visual system is clearly not to precisely measure the size of an image projected onto the retina. A more behaviorally critical property of an object is its size relative to the environment, which helps determine its identity and how one should interact with the object” (Murray, Boyaci, & Kersten 2006, p. 432).

    -Gary Lupyan
    Assistant Professor of Psychology
    University of Wisconsin-Madison

    • Jesse Marczyk said:

      You’re indeed correct that our mind is not necessarily perceiving the world as it is. It is not as if apples are ‘really green’ or certain displays are ‘really threatening’. If I gave that impression, my writing clearly fell prey to not ‘really’ representing what I had in mind.

      The physical world (the non-living parts of it, anyway), however, is not trying to deceive us, nor can we deceive it in turn. While our perception of the world might not be objective or entirely accurate at all times, then, we should expect it to be honest. It’s our best guess, based on the information we have.

      By false beliefs here, I meant dishonest or strategic ones. These are beliefs based on that best information plus something else. If, for instance, our best estimate of the width of a cliff is approximately 15m, it does us no good to then modify that guess down perceptually in the hopes that we can then jump it. A decision maker that uses this ‘true’ information and calculates the expected value of jumping will do better on the whole than a decision maker who uses ‘false’ information to calculate that expected value.

  3. Hi Jesse,

    Thanks for sharing these very interesting thoughts!

    Needless to say, I find much to agree with in your remarks. I especially liked your insight about the usefulness (or lack thereof) of perceptual distortions that are occasioned only by prior decisions to perform the very actions that are supposedly informed by the distortions. In fact, an earlier version of the “paternalistic vision” paper had an entire section devoted to (a cousin of) your objection. To flesh it out a bit, consider the case where grasping a baton (allegedly) makes objects look closer, but only if you intend to use the baton for reaching. The case in which such a tool-induced distance-compression effect would be most useful to a perceiver is the case where grasping the baton makes reachable a formerly unreachable object. So, suppose a perceiver is in such a situation, looking at an unreachable (and, let’s assume, unreachable-looking) object, and then grabs a baton. On the assumption that action-intentions trigger paternalistic perceptual effects, the object will continue to look out of reach until the perceiver intends to reach it, and only then will the object creep closer and look to be within reach. But this leaves us with a puzzle: Why would anyone form an intention to reach an unreachable-looking object in the first place? If once-unreachable objects continue to look as unreachable as ever until baton-wielding perceivers intend to reach them, then it’s not clear how paternalistic distance-compression effects would ever obtain outside of the laboratory (where subjects were explicitly instructed to reach) – because it’s unclear to begin with why a perceiver would intend to reach an object that looked unreachable! Anyway, the objection ended up on the paper’s cutting-room floor for various reasons, including that it only works for positive changes in ability but not negative changes. (For example, the objection fails against the famous backpack manipulation, because the hill might look climbable initially, so you *would* intend to climb it, but then decide against it once it looked steeper.)

    I very much liked your other points as well, although I’d think supporters of embodied/paternalistic vision would have some natural responses both to them and to the main themes of your post. In particular, I’d think they would simply reject the notion that there is an undistorted representation somewhere in the mind that gets contaminated by other influences to create a percept of, say, a greater distance or a steeper hill. According to (at least my reading of) the “scaling” view developed in Proffitt’s reply in Perspectives and also in a very interesting 2013 book chapter by Proffitt & Linkenauger (I can send a copy by e-mail if you’d like), the information processed by “perceptual rulers” is raw, uninterpreted visual information (e.g. retinal projections, binocular disparities, etc.), which is directly transformed into representations of spatial layout that are scaled by the relevant action capability. So, according to that theory, there’s no intermediate step wherein the visual system figures out how far some object *really* is and only then plays around with that representation in accordance with how much weight is on your shoulders; it’s that the visual system goes straight from the angular projection of that extent to its specification in units of effort, just as it does for eye-height scaling. (After all, it doesn’t seem right to say that the miniature children in Honey I Shrunk the Kids were *misperceiving* the family dog when the dog appeared huge and monster-like to them.)

    For a somewhat similar reason, I’d also think supporters of the “perceptual ruler” theory of embodied/paternalistic vision would take issue with your objection to Proffitt’s argument about those rulers’ consequences for perceptual phenomenology. At least as I understand the theory, it’s not quite as you said that rulers that were merely of different sizes (e.g. a foot-long ruler vs. a yard-long ruler) would return different measurements of an extent. (That, as you pointed out, seems obviously false.) It’s more like if you had a foot-long ruler made of putty, and then, before measuring, you stretched out the ruler a bit, so that the distance between its tick-marks was increased. In that case, the ruler really would return a different measurement (in feet) for distance. But, according to Proffitt, that wouldn’t mean that the newly-stretched ruler is telling you that the object whose distance it is measuring is in a different location now than before; instead, the ruler would be telling you that that location is a different distance away now than it was before, in the native units of the ruler.

    Anyway, whether or not that’s right, I think that reply fails for independent reasons (which I’ll explore in a reply of my own that I’m nearly done with and hoping to post soon). But your blog post clearly raises a number of very interesting issues that I should probably be thinking harder about!

    All the best,
    Chaz

    • Jesse Marczyk said:

      When you do get your post up, I’d like to read it. Keep me posted.

      [EDIT] I just realized that your reply will likely come in the form of a publication and not a blog like this one, so perhaps I should rephrase and say I’d simply like to read it when it becomes available.

      While I’m adding more, there’s a quote by Wittgenstein I feel is relevant here:

      He (Ludwig Wittgenstein) once asked me: ‘Why do people say it is more logical to think that the sun turns around the Earth than Earth rotating around its own axis?’ I answered: ‘I think because it seems as if the sun turns around the Earth.’ ‘Good,’ he said, ‘but how would it have been if it had seemed as if the Earth rotates around its own axis then?’

      I feel this has some relevance to the perceptual ruler argument. There’s a difference, I feel, between accurately perceiving the world and generating some feeling about those perceptions. While the shrunken children might feel that the dog is now tremendous, relative to their current size, they would also still be accurately perceiving the size of the dog. Indeed, putting on a heavy backpack might make a mile-long run feel more difficult, while not altering our visual perception of the distance itself.

      Unfortunately, trying to test such things becomes complicated, since you’re asking the part of the brain that talks to report on what the part of the brain that perceives thinks. The problems are readily apparent: were you to ask people whether they thought pornography was “real”, they would almost always tell you no while becoming physically aroused, indicating that some part of the brain disagrees with that verbal assessment.

      Also, just in case you’re not already familiar with them, Rob Kurzban has a series of good posts on this topic, which can be found below:

      http://www.epjournal.net/blog/2010/11/jack-jill-and-a-lion/
      http://www.epjournal.net/blog/2011/11/advantages-of-error/
      http://www.epjournal.net/blog/2012/10/mice-managing-mistakes/

  4. Nice related research today: “Angry Opponents Seem Bigger to Tied Up Men”

    http://www.sciencedaily.com/releases/2013/08/130807204839.htm?