Keep Making That Face And It’ll Freeze That Way

Just keep smiling and scare that depression away

Time to do one of my favorite things today: talk about psychology research that failed to replicate. Before we get into that, though, I want to talk a bit about our emotions to set the stage.

Let’s say we wanted to understand why people found something “funny.” To do so, I would begin in a very general way: some part(s) of your mind functions to detect cues in the environment that are translated into psychological experiences like “humor.” For example, when some part of the brain detects a double meaning in a sentence (“Did you hear about the fire at the circus? It was intense”), the output of detecting that double meaning might be the psychological experience of humor and the physiological display of a chuckle and a grin (and maybe an eye-roll, depending on how you respond to puns). There’s clearly more to humor than that, but just bear with me.

This leaves us with two outputs: the psychological experience of something being funny and the physiological response to those funny inputs. The question of interest here (simplifying a little) is which is causing which: are you smiling because you found something funny, or do you find something funny because you’re smiling?

Intuitively the answer feels obvious: you smile because you found something funny. Indeed, this is what the answer needs to be, theoretically: if some part of your brain didn’t detect the presence of humor, the physiological humor response makes no sense. That said, the brain is not a singular organ, and it is possible, at least in principle, that the part of your brain that outputs the conscious experience of “that was funny” isn’t the same piece that outputs the physiological response of laughing and smiling.

The other part of the brain hasn’t figured out that hurt yet

In other words, there might be two separate parts of your brain that function to detect humor independently. One functions before the other (at least sometimes), and generates the physical response. The second might then use that physiological output (I am smiling) as an input for determining the psychological response (That was funny). In that way, you might indeed find something funny because you were smiling.

This is what the Facial Feedback Hypothesis proposes, effectively: the part of your brain generating these psychological responses (That was funny) uses a specific input, which is the state of your face (Am I already smiling?). That’s not the only input it uses, of course, but it should be one that is used. As such, if you make people do something that causes their face to resemble a smile (like holding a pen between their teeth only), they might subsequently find jokes funnier. That was just the result reported by Strack, Martin, & Stepper (1988), in fact.

But why should it do that? That’s the part I’m getting stuck on.

Now, as it turns out, your brain might not do that at all. As I mentioned, this is a post about failures to replicate and, recently, the effect just failed to replicate across 17 labs (approximately 1,900 participants) in a pre-registered attempt. You can read more about the details here. You can also read the original author’s response here (with all the standard suggestions of, “we shouldn’t rush to judgment about the effect not really replicating because…,” which I’ll get to in a minute).

What I wanted to do first, however, is think about this effect on more of a theoretical level, as the replication article doesn’t do so.

Publish first; add theory later

One major issue with this facial feedback hypothesis is that similar physiological responses can underpin very different psychological ones. My heart races not only when I’m afraid, but also when I’m working out, when I’m excited, or when I’m experiencing love. I smile when I’m happy and when something is funny (even if the two things tend to co-occur). If some part of your brain is looking to use the physiological response (heart rate, smile, etc.) to determine emotional state, then it’s facing an under-determination problem. A hypothetical inner monologue would go something like this: “Oh, I have noticed I am smiling. Smiles tend to mean something is funny, so what is happening now must be funny.” The only problem there is that if I were smiling because I was happy – let’s say I just got a nice piece of cake – experiencing humor and laughing at the cake would not be the appropriate response.

Even worse, sometimes physiological responses go the opposite direction from our emotions. Have you ever seen videos of people being proposed to or reuniting with loved ones? In such situations, crying doesn’t appear uncommon at all. Despite this, I don’t think some part of the brain would go, “Huh. I appear to be crying right now. That must mean I am sad. Reuniting with loved ones sure is depressing and I better behave as such.”

Now you might be saying that this under-determination isn’t much of an issue because our brains don’t “rely” on the physiological feedback alone; it’s just one of many sources of inputs being used. But then one might wonder whether the physiological feedback is offering anything at all.

The second issue is one I mentioned initially: this hypothesis effectively requires that at least two different cognitive mechanisms are responding to the same event. One is generating the physiological response and the other the psychological response. This is a requirement of the feedback hypothesis, and it raises additional questions: why are two different mechanisms trying to accomplish what is largely the same task? Why is the emotion-generating system using the output of the physiological-response system rather than the same set of inputs? This seems not only redundant, but prone to additional errors, given the under-determination problem. I understand that evolution doesn’t result in perfection when it comes to cognitive systems, but this one seems remarkably clunky.

Clearly the easiest way to determine emotions. Also, Mousetrap!

There’s also the matter of the original author’s response to the failures to replicate, which only adds more theoretically troublesome questions. The first criticism of the replications is that psychology students may differ from non-psychology students in showing the effect, which might be due to psychology students knowing more about this kind of experiment going into it. In this case, awareness of this effect might make it go away. But why should it? If the configuration of your face is useful information for determining your emotional state, simple awareness of that fact shouldn’t change the information’s value. If one realizes that the information isn’t useful and discards it, then one might wonder when it’s ever useful. I don’t have a good answer for that.

Another criticism focused on the presence of a camera (which was not a part of the initial study). The argument here is that the camera might have suppressed the emotional responses that otherwise would have obtained. This shouldn’t be a groundbreaking suggestion on my part, but smiling is a signal for others, not for you. You don’t need to smile to figure out whether you’re happy; you smile to show others that you are. If that’s true, then claiming that this facial feedback effect goes away when you’re being observed by others is very strange indeed. Is information about your facial configuration suddenly not useful in that context? If the effects go away when being observed, that might demonstrate that such feedback effects are not only unneeded, but also potentially unimportant. After all, if they were important, why ignore them?

In sum, the facial feedback hypothesis should require the following to be generally true:

  • (1) One part of our brain should successfully detect and process humor, generating a behavioral output: a smile.
  • (2) A second part of our brain also tries to detect and process humor, independent of the first, but lacks access to the same input information (why?). As such, it uses the outputs of the initial system to produce subsequent psychological experiences (that then do what? The relevant behavior already seems to be generated so it’s unclear what this secondary output accomplishes. That is, if you’re already laughing, why do you need to then experience something as funny?)
  • (3) This secondary mechanism has the means to differentiate between similar physiological responses in determining its own output (fear/excitement/exercise all create overlapping kinds of physical responses, happiness sometimes makes us cry, etc. If it didn’t differentiate it would make many mistakes, but if it can already differentiate, what does the facial information add?).
  • (4) Finally, that this facial feedback information is more or less ignorable (consciously or not), as such effects may just vanish when people are being observed (which was most of our evolutionary history around things like humor) or if they’re aware of their existence. (This might suggest the value of the facial information is, in a practical sense, low. If so, why use it?)

As we can see, that seems rather overly convoluted and leaves us with more questions than it answers. If nothing else, these questions present a good justification for undertaking deeper theoretical analyses of the “whys” behind a mechanism before jumping into studying it.

References: Strack, F., Martin, L. L., & Stepper, S. (1988). Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis. Journal of Personality and Social Psychology, 54, 768–777.

Wagenmakers, E.-J., et al. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11, https://doi.org/10.1177/1745691616674458

The Beautiful People

Less money than it looks like if that’s all 10′s

There’s a perception that exists involving how out of touch rich people can be, summed up well in this popular clip from the show Arrested Development: “It’s one banana, Michael. How much could it cost? Ten dollars?” The idea is that those with piles of money – perhaps especially those who have been born into it – have a distorted sense of the way the world works, as there are parts of it they’ve never had to experience. A similar hypothesis guides the research I wanted to discuss today, which sought to examine people’s beliefs in a just world. I’ve written about this belief-in-a-just-world hypothesis before; the reviews haven’t been positive.

The present research (Westfall, Millar, & Lovitt, 2018) took the following perspectives: first, believing in a just world (roughly, that people get what they deserve and deserve what they get) is a cognitive bias that some people hold to because it makes them feel good. Notwithstanding the fact that “feeling good” isn’t a plausible function, for whatever reason the authors don’t seem to suggest that believing the world to be unfair is a cognitive bias as well, which is worth keeping in the back of your mind. Their next point is that those who believe in a just world are less likely to have experienced injustice themselves. The more personal injustice one experiences (injustices that affect you personally in a negative way), the more likely one is to reject a belief in a just world because, again, rejecting that belief when faced with contradictory evidence should maintain self-esteem. Put into a simple example: if something bad happened to you and you believe the world is a just place, that would mean you deserved that bad thing because you’re a bad person. So, rather than think you’re a bad person, you reject the idea that the world is fair. It seems the biasing factor there would be the message of, “I’m awesome and deserve good things,” as that could explain both believing the world is fair if things are going well and unfair if they aren’t, rather than the just-world belief itself being the bias, but I don’t want to dwell on that point too much yet.

This is where the thrust of the paper begins to take shape: attractive people are thought to have things easier in life, not unlike being rich. Because being physically attractive means one will be exposed to fewer personally-negative injustices (hot people are more likely to find dates, be treated well in social situations, and so on), they should be more likely to believe the world is a just place. In simple terms, physical attractiveness = better life = more belief in a just world. As the authors put it:

Consistent with this reasoning, people who are societally privileged, such as wealthy, white, and men, tend to be more likely to endorse the just-world hypothesis than those considered underprivileged

The authors also throw a few lines into their introduction about how physical attractiveness is “largely beyond one’s personal control,” and how “…many long-held beliefs about relationships, such as an emphasis on personality or values, are little more than folklore,” in the face of people valuing physical attractiveness. Now these don’t have any real relevance to their paper’s theory and aren’t exactly correct, but they should also be kept in the back of your mind to understand the perspective the authors are writing from.

What a waste of time: physical attractiveness is largely beyond his control

In any case, the authors sought to test this connection between greater attractiveness (and societal privilege) and greater belief in a just world across two studies. The first of these involved asking about 200 participants (69 male) about their (a) belief in a just world, (b) perceptions of how attractive they thought they were, (c) self-esteem, (d) financial status, and (e) satisfaction with life. About as simple as things come, but I like simple. In this case, the correlation between how attractive participants thought they were and their belief in a just world was rather modest (r = .23), but present. Self-esteem was a better predictor of just-world beliefs (r = .34), as was life satisfaction (r = .34). A much larger correlation understandably emerged between life satisfaction and perceptions of one’s own attractiveness (r = .67). Thinking oneself attractive went along with being happier with life much more than it did with believing the world is just. Money did much the same: financial status correlated better with life satisfaction (r = .33) than it did with just-world beliefs (r = .17). Also worth noting is that men and women didn’t differ in their just-world beliefs (Ms of 3.2 and 3.14 on the scale, respectively).

Study 2 did much the same as Study 1 with a broadly similar sample, but it also included ratings of each participant’s attractiveness supplied by others. This way you aren’t just asking people how attractive they are; you are also asking people less likely to have a vested interest in the answer (for those curious, ratings of self-attractiveness only correlated with other-ratings at r = .21). Now, self-perceived physical attractiveness correlated with belief in a just world (r = .17) less well than independent ratings of attractiveness did (r = .28). Somewhat strangely, being rated as prettier by others wasn’t correlated with self-esteem (r = .07) or life satisfaction (r = .08) – which you might expect it would be if being attractive leads others to treat you better – though self-ratings of attractiveness were correlated with these things (rs = .27 and .53, respectively). As before, men and women also failed to differ with respect to their just-world beliefs.

From these findings, the authors conclude that being attractive and rich makes one more likely to believe in a just world under the premise that they experience less injustice. But what about that result where men and women don’t differ with respect to their belief in a just world? Doesn’t that similarly suggest that men and women don’t face different amounts of injustice? While this is one of the last notes the authors make in their paper, they do seem to conclude that – at least around college age – men might not be particularly privileged over women. A rather unusual passage to find, admittedly, but a welcome one. Guess arguments about discrimination and privilege apply less to at least college-aged men and women.

While reading this paper, I couldn’t shake the sense that the authors have a rather particular perspective on the nature of fairness and the fairness of the world. The fact that their passages about belief in a just world being a bias contain no comparable comments about thinking the world unjust also being a bias, coupled with their comments about how attractiveness is largely outside of one’s own control, and this…

Finally, the modest yet statistically significant relationship between current financial status and just-world beliefs strengthens the case that these beliefs are largely based on viewing the world from a position of privilege.

…in the face of correlations ranging from about .2 to .3, does likely say something about the biases of the authors. Explaining about 10% or less of the variance in belief in a just world from ratings of attractiveness or financial status does not scream that “these beliefs are largely based” on such things to me. In fact, it seems to suggest beliefs in a just world are largely based on other things.
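To put a number on “largely based”: squaring a correlation gives the share of variance it accounts for. Here is a minimal sketch (Python, purely for illustration) plugging in the r values reported above:

```python
# Squared correlations (coefficients of determination) for the predictors discussed above.
correlations = {
    "self-rated attractiveness (Study 1)": 0.23,
    "other-rated attractiveness (Study 2)": 0.28,
    "financial status (Study 1)": 0.17,
}

for predictor, r in correlations.items():
    print(f"{predictor}: r = {r}, variance explained = {r ** 2:.1%}")

# Roughly 3-8% of the variance in just-world beliefs, leaving 92%+ to other factors.
```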

“The room is largely occupied by the ceiling fan”

While there is an interesting debate to have over the concept of fairness in this article, I actually wanted to use this research to discuss a different point about stereotypes. As I have written before, people’s beliefs about the world should tend towards accuracy. That is not to say they will always be accurate, mind you, but rather that we shouldn’t expect there to be specific biases built into the system in many cases. People might be wrong about the world to various degrees, but not because the cognitive system generating those perceptions evolved to be wrong (that is, to take accurate information about the world and distort it); they should just be wrong because of imperfect information or environmental noise. The reason for this is that there are costs to being wrong and acting on imperfect information. If I believe there is a monster that lives under my bed, I’m going to behave differently than the person who doesn’t believe in such things. If I’m acting under an incorrect belief, my odds of doing something adaptive go down, all else being equal.

That said, there are some cases where we might expect bias in beliefs: the context of persuasion. If I can convince you to hold an incorrect belief, the costs to me can be substantially reduced or outweighed entirely by the benefits. For instance, if I convince you that my company is doing very well and only going to do better in the future, I might attract your investment, regardless of whether the belief you have in me is true. Or, if I had authored the current paper, I might be trying to convince you that the attractive/privileged people of the world are biased while the less privileged are grounded realists.

The question arises, then, as to what the current results represent: are the beautiful people more likely to perceive the world as fair and the ugly ones more likely to perceive it as unjust because of random mistakes, persuasion, or something else? Taking persuasion first, if those who aren’t doing as well in life as they might hope because of their looks (or behavior, or something else) are able to convince others that they have been treated unjustly and are actually valuable social assets worthy of assistance, they might receive more support than if others were convinced their lot in life had been deserved. Similarly, the attractive folk might see the world as more fair in order to justify their current status to others and avoid having it threatened by those who might seek to take those benefits for their own. This represents a case of bias: presenting a case to others that serves your own interests, irrespective of the truth.

While that’s an interesting idea – and I think there could be an element of that in these results – there’s another option I wanted to explore as well: it is possible that neither side is actually biased. They might both be acting off information that is accurate as far as they know, but simply be working from different sets of it.

“As far as I can tell, it seems flat”

This is where we return to stereotypes. If person A has had consistently negative interactions with people from group X over their life, I suspect person A would have some negative stereotypes about them. If person B has had consistently positive interactions with people from the same group X over their life, I further suspect person B would have some positive stereotypes about them. While those beliefs shape each person’s expectations of the behavior of unknown members of group X, and those beliefs/expectations contrast with each other, both are accurate as far as each person is concerned. Persons A and B are both simply using the best information they have, and their cognitive systems are injecting no bias – no manipulation of this information – when attempting to develop as accurate a picture of the world as possible.

Placed into the context of this particular finding, you might expect that unattractive people are treated differently than attractive ones, the latter offering higher value in the mating market at a minimum (along with other benefits that come with greater developmental stability). Because of this, we might have a naturally-occurring context where people are exposed to two different versions of the same world and develop different beliefs about it, but where neither is necessarily doing so because of any bias. The world doesn’t feel unfair to the attractive person, so they don’t perceive it as such. Similarly, the world doesn’t feel fair to the unattractive person who feels passed over because of their looks. When you ask these people how fair the world is, you will likely receive contradictory reports that are both accurate as far as the person doing the reporting is aware. They’re not biased; they just receive systematically different sets of information.

Imagine taking that same idea and studying stereotypes on a more local level. What I’ve read of stereotype accuracy research has largely looked at how people’s beliefs about a group compare to that group more broadly, along the lines of asking people, “How violent are men, relative to women?” and then comparing those responses to data collected from all men and women to see how well they match up. While such responses largely tend towards accuracy, I wonder if the degree of accuracy could be improved appreciably by considering what responses any given participant should provide, given the information they have access to. If someone grew up in an area where men are particularly violent, relative to the wider society, we should expect them to have different stereotypes about male violence, as those perceptions are accurate as far as they know. Though such research is more tedious and less feasible than using broader measures, I can’t help but wonder what results it might yield.

References: Westfall, R., Millar, M., & Lovitt, A. (2018). The influence of physical attractiveness on belief in a just world. Psychological Reports, 0, 1-14.

Who Deserves Healthcare And Unemployment Benefits?

As I find myself currently recovering from a cold, it’s a happy coincidence that I had planned to write about people’s intuitions about healthcare this week. In particular, a new paper by Jensen & Petersen (2016) attempted to demonstrate a fairly automatic cognitive link between the mental representation of someone as “sick” and of that same target as “deserving of help.” Sickness is fairly unique in this respect, it is argued, because of our evolutionary history with it: as compared with what many refer to as diseases of modern lifestyle (including those resulting from obesity and smoking), infections tended to strike people randomly; not randomly in the sense that everyone is equally likely to get sick, but more in the sense that people often had little control over when they did. Infections were rarely the result of people intentionally seeking them out or behaving in certain ways. In essence, then, people view those who are sick as unlucky, and unlucky individuals are correspondingly viewed as being more deserving of help than those who are responsible for their own situation.

…and more deserving of delicious, delicious pills

This cognitive link between luck and deservingness can be partially explained by examining expected returns on investment in the social world (Tooby & Cosmides, 1996). In brief, helping others takes time and energy, and it would only be adaptive for an organism to sacrifice resources to help another if doing so was beneficial to the helper in the long term. This is often achieved by me helping you at a time when you need it (when my investment is more valuable to you than it is to me), and then you helping me in the future when I need it (when your investment is more valuable to me than it is to you). This is reciprocal altruism, known by the phrase, “I scratch your back and you scratch mine.” Crucially, the probability of receiving reciprocation from the target you help should depend on why that target needed help in the first place: if the person you’re helping is needy because of their own behavior (i.e., they’re lazy), their need today is indicative of their need tomorrow. They won’t be able to help you later for the same reasons they need help now. By contrast, if someone is needy because they’re unlucky, their current need is not as diagnostic of their future need, and so it is more likely they will repay you later. Because the latter type is more likely to repay than the former, our intuitions about who deserves help shift accordingly.
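To make that expected-returns logic concrete, here is a minimal sketch with purely hypothetical numbers (none of these values come from Tooby & Cosmides; they are just for illustration):

```python
# Minimal sketch of the reciprocal-altruism logic above, with made-up numbers.
# Helping pays when the chance of repayment times the value of that repayment
# exceeds the cost of helping now; need caused by stable traits (laziness)
# predicts future need, which lowers the chance of repayment.
def expected_value_of_helping(cost_now: float, benefit_later: float,
                              p_reciprocate: float) -> float:
    return p_reciprocate * benefit_later - cost_now

unlucky_target = expected_value_of_helping(cost_now=5, benefit_later=10, p_reciprocate=0.8)
lazy_target = expected_value_of_helping(cost_now=5, benefit_later=10, p_reciprocate=0.2)
print(unlucky_target, lazy_target)  # 3.0 -3.0 -> help the unlucky target, not the lazy one
```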

As previously mentioned, infections tend to be distributed more randomly; my being sick today (generally) doesn’t tell you much about the probability of my future ability to help you once I recover. Because of that, the need generated by infections tends to make sick individuals look like valuable targets of investment: their need state suggests they value your help and will be grateful for it, both of which likely translate into their helping you in the future. Moreover, the needs generated by illnesses can frequently be harmful, even to the point of death if assistance isn’t provided. The greater the need state to be filled, the greater the potential for alliances to be formed, both with and against you. To place that point in a quick, yet extreme, example, pulling someone from a burning building is more likely to ingratiate them to you than just helping them move; conversely, failing to save someone’s life when it’s well within your capabilities can set their existing allies against you.

The sum total of this reasoning is that people should intuitively perceive the sick as more deserving of help than those suffering from other problems that cause need. The particular other problem that Jensen & Petersen (2016) contrast sickness with is unemployment, which they suggest is a fairly modern problem. The conclusion drawn by the authors from these points is that the human mind – given its extensive history with infections and their random nature – should automatically tag sick individuals as deserving of assistance (translating into broad support for government healthcare programs), while our intuitions about whether the unemployed deserve assistance should be much more varied, contingent on the extent to which unemployment is viewed as being more luck- or character-based. This fits well with the initial data that Jensen & Petersen (2016) present about relative cross-national support for government spending on healthcare and unemployment: not only is healthcare much more broadly supported than unemployment benefits (in the US, 90% vs. 52% of the population support government assistance), but support for healthcare is also quite a bit less variable across countries.

Probably because the unemployed don’t have enough bake sales or ribbons

Some additional predictions drawn by the authors were examined across a number of studies in the paper, only two of which I would like to focus on due to length constraints. The first of these studies presented 228 Danish participants with one of four scenarios: two in which the target was sick and two in which the target was unemployed. In each of these conditions, the target was also said to be either lazy (hasn’t done much in life and only enjoys playing video games) or hardworking (is active and does volunteer work; of note, the authors label the lazy/hardworking conditions as high/low control, respectively, but I’m not sure that really captures the nature of the frame well). Participants were asked how much an individual like that deserved aid from the government when sick/unemployed on a 7-point scale (which was converted to a 0-1 scale for ease of interpretation).

Overall, support for government aid was lower in both conditions when the target was framed as being lazy, but this effect was much larger in the case of unemployment. When it came to the sick individual, support for healthcare for the hardworking target was about 0.9, while support for the lazy one dipped to about 0.75; by contrast, the hardworking unemployed individual was supported with benefits at about 0.8, while the lazy one only received support around the 0.5 point. As the authors note, the deservingness information was roughly half as influential when it came to sickness.
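A quick sketch of that arithmetic, using the approximate means read off the 0-1 support scale described above (Python, for illustration only):

```python
# Rough means described above, on the paper's 0-1 support scale.
sick = {"hardworking": 0.90, "lazy": 0.75}
unemployed = {"hardworking": 0.80, "lazy": 0.50}

effect_sick = sick["hardworking"] - sick["lazy"]                     # drop in support when framed as lazy
effect_unemployed = unemployed["hardworking"] - unemployed["lazy"]   # same drop for the unemployed target

print(round(effect_sick, 2), round(effect_unemployed, 2))  # 0.15 0.3
print(round(effect_sick / effect_unemployed, 2))           # 0.5 -> half the impact for sickness
```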

There is an obvious shortcoming in that study, however: being lazy has quite a bit less to do with getting sick than it does with getting a job. This issue was addressed better in the third study, where the stimuli were more tailored to the problems. In the case of unemployment, the individual was described as an unskilled worker who was told by his union to get further training, with the union even offering to help. The individual either takes or does not take the additional training, but either way eventually ends up unemployed. In the case of healthcare, the individual is described as a long-term smoker who was repeatedly told by his doctor to quit. The person either eventually quits smoking or does not, but either way ends up getting lung cancer. The general pattern of results replicated again: for the smoker, support for government aid hovered around 0.8 when he quit and 0.7 when he did not; for the unemployed person, support was about 0.75 when he took the training and around 0.55 when he did not.

“He deserves all that healthcare for looking so cool while smoking”

While there does seem to be evidence for sicknesses being cognitively tagged as more deserving of assistance than unemployment (there were also some association studies I won’t cover in detail), there is a recurrent point in the paper that I am hesitant about endorsing fully. The first mention of this point is found early on in the manuscript, and reads:

“Citizens appear to reason as if exposure to health problems is randomly distributed across social strata, not noting or caring that this is not, in fact, the case…we argue that the deservingness heuristic is built to automatically tag sickness-based needs as random events…”

A similar theme is mentioned later in the paper as well:

“Even using extremely well-tailored stimuli, we find that subjects are reluctant to accept explicit information that suggests that sick people are undeserving.”

In general I find the data they present to be fairly supportive of this idea, but I feel it could do with some additional precision. First and foremost, participants did utilize this information when determining deservingness. The dips might not have been as large as they were for unemployment (more on that later), but they were present. Second, participants were asked about helping one individual in particular. If, however, sickness is truly being automatically tagged as randomly distributed, then deservingness factors should not be expected to come into play when decisions involve making trade-offs between the welfare of two individuals. In a simple case, a hospital could be faced with a dilemma in which two patients need a lung transplant, but only a single lung is available. These two patients are otherwise identical except one has lung cancer due to a long history of smoking, while the other has lung cancer due to a rare infection. If you were to ask people which patient should get the organ, a psychological system that was treating all illness as approximately random should be indifferent between giving it to the smoker or the non-smoker. A similar analysis could be undertaken when it comes to trading-off spending on healthcare and non-healthcare items as well (such as making budget cuts to education or infrastructure in favor of healthcare). 

Finally, there are two additional factors which I would like to see explored by future research in this area. First, the costs of sickness and unemployment tend to be rather asymmetric in a number of ways: not only might sickness be more often life-threatening than unemployment (thus generating more need, which can swamp the effects of deservingness to some degree), but unemployment benefits might well need to be paid out over longer periods of time than medical ones (assuming sickness tends to be more transitory than unemployment). In fact, unemployment benefits might actively encourage people to remain unemployed, whereas medical benefits do not encourage people to remain sick. If these factors could somehow be held constant or removed, a different picture might begin to emerge. I could imagine deservingness information mattering more when a drug is required to alleviate discomfort, rather than save a life. Second – though I don’t know to what extent this is likely to be relevant – the stimulus materials in this research all ask about whether the government ought to be providing aid to sick/unemployed people. It is possible that somewhat different responses might have been obtained if some measure were taken of the participants’ own willingness to provide that aid. After all, it is much less of a burden on me to insist that someone else ought to be taking care of a problem relative to taking care of it myself.

References: Jensen, C. & Petersen, M. (2016). The deservingness heuristic and the politics of health care. American Journal of Political Science, DOI: 10.1111/ajps.12251

Tooby, J. & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

Not-So-Shocking Results About People Shocking Themselves

“‘I’m bored’ is a useless thing to say. I mean, you live in a great, big, vast world that you’ve seen none percent of. Even the inside of your own mind is endless; it goes on forever, inwardly, do you understand? The fact that you’re alive is amazing, so you don’t get to say ‘I’m bored’”. – Louis CK

One of the most vivid – and strangely so – memories of my childhood involved stand-up comedy. I used to always have the TV on in the background of my room when I was younger, usually tuned to Comedy Central or cartoons. Young me would somewhat-passively absorb all that comedic material and then regurgitate it around people I knew without context; a strategy that made me seem a bit weirder than I already was as a child. The joke in particular that I remember so vividly came from Jim Gaffigan: in it, he expressed surprise that male seahorses are the ones who give birth, suggesting that they should just call the one that gives birth the female; he also postulated that the reason this wasn’t the case was that a stubborn scientist had made a mistake. One reason this joke stood out to me in particular was that, as my education progressed, it served as perhaps the best example of how people who don’t know much about the subject they’re discussing can be surprised by ostensible quirks about it that actually make a great deal of sense to more knowledgeable individuals.

In fact, many things about the world are quite shocking to them.

In the case of seahorses, the detail Jim was missing is that biological sex in that species, as in many others, is defined by which sex produces the larger gametes (eggs vs. sperm). In seahorses, the females produce the eggs, but the males provide much of the parental investment, carrying the fertilized eggs in a pouch until they hatch. In species where the burden of parental care is shouldered more heavily by the males, we also tend to see reversals of mating preferences, with the males becoming more selective about sexual partners, relative to the females. So not only was the labeling of the sexes no mistake, but there are lots of neat insights about psychology to be drawn from that knowledge. Admittedly, this knowledge does also ruin the joke, but here at Popsych we take the utmost care to favor being a buzzkill over being inaccurate (because we have integrity and very few friends). In the interests of continuing that proud tradition, I would like to explain why the second part of the initial Louis CK joke – the part about us not being allowed to be bored because we ought to just be able to think ourselves to entertainment and pleasure – is, at best, misguided.

That part of the joke contains an intuition shared by more than just Louis, of course. In the somewhat-recent past, an article was making its way around the popular psychological press about how surprising it was that people tended to find sitting with their own thoughts rather unpleasant. The paper, by Wilson et al (2014), contains 11 studies. Given that number, along with the general repetitiveness of the designs and the lack of details presented in the paper itself, I’ll run through them in the briefest form possible before getting to the meat of the discussion. The first six studies involved around 400 undergrads being brought into a “sparsely-furnished room” after having given up any forms of entertainment they were carrying, like their cell phones and writing implements. They were asked to sit in a chair and entertain themselves with only their thoughts for 6-15 minutes without falling asleep. Around half the participants rated the experience as negative, and a majority reported difficulty concentrating or their mind wandering. The next study repeated this design with 169 subjects asked to sit alone at home without any distractions and just think. The undergraduates found the experience about as thrilling at home as they did in the lab, the only major difference being that now around a third of the participants reported “cheating” by doing things like going online or listening to music. Similar results were obtained in a community sample of about 60 people, a full half of whom reported cheating during the period.

Finally, we reach the part of the study that made the headlines. Fifty-five undergrads were again brought into the lab. Their task began with rating the pleasantness of various stimuli, one of which was a mild electric shock (designed to be unpleasant, but not too painful). After doing this, they were given the sit-alone-and-think task, but were told they could, if they wanted, shock themselves again during their thinking period via an ankle bracelet they were wearing. Despite participants knowing that the shock was unpleasant and that shocking themselves was entirely optional, around 70% of men and 25% of women opted to deliver at least one shock to themselves during the thinking period when prompted with the option. Even among the subjects who said they would pay $5 to avoid being shocked again, 64% of the men and 15% of the women shocked themselves anyway. From this, Wilson et al (2014) concluded that thinking was so aversive that people would rather shock themselves than think if given the option, even if they didn’t like the shock.

“The increased risk of death sure beats thinking!”

The authors of the paper posited two reasons as to why people might dislike doing nothing but sitting around thinking, neither of which makes much sense to me: their first explanation was that people might ruminate more about their own shortcomings when they don’t have anything else to do. Why people’s minds would be designed to do such a thing is a bit beyond me and, in any case, the results didn’t find that people defaulted to thinking they were failures. The second explanation was that people might find it unpleasant to be alone with their own thoughts because they had to be both a “script writer” and an “experiencer” of them. Why that would be unpleasant is also a bit beyond me and, again, that wasn’t the case either: participants did not find having someone else prompt the focus of their thoughts any more pleasant.

Missing from this paper, as with many papers in psychology, is an evolutionary-level, functional consideration of what’s going on here: not explored or mentioned is the idea that thinking itself doesn’t really do anything. By that, I mean evolution, as a process, cannot “see” (i.e., select for) what organisms think or feel directly. The only thing evolution can “see” is what an organism does; how it behaves. That is to say the following: if you had one organism that had a series of incredibly pleasant thoughts but never did anything because of them, and another that never had any thoughts whatsoever but actually behaved in reproductively-useful ways, the latter would win the evolutionary race every single time.

To further drive this point home, imagine for a moment an individual member of a species that could simply think itself into happiness; blissful happiness, in fact. What would be the likely result for the genes of that individual? In all probability, they would fare less well than their counterparts who were not so inclined since, as we just reviewed, feeling good per se does not do anything reproductively useful. If those positive feelings derived from just thinking happy thoughts motivated any kind of behavior (which they frequently do) and those feelings were not designed to be tied to some useful fitness outcomes (which they wouldn’t be, in this case), it is likely that the person thinking himself to bliss would end up doing fewer useful things along with many maladaptive ones. The logic here is that there are many things they could do, but only a small subsection of those things are actually worth doing. So, if organisms selected what to do on the basis of their emotions, but those emotions were being generated for reasons unrelated to what they were doing, they would select poor behavioral options more often than not. Importantly, we could make a similar argument for an individual that thought himself into despair frequently: to the extent that feelings motivate behaviors, and to the extent that those feelings are divorced from their fitness outcomes, we should expect bad fitness results.

Accordingly, we ought to also expect that thinking per se is not what people found aversive in this experiment. There’s no reason for the part of the brain doing the thinking (rather loosely conceived) here to be hooked up to the pleasure or pain centers of the brain. Rather, what the subjects likely found aversive here was the fact that they weren’t doing anything even potentially useful or fun. The participants in these studies were asked to do more than just think about something; they were also asked to forgo doing other activities, like browsing the internet, reading, exercising, or really anything at all. So not only were the subjects asked to sit around and do absolutely nothing, they were also asked not to do the other fun, useful things they might have otherwise spent their time on.

“We can’t figure out why he doesn’t like being in jail despite all that thinking time…”

Now, sure, at first glance it might seem a bit weird that people would shock themselves instead of just sitting there and thinking. However, I think that strangeness can be largely evaporated by considering two factors: first, there are probably some pretty serious demand characteristics at work here. When people know they’re in a psychology experiment and you only prompt them to do one thing (“but don’t worry, it’s totally optional. We’ll just be in the other room watching you…”), many of them might do it because they think that’s what the point of the experiment is (which, I might add, they would be completely correct about in this instance). There did not appear to be any control group to see how often people independently shocked themselves when not prompted to do so, or when it wasn’t their only option. I suspect few people would under those circumstances.

The second thing to consider is that most organisms would likely start behaving very strangely after a time if you locked them in an empty room; not just humans. This is because, I would imagine, the minds of organisms are not designed to function in environments where there is nothing to do. Our brains have evolved to solve a variety of environmentally-recurrent problems and, in this case, there seems to be no way to solve the problem of what to do with one’s time. The cognitive algorithms in their mind would be running through a series of “if-then” statements and not finding a suitable “then”. The result is that their mind could potentially start generating relatively-random outputs. In a strange situation, the mind defaults to strange behaviors. To make the point simply, computers stop working well if you’re using them in the shower, but, then again, they were never meant to go in the shower in the first place.

To return to Louis CK, I don’t think I get bored because I’m not thinking about anything, nor do I think that thinking about things is what people found aversive here. After all, we are all thinking about things – many things – constantly. Even when we are “distracted,” that doesn’t mean we are thinking about nothing; just that our attention is on something we might prefer it wasn’t. If thinking is what was aversive here, we should be feeling horrible pretty much all the time, which we don’t. Then again, maybe animals in captivity really do start behaving weird because they don’t want to be the “script writer” and “experiencer” of their own thoughts…

References: Wilson, T., Reinhard, D., Westgate, E., Gilbert, D., Ellerbeck, N., Hahn, C., Brown, C., & Shaked, A. (2014). Just think: The challenges of the disengaged mind. Science, 345, 75-77.

Where Did I Leave My Arousal?

Jack is your average, single man. Like many single men, Jack might be said to have an interest in getting laid. There are a number of women – and let’s just say they all happen to be called Jill – that he might pursue to achieve that goal. Now, which of these Jills Jack will pursue depends on a number of factors: first, is Jack looking for something more short-term and casual, or is he looking for a long-term relationship? Jack might want to consider whether or not any given Jill is currently single, known for promiscuity, or attractive, regardless of which type he’s looking for. If he’s looking for something more long-term, he might also want to know more about how intelligent and kind all these Jills are. He might also wish to assess how interested each of the Jills happens to be in him, given what he has to offer, as he might otherwise spend a lot of time pursuing sexual dead-ends. If he really wanted to make a good decision, though, Jack might also wish to take into account whether or not he happened to have been scared at the time he met a given Jill, as his fear level at the time is no doubt a useful piece of information.

“Almost getting murdered was much more of a turn-on than I expected”

OK; so maybe that last piece sounded a bit strange. After all, it doesn’t seem like Jack’s experience of fear tells him anything of value when it comes to trying to find a Jill: it doesn’t tell him anything about that Jill as a suitable mate or the probability that he will successfully hook up with her. In fact, by using his level of fear about an unrelated issue to try and make a mating decision, it seems Jack can only make a worse decision than if he did not use that piece of information (on the whole, anyway; his estimates of which Jill(s) are his best bet to pursue might not be entirely accurate, but they’re at least based on otherwise-relevant information). Jack might as well make his decision about whom to pursue on the basis of whether he happened to be hungry when he met them, or whether it was cloudy that day. However, what if Jack – or some part of Jack’s brain, more precisely – mistook the arousal he felt when he was afraid for sexual attraction?

There’s an idea floating around in some psychological circles that one can, essentially, misplace one’s arousal (perhaps in the way one misplaces one’s keys: you think you left them on the steps, but they’re actually in your coat pocket). What this means is that someone who is aroused due to fear might end up thinking they’re actually rather attracted to someone else instead, because both of those things – fear and sexual attraction – involve physiological arousal (in the form of things like an increased heart rate); apparently, physiological arousal is a pretty vague and confusing thing for our brains. One study purporting to show this effect is a classic paper covered in many psychological textbooks: Dutton & Aron (1974). In the most famous experiment of the study, 85 men were approached by a confederate (either a man or a woman) after crossing a fear-inducing bridge or a non-fear-inducing bridge. The men were given a quick survey and asked to write a short story about an ambiguous image of a woman, after which the confederate provided the men with their number in case they wanted to call and discuss the study further. The idea here is that the men might call if they were interested in a date, rather than the study, which seems reasonable.

When the men’s stories were assessed for sexual content, those who had crossed the fear-inducing bridge tended to write stories containing more sexual content (M = 2.47 out of 5) compared to those who crossed the non-fear-inducing bridge (M = 1.41). However, this was only the case when the confederate was female; when a male confederate was administering the questions, there was no difference between the two conditions in terms of sexual content (M = 0.8 and 0.61, respectively). Similarly, the male subjects were more likely to call the confederate following the interaction after crossing the fear bridge (39%), relative to the non-fear bridge (9%). Again, this difference was only significant when the confederate was a female; male confederates were called at similar rates (8% and 4.5%, respectively). Dutton & Aron (1974) suggest that these results were consistent with a type of “cognitive relabeling,” where the arousal from fear becomes reinterpreted by the subjects as sexual attraction. The authors further (seem to, anyway) suggest that this relabeling might be useful because anxiety and fear are unpleasant things to feel, so by labeling them as sexual attraction, subjects get to feel good things (like horny) instead.

“There we go; much better”

These explanations – that the mind mistakes fear arousal for sexual arousal, and that this is useful because it makes people feel good – are both theoretically deficient, and in big ways. To understand why with a single example, let’s consider a hiker out in the woods who encounters a bear. Now this bear is none too happy to see the hiker and begins to charge at him. The hiker will, undoubtedly, experience a great deal of physiological arousal. So, what would happen if the hiker mistook his fear for sexual interest? At best, he would end up achieving an unproductive copulation; at worst, he would end up inside the bear, but not in the way he might have hoped. The first point of this example, then, is that the appropriate responses to fear and sexual attraction are quite different: fear should motivate you to avoid, escape, or defend against a threat, whereas sexual attraction should motivate you to move towards the object of your desires instead. Any cognitive system that could easily blur the lines between these two (and other) types of arousal would appear to be at a disadvantage, relative to one that did not make such mistakes. We would end up running away from our dates and into the arms of bears. Unproductive indeed.

The second, related point is that feeling good per se does not do anything useful. I might feel better if I never experienced hunger; I might also starve to death, despite being perfectly content about the situation. As such, “anxiety-reduction” is not even a plausible function for this ostensible cognitive relabeling. If anxiety reduction were a plausible function, one would be left wondering why people bothered to feel anxiety in the first place: it seems easier to not bother feeling anxiety than to have one mechanism that – unproductively – generates it, and a second which quashes it. What we need here, then, is an entirely different type of explanation to understand these results; one that doesn’t rely on biologically-implausible functions or rather sloppy cognitive design. To understand what that explanation might look like, we could consider the following comic:

“I will take a prostitute, though, if you happen to have one…”

The joke here, obviously, is that the refusal of a cigarette prior to execution by firing squad for health reasons is silly; it only makes sense to worry about one’s health in the future if there is a future to worry about. Accordingly, we might predict that people who face (or at least perceive) uncertainty about their future might be less willing to forgo current benefits for future rewards. That is, they should be more focused on achieving short-term rewards: they might be more likely to use drugs, less likely to save money, less likely to diet, more likely to seek the protection of others, and more likely to have affairs if the opportunity arose. They would do all this not because they “mistook” their arousal from fear about the future for sexual attraction, pleasant tastes, friendship, and fiscal irresponsibility, but rather because information about their likely future has shifted the balance of preexisting cost/benefit ratios in favor of certain alternatives. They know that the cigarette would be bad for their future health, but there’s less of a future to worry about, so they might as well get the benefits of smoking while they can.
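To sketch that shifted cost/benefit logic with the firing-squad cigarette, here is a minimal illustration using made-up utility numbers (nothing here comes from the papers discussed):

```python
# Minimal sketch of the future-discounting logic above, with hypothetical utilities.
# A future cost only counts to the extent there is a future in which to pay it.
def net_value_of_smoking(immediate_pleasure: float, future_health_cost: float,
                         p_future: float) -> float:
    return immediate_pleasure - p_future * future_health_cost

print(net_value_of_smoking(1.0, 10.0, p_future=0.9))   # -8.0: with a long future ahead, decline
print(net_value_of_smoking(1.0, 10.0, p_future=0.01))  # 0.9: facing the firing squad, light up
```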

Such an explanation is necessarily speculative and incomplete (owing to this being a blog and not a theory piece), but it would certainly begin to help explain why people in relationships don’t seem to “mistake” their arousal from riding a roller-coaster for heightened levels of stranger attractiveness the way single people do (Meston & Frohlich, 2003). Not only that, but those in relationships didn’t rate their partners as any more attractive either; in fact, if anything, the aroused roller-coaster riders in committed relationships rated their partners as slightly less attractive, which might represent a subtle shift in one’s weighing of an existing cost/benefit ratio (related to commitment, in this case) in the light of new information about the future. Then again, maybe people in relationships are just less likely to misplace their arousal than single folk happen to be…

References: Dutton, D. & Aron, A. (1974). Some evidence for heightened sexual attraction under conditions of high anxiety. Journal of Personality and Social Psychology, 30, 510-517.

Meston, C. & Frohlich, P. (2003). Love at first fright: Partner salience moderates roller-coaster-induced excitation transfer. Archives of Sexual Behavior, 32, 537-544.

Pay No Attention To The Calories Behind The Curtain

Obesity is a touchy issue for many, as a recent Twitter debacle demonstrated. However, there is little denying that the average body composition in the US has been changing in the past few decades: this helpful data and interactive map from the CDC shows the average BMI increasing substantially from year to year. In 1985, there was no state in which the percentage of residents with a BMI over 30 exceeded 14%; by 2010, there was no state for which that percentage was below 20%, and several for which it was over 30%. One can, of course, have debates over whether BMI is a good measure of obesity or health; at 6’1″ and 190 pounds, my BMI is approximately 25, nudging me ever so slightly into the “overweight” category, though I am by no stretch of the imagination fat or unhealthy. Nevertheless, these increases in BMI are indicative of something; unless that something is people putting on substantially more muscle relative to their height in recent decades – a doubtful proposition – the clear explanation is that people have been getting fatter.
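For anyone who wants to check that figure, BMI is just weight over height squared (kg/m²); a minimal sketch of the arithmetic, using the height and weight mentioned above:

```python
# BMI = weight in kilograms divided by height in meters squared.
def bmi(weight_lbs: float, height_inches: float) -> float:
    kg = weight_lbs * 0.453592       # pounds to kilograms
    meters = height_inches * 0.0254  # inches to meters
    return kg / meters ** 2

print(round(bmi(190, 73), 1))  # 6'1" = 73 inches -> ~25.1, just over the conventional cutoff of 25
```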

Poorly-Marketed: The Self-Esteem-Destroying Scale

This steep rise in body mass in recent years requires an explanation, and some explanations are more plausible than others. Trying to nominate genetic factors isn’t terribly helpful for a few reasons: first, we’re talking about drastic changes over the span of about a generation, which typically isn’t enough time for much appreciable genetic change, barring very extreme selection pressures. Second, saying that some trait or behavior has a “genetic component” is all but meaningless, since all traits are products of genetic and environmental interactions. Saying a trait has a genetic component is like saying that the area of a rectangle is related to its width; true, but unhelpful. Even if genetics were helpful as an explanation, however, referencing genetic factors would only help explain the increased weight in younger individuals, as the genetics of already-existing people haven’t been changing substantially over the period of BMI growth. You would instead need to reference some existing genetic susceptibility to some new environmental change.

Other voices have suggested that the causes of obesity are complex, unable to be expressed by a simple “calories-in/calories-out” formula. This idea is a bit more pernicious, as the former half of that sentence is true, but the latter half does not follow from it. Like the point about genetic components, this explanation also runs into the problem that the formula for determining weight gain or loss is particularly unlikely to have become substantially more complicated in the span of a single generation. There is little doubt that the calories-in/calories-out formula is a complicated one, with many psychological and biological factors playing various roles, but its logic is undeniable: you cannot put on weight without an excess of incoming energy (or a backpack); that’s basic physics. No matter how many factors affect this caloric formula, they must ultimately have their effect through a modification of how many calories come in and go out. Thus, if you are capable of monitoring and restricting the number of calories you take in, you ought to have a fail-proof method of weight management (albeit a less-than-ideal one in terms of the pleasure people derive from eating).

For some people, however, this method seems flawed: they will report restricted-calorie diets, but they don’t lose weight. In fact, some might even end up gaining. The fail-proof method fails. This means either something is wrong with physics, or there’s something wrong with the reports. A natural starting point for examining why people have difficulty managing their weight, even when they report calorically-restrictive diets, then, might be to examine whether people are accurately monitoring and reporting their intakes and outputs. After all, people do, occasionally, make incorrect self-reports. Towards this end, Lichtman et al (1992) recruited a sample of 10 diet-resistant individuals (those who reported eating under 1200 calories a day for some time and did not lose weight) and 80 control participants (all had BMIs of 27 or higher). The 10 subjects in the first group and 6 from the second were evaluated for reported intake, physical activity, body composition, and energy expenditure over two weeks. Metabolic rate was also measured for all the subjects in the diet-resistant group and for 75 of the controls.

Predicting the winner between physics and human estimation shouldn’t be hard.

First, we could consider the data on metabolic rate: the daily estimated metabolic rate relative to fat-free body mass did not differ between the groups, and deviations of more than 10% from the group’s mean metabolic rate were rare. While there was clearly variation there, it wasn’t systematically favoring either group. Further, the total energy expenditure by fat-free body mass did not differ between the two groups either. When it came to losing weight, the diet-resistant individuals did not seem to be experiencing problems because they used more or less energy. So what about intake? Well, the diet-resistant individuals reported taking in an average of 1028 calories a day. This is somewhat odd, on account of them actually taking in around 2081 calories a day. The control group weren’t exactly accurate either, reporting 1694 calories a day when they actually took in 2386. In terms of percentages, however, these differences are stark: the diet-resistant sample’s underestimates were about 150% as large as the controls’.

In terms of estimates of energy expenditure, the picture was no brighter: diet-resistant individuals reported expending 1022 calories through physical activity each day, on average, when they actually exerted 771; the control group thought they expended 1006, when they actually exerted 877. This means the diet-resistant sample were overestimating by almost twice as much as the controls. Despite this, those in the diet-resistant group also held more strongly to the belief that their obesity was caused by genetic and metabolic factors, and not their overeating, relative to controls. Now it’s likely that these subjects aren’t lying; they’re just not accurate in their estimates, though they earnestly believe them. Indeed, Lichtman et al (1992) reported that many of the subjects were distressed when they were presented with these results. I can only imagine what it must feel like to report having tried dieting 20 times or more only to be confronted with the knowledge that you likely weren’t doing so effectively. It sounds upsetting.
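To make those percentages a little more concrete, here is the simple arithmetic on the group averages just mentioned (the grouping into a dictionary is mine; the numbers are the ones reported above):

groups = {
    "diet-resistant": {"intake": (1028, 2081), "expenditure": (1022, 771)},
    "control":        {"intake": (1694, 2386), "expenditure": (1006, 877)},
}

for name, data in groups.items():
    reported_in, actual_in = data["intake"]
    reported_out, actual_out = data["expenditure"]
    underestimate = (actual_in - reported_in) / actual_in    # share of actual intake that went unreported
    overestimate = (reported_out - actual_out) / actual_out  # proportional overstatement of activity
    print(f"{name}: intake underreported by {underestimate:.0%}, activity overreported by {overestimate:.0%}")

# diet-resistant: intake underreported by ~51%, activity overreported by ~33%
# control: intake underreported by ~29%, activity overreported by ~15%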

Now while that’s all well and good, one might object to these results on the basis of sample size: a sample size of about 10 per group clearly leaves a lot to be desired. Accordingly, a brief consideration of a new report examining people’s reported intakes is in order. Archer, Hand, and Blair (2013) examined people’s self-reports of intake relative to their estimated output across 40 years of  U.S. nutritional data. The authors were examining what percentage of people were reporting biologically-implausible caloric intakes. As they put it:

“it is highly unlikely that any normal, healthy free-living person could habitually exist at a PAL [i.e., TEE/BMR] of less than 1.35”

Despite that minor complication of not being able to perpetually exist below a certain intake-to-expenditure ratio, people of all BMIs appeared to be offering unrealistic estimates of their caloric intake; in fact, the majority of subjects reported values that were biologically-implausible, but the problem got worse as BMI increased. Normal-weight BMI women, for instance, offered up biologically-plausible values around 32-50% of the time; obese women reported plausible values around 12 to 31% of the time. In terms of calories, it was estimated that obese men and women tended to underreport by about 700 to 850 calories, on average (which is comparable to the estimates obtained from the previous study), whereas the overall sample underestimated around 280-360. People just seemed fairly inaccurate at estimating their intake all around.
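For those who like to see the cutoff spelled out, here is a minimal sketch of the plausibility check described in that quote. I’m assuming, for illustration only, that reported intake can stand in for total energy expenditure (which roughly holds when weight is stable); the BMR value in the example is made up, and none of this is the authors’ actual code.

PAL_FLOOR = 1.35  # habitual PAL (TEE/BMR) below this is treated as biologically implausible

def is_plausible_report(reported_intake_kcal, estimated_bmr_kcal):
    # If weight is stable, reported intake should approximate TEE, so the implied
    # PAL is reported intake divided by basal metabolic rate.
    implied_pal = reported_intake_kcal / estimated_bmr_kcal
    return implied_pal >= PAL_FLOOR

print(is_plausible_report(1200, 1500))  # 1200/1500 = 0.8 -> False: an implausibly low report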

“I’d estimate there are about 30 jellybeans in the picture…”

Now it’s not particularly odd that people underestimate how many calories they eat in general; I’d imagine there was never much selective pressure for great accuracy in calorie-counting over human evolutionary history. What might need more of an explanation is why obese individuals, especially those who reported resistance to dieting, tended to underreport substantially more than non-obese ones. Were I to offer my speculation on the matter, it would have something to do with (likely non-conscious) attempts to avoid the negative social consequences associated with obesity (obese people probably aren’t lying; they’re just not perceiving their world accurately in this respect). Regardless of whether one feels those social consequences associated with obesity are deserved or not, they do exist, and one method of reducing consequences of that nature is to nominate alternative causal agents for the situation, especially ones – like genetics – that many people feel you can’t do much about, even if you tried. As one becomes more obese, then, they might face increased negative social pressures of that nature, resulting in their being more liable to learn, and subsequently reference, the socially-acceptable responses and behaviors (i.e. “it’s due to my genetics”, or, “I only ate 1000 calories today”; a speculation echoed by Archer, Hand, and Blair (2013)). Such an explanation is at least biologically-plausible, unlike most people’s estimates of their diets.

References: Archer, E., Hand, G., & Blair, S. (2013). Validity of U.S. national surveillance: National health and nutrition examination survey caloric energy intake data, 1971-2010. PLoS ONE, 8, e76632. doi:10.1371/journal.pone.0076632.

Lichtman et al. (1992). Discrepancy between self-reported and actual caloric intake and exercise in obese subjects. The New England Journal of Medicine, 327, 1893-1898.

 

Classic Research In Evolutionary Psychology: Learning

Let’s say I were to give you a problem to solve: I want you to design a tool that is good at cutting. Despite the apparent generality of the function, this is actually a pretty vague request. For instance, one might want to know more about the material to be cut: a sword might work if your job is cutting some kind of human flesh, but it might also be unwieldy to keep around the kitchen for preparing dinner (I’m also not entirely sure they’re dishwasher-safe, provided you managed to fit a katana into your machine in the first place). So let’s narrow the request down to some kind of kitchen utensil. Even that request, however, is a bit vague, as evidenced by Wikipedia naming about a dozen different kinds of utensil-style knives (and about 51 different kinds of knives overall). That list doesn’t even manage to capture other kinds of cutting-related kitchen utensils, like egg-slicers, mandolines, peelers, and graters. Why do we see so much variety, even in the kitchen, and why can’t one simple knife be good enough? Simple: when different tasks have non-overlapping sets of best design solutions, functional specificity tends to yield efficiency in one realm, but not in another.

“You have my bow! And my axe! And my sword-themed skillet!”.

The same basic logic has been applied to the design features of living organisms as well, including aspects of our cognition as I argued in the last post: the part of the mind that functions to logically reason about cheaters in the social environment does not appear to be able to logically reason with similar ease about other, even closely-related topics. Today, we’re going to expand on that idea, but shift our focus towards the realm of learning. Generally speaking, learning can be conceived of as some change to an organism’s preexisting cognitive structure due to some experience (typically unrelated to physical trauma). As with most things related to biological changes, however, random alterations are unlikely to result in improvement; to modify a Richard Dawkins quote ever so slightly, “However many ways there may be of [learning something useful], it is certain that there are vastly more ways of [learning something that isn’t].” For this reason, along with some personal experience, no sane academic has ever suggested that our learning occurs randomly. Learning needs to be a highly-structured process in order to be of any use.

Precisely what “highly-structured” entails is a bit of a sticky issue, though. There are undoubtedly still some who would suggest that some general type of reinforcement-style learning might be good enough for learning all sorts of neat and useful things. It’s a simple rule: if [action] is followed by [reward], then increase the probability of [action]; if [action] is followed by [punishment], then decrease the probability of [action]. There are a number of problems with such a simple rule, and they return us to our knife example: the learning rule itself is under-specified for the demands of the various learning problems organisms face. Let’s begin with an analysis of what is known as conditioned taste aversion. Organisms, especially omnivorous ones, often need to learn about which things in their environment are safe to eat and which are toxic and to be avoided. One problem in learning which potential foods are toxic is that the action (eating) is often divorced from the outcome (sickness) by a span of minutes to hours, and plenty of intervening actions take place in the interim. On top of that, this is not the type of learning you want to require repeated exposures to get right, as, and this should go without saying, eating poisonous foods is bad for you. In order to learn the connection between the food and the sickness, then, a learning mechanism would seem to need to “know” that the sickness is related to the food and not other, intervening variables, as well as being related in some specific temporal fashion. Events that conform more closely to this anticipated pattern should be more readily learnable.
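Written out literally, the general reinforcement rule described above is about as simple as learning rules get. Here’s a toy sketch of it (mine, not anyone’s actual model), which also makes its blind spot obvious: the rule has no way of knowing which earlier action a bout of sickness hours later should be charged to.

def update_probability(p_action, outcome, step=0.1):
    # The domain-general rule: reward bumps the action's probability up, punishment bumps it down,
    # regardless of what the action or the outcome actually are.
    if outcome == "reward":
        p_action += step
    elif outcome == "punishment":
        p_action -= step
    return min(max(p_action, 0.0), 1.0)  # keep it a valid probability

p_eat_berries = 0.5
p_eat_berries = update_probability(p_eat_berries, "punishment")  # sick hours later; was it the berries?
print(p_eat_berries)  # 0.4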

The first study we’ll consider, then, is by Garcia & Koelling (1966), who were examining taste conditioning in rats. The experimenters created conditions in which rats were exposed to “bright, noisy” water and “tasty” water. The former condition was created by hooking a drinking apparatus up to a circuit that connected to a lamp and a clicking mechanism, so when the rats drank, they were provided with visual and auditory stimuli. The tasty condition was created by flavoring the water. Garcia & Koelling (1966) then attempted to pair the waters with either nausea or electric shocks, and subsequently measured how the rats responded in their preference for each beverage. After the conditioning phase, during the post-test period, a rather interesting set of results emerged: while rats readily learned to pair nausea with taste, they did not draw the connection between nausea and audiovisual cues. When it came to the shocks, however, the reverse pattern emerged: rats could pair shocks with audiovisual cues well, but could not manage to pair taste and shock. This result makes a good deal of sense in light of a more domain-specific learning mechanism: things which produce certain kinds of audiovisual cues (like predators) might also have the habit of inflicting certain kinds of shock-like harms (such as with teeth or claws). On the other hand, predators don’t tend to cause nausea; toxins in food tend to do so, and these toxins also tend to come paired with distinct tastes. An all-purpose learning mechanism, by contrast, should be able to pair all these kinds of stimuli and outcomes equally well; it shouldn’t matter whether the conditioning comes in the form of nausea or shocks.
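One way to picture the difference is to imagine a learning rule that comes equipped with “preparedness” weights for particular cue-outcome pairings. The weights and the learning curve below are invented purely for illustration; they just encode the Garcia & Koelling pattern that taste-nausea and audiovisual-shock pairings are readily learned while the crossed pairings are not, whereas a domain-general learner would treat all four identically.

preparedness = {
    ("taste", "nausea"): 1.0,        # readily learned
    ("audiovisual", "shock"): 1.0,   # readily learned
    ("taste", "shock"): 0.1,         # barely learned
    ("audiovisual", "nausea"): 0.1,  # barely learned
}

def association_strength(cue, outcome, pairings, rate=0.5, domain_general=False):
    # A simple saturating learning curve; the domain-general learner ignores preparedness.
    weight = 1.0 if domain_general else preparedness[(cue, outcome)]
    return 1.0 - (1.0 - rate * weight) ** pairings

for cue, outcome in preparedness:
    print(cue, outcome, round(association_strength(cue, outcome, pairings=3), 2))
# taste/nausea and audiovisual/shock approach 1.0 after a few pairings; the crossed pairs barely move.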

Turns out that shocks are useful for extracting information, as well as communicating it.

The second experiment to consider on the subject of learning, like the previous one, also involves rats, and actually pre-dates it. This paper, by Petrinovich & Bolles (1954), examined whether different deprivation states have qualitatively different effects on behavior. In this case, the two deprivation states under consideration were hunger and thirst. Two samples of rats were either deprived of food or water, then placed in a standard T-maze (which looks precisely how you might imagine it would). The relevant reward – food for the hungry rats and water for the thirsty ones – was placed in one arm of the T-maze. The first trial was always rewarded, no matter which side the rat chose. Following that initial choice, the reward was placed on the side of the maze the rat did not choose on the previous trial. For instance, if the rat went ‘right’ on the first trial, the reward was placed in the ‘left’ arm on the second trial. Whether the rat chose correctly or incorrectly didn’t matter; the reward was always placed on the opposite side from its previous choice. Did it matter whether the reward was food or water?

Yes; it mattered a great deal. The hungry rats averaged substantially fewer errors in reaching the reward than the thirsty ones (approximately 13 errors over 34 trials, relative to 28 errors, respectively). The rats were further tested until they managed to perform 10 out of 12 trials correctly. The hungry rats managed to meet the criterion value substantially sooner, requiring a median of 23 total trials before reaching that mark. By contrast, 7 of the 10 thirsty rats failed to reach the criterion at all, and the three that did required approximately 30 trials on average to manage that achievement. Petrinovich & Bolles (1954) suggested that these results can be understood in the following light: hunger makes the rat’s behavior more variable, while thirst makes its behavior more stereotyped. Why? The most likely candidate explanation is the nature of the stimuli themselves, as they tend to appear in the world. Food sources tend to be distributed semi-unpredictably throughout the environment, and where there is food today, there might not be food tomorrow. By contrast, the location of water tends to be substantially more fixed (where there was a river today, there is probably a river tomorrow), so returning to the last place you found water would be the more-secure bet. To continue to drive this point home: a domain-general learning mechanism should do both tasks equally well, and a more general account would seem to struggle to explain these findings.
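A quick simulation makes the point about variable versus stereotyped behavior concrete. This is my own toy version of the alternation task, not the original procedure: the reward always moves to the side opposite the animal’s previous choice, so a “shift” strategy solves it almost perfectly while a “stay” strategy never does.

import random

def run_trials(strategy, n_trials=34):
    errors = 0
    previous_choice = random.choice(["left", "right"])
    for _ in range(n_trials - 1):  # the first trial was always rewarded, so start counting afterwards
        reward_side = "left" if previous_choice == "right" else "right"
        if strategy == "shift":    # variable behavior: go where you didn't go last time
            choice = "left" if previous_choice == "right" else "right"
        elif strategy == "stay":   # stereotyped behavior: return to the last choice
            choice = previous_choice
        else:                      # random baseline
            choice = random.choice(["left", "right"])
        if choice != reward_side:
            errors += 1
        previous_choice = choice
    return errors

for s in ("shift", "stay", "random"):
    print(s, run_trials(s))  # shift -> 0 errors, stay -> 33 errors, random -> roughly half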

Shifting gears away from rats, the final study for consideration is one I’ve touched on before, and it involves the fear responses of monkeys. As I’ve already discussed the experiment, (Cook & Mineka, 1989) I’ll offer only a brief recap of the paper. Lab-reared monkeys show no intrinsic fear responses to snakes or flowers. However, social creatures that they are, these lab-reared monkeys can readily develop fear responses to snakes after observing another conspecific reacting fearfully to them. This is, quite literally, a case of monkey see, monkey do. Does this same reaction hold in response to observations of conspecifics reacting fearfully to a flower? Not at all. Despite the lab-reared monkeys being exposed to stimuli they have never seen before in their life (snakes and flowers) paired with a fear reaction in both cases, it seems that the monkeys are prepared to learn to fear snakes, but not similarly prepared to learn a fear of flowers. Of note is that this isn’t just a fear reaction in response to living organisms in general: while monkeys can learn a fear of crocodiles, they do not learn to fear rabbits under the same conditions.

An effect noted by Python (1975)

When it comes to learning, it does not appear that we are dealing with some kind of domain-general learning mechanism, equally capable of learning all types of contingencies. This shouldn’t be entirely surprising, as organisms don’t face all kinds of contingencies with equivalent frequencies: predators that cause nausea are substantially less common than toxic compounds which do. Don’t misunderstand this argument: humans and nonhumans alike are certainly capable of learning many phylogenetically novel things. That said, this learning is constrained and directed in ways we are often wholly unaware of. The specific content area of the learning is of prime importance in determining how quickly something can be learned, how lasting the learning is likely to be, and which things are learned (or learnable) at all. The take-home message of all this research, then, can be phrased as such: Learning is not the end point of an explanation; it’s a phenomenon which itself requires an explanation. We want to know why an organism learns what it does; not simply that it learns.

References: Cook, M., & Mineka, S. (1989). Observational conditioning of fear to fear-relevant versus fear-irrelevant stimuli in rhesus monkeys. Journal of Abnormal Psychology, 98, 448-459. PMID: 2592680

Garcia, J. & Koelling, R. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124.

Petrinovich, L. & Bolles, R. (1954). Deprivation states and behavioral attributes. Journal of Comparative and Physiological Psychology, 47, 450-453.

Classic Research In Evolutionary Psychology: Reasoning

I’ve consistently argued that evolutionary psychology, as a framework, is a substantial and, in many ways, vital remedy to some widespread problems: it allows us to connect seemingly disparate findings under a common understanding, and, while the framework is by itself no guarantee of good research, it forces researchers to be more precise in their hypotheses, allowing for conceptual problems with hypotheses and theories to be more transparently observed and addressed. In some regards the framework is quite a bit like the practice of explaining something in writing: while you may intuitively feel as if you understand a subject, it is often not until you try to express your thoughts in actual words that you find your estimation of your understanding has been a bit overstated. Evolutionary psychology forces our intuitive assumptions about the world to be made explicit, often to our own embarrassment.

“Now that you mention it, I’m surprised I didn’t notice that sooner…”

As I’ve recently been discussing one of the criticisms of evolutionary psychology – that the field is overly focused on domain-specific cognitive mechanisms – I feel that now would be a good time to review some classic research that speaks directly to the topic. Though the research to be discussed itself is of recent vintage (Cosmides, Barrett, & Tooby, 2010), the topic itself – whether our logical reasoning abilities are best conceived of as domain-general or domain-specific (that is, whether they work equally well regardless of content, or whether content area is important to their proper functioning) – has been examined for some time. We ought to expect domain specificity in our cognitive functioning for two primary reasons (though these are not the only reasons): the first is that specialization yields efficiency. The demands of solving a specific task are often different from the demands of solving a different one, and to the extent that those demands do not overlap, it becomes difficult to design a tool that solves both problems readily. Imagining a tool that can both open wine bottles and cut tomatoes is hard enough; now imagine adding on the requirement that it also needs to function as a credit card and the problem becomes exceedingly clear. The second reason is outlined well by Cosmides, Barrett, & Tooby (2010) and, as usual, they express it more eloquently than I would:

The computational problems our ancestors faced were not drawn randomly from the universe of all possible problems; instead, they were densely clustered in particular recurrent families.

Putting the two together, we end up with the following: humans tend to face a non-random set of adaptive problems in which the solution to any particular one tends to differ from the solution to any other. As domain-specific mechanisms solve problems more efficiently than domain-general ones, we ought to expect the mind to contain a large number of cognitive mechanisms designed to solve these specific and consistently-faced problems, rather than only a few general-purpose mechanisms more capable of solving many problems we do not face, but poorly-suited to the specific problems we do. While such theorizing sounds entirely plausible and, indeed, quite reasonable, without empirical support for the notion of domain-specificity, it’s all so much bark and no bite.

Thankfully, empirical research abounds in the realm of logical reasoning. The classic tool used to assess people’s ability to reason logically is the Wason selection task. In this task, people are presented with a logical rule taking the form of “if P, then Q”, and a number of cards representing P, Q, ~P, and ~Q (i.e. “If a card has a vowel on one side, then it has an even number on the other”, with cards showing A, B, 1 & 2). They are asked to point out the minimum set of cards that would need to be checked to test the initial “if P, then Q” statement. People’s performance on the task is generally poor, with only around 5-30% of people getting it right on their first attempt. That said, performance on the task can become remarkably good – up to around 65-80% of subjects getting the correct answer – when the task is phrased as a social contract (“If someone [gets a benefit], then they need to [pay a cost]”, the most well-known being “If someone is drinking, then they need to be at least 21”). Despite the underlying logical form not being altered, the content of the Wason task matters greatly in terms of performance. This is a difficult finding to account for if one holds to the idea of a domain-general logical reasoning mechanism that functions the same way in all tasks involving formal logic. Noting that content matters is one thing, though; figuring out how and why content matters becomes something of a more difficult task.
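For readers who haven’t run into the task before, the formally correct answer is to turn the P card and the not-Q card (A and 1 in the vowel example), since those are the only cards that could falsify the rule; most people instead pick A and 2. A tiny sketch of that logic, purely my own illustration:

def cards_to_turn(cards):
    # cards maps a card's face to the proposition it shows: "P", "not-P", "Q", or "not-Q".
    # Only the P card (might hide not-Q) and the not-Q card (might hide P) can falsify "if P, then Q".
    return [face for face, shows in cards.items() if shows in ("P", "not-Q")]

vowel_rule_cards = {"A": "P", "B": "not-P", "2": "Q", "1": "not-Q"}
print(cards_to_turn(vowel_rule_cards))  # ['A', '1']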

While some might suggest that content simply matters as a function of familiarity – as people clearly have more experience with age restrictions on drinking and other social situations than vaguer stimuli – familiarity doesn’t help: people will fail the task when it is framed in terms of familiar stimuli and people will succeed at the task for unfamiliar social contracts. Accordingly, criticisms of the domain-specific social contract (or cheater-detection) mechanism shifted to suggest that the mechanism at work is indeed content-specific, but perhaps not specific to social contracts. Instead, the contention was that people are good at reasoning about social contracts, but only because they’re good at reasoning about deontic categories – like permissions and obligations – more generally. Assuming such an account were accurate, it remains debatable as to whether that mechanism would be counted as a domain-general or domain-specific one. Such a debate need not be had yet, though, as the more general account turns out to be unsupported by the empirical evidence.

We’re just waiting for critics to look down and figure it out.

While all social contracts involve deontic logic, not all deontic logic involves social contracts. If the more general account of deontic reasoning were true, we ought not to expect performance differences between the former and latter types of problems. In order to test whether such differences exist, Cosmides, Barrett, & Tooby’s (2010) first experiment involved presenting subjects with a permission rule – “If you do P, you must do Q first” – varying whether P was a benefit (going out at night), neutral (staying in), or a chore (taking out the trash; Q, in this case, involved tying a rock around your ankle). When the rule was a social contract (the benefit), performance was high on the Wason task, with 80% of subjects answering correctly. However, when the rule involved staying in, only 52% of subjects got it right; that number was even lower in the garbage condition, with only 44% accuracy among subjects. Further, this same pattern of results was subsequently replicated in a new context involving filing/signing forms as well. This result is quite difficult to account for with a more-general permission schema, as all the conditions involve reasoning about permissions; it is, however, consistent with the predictions from social contract theory, as only the contexts involving some form of social contract elicited the highest levels of performance.

Permission schemas, in their general form, also appear unconcerned with whether one violates a rule intentionally or accidentally. By contrast, social contract theory is concerned with the intentionality of the violation, as accidental violations do not imply the presence of a cheater the way intentional violations do. To continue to test the distinction between the two models, subjects were presented with the Wason task in contexts where the violations of the rule were likely intentional (with or without a benefit for the actor) or accidental. When the violation was intentional and benefited the actor, subjects performed accurately 68% of the time; when it was intentional but did not benefit the actor, that percentage dropped to 45%; when the violation was likely unintentional, performance bottomed-out at 27%. These results make good sense if one is trying to find evidence of a cheater; they do not if one is trying to find evidence of a rule violation more generally.

In a final experiment, the Wason task was again presented to subjects, this time varying three factors: whether one was intending to violate a rule or not; whether it would benefit the actor or not; and whether the ability to violate was present or absent. The pattern of results mimicked those above: when benefit, intention, and ability were all present, 64% of subjects determined the correct answer to the task; when only 2 factors were present, 46% of subjects got the correct answer; and when only 1 factor was present, subjects did worse still, with only 26% getting the correct answer, which is approximately the same performance level as when there were no factors present. Taken together, these three experiments provide powerful evidence that people aren’t just good at reasoning about the behavior of other people in general, but rather that they are good at reasoning about social contracts in particular. In the now-immortal words of Bill O’Reilly, “[domain-general accounts] can’t explain that”.

“Now cut their mic and let’s call it a day!”

Now, of course, logical reasoning is just one possible example for demonstrating domain specificity, and these experiments certainly don’t prove that the entire structure of the mind is domain-specific; there are other realms of life – such as, say, mate selection, or learning – where domain-general mechanisms might work. The possibility of domain-general mechanisms remains just that – possible; perhaps not often well-reasoned on a theoretical level or well-demonstrated at an empirical one, but possible all the same. Differentiating between these accounts may not always be easy in practice, as they are often thought to generate some, or even many, of the same predictions, but in principle it remains simple: we need to place the two accounts in experimental contexts in which they generate opposing predictions. In the next post, we’ll examine some experiments in which we pit a more domain-general account of learning against some more domain-specific ones.

References: Cosmides, L., Barrett, H. C., & Tooby, J. (2010). Adaptive specializations, social exchange, and the evolution of human intelligence. Proceedings of the National Academy of Sciences, 107 (Suppl. 2), 9007-9014. PMID: 20445099

Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often penetrates into lay discussions of psychology, and has been the subject of many popular books, such as the Predictably Irrational series. Part of the reason that people might give these conflicting views of human intelligence – either in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb which focus only on limited sets of information when making decisions. A simple, perhaps hypothetical, example of a heuristic might be something like a “beauty heuristic”. This heuristic might go something along the lines of: when deciding whom to get into a relationship with, pick the most physically attractive available option; other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one’s goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. The benefits of collecting additional bits of information are outweighed by the costs of doing so past a certain point, and there are many, many potential sources of information to choose from. So even though additional information can help one make a better choice, making the best objective choice is often a practical impossibility. In this view, heuristics trade off accuracy with effort, leading to ‘good-enough’ decisions. A related, but somewhat more nuanced benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it’s drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you select 20 random men and women from your block to measure, your estimate will likely not be precisely 5 inches; it might be lower or higher, and the degree of that error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you’re only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) relative to when you’re sampling 20, relative to 50, relative to a million. Importantly, the issue of sampling error crops up for each source of information you’re using. So unless you’re sampling large enough quantities of information capable of balancing that error out across all the information sources you’re using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by the heuristics might well be less predictively-troublesome than the degree of error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
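A quick simulation illustrates the sampling-error point (the specific numbers here – a true 5-inch difference, a 3-inch standard deviation – are just illustrative, not taken from any dataset): the average estimate hovers near the true value at every sample size, but the scatter around it shrinks as the samples grow.

import random
import statistics

def estimated_difference(n_per_group, true_diff=5.0, sd=3.0):
    # Draw a sample of men and women around a true 5-inch average difference and
    # return the difference between the sample means.
    men = [random.gauss(69.0 + true_diff / 2, sd) for _ in range(n_per_group)]
    women = [random.gauss(69.0 - true_diff / 2, sd) for _ in range(n_per_group)]
    return statistics.mean(men) - statistics.mean(women)

for n in (2, 20, 200):
    estimates = [estimated_difference(n) for _ in range(1000)]
    print(f"n = {n:3d} per group: mean estimate {statistics.mean(estimates):.1f} in, "
          f"typical error {statistics.stdev(estimates):.1f} in")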

While that seems to be all well and good, the astute reader will have noticed the boundary conditions required for heuristics to be of value: they need to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you play an optimization strategy and have sufficient amounts of information about each source, you’ll make the best possible prediction. In the face of limited information, a heuristic strategy can do better provided you know you don’t have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically attend to at random, though, you’d end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn’t have sufficient amounts of information when you actually did, you’d also make a worse prediction than the optimizer 100% of the time.

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast and frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you’re trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of the heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of being an organ donor between countries: while only 4.3% of Danish people are donors, nearly everyone in Sweden is (approximately 85%). Since the explicit attitudes about the willingness to be a donor don’t seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an ‘opt in’ policy to be a donor, whereas the Swedes have an ‘opt out’ one. The default option appears to be responsible for driving most of the variance in rates of organ donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) rather than an explanation of it (why does it matter, and under what circumstances might we expect it not to?). Though I have no data on this, I imagine if you brought subjects into the lab and presented them with an option to give the experimenter $5 or have the experimenter give them $5, but highlighted the first option as the default, you would probably find very few people who did not ignore that default. Why, then, might the default heuristic be so persuasive at getting people to be or fail to be organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer’s hypothesized function for the default heuristic – group coordination – doesn’t help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven’t explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other “one-word explanations” (like ‘culture’, ‘norms’, ‘learning’, ‘the situation’, or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (what Gigerenzer suggested formed the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do) without telling us anything more. Such descriptions, it seems, could even drop the word ‘heuristic’ altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it’s unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each of which dabbles in a different source of information that the others ignore, seem to resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove to be a fast and frugal way of stating results, it ends up being a poor method of explaining them or yielding much in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2, 528-554. DOI: 10.1111/j.1756-8765.2010.01094.x

Why Would Bad Information Lead To Better Results?

There are some truly strange arguments made in the psychological literature from time to time. Some might even be so bold as to call that frequency “often”, while others might dismiss the field of psychology as a variety of pseudoscience and call it a day. Now, were I to venture some guesses as to why strange arguments seem so popular, I’d have two main possibilities in mind: first, there’s the lack of a well-grounded theoretical framework that most psychologists tend to suffer from, and, second, there’s a certain pressure put on psychologists to find and publish surprising results (surprising in that they document something counter-intuitive or some human failing; I blame this one for the lion’s share of these strange arguments). These two factors might come together to result in rather nonsensical arguments being put forth fairly regularly and their not being spotted for what they are. One of these strange arguments that has come across my field of vision fairly frequently in the past few weeks is the following: that our minds are designed to actively create false information, and because of that false information we are supposed to be able to make better choices. Though it comes in various guises across different domains, the underlying logic is always the same: false beliefs are good. On the face of it, such an argument seems silly. In all fairness, however, it only seems that way because, well, it is that way.

If only all such papers came with gaudy warning hats…

Given the strangeness of these arguments, it’s refreshing to come across papers critical of them that don’t pull any rhetorical punches. For that reason, I was immediately drawn towards a recent paper entitled, “How ‘paternalistic’ is spatial perception? Why wearing a heavy backpack doesn’t – and couldn’t – make hills look steeper” (Firestone, 2013; emphasis his). The general idea that the paper argues against is the apparently-popular suggestion that our perception essentially tells us – the conscious part of us, anyway – many little lies to get us to do or not do certain things. As the namesake of the paper implies, one argument goes that wearing a heavy backpack will make hills actually look steeper. Not just feel harder to climb, mind you, but actually look visually steeper. The reason some researchers posited this might be the case is that they realized, correctly, that wearing a heavy backpack makes hills harder to climb. In order to dissuade us from climbing them under such conditions, then, our perceptual system is thought to make the hill look harder to climb than it actually is, so we don’t try. Additionally, such biases are said to make decisions easier by reducing the cognitive processing required to make them.

Suggestions like these do violence to our intuitive experience of the world. Were you looking down the street unencumbered, for instance, your perception of the street would not visibly lengthen before your eyes were you to put on a heavy backpack, despite the distance now being harder to travel. Sure; you might be less inclined to take that walk down the street with the heavy backpack on, but that’s a different matter from whether you would see the world any differently. Those who favor the embodied model might (and did) counter that it’s not that the distances themselves change, but rather the units on the ruler used to measure one’s position relative to them that does (Proffitt, 2013). In other words, since our measuring tool looks different, the distances look different. I find such an argument wanting, as it appears to be akin to suggesting that we should come to a different measurement of a 12-foot room contingent on whether we’re using foot-long or yard-long measuring sticks, but perhaps I’m missing some crucial detail.

In any case, there are many other problems with the embodied account that Firestone (2013) goes through, such as the magnitude of the effect sizes – which can be quite small – being insufficient to accurately adjust behavior, there being little to no objective way of scaling one’s relative abilities to certain kinds of estimates, and, perhaps most damningly, that many of these effects fail to replicate or can be eliminated by altering the demand characteristics of the experiments in which they’re found. Apparently, subjects in these experiments seemed to make some connection – often explicitly – between the fact that they were just asked to put on a heavy backpack and then make an estimate of the steepness of a hill. They’re inferring what the experimenter wants and then adjusting their estimates accordingly.

While Firestone (2013) makes many good points in suggesting why the paternalistic (or embodied) account probably isn’t right, there are some I would like to add to the list. The first of these additions is that, in many cases, the embodied account seems to be useless for discriminating between even directly-comparable actions. Consider the following example in which such biases might come into play: you have a heavy load to transport from point A to point B, and you want to figure out the easiest way of doing so. One route takes you over a steep hill; another route takes you the longer distance around the hill. How should we expect perceptual estimates to be biased in order to help you solve the task? On the one hand, they might bias you to avoid the hill, as the hill now looks steeper; on the other hand, they might bias you to avoid the more circuitous route, as distances now look longer. It would seem the perceptual bias resulting from the added weight wouldn’t help you make a seemingly simple decision. At best, such biases might make you decide to not bother carrying the load in the first place, but the moment you put it down, the perceptions of these distances ought to shrink, making the task seem more manageable. All such a biasing system would seem to do in cases like this, then, is add extra cognitive processing into the mix in the form of whatever mechanisms are required to bias your initial perceptions.

“It’s symbolic; things don’t always have to “do” things. Now help me plug it into the wall”

The next addition I’d like to make is also in regards to the embodied account not being useful: the embodied account, at least at times, would seem to get causality backwards. Recall that the hypothesized function of these ostensible perceptual distortions is to guide actions. Provided I’m understanding the argument correctly, then, these perceptual distortions ought to occur before one decides what to do; not after the decision has already been made. The problem is that they don’t seem to be able to work in that fashion, and here’s why: these biasing systems would be unable to know in which direction to bias perceptions prior to a decision being made. If, for instance, some part of your mind is trying to bias your perception of the steepness of a hill so as to dissuade you from climbing it, that would seem to imply that some part of your mind already made the decision as to whether or not to try and make the climb. If the decision hadn’t been made, the direction or extent of the bias would remain undetermined. Essentially, these biasing rules are being posited to turn your perceptual systems into superfluous yes-men.

On that point, it’s worth noting that we are talking about biasing existing perceptions. The proposition on the table seems to be the following chain of events: first, we perceive the world as it is (or at least as close to that state as possible; what I’ll call the true belief). This leaves most of the cognitive work already done, as I mentioned above. Then, from those perceptions, an action is chosen based on some expected cost/benefit analysis (i.e. don’t climb the hill because it will be too hard). Following this, our mind takes the true belief it already made the action decision with and turns it into a false one. This false belief then biases our behavior so as to get us to do what we were going to do anyway. Since the decision can be made on the basis of the initially-calculated true information, the false belief seems to have no apparent benefit for your immediate decision. The real effect of these false beliefs, then, ought to be expected to be seen in subsequent decisions. This raises yet another troubling possibility for the model: in the event that some perception – like steepness – is used to generate estimates of multiple variables (such as energy expenditure, risk, or so on), a biased perception will similarly bias all these estimates.

A quick example should highlight some of the potential problems with this. Let’s say you’re a camper returning home with a heavy load of gear on your back. Because you’re carrying a heavy load, you mistakenly perceive that your camping group is farther away than they actually are. Suddenly, you notice a rather hungry-looking predator approaching you. What do you do? You could try and run back to the safety of your group, or you could try and fight it off (forgoing other behavioral options for the moment). Unfortunately, because you mistakenly believe that your group is farther away than they are, you miscalculate the probability of making it to them before the predator catches up with you and opt to fight it off instead. Since the basis for that decision is false information, the odds of it being the best choice are diminished. This analysis works in the opposite direction as well. There are two types of errors you might make: thinking you can make the distance when you can’t, or thinking you can’t make it when you can. Both of these are errors to be avoided, and avoiding errors is awfully hard when you’re working with bad information.

Especially when you just disregarded the better information you had

It seems hard to find the silver lining in these false-belief models. They don’t seem to save any cognitive load, as they require the initially true beliefs to already be present in the mind somewhere. They don’t seem to help us make a decision either. At best, false beliefs lead us to do the same thing we would do in the presence of true beliefs anyway; at worst, false beliefs lead us to make worse decisions than we otherwise would. These models appear to require that our minds take the best possible state of information they have access to and then add something else to it. Despite these (perhaps not-so) clear shortcomings, false belief models appear to be remarkably popular, and are used to explain topics from religious beliefs to ostensible misperceptions of sexual interest. Given that people generally seem to understand that it’s beneficial to see through the lies of others and not be manipulated with false information, it seems peculiar that they have a harder time recognizing that it’s similarly beneficial to avoid lying to themselves.

References: Firestone, C. (2013). How “Paternalistic” Is Spatial Perception? Why Wearing a Heavy Backpack Doesn’t – and Couldn’t – Make Hills Look Steeper. Perspectives on Psychological Science, 8, 455-473.

Proffitt, D. (2013). An Embodied Approach to Perception: By What Units Are Visual Perceptions Scaled? Perspectives on Psychological Science, 8, 474-483.