So Drunk You’re Seeing Double (Standards)

Jack and Jill (both played by Adam Sandler, for our purposes here) have been hanging out all night. While both have been drinking, Jill is substantially drunker than Jack. While Jill is in the bushes puking, she remembers that Jack had mentioned earlier he was out of cigarettes and wanted to get more. Once she emerges from the now-soiled side of the lawn she was on, Jill offers to drive Jack to the store so he can buy the cigarettes he wants. Along the way, they are pulled over for drunk driving. Jill wakes up the next day and doesn’t even recall getting into the car, but she regrets doing it.

Was Jill responsible for making the decision to get behind the wheel?

More importantly, should the cop have let them off in the hopes that the drunk driving would have stopped this movie from ever being made?

Jack and Jill (again, both played by Adam Sandler, for our purposes here) have been hanging out all night. While both have been drinking, Jill is substantially drunker than Jack. While Jill is in the bushes puking, she remembers that Jack had mentioned earlier he thought she was attractive and wanted to have sex. Once she emerges from the now-soiled side of the lawn she was on, Jill offers to have sex with Jack, and they go inside and get into bed together. The next morning, Jill wakes up, not remembering getting into bed with Jack, but she regrets doing it.

Was Jill responsible for making the decision to have sex?

More importantly, was what happened sex, incest, or masturbation? Whatever it was, if Adam Sandler was doing it, it’s definitely gross.

According to my completely unscientific digging through online discussions of the issue, I can conclusively state that opinions are definitely mixed on the second question, though not so much on the first. In both cases, the underlying logic is the same: person X makes decision Y willingly while under the influence of alcohol, and later does not remember and regrets Y. As seen previously, slight changes in phrasing can make all the difference when it comes to people’s moral judgments, even if the underlying proposition is, essentially, the same.

To explore these intuitions in one other context, let’s turn down the dimmer, light some candles, pour some expensive wine (just not too much, to avoid impairing your judgment), and get a little more personal: You have been dating your partner – let’s just say they’re Adam Sandler, gendered to your preferences – who decided one night to go hang out with some friends. You keep in contact with your partner throughout the night, but as it gets later, the responses stop coming. The next day, you get a phone call; it’s your partner. Their tone of voice is noticeably shaken. They tell you that after they had been drinking for a while, someone else at the bar started buying them drinks. Their memory is very scattered, but they recall enough to let you know that they cheated on you and that, at the time, they had offered to have sex with the person they met at the bar. They go on to tell you they regret doing it.

Would you blame your partner for what they did, or would you see them as faultless? How would you feel about them going out drinking alone the next weekend?

If you assumed the Asian man was the Asian woman’s husband, you’re a racist asshole.

Our perceptions of the situation and the responsibilities of the involved parties are going to be colored by self-interested factors (Kearns & Fincham, 2005). If you engage in a behavior that can do you or your reputation harm – like infidelity – you’re more likely to try and justify that behavior in ways that remove as much personal responsibility as possible (such as: “I was drunk” or “They were really hot”). On the other hand, if you’ve been wronged, you’re more likely to pile as much blame as possible on the party that wronged you, discounting environmental factors. Both perpetrators and victims bias their views of the situation; they just tend to do so in opposite directions.

What you can bet on, despite my not having data available on the matter, is that people won’t take kindly to having their status as either “innocent of (most) wrongdoing” or “victim” questioned. There is often too much at stake, in one form or another, to let consistency get in the way. After all, being a justified victim can easily put one into a strong social position, just as being known as one who slanders others in an unjustified fashion can drop you down the social ladder like a stone.

References: Kearns, J.N. & Fincham, F.D. (2005). Victim and perpetrator accounts of interpersonal transgressions: Self-serving or relationship-serving biases? Personality and Social Psychology Bulletin, 31, 321-333.

Is Description Explanation?

[Social Psychology] has been self-handicapped with a relentless insistence on theoretical shallowness: on endless demonstrations that People are Really Bad at X, which are then “explained” by an ever-lengthening list of Biases, Fallacies, Illusions, Neglects, Blindnesses, and Fundamental Errors, each of which restates the finding that people are really bad at X. Wilson, for example, defines “self-affirmation theory” as “the idea that when we feel a threat to our self-esteem that’s difficult to deal with, sometimes the best thing we can do is to affirm ourselves in some completely different domain.” Most scientists would not call this a “theory.” It’s a redescription of a phenomenon, which needs a theory to explain it. – Steven Pinker

If you’ve sat through (almost) any psychology course at any point, you can probably understand Pinker’s point quite well (the full discussion can be found here). The theoretical shallowness Steven references was the very dissatisfaction that drew me so strongly toward evolutionary theory. My first exposure to evolutionary psychology as an undergraduate immediately had me asking the sorely missing “why?” questions so often that I could probably have been mistaken for an annoying child (as if being an undergraduate didn’t already do enough on that front).

In keeping with the annoying child theme, I also started beating up other young children, because I love consistency.

That same theoretical shallowness has returned to me lately in the form of what are known as “norms”. As Fehr and Fischbacher (2004) note, “…it is impossible to understand human societies without an adequate understanding of social norms”, and “It is, therefore, not surprising that social scientists…invoke no other concept more frequently…”. Did you read that? It’s impossible to understand human societies without norms, so don’t even try. Of course, in the same paragraph they also note, “…we still know very little about how they are formed, the forces determining their content, how and why they change, their cognitive and emotional underpinnings, how they relate to values, how they shape our perceptions of justice and its violations, and how they are shaped by and shape our neuropsychological architecture”. So, just to recap: it’s apparently vital to understand norms in order to understand human behavior, and, despite social scientists knowing pretty much nothing about them, they’re referenced everywhere. Using my amazing powers of deduction, I can only conclude that most social scientists think it’s vital they maintain a commitment to not understanding human behavior.

By adding the concept of “norms”, Fehr and Fischbacher (2004) didn’t actually add anything to what they were trying to explain (which was why some uninvolved bystanders will sometimes pay a generally small amount to punish a perceived misdeed that didn’t directly affect them, if you were curious), but instead seemed to grant an illusion of explanatory depth (Rozenblit & Keil, 2002). It would seem neuroscience is capable of generating that same illusion.

This thing cost more money than most people see in a lifetime; it damn sure better have some answers.

Can simply adding irrelevant neuroscience information to an otherwise bad explanation suddenly make it sound good? Apparently, the answer is a resounding “yes”, at least for most people who aren’t neuroscience graduate students or above. Weisberg et al. (2008) gave adults, students in a neuroscience class, and experts in the neuroscience field a brief description of a psychological phenomenon, then offered either a ‘good’ or a ‘bad’ explanation of the phenomenon in question. In keeping with the theme of this post, the ‘bad’ explanations were simply circular redescriptions of the phenomenon (or, as many social psychologists would call it, a theory). Additionally, those good and bad explanations came either without any neuroscience, or with a brief and irrelevant neuroscience tidbit tacked on that described where some activity occurs in a brain scan.

Across all groups, unsurprisingly, good explanations were rated as more satisfying than bad ones. However, the adults and the students rated bad explanations with the irrelevant neuroscience information as actually being on the satisfying side of things, and among the students, good explanations with neuroscience sounded better as well. Only those in the expert group did not find the irrelevant neuroscience information more satisfying; if anything, they found it less so – rating good explanations as less satisfying than the same explanations without the neuroscience – as they understood that the neuroscience was superfluous and used awkwardly.

This cognitive illusion is quite fascinating: descriptions appear to be capable of playing the role of explanations in some cases, despite being woefully ill-suited for the task. This could mean that descriptions may also be capable of playing the role of justifications, by way of explanations; just try not to convince yourself that I’ve explained why they function this way.

References: Fehr, E. & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25, 63-87.

Rozenblit, L. & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521-562.

Weisberg, D.S., Keil, F.C., Goodstein, J., Rawson, E., & Gray, J.R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20, 470-477.

Blame And Intent

The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment, I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

How much blame do you think the chairman deserves for what he did (from 0-6)? Did the chairman intentionally harm the environment?

Those were the questions posed to the people who read the story quoted above (Knobe, 2003). If you’re like most people, you’re probably basking in some sweet moral outrage right about now at the thought of the hypothetical chairman’s action; the kind people of fantasy-land will have to drink from polluted rivers, leading to the death of imaginary fish populations, harming the livelihood of the poor fishermen who were going to kill them anyway, and they all have the chairman to thank for it. To make the example a little more real, think about how certain economies recently took a big hit due to shady business practices, leading to some of the people occupying Wall Street getting a face full of pepper spray. In the study, 82% of participants said the chairman had acted intentionally, and the average blame rating was about 5.

So now that we’ve established that the chairman definitely should be blamed for what he did, since he was acting intentionally, do me a favor: go back to the original quote and read it again, replacing “harm” with “help”, then answer those first two questions, just replacing “blame” with “praise” and “harm” with “help” again.

If you’re anything like me, you’re probably really handsome. If you’re anything like the participants in the study, your answers to the questions probably did a 180; of course the chairman doesn’t deserve praise for what he did, and he certainly didn’t act intentionally. In fact, 77% of participants said the chairman did not act intentionally, and the average praise rating was about 1.5.

Remember how I said people are bad at logically justifying their decisions and evaluating evidence objectively?

Let’s consider these results in light of the moral wiggle room research I presented last post. When the dictator can choose between a similar-interest $6/$5 option or a $5/$1 option, they probably won’t be judged positively no matter which they choose; if they pick the first option, well, that was in their interest anyway, so the judgment should be neutral, and if they pick the second, they’re just a spiteful dic…tator. When someone has to choose between the conflicting payoffs, either a $6/$1 or a $5/$5 split, they probably have a chance for some social advancement with the latter choice – it’s only moral if you give something up to help someone else – but plenty of room for moral condemnation with the former.

What would people’s judgments be of dictators who had the $6/$? or $5/$? payoffs and chose not to know? My guess is that they would fall somewhere between neutral and negative, closer to the neutral side. Even though their reputation may suffer somewhat due to their willful ignorance, people seem to weigh definite harm more heavily than potential harm (drunk drivers suffer lower penalties than drunk drivers who hit something/someone by accident, essentially meaning it’s more “against the law” to be reckless and unlucky than just reckless; judging actions morally by their outcomes is another topic, no less interesting).

But what about the poor, misunderstood dictators? I’d also guess that the dictators would rate their own behavior quite differently than the crowds of people with colorful signs and rhyming chants about who or what has “got to go”. Those in the similar-interest group would probably say they behaved in a morally positive way and did so intentionally – and who are we to question their motives? – as would those in the conflicting group who chose the $5/$5 split. The ones who chose the $6/$1 split would probably rate their behavior as neutral, justifying it by saying it was an economic decision, not a moral one. Those in the ignorant condition would probably rate their behavior somewhere between morally positive and neutral; after all, they didn’t intentionally hurt anyone, nor do they even know whether they hurt anyone at all, so, you know, they’re probably upstanding people.
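To keep those payoff structures straight, here’s a minimal sketch (Python; the “judgment” numbers are nothing more than my hypothesized third-party ratings from the last few paragraphs, not data from any study):

```python
# A toy summary of the dictator-game scenarios above. Payoffs are
# (dictator, receiver) in dollars; None means the dictator chose not to
# reveal the receiver's payoff. The judgment values are my guessed
# third-party moral ratings (-3 = condemn, +3 = praise) -- illustration only.

scenarios = [
    ("aligned, self-interested", [(6, 5), (5, 1)], (6, 5), 0),
    ("aligned, spiteful",        [(6, 5), (5, 1)], (5, 1), -3),
    ("conflicting, generous",    [(6, 1), (5, 5)], (5, 5), +2),
    ("conflicting, selfish",     [(6, 1), (5, 5)], (6, 1), -2),
    ("willfully ignorant",       [(6, None), (5, None)], (6, None), -1),
]

for label, options, choice, judgment in scenarios:
    print(f"{label}: chose {choice} from {options} -> judgment {judgment:+d}")
```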

References: Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63, 190-193.

I Read It, So It Must Be True

The first principle [in science] is that you must not fool yourself, and you are the easiest person to fool. – Richard Feynman

It feels nice to start a post with a quote from some famous guy; it just makes the whole thing that much more official. I decided to break from my proud two-note tradition and write about some psychology that doesn’t have to do with gay sex, much to the disappointment of myself and my hordes of dedicated fans. Instead, today I’ll be examining the standards of evidence with which claims are evaluated.

Were we all surveyed, the majority of us would report that we’re – without a doubt – in at least the top half of the population in terms of intelligence, morality, and penis size. We’d also probably report that we’re relatively free from bias in our examination of evidence compared to our peers; the unwashed masses of the world, sheep that they are, lack the critical thinking abilities we have. Were we shown the results of research that said the majority of people surveyed also consider themselves remarkably free of bias, relative to the rest of the world – a statistical impossibility – we’d shake our heads at how blind other people can be to their own biases, all the while assuring the researcher that we really are that good.

I think you see where I’m going with this, since most everyone is above average in their reasoning abilities.

In some (most?) cases when evidence isn’t present, it’s simply assumed to exist. If I asked you whether making birth control pills more available would increase or decrease the happiness of women, on average, I’d guess you would probably have an answer for me that didn’t include the phrase “I don’t know”. How do you suppose you’d respond if you then read about some research evidence that contradicted your answer?

In real life, evidence is a tricky thing. Results from almost any source can be tainted by any number of known and unknown factors. Publication bias alone can lead to positive results being published more often than null results, leading to an increase in the number of false positives, not to mention other statistical sleights of hand that won’t be dealt with here. The way questions are asked can lead respondents towards giving certain answers. Sometimes the researchers think they’re measuring something they aren’t. Sometimes they’re asking the wrong questions. Sometimes they’re only asking certain groups of people who differ in important ways from other people. Sometimes (often) the answers people give to questions don’t correspond well to actual behavior. There are countless possible flaws, uncontrolled variables, and sources of noise that can throw a result off.
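Publication bias, at least, is easy to see in action. Here’s a minimal simulation (Python; the group sizes and the “publish only p < .05” rule are my own assumptions for illustration) of a literature where no real effects exist, yet only significant results get published:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate many two-group experiments where the true effect is zero,
# then "publish" only those reaching p < .05.
n_studies, n_per_group = 10_000, 30
published = 0
for _ in range(n_studies):
    a = rng.normal(0, 1, n_per_group)  # control group
    b = rng.normal(0, 1, n_per_group)  # "treatment" group -- same distribution
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:
        published += 1

# Roughly 5% of these null studies sneak through, and every one of them
# reports a spurious "effect" -- the published record is 100% false positives.
print(f"published {published} of {n_studies} studies "
      f"({published / n_studies:.1%}); all are false positives")
```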

Here’s the good news: people are pretty alright at picking out those issues (and I do stress alright; I’m not sure I’d call them good at it). Here’s the bad news: people are also substantially worse at doing it when the information agrees with what they already think.

Two papers examined this tendency: Lord, Ross, & Lepper (1979) and Koehler (1993). In the first, subjects were surveyed about their views regarding the death penalty and were categorized as either strongly in favor of it or strongly opposed. The subjects were then given a hypothetical research project and its results to evaluate; results that either supported or opposed the usefulness of the death penalty in reducing crime. Following this, they were given another study that came to the opposite conclusion. So here we have people with very strong views being given ambiguous evidence. Surely, seeing that the evidence was mixed, people would begin to mellow in their views, perhaps compromising on simply breaking a thief’s hands over killing him or letting him escape unharmed, right?

Well, the short answer is “no”; the somewhat longer answer is “nooooooo”. When subjects rated the research they were presented with, they pointed out the possible ways the research opposing their views could have been misconducted and why, to their satisfaction, its results weren’t valid. However, they found no corresponding problems with the results that supported their views, or at least no problems really worth worrying about. Bear in mind, they read this evidence back to back. Their views on the subject, both pro and con, remained unchanged; if anything, they became slightly more polarized than they had been at the beginning.
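As a toy model of that polarization (my own sketch, not anything from Lord, Ross, & Lepper), consider two readers who take agreeable studies at face value but heavily discount any study that cuts against their current belief, which is the “finding methodological flaws” move described above:

```python
def read_studies(prior_log_odds, studies, bias=0.8):
    """Update a belief (in log-odds) across a series of studies.

    Every study carries the same evidential strength (a log likelihood
    ratio), but a biased reader discounts studies that disagree with
    their current belief by `bias`, taking only agreeable ones at
    face value.
    """
    belief = prior_log_odds
    for llr in studies:
        agrees = (llr > 0) == (belief > 0)
        weight = 1.0 if agrees else (1.0 - bias)
        belief += weight * llr
    return belief

# Perfectly mixed evidence: one pro-deterrence study, one anti.
mixed_evidence = [+1.0, -1.0]

supporter = read_studies(prior_log_odds=+2.0, studies=mixed_evidence)
opponent = read_studies(prior_log_odds=-2.0, studies=mixed_evidence)

# Identical, balanced evidence, yet both readers end up *more* extreme
# than they started -- the polarization pattern reported above.
print(f"supporter: +2.00 -> {supporter:+.2f}")  # +2.80
print(f"opponent:  -2.00 -> {opponent:+.2f}")   # -2.80
```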

Koehler (1993) found a similar result: when graduate students were evaluating hypothetical research projects, the projects that found results consistent with the students’ beliefs were rated more favorably than those with opposing results. We’re not just talking unwashed masses anymore; we’re talking about unwashed and opinionated graduate students. There was also an interaction effect: specifically, the stronger the preexisting belief, the more favorably agreeing studies were rated. A second study replicated this effect using a population of skeptics and paranormal researchers examining evidence for ESP (if you’re curious, the biases of the paranormal researchers seemed somewhat less pronounced. Are you still feeling smug about the biases of others, or are you starting to feel the results aren’t quite right?).

The pattern that emerges is that bias progressively creeps in as investment in a subject increases. We see high-profile examples of it all the time in politics: statistics are often cited that are flimsy at best and made up at worst. While we often chalk this up to politicians simply lying outright, the truth is probably that they legitimately believe what they are saying is true, but it could be something they accepted without ever looking into it, or something they examined with a somewhat relaxed critical eye.

And before we – with our statistically large penises and massive intellects – get all high and mighty about how all politicians are corrupt liars, we’d do well to remember that the research I just talked about didn’t focus on politicians. The real difference between non-politicians and politicians is that the decisions of the latter group tend to carry consequences and are often the center of public attention. You’re probably no different; you’re just not being recorded and watched by millions of people when you do it.

Recently, I had someone cite a statistic at me that the average lifespan of a transsexual was 23 years. As far as I can tell, the source of that statistic is that someone said it once, and it was repeated. I’m sure many people have heard statistics about how many prostitutes are actually being coerced into working against their will; you might do well to consider this. Many are probably familiar with the statistic that women earn 75 cents for every dollar a man earns as a result of sexism and discrimination. Some of you will be pleased to know that the discrepancy drops very sharply once you actually start to control for basic things, like number of hours worked, education, field of work, etc. Is some percentage of whatever gap remains due to sexism? Probably, but it’s far, far smaller than many would make it out to be; the mere existence of a gap is not direct evidence of sexism.
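To make “control for basic things” concrete, here’s a hedged sketch on entirely made-up data (Python with numpy; every number in it is invented for illustration, not an estimate of the real gap) showing how a raw wage gap shrinks once a single covariate like hours worked enters the regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# A fake labor market (every coefficient here is invented): men in this
# sample average more paid hours per week, and hours drive most of the
# raw hourly-wage gap; the direct discrimination term is only $0.30/hr.
male = rng.integers(0, 2, n)                                 # 1 = male, 0 = female
hours = rng.normal(38 + 4 * male, 5)                         # hours worked per week
wage = 15 + 0.6 * hours + 0.3 * male + rng.normal(0, 2, n)   # hourly wage

# Raw gap: regress wage on sex alone (intercept + male dummy).
X_raw = np.column_stack([np.ones(n), male])
raw_gap = np.linalg.lstsq(X_raw, wage, rcond=None)[0][1]

# Adjusted gap: add hours worked as a covariate.
X_adj = np.column_stack([np.ones(n), male, hours])
adj_gap = np.linalg.lstsq(X_adj, wage, rcond=None)[0][1]

# The raw gap lands near $2.70/hr; the adjusted gap recovers ~$0.30/hr.
print(f"raw gap: ${raw_gap:.2f}/hr, adjusted for hours: ${adj_gap:.2f}/hr")
```

The same logic extends to education and field of work; each covariate added peels off the part of the gap it accounts for, leaving whatever residual remains.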

Not only are unreliable statistics like those parroted back by people who want to believe (or disbelieve) them for one reason or another, but the interpretations of those statistics are open to the same problem. I’m sure we can all think of times other people made this mistake, but I’ll bet most of us would struggle to think of times we did it ourselves, smart and good-looking as we all are.

References: Koehler, J.J. (1993). The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes, 56, 28-55.

Lord, C.G., Ross, L., & Lepper, M.R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098-2109.