The “I” In The Eye Of The Beholder

When “doing the right thing” is easy, people tend not to give others much credit for it. Chris Rock made light of this fact in one of his more popular routines – so popular, in fact, that it has its own Wikipedia page. In it, Chris Rock says, “Niggas always want some credit for some shit they supposed to do…a Nigga will brag about some shit a normal man just does”. At this point, it’s probably worth pointing out that Chris Rock later took issue with using the word “Nigga” because he felt it gave racist people the sense they had license to use it themselves. He apparently has no issue at all, however, with using the word “Faggot”. The hypocrisy of the human mind is always fun.

So here’s a question: which opinion represents Chris Rock’s real opinion? Does Chris believe in not using a word – even comically – that could be considered offensive because it might give some people with ill intentions license to use it, or does he not? If you understand the concept of modularity properly, you should also understand that the question itself is ill-phrased: implicit in it is the assumption of a single true self somewhere in Chris Rock’s brain, and that idea is no more than a (generally) useful fiction.

On second thought, maybe I don’t really want to see your true colors…

Examples of this line of thought abound, despite the notion being faulty. For instance, Daniel Kahneman – a psychologist who apparently didn’t appreciate modularity – felt, when working in the military, that he was observing people’s true nature under conditions of stress.

There’s also something called the Implicit Association Test – IAT for short. The basic principle behind the IAT is that people respond more quickly and accurately when matching terms that are strongly mentally associated than when matching ones that aren’t, the speed of your responses supposedly demonstrating what’s really in your mind. So let’s say a series of faces and words pops up in the middle of the screen; your task is to hit one button if the face is white or the word is positive, and a different button if the face is black or the word is negative (and vice versa: one button for a black face or a positive word, another for a white face or a negative word). The administrators and supporters of this test often claim things like, “It is well known that people don’t always ‘speak their minds’, and it is suspected that people don’t always ‘know their minds’” – though the interpretation of the results of such a test is, well, open to interpretation.
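If the mechanics are unclear, here’s a minimal sketch of the scoring logic in code; the reaction times are invented, and the calculation is only a rough, illustrative cousin of the actual IAT “D-score” algorithm (which also handles error trials and filters extreme latencies).

```python
# A rough, illustrative cousin of IAT scoring, not the real algorithm.
# Faster responses on "congruent" pairings are taken as evidence of
# stronger mental associations.

from statistics import mean, stdev

# Invented reaction times in milliseconds for the two button pairings
congruent = [620, 645, 601, 633, 658]    # e.g., white/positive + black/negative
incongruent = [702, 731, 689, 745, 710]  # e.g., black/positive + white/negative

# D-score style measure: latency difference scaled by the pooled SD
d_score = (mean(incongruent) - mean(congruent)) / stdev(congruent + incongruent)
print(f"IAT-style score: {d_score:.2f}")  # larger = faster on congruent trials
```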

“Your IAT results came back positive for hating pretty much everything about this guy”

Enter Nichols and Knobe, who conducted a little test to see whether people were compatibilists or incompatibilists (that is, whether people feel the concept of moral responsibility is compatible with determinism or not). It turns out that how you phrase the question matters: when people were asked to assume the universe is completely deterministic and were given a concrete case of immoral behavior (in this case, a man killing his wife and three kids to run off with his secretary), 72% of them said he was fully morally responsible for his actions. The researchers then asked other people the abstract version of the question (“in a completely deterministic universe, are people fully morally responsible for their actions?”), and, lo and behold, the answers did a complete flip; now 86% of people endorsed an incompatibilist stance, saying people aren’t morally responsible.

That’s a pretty neat finding, to be sure. What caught my eye was what followed, when the authors write: “In the abstract condition, people’s underlying theory is revealed for what it is – incompatibilist” (p.16, emphasis mine). To continue beating the point to death, the problem here is that the brain is not a single organ; it’s composed of different, functionally specific, information-processing modules, and the output of those modules depends on specific contexts. Asking what the underlying theory is makes the question ill-phrased from the start. So let’s jet over to the Occupy Wall Street movement, which Jay-Z recently decided to attempt to make some money on (money he will not be using to support the movement, by the way):

For every 99 dollars he makes, you get none.

When people demand that the 1% “pay their fair share” of the taxes, what is the word “fair” supposed to refer to? Currently, the federal tax code is progressive (that is, if you make more money, the proportion of that money that is taxed goes up); if fairness were truly the goal, you might expect these people to be lobbying for a flat tax, demanding that everyone – no matter how rich or poor – pay the same proportion of what they make. Many people seem to oppose that idea, which raises some red flags about their use of the word “fair”. Indeed, Pillutla and Murnighan (2003) make an excellent case for just how easily people manipulate the meaning of the concept to suit their own purposes in a given situation. I’ll let them explain:

“Arguments that an action, an outcome, or decisions are not fair, when uttered by a recipient, most often reflect a strategic use of fairness, even if this is neither acknowledged nor even perceived by the claimant…The logical extension of these arguments is that claims of fairness are really cheap talk, i.e., unverifiable, costless information, possibly representing ulterior motives” (p.258)

The concept of fairness, in that sense, is a lot like the concept of a true, single self; it’s a useful fiction that tends to be deployed strategically. It makes good sense to be a hypocrite when being a hypocrite is going to pay. Being logically consistent is not useful to you if it only ensures you give up benefits you could otherwise have and/or forces you to suffer losses you could avoid. The real trick to hypocrisy, then, according to Pillutla and Murnighan, is to appear consistent to others. If your cover gets blown, the resulting loss of face is potentially costly, depending, of course, on the specific set of circumstances.
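As a brief aside, the progressive-versus-flat distinction above is pure arithmetic and easy to make concrete; the brackets and rates in this sketch are invented for illustration and bear no relation to any actual tax code.

```python
# Toy progressive vs. flat tax. Brackets and rates are invented.

def progressive_tax(income):
    """Marginal brackets: 10% up to 50k, 25% to 200k, 40% above."""
    brackets = [(50_000, 0.10), (200_000, 0.25), (float("inf"), 0.40)]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def flat_tax(income, rate=0.20):
    """Everyone pays the same proportion, rich or poor."""
    return income * rate

for income in (40_000, 400_000):
    p, f = progressive_tax(income), flat_tax(income)
    print(f"${income:,}: progressive pays {p / income:.0%}, flat pays {f / income:.0%}")
```

Under the progressive scheme the effective rate climbs with income (10% vs. roughly 31% here), while the flat scheme holds it at 20% for everyone; which of those counts as “fair” is exactly the question being begged.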

References: Pillutla, M.M. & Murnighan, J.K. (2003). Fairness in bargaining. Social Justice Research, 16, 241-262.

Proximately – Not Ultimately – Anonymous

As part of my recent reading for an upcoming research project, I’ve been poking around some of the literature on cooperation and punishment, specifically second- vs. third-party punishment. Let’s say you have three people: A, B, and X. Persons A and B are in a classic Prisoner’s Dilemma; they can each opt to either cooperate or defect, and they receive payments according to their decisions. In the case of second-party punishment, person A or B can give up some of their payment to reduce the other player’s payment after the choices have been made. For instance, once the game has been run, person A could give up points, with each point given up reducing B’s payment by 3 points. This is akin to someone flirting with your boyfriend or girlfriend and you then blowing up the offender’s car; sure, it cost you a little cash for the gas, bottle, rag, and lighter, but the losses suffered by the other party are far greater.

Not only does it serve them right, but it’s also a more romantic gesture than flowers.

Third-party punishment involves another person, X, who observes the interaction between A and B. While X is unaffected by the outcome of the interaction itself, they are then given the option to give up some payment of their own to reduce the payment of A or B. Essentially, person X would be Batman swinging in to deliver some street justice, even if X’s parents weren’t murdered in front of their eyes.
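For those keeping score at home, here’s a minimal sketch of the payoff bookkeeping for both kinds of punishment; the starting payoffs are made up, and the 1:3 cost-to-damage ratio is the one from the second-party example above.

```python
# Payoff bookkeeping for costly punishment. Starting payoffs are made up;
# each point the punisher spends knocks 3 points off the target, as above.

def punish(punisher_payoff, target_payoff, points_spent, ratio=3):
    """Punisher gives up points; the target loses points_spent * ratio."""
    return punisher_payoff - points_spent, target_payoff - points_spent * ratio

# Say A cooperated and B defected, leaving A with 2 points and B with 8.
a, b = 2, 8

# Second-party punishment: A (the victim) spends 2 points to hit B for 6.
a, b = punish(a, b, points_spent=2)
print(f"Second-party: A={a}, B={b}")  # A=0, B=2

# Third-party punishment: observer X starts at 10, spends 1 to hit B for 3.
x = 10
x, b = punish(x, b, points_spent=1)
print(f"Third-party:  X={x}, B={b}")  # X=9, B=-1
```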

Classic economic rationality predicts that no one should ever give up any of their payment to punish another player if the game is a one-shot deal; paying to punish would only ensure that the punisher walks away with less money than they otherwise would. Of course, we do see punishment in these games from both second and third parties when the option is available (though second parties punish far more than third parties). The reasons second-party punishment evolved don’t appear terribly mysterious: games like these are rarely one-shot deals in real life, and punishment sends a clear signal that one is not to be shortchanged, encouraging future cooperation and avoiding future losses. The long-term benefits can outweigh the short-term cost of punishing, for if person A knows person B is unable or unwilling to punish transgressions, person A can continuously take advantage of B. If I know you won’t waste your time pursuing me for burning your car down – since it won’t bring your car back – there’s nothing to dissuade me from burning it a second or tenth time.
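The arithmetic behind that logic is simple enough to sketch; every number below is invented for illustration, but the point holds whenever the one-time cost of punishing is smaller than the accumulated losses it prevents.

```python
# Why eating the one-time cost of punishment can beat tolerating repeated
# exploitation. All numbers are invented for illustration.

ROUNDS = 10
EXPLOIT_LOSS = 5  # points the victim loses each round they're exploited
PUNISH_COST = 2   # one-time cost of punishing after the first offense

never_punishes = -EXPLOIT_LOSS * ROUNDS        # exploited every round: -50
punishes_once = -(EXPLOIT_LOSS + PUNISH_COST)  # exploited once, then deters: -7

print(f"Never punishes: {never_punishes}")
print(f"Punishes once:  {punishes_once}")
```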

Third-party punishment poses a bit more of a puzzle, which brings us to a paper by Fehr and Fischbacher (2004), who appear to be arguing in favor of group selection (at the very least, they don’t seem to find the idea implausible, despite it being just that). Since third parties aren’t directly affected by the behavior of the others, there’s less reason for them to get involved. Being Batman might seem glamorous, but I doubt many people would be willing to invest that much time and money – while incurring huge risks to their own lives – to anonymously deliver a benefit to a stranger. One of the possible ways third-party punishment could have benefited the punisher, as the authors note, is through reputation: person X punishes person A for behaving unfairly, signaling to others that X is a cooperator and a friend – who also shouldn’t be trifled with – whose kindness would be reciprocated in turn. In an attempt to control for these factors, Fehr and Fischbacher ran some one-shot economic games in which all players were anonymous and there was no possibility of reciprocation. The authors seemed to imply that any punishment in these anonymous cases is ultimately driven by something other than reputational self-interest.

“We just had everyone wear one of these. Problem solved”

The real question is: does playing these games in an anonymous, one-shot fashion actually control for these factors, or remove them from consideration? I doubt that it fully does, and here’s an example of why: Alexander and Fisher (2003) surveyed men and women about their sexual history in anonymous and (potentially) non-anonymous conditions. Men reported an average of 3.7 partners in the non-anonymous condition and 4.2 in the anonymous one; women reported averages of 2.6 and 3.4, respectively. So there’s some evidence that anonymous conditions do help.

However, there was also a third condition in which participants were hooked up to a fake lie detector machine – though ‘real’ lie detector machines don’t actually detect lies – and here the numbers changed again: 4.0 for men and 4.4 for women. While men’s answers weren’t particularly different across the three conditions, women’s reported number of sexual partners rose steadily, from 2.6 to 3.4 to 4.4. The anonymity manipulation, in other words, didn’t seem to remove all of women’s concerns about being judged.

On paper, she assured us that she found him sexy, and said her decision had nothing to do with his money. Good enough for me.

What I’m getting at is that it should not just be taken for granted that telling someone they’re in an anonymous condition automatically makes their psychology behave as if no one is watching; nor should punishment under such conditions be taken as evidence that moral sentiments arose via group selection (my intuition is that truly anonymous, one-shot conditions were rarely encountered over our evolutionary history, especially where punishment was concerned). Consider a few other examples: people don’t enjoy eating fudge in the shape of dog shit, drinking juice that has been in contact with a sterilized cockroach, holding rubber vomit in their mouth, eating soup from a never-used bedpan, or using sugar from a glass labeled “cyanide”, even if they labeled it themselves (Rozin, Millman, & Nemeroff, 1986). Even though these people “know” there’s no real reason to be disgusted by rubber, metal, fudge, or a label, their psychology still (partly) functions as if there were one.

I’ll leave you with one final example of how explicitly “knowing” something (e.g., that a survey is anonymous, or that the sugar really isn’t cyanide) can alter the functioning of your psychology in some cases, to some degree, but not in all cases.

If I tell you you’re supposed to see a dalmatian in the left-hand picture, you’ll quickly see it and never be able to look at that picture again without automatically seeing the dog. If I told you that the squares labeled A and B in the right-hand picture are actually the same color, you’d probably not believe me at first. Then, when you cover up all of that picture except A and B and find out that they actually are the same color, you’ll realize why people mistake me for Criss Angel from time to time. Also, when you’re looking at the whole picture, you’ll never be able to see A and B as the same color, because that explicit knowledge doesn’t always filter down into other perceptual systems.
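If you don’t trust your own eyes even then, you can always check the pixels directly. Here’s a quick sketch; the filename and coordinates are hypothetical, so you’d need to point it at your own copy of the checker-shadow image.

```python
# Sample the two squares directly instead of trusting your eyes. The filename
# and pixel coordinates are hypothetical; adjust them for your own copy of
# the checker-shadow image.

from PIL import Image  # pip install pillow

img = Image.open("checker_shadow.png").convert("RGB")
square_a = img.getpixel((110, 200))  # hypothetical center of square A
square_b = img.getpixel((180, 300))  # hypothetical center of square B

print(f"A: {square_a}, B: {square_b}")
print("Same color!" if square_a == square_b else "Different colors.")
```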

References: Alexander, M.G. & Fisher, T.D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. The Journal of Sex Research, 40, 27-35.

Fehr, E. & Fischbacher, U. (2004). Third-party punishment and social norms. Evolution and Human Behavior, 25, 63-87.

Rozin, P., Millman, L., & Nemeroff, C. (1986). Operation of the laws of sympathetic magic in disgust and other domains. Journal of Personality and Social Psychology, 50, 703-712.

Excuses, Excuses, Excuses

I recently finished the latest book by Robert Trivers (2011), The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life, which is an odd title considering how little of the book is devoted to the logic of the intended topic. A better title would probably have been Things Robert Trivers Finds Interesting. After straining to stay awake through most of the 337 tedious pages of the book, I can’t say I came away with any new insights or information on the subject of deception, though I did get the sense Trivers enjoys flirting with undergrads.

And who wouldn’t? It’s just one of the many, many benefits of getting tenure.

As Matt Ridley notes, Robert Kurzban (2010) also released a book not too long ago, Why Everyone (else) is a Hypocrite – which I can’t recommend highly enough – that made a solid case for why “self”-based research is problematic in the first place. The mind isn’t a singular entity; rather, it’s a collection of different mental organs, each a functionally specific information-processing mechanism. Trivers’ book contains no real mention of modularity, much less any active appreciation of it. I’d hesitate to say Trivers takes any idea further (as Matt suggests he does); if anything, Trivers stalls and rolls slightly backwards. Another impression I got from reading the book is that I can expect an angry phone call from Trivers if he ever reads this.

I’d like to discuss the merits of your recent review of my book in a calm, academic fashion.

How might this false conception of a self affect thinking in other domains? One good example comes from the domain of morality, where I get the sense the concept of the self has been tied heavily to moral culpability, with consciousness as king. Influences seen as originating outside the realm of conscious awareness are often used in attempts to exculpate various behaviors.

As an example, I’d offer up a paper by Sumithran et al. (2011) examining why overweight people on diets often relapse and regain weight after initial success at dropping some pounds. The authors measured various hormone levels in subjects’ bodies that are known to influence hunger and related behaviors, like energy expenditure and food intake, finding that dieting leads to changes in these circulating hormone levels. This, they argue, could be why many dieters don’t maintain their weight loss long-term. Fine. However, the authors lose me when they write this:

“…[A]n important finding of this study is that many of these alterations persist for 12 months after weight loss, even after the onset of weight regain, suggesting that the high rate of relapse among obese people who have lost weight has a strong physiological basis and is not simply the result of the voluntary resumption of old habits.” (p. 1602, emphasis mine)

Apparently, the authors find it interesting that they found a physiological basis for people not keeping the weight off, contrasting it with “voluntary” actions. My question would be: what else would you even expect to find – a non-physiological basis? After all, we are physical beings, so any changes in our thoughts or behaviors need to be the result of other physical changes. The implication seems to be that truly voluntary actions are supposed to be uninfluenced by physiology, while somehow still having an influence on the behavior of the physical body.

“It’s not my choice, as I happen to have hormones”

This doesn’t seem to be a terribly uncommon thought process: while people sometimes actively deny any influence of biology on behavior out of fear of justifying that behavior, or claim (correctly) that a biological basis doesn’t justify it, those same people can very quickly accept a behavior as biologically based in the hopes of making it acceptable by saying “it’s not a choice”. That’s some interesting hypocrisy there. Did I mention there’s a very interesting – and a not-so-interesting – book that deals with that topic?

References: Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.

Sumithran, P., Prendergast, L.A., Delbridge, E., Purcell, K., Shulkes, A., Kriketos, A., & Proietto, J. (2011). Long-term persistence of hormonal adaptations to weight loss. The New England Journal of Medicine, 365, 1597-1604.

Trivers, R. (2011). The folly of fools: The logic of deceit and self-deception in human life. New York, NY: Basic Books. 

Blame And Intent

The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment, I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

How much blame do you think the chairman deserves for what he did (from 0-6)? Did the chairman intentionally harm the environment?

Those were the questions posed to people who read the story quoted above (Knobe, 2003). If you’re like most people, you’re probably basking in some sweet moral outrage right about now at the thought of the hypothetical chairman’s actions; the kind people of fantasy-land will have to drink from polluted rivers, leading to the death of imaginary fish populations, harming the livelihood of the poor fishermen who were going to kill them anyway, and they all have the chairman to thank for it. To make the example a little more real, think about how certain economies recently took a big hit due to shady business practices, leading to some people occupying Wall Street and getting a face full of pepper spray. In the study, 82% of participants said the chairman acted intentionally, and he was blamed at about a 5 on average.

So now that we’ve established that the chairman definitely should be blamed for what he did, since he was acting intentionally, do me a favor: go back to the original story and read it again, replacing “harm” with “help”, then answer those first two questions again, swapping “blame” for “praise”.

If you’re anything like me, you’re probably really handsome. If you’re anything like the participants in the study, your answers to the questions probably did a 180: of course the chairman doesn’t deserve praise for what he did, and he certainly didn’t act intentionally. In fact, 77% of participants now said the chairman did not act intentionally, and he deserved to be praised at only about 1.5.

Remember how I said people are bad at logically justifying their decisions and evaluating evidence objectively?

Let’s consider these results in light of the moral wiggle room research I presented last post. When the dictator can choose between a similar-interest $6/$5 option and a $5/$1 option, they probably won’t be judged positively no matter which they choose; if they pick the first option, well, that was in their interest anyway, so the judgment should be neutral, and if they pick the second, they’re just a spiteful dic…tator. When someone has to choose between the conflicting payoffs – either $6/$1 or a $5/$5 split – they probably have a chance for some social advancement with the latter choice (it’s only moral if you give something up to help someone else), but plenty of room for moral condemnation with the former.

What would people’s judgments be of dictators who had the $6/$? or $5/$? payoffs and chose not to know? My guess is that they would fall somewhere between neutral and negative, closer to the neutral side. Even though their reputation may suffer somewhat due to their willful ignorance, people seem to weigh definite harm more than potential harm (drunk drivers suffer lower penalties than drunk drivers who hit something or someone by accident, essentially meaning it’s more “against the law” to be reckless and unlucky than just reckless; judging actions morally by their outcomes is another topic, no less interesting).

But what about the poor, misunderstood dictators? I’d also guess that the dictators would rate their own behavior quite differently than the crowds of people with colorful signs and rhyming chants about who or what has “got to go”. Those in the similar-interest group would probably say they behaved morally and did so intentionally – and who are we to question their motives? – as would those in the conflicting group who chose the $5/$5 split. The ones who chose the $6/$1 split would probably rate their behavior as neutral, justifying it by saying it was an economic decision, not a moral one. Those in the ignorant condition would probably rate their behavior somewhere between morally positive and neutral; after all, they didn’t intentionally hurt anyone, nor do they even know whether they hurt anyone at all, so, you know, they’re probably upstanding people.

References: Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis, 63, 190-193.

Red Herrings And Moral Wiggle Room

I don’t know whether it was actually J.K. Rowling who wrote/said the oft-shared quote asking whether “fat” is really the worst thing a human being can be – worse than “vindictive”, “jealous”, “shallow”, “vain”, “boring”, or “cruel” – but as a shallow, vain, and boring person who also happens to be in great shape, the quote really speaks to me; no doubt it also speaks to the deeper, selfless, and interesting portion of the population who read it surrounded by stacks of old pizza boxes and empty cartons of Ben and Jerry’s, but I get the sense it speaks to us in different ways (I also get the sense my keyboard is substantially less sticky).

Whoever is being quoted would appear to be implying something like the following: “Someone might be unhealthy/unattractive, but they could be X instead, which is worse. Therefore, being fat is OK.” The logical shortcomings of that implication are so vast that the author has either never taken a philosophy class or has a PhD in the subject. I’d doubt Rowling’s(?) commitment to that line of thought in any case, if only by pointing out that the actors in the Harry Potter films are less than fat – far less so than the population at large – meaning plenty of good actors probably got passed over because they were fat. More importantly, the entire quote is a red herring: a statement intended to distract attention away from the matter at hand.

Consider an alternative, but similar, statement: Is a little thievery the worst thing someone can do? Is it worse than murder, rape, or physical assault? Not to me. The problem should become apparent very quickly: whether or not Y is worse than X has no bearing on the status of X. Whatever X is, it needs to be able to stand on its own feet. There is one exception, and it concerns a certain parking ticket issued to me. Parking police: go fight some real crime. There are bigger concerns out there than whether I probably accidentally parked in a handicapped spot for a few hours. I already had to get my car out of impound; isn’t that enough for you people? It’s not like I was drunk driving. Or fat.

So what does this quote tell us about how human psychology functions? One potential lesson is that people are bad at justifying things coherently (see my last post, and probably future ones). Of greater interest, however, is the fact that people are as intensely interested in justifying their behavior as they are.

I remember seeing a commercial for yogurt on TV not long ago that made fun of this peculiar bit of psychology. A woman stands in front of a fridge, eying some cake. She thinks to herself that she could eat some celery and the cake, and somehow the celery would cancel the cake out. Women; am-I-right, fellas? She wants to eat that cake and is trying to justify doing so to herself. There are two things to say about that: first, it’s great evidence for the modularity of the mind, as if any more were needed. Second, what good could those justifications possibly be? (They certainly don’t work out all the time.)

To start answering that question, let’s examine some research by Larson and Capra (2009) on what’s called “moral wiggle room”. The research involved a dictator game – a classic economic research design in which one player is designated the “dictator” and another is designated to lie there and take it… I mean, the “receiver”. The dictator is given a sum of money, say $10, and decides how that money is divided. Whatever the dictator decides goes, so if the dictator wants to keep $9 and give $1 to the receiver, so be it. Not only is it a neat way to examine certain aspects of our psychology, but it’s also an effective way of disappointing people in the name of science. Talk about killing two birds with one stone.

Research on moral wiggle room goes (basically) as follows: in one group, the dictators choose between payoff pairs with conflicting interests – a higher payoff for themselves paired with a lower payoff for the receiver, or a lower payoff for themselves paired with a higher payoff for the receiver (a $6/$1 option or a $5/$5 option) – or between payoffs with aligned interests (a $6/$5 option or a $5/$1 option). Another group of dictators sees payoffs that look like this: $6/$? or $5/$?. While these dictators don’t know up front what the receiver will get, they can find out for free: with the click of a button they can reveal the payoffs, at no cost in time or money. So what do people do?
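Before getting to the answer, here’s a minimal sketch of the design in code; the payoff pairs come from the description above, but the choice rules are illustrative placeholders, not anything the authors propose.

```python
# Sketch of the moral wiggle room design. Payoff pairs are as described
# above; the dictator's choice rules are illustrative placeholders.

import random

conflicting = [(6, 1), (5, 5)]  # (dictator $, receiver $): interests conflict
aligned = [(6, 5), (5, 1)]      # higher dictator payoff also helps the receiver

def dictator_choice(options, knows_payoffs):
    """Pick a payoff pair. Ignorant dictators only see their own amounts,
    so they take the larger one; informed ones (in this toy rule) take the
    high amount only when it doesn't come at the receiver's expense."""
    if not knows_payoffs:
        return max(options, key=lambda pair: pair[0])
    best_for_self = max(options, key=lambda pair: pair[0])
    best_for_other = max(options, key=lambda pair: pair[1])
    return best_for_self if best_for_self == best_for_other else best_for_other

# Hidden-payoff condition: revealing is free, but suppose it's a coin flip
# (slightly more than half of real dictators chose to stay ignorant).
reveals = random.random() < 0.5
print("revealed" if reveals else "stayed ignorant", "->",
      dictator_choice(conflicting, knows_payoffs=reveals))
print("aligned payoffs ->", dictator_choice(aligned, knows_payoffs=True))
```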

In the first group, where payments are known, about 75% of dictators chose the fair option, so maybe life in the Soviet Union really wasn’t that bad. What happens when dictators are given the choice to not know how their actions will affect others? Slightly more than half of them chose to remain ignorant and not reveal the payoffs, and of those who didn’t reveal, 100% took the higher offer. (The ones who did reveal weren’t exactly saints either, since over half of them took the higher payment at the expense of the receiver.)

Why might people not want to know how their actions affect others, even when it costs nothing to know? For starters, being strategically ignorant can only help them in terms of payoff: in the best case, they find out the option that’s better for them is better for someone else as well, and they take it anyway; in the worst case, they now have access to information that opens the door to expectations of certain treatment that would leave their wallets slightly lighter (in theory, anyway; while these games are played anonymously, we’re all still sitting here judging their actions to ourselves, which demonstrates the point). By remaining ignorant, however, they can also honestly claim they didn’t know they were making someone worse off. This could allow them to benefit indirectly (they may be able to better persuade others that they’re morally upstanding citizens, or more effectively avoid punishment for their actions should those actions come to light, without needing to lie about it) in addition to benefiting directly (they made more money).

Of course, these games are played at low stakes, the information is free, easy to obtain, and unambiguous, and decisions are made anonymously; one can imagine how the results might change when any of those factors do. While things can get messy quickly, there are clearly cases in which, for some people, not knowing beats knowing. My guess is that the reasons center around persuasion through justification – specifically, being able to convince people that what you did was OK because you didn’t know how other people would be affected. While that argument happens to be a red herring in the case of the research reviewed here – they could have found out if they wanted to – we should not forget that red herrings are used as often as they are because they have a habit of working. And by working, I mean they can help make people forget you’re full of shit because they’re looking somewhere else.

References: Larson, T. & Capra, C. M. (2009). Exploiting moral wiggle room: illusory preference for fairness? A comment. Judgment and Decision Making, 4, 467-474.