No Pain, No Gain: Victimhood And Selfishness

If you’re the kind of person who has an active social life, including things like friends and intimate sexual relationships, then you’re probably the type of person who has something better to do than read this page. In the event you’re reading this and also manage to have those kinds of relationships, you might have noticed that people who have recently (or not so recently) been hurt will sometimes go a bit off the rails. Perhaps they opt to drink to the point of blacking out and setting fire to their neighbor’s pets, or they might take a more subtle approach and simply become a slightly different person for a time (becoming a little more extroverted, a little less concerned about safe sex, or balling up in their closet to eat ice cream for days on end). I’ve kind of touched on this issue before when considering the function of depression, so today I’m going to shift gears a bit.

Right after I treat myself to some retail therapy…

I’ve written about victimhood several times in the past, and today I’d like to tie two pieces of information together to help understand a third. The first piece to consider is the Banker’s Paradox: people need to judge where to invest their social capital so as to get the most from their investment over the long term. Part of this assessment involves considering (a) to whom your social capital would be most valuable and (b) how likely the recipient of that investment would be to return it. Certain classes of people make better investment targets than others, depending on the context you currently find yourself (and them) in. The second piece of information involves how people attribute blame to victims and heroes: victims (conceptualized as people who have had bad things happen to them) are blamed less than heroes (conceptualized as people who do good deeds) for later identical misdeeds.

In light of the Banker’s Paradox, the second finding makes more sense: victims may often make better social investment targets than heroes, at least with regard to their need for the assistance. Accordingly, if you’re looking to invest in someone and foster a relationship with them, casting them in a negative moral light probably won’t go very far towards achieving that goal. At the very least, if victims make better investment targets for other people, even if you aren’t looking to invest in that victim yourself, you run the risk of drawing condemnation from those other parties by siding against the victim they’re trying to invest in. To clarify this point, consider the story of Robin Hood; even if you were not benefiting directly from Robin stealing from the rich and giving to the poor, condemning him for theft would be unlikely to win you much support from the lower class he was helping. Unless you were looking to experience a public embarrassment and/or beating, keeping quiet about the whole stealing thing might be in your best interests.

With that in mind, it’s time to turn this analysis towards the perspective of the victimized party, who, up until now, has been treated in a relatively passive way: as someone who things happen to, or someone who receives investment. Third parties, in this scenario, would invest socially in victims because the victims hold real or potential social value. This value could in turn be strategically leveraged by victims in order to achieve other useful outcomes for themselves. For instance, if victims are less likely to be morally condemned for their actions by others, this would give victims a bit of moral wiggle room to behave more selfishly towards others while more effectively avoiding the consequences of their actions. As summed up by The Joker in The Dark Knight, “If you’re good at something [or valuable to someone], never do it [or be that] for free”.

“Also, don’t exceed the recommended daily dosage on your drugs”.

This hypothesis was inadvertently tested by Zitek et al (2010), who examined the link between perceiving oneself as a victim and subsequent feelings of entitlement. I say inadvertently because, as is usually the case in psychology research, their experiment was carried out with no theory worth mentioning. Essentially, this research was done on a hunch, and the results were in no way explained. While I’d prefer not to have to harp on this point so frequently, it’s just too frequent an issue to ignore. Moving on…

In the first experiment, Zitek et al (2010) asked two groups of subjects to either (a) write about a time they were bored or (b) write about a time life treated them unfairly. Following this, subjects were asked three questions relating to their sense of entitlement. Lastly, subjects were given a chance to help the experimenter with an additional task, ostensibly unrelated to the current experiment. This final measure could be considered an indirect assessment of the subjects’ current altruistic tendencies. When it came to the measures of entitlement, subjects who wrote about an unfair instance in their life rated themselves as more entitled than the control group (4.34 vs 3.85 out of 7, respectively). Further, subjects were less likely to help the experimenter out with the additional task in the unfair condition, relative to the bored one (60% vs 81% of subjects helped, respectively).

A second experiment altered the entitlement questions somewhat and also asked about the subject’s selfish behavioral intentions in lieu of asking subjects to help out on the additional task. Zitek et al (2010) also asked about other aspects of the subject’s current negative emotions, such as anger. The results for this experiment showed that subjects in the unfair condition were slightly more likely to report selfish behavioral intentions (3.78) than subjects in the bored condition (3.42). Similarly, subjects in the unfair condition also reported a greater sense of entitlement (4.91) relative to the bored group (4.54). The subject’s current feelings of anger and frustration did not mediate this effect significantly, whereas feelings of entitlement did, suggesting there is something special about entitlement in this regard.

One final experiment got a little more personal: instead of just asking subjects about a time life was unfair to them, an experiment was run where subjects would lose out on a prize either fairly or unfairly. In the unfair loss condition, what appeared to be a computer glitch prevented subjects from being able to win, whereas in the fair loss condition the task subjects were given was designed to appear as if they were simply unable to solve it successfully in the time allotted. After this loss, subjects were asked how they would allocate money for a hypothetical experiment between themselves and another player, contingent on them outperforming that other player 70% of the time. Again, the same effect popped up: subjects in the unfair loss condition suggested they should get more money in the hypothetical experiment ($3.93 out of $6) relative to the fair loss condition ($3.64). While that might not seem like much of a difference, when considering only the most selfish allocations of money, those in the unfair condition were more than twice as likely (19%) to make such divisions, relative to those in the fair loss condition (8%).

The fourth experiment might have gotten a little out of hand…

So, being a victim would seem to make people slightly more selfish in these cases. The size of this effect wasn’t particularly impressive in the current experiments, but the measures of victimization were rather tame; two involved experiences long past, and the third involved victimization at the hands of a relatively impersonal force – a computer glitch. More recent and more intense victimization might do something to change the extent of this effect, but that will have to be a matter for future research to sort out; research that might be a little difficult to conduct, as most review boards aren’t too keen on approving research that aims to cause significant discomfort for the subjects.

That said, many people might have easily made the opposite prediction: that being victimized would lead people to become more altruistic and less selfish, perhaps based in some proximate empathy model (i.e. “I don’t want to see people hurt the way I was”). While I certainly wouldn’t want to write off such a possible outcome, given the proper context, discussing that possibility will be a job for another day. What I will suggest is that we shouldn’t expect victimhood to make people do one and only one thing; we should expect their behavior will be highly dependent on their contexts. After all, the appropriateness of a behavior can only be determined contextually, and behaving selfishly with reckless abandon is still a risky proposition, victim or not.

References: Zitek, E.M., Jordan, A.H., Monin, B., & Leach, F.R. (2010). Victim entitlement to behave selfishly. Journal of Personality and Social Psychology, 98(2), 245-255. PMID: 20085398

Can Situations Be Strong Or Weak?

“The correspondence bias is the tendency to draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situation in which they occur. Although this tendency is one of the most fundamental phenomena in social psychology, its causes and consequences remain poorly understood” – Gilbert and Malone, 1995

Social psychologists are not renowned for being particularly good at understanding things, even things which are (supposedly) fundamental to their field of study. Like the proverbial drunk looking for his missing keys at night under a streetlight rather than in the park where he lost them “because the light is better”, part of the reason social psychologists are not very good at providing genuine understanding is that they often begin with some false premise or assumption. In the case of the correspondence bias, as defined by Gilbert & Malone (1995), I feel one of these misunderstandings is the idea that behavior can be caused or explained by the situation at all (let alone “entirely”); that is, unless one defines “the situation” in a way that ceases to be of any real value.

Which is about as valuable as the average research paper in psychology.

According to Gilbert and Malone (1995), an “eminently reasonable” rule is that “…one should not explain with dispositions that which has already been explained by the situation”. They go on to suggest that people tend to “underestimate the power of situations”, frequently mistaking “…strong situation[s] for relatively weak one[s]”. To use some more concrete examples, people seemed to perform poorly at tasks like predicting how much shock subjects in the Milgram experiment on obedience would give when asked to by an experimenter, or tended to judge basketball players as less competent when they were shooting free throws in a dimly-lit room, relative to a well-lit one. In these experiments, the command of an experimenter and the lighting of a room are supposed to be, I think, “strong” situations that “highly constrain” behavior. Not to put too fine a point on it, but that makes no sense.

A simple example should demonstrate why. Let’s say you wanted to see how “strong” of a situation a hamburger is, and you measure the strength of the situation by how much subjects are willing to pay for that burger. An initial experiment finds that subjects are, on average, willing to pay a maximum of about $5 for that average burger. Good to know. Now, a second experiment is run, but this time subjects are divided into three groups: group 1 has just finished eating a large meal, group 2 ate that same meal four hours prior and nothing else since, and group 3 ate that meal eight hours prior and nothing else since. These three groups are now presented with the same average hamburger. The results you’d now find are that group 1 seems relatively uninterested in paying for that burger (say, $0.50, on average), group 2 is somewhat interested in paying for it ($5), and group 3 is very interested in paying for it ($10).

From this hypothetical pattern of results, what are we to conclude about the “strength” of the situation of an opportunity to buy a burger? Obviously, the burger itself (the input provided by the environment) explains nothing about the behavior of the subjects and has no intrinsic “strength”. This shouldn’t be terribly surprising because abstract situations aren’t what generate behavior; psychological modules do. Whether that burger is currently valuable or not is going to depend crucially on the outputs of certain modules monitoring things like one’s current caloric state, and other modules recognizing the hamburger as a good source of potential calories. That’s not to say that the situations are irrelevant to the behavior that is eventually generated, of course; it just implies that which aspects of the environment matter, how much they matter, when they matter, and why they matter, are all determined by the current state of the existing psychological structures of the organism in question. A resource that is highly valuable in one situation is not necessarily valuable in another.
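To restate that in more mechanical terms, here’s a toy sketch of the burger example. The numbers and the linear functional form are entirely hypothetical, chosen only to mirror the figures above; the point is simply that the burger (the input) is held constant while the output of the valuation module varies with internal state:

```python
# Toy model: the burger is a constant input; what varies is the internal
# state feeding a hypothetical valuation module. The slope and baseline
# are arbitrary, picked only to roughly reproduce the example figures.

def willingness_to_pay(hours_since_meal: float) -> float:
    """Dollars a hypothetical subject would pay for the same average
    burger, as a function of current caloric state."""
    return 0.50 + 1.19 * hours_since_meal

for group, hours in [("group 1", 0), ("group 2", 4), ("group 3", 8)]:
    print(f"{group}: ~${willingness_to_pay(hours):.2f} for the same burger")

# group 1: ~$0.50, group 2: ~$5.26, group 3: ~$10.02 -- none of the
# variance in behavior is explained by the burger itself.
```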

“If it makes my car go fast, just imagine how much time it’ll cut off my 100m”

Despite their use of inconsistent and sloppy language regarding the interaction between environments and dispositions in generating behavior, Gilbert and Malone (1995) seem to understand that point to some extent. Concerning an example where a debate coach instructs subjects to write a pro-Castro speech, the authors write:

“…[T]he debate coach’s instructions merely alter the payoffs associated with the two behavioral options…the essayist’s behavioral options are not altered by the debate coach’s instructions; rather, the essayist’s motivation to enact each of the behavioral options is altered.”

What people are often underestimating (or overestimating, depending on context), then, is not the strength of the situation, but the strength of other competing dispositions, given a certain set of environmental inputs. While this might seem like a minor semantic issue, I feel it might hold a deeper significance, insomuch as it leads researchers to ask the wrong kinds of questions. For instance, what’s noteworthy to me about Gilbert and Malone’s (1995) analysis of the ultimate causes of the correspondence bias is not the candidate explanations they put forward, but rather which questions they don’t ask and which explanations they don’t give.

The authors suggest that the correspondence bias might not have historically had many negative consequences for a number of reasons that I won’t get into here. The only possible positive consequence they discuss is that the bias might allow people to predict the behavior of others. This is a rather strange benefit to posit, I feel, given that almost the entirety of their paper up to that point had been focused on how this bias is likely to lead to incorrect predictions, all things considered. Even granting that the correspondence bias might only tend to be an actual problem in contexts artificially created in psychology experiments (such as by randomly assigning subjects to groups), in no case does it seem to lead to more accurate predictions of others’ behavior.

The ultimate explanations offered for the correspondence bias left me feeling like (and I could be wrong about this) the authors were still thinking about the bias as an error in the way we think; they don’t seem to give the impression that the bias has any real function. Now, that could be true; the bias might well be a neutral-to-maladaptive byproduct, though what the bias would be a byproduct of isn’t immediately clear. While, from a strictly accuracy-based point of view, the bias might often lead to inaccurate conclusions, as I’ve mentioned before, accuracy is only important to the extent that it helps organisms do useful things. The question that Gilbert and Malone (1995) fail to ask, given their focus on accuracy, is why people would bother attributing the behavior of others to situational or dispositional characteristics in the first place.

My road rage happens to be indifferent to whether you were lost or just a slow driver.

Being able to predict the behavior of other organisms is useful, no doubt; it lets you know who is likely to be a good social investment and who isn’t, which will in turn affect the way you behave towards others. Given the stakes at hand, and since you’re dealing with organisms that can be persuaded, accuracy in perceptions might not always be the best policy. Suppose you’re in competition with a rival over some resource; since the Olympics are currently going on, let’s say you’re a particularly good swimmer competing in your respective event. Let’s say you don’t come in first; you end up placing behind one of your country’s bitter rivals. How are you going to explain that loss to other people? You might concede that your rival was simply a better swimmer than you, but that’s not likely to garner you a whole lot of support. Alternatively, you might suggest that you were really the better swimmer, but some aspect of the situation ended up giving your rival a temporary upper hand. What you’d be particularly unlikely to do would be to suggest both that your rival was actually the better swimmer and that he beat you despite some situational factor that ended up putting you at an advantage.

As Gilbert and Malone (1995) mention in their introduction, a niece who is perceived by her aunt as intentionally breaking a vase would receive the thumbscrews, while the niece who is perceived as breaking a vase by accident would not. Depending on the nature of the situation – whether it’s one that will result in blame or praise – it might serve you well to minimize or maximize the perception of your involvement in bringing the events about. It would similarly serve you well to manipulate the perceptions of other people’s involvement in the act. One way of doing this would involve going after the perceptions of whether a behavior was caused by a situation or a disposition; whether the outcome was a fluke or likely to be consistent across situations. This would lead to the straightforward prediction that such attributional biases will tend to look remarkably self-serving, rather than just wrong in some general way. I’ll leave it up to you as to whether or not that seems to be the case.

References: Gilbert, D.T., & Malone, P.S. (1995). The correspondence bias. Psychological Bulletin, 117, 21-38. DOI: 10.1037/0033-2909.117.1.21

Why All The Fuss About Equality?

In my last post, I discussed why the term “inequality aversion” is a rather inelegant way to express certain human motivations. A desire to not be personally disadvantaged is not the same thing as a desire for equity more generally. Further, people readily tolerate and create inequality when it’s advantageous for them to do so, and such behavior is readily understandable from an evolutionary perspective. What isn’t as easily understandable – at least not as immediately – is why equality should matter at all. Most research on the subject of equality would appear to just take it for granted (more or less) that equality matters without making a concerted attempt to understand why that should be the case. The paper I’ll be discussing today isn’t much different.

On the plus side, at least they’re consistent.

The paper by Raihani & McAuliffe (2012) sought to disentangle two possible competing motives when it comes to punishing behavior: inequality and reciprocity. The authors note that previous research examining punishment often confounds these two possible motives. For instance, let’s say you’re playing a standard prisoner’s dilemma game: you have a choice to either cooperate or defect, and let’s further say that you opt for cooperation. If your opponent defects, not only do you lose out on the money you would have made had he cooperated, but your opponent also ends up with more money than you do overall. If you decided to punish the defector, the question arises of whether that punishment is driven by the loss of money, the aversion to the disadvantageous inequality, or some combination of the two.

To separate the two motives out, Raihani & McAuliffe (2012) used a taking game. Subjects would play the role of either player 1 or player 2 in one of three conditions. In all conditions, player 1 was given an initial endowment of seventy cents; player 2, on the other hand, started out with either ten cents, thirty cents, or seventy cents. In all conditions, player 2 was then given the option of taking twenty cents from player 1, and following that decision player 1 was then given the option of paying ten cents to reduce player 2′s payoff by thirty cents. The significance of these conditions is that, in the first two, if player 2 takes the twenty cents, no disadvantageous inequality is created for player 1. However, in the last condition, by taking the twenty cents, player 2 creates that inequality. While the overall loss of money across conditions is identical, the context of that loss in terms of equality is not. The question, then, is how often player 1 would punish player 2.
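For concreteness, here’s a quick sketch of the payoff structure across the three conditions, as described above (amounts in cents):

```python
# Payoff structure of the taking game described above (amounts in cents).
# Player 1 always starts with 70; player 2's endowment varies by condition.

P1_START, TAKE = 70, 20

for p2_start in (10, 30, 70):
    p1_after = P1_START - TAKE           # player 1 after money is taken
    p2_after = p2_start + TAKE           # player 2 after taking
    print(f"P2 starts at {p2_start}: P1 has {p1_after}, P2 has {p2_after}; "
          f"P1 disadvantaged: {p2_after > p1_after}")

# Only in the 70/70 condition (50 vs 90) does taking leave player 1
# behind; the 20-cent loss itself is identical in all three conditions.
```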

In the first two conditions, where no disadvantageous inequality was created for player 1, player 1 didn’t punish significantly more whether player 2 had taken money or not (approximately 13%). In the third treatment, where player 2′s taking did create that kind of inequality, player 1 was now far more likely to pay to punish (approximately 40%). So this is a pretty neat result, and it mirrors past work that came at the question from the opposite angle (Xiao & Houser, 2010; see here). The real question, though, concerns how we are to interpret this finding. These results, in and of themselves, don’t tell us a whole lot about why equality matters when it comes to punishment decisions.

They also don’t tell me much about this terrible itch I’ve been experiencing lately, but that’s a post for another day.

I think it’s worth noting that the study still does, despite its best efforts, confound losing money and generating inequality; in no condition can player 2 create disadvantageous inequality for player 1 without taking money away as well. Accordingly, I can’t bring myself to agree with the authors, who conclude:

  Together, these results suggest that disadvantageous inequality is the driving force motivating punishment, implying that the proximate motives underpinning human punishment might therefore stem from inequality aversion rather than the desire to reciprocate losses.

It could still well be the case that player 1 would rather not have twenty cents taken from them, thank you very much, but doesn’t reciprocate with punishment for other reasons. To use a more real-life context, let’s say you have a guest come to your house. At some point after that guest has left, you discover that he apparently also left with some of the cash you had been saving to buy whatever expensive thing you had your eye on at the time. When it came to deciding whether or not you desired to see that person punished for what they did, precisely how well off they were relative to you might not be your first concern. The theft would not, I imagine, automatically become OK in the event that the guy only took your money because you were richer than he was. A psychology that was designed to function in such a manner would leave one wide open for exploitation by selfish others.

However, how well off you were, relative to how needy the person in question was, might have a larger effect in the minds of other third-party condemners. The sentiment behind the tale of Robin Hood serves as an example towards that end: stealing from the rich is less likely to be condemned by others than stealing from one of lower standing. If other third parties are less likely, for whatever reason, to support your decision to punish another individual in contexts where you’re advantaged over the person being punished, punishment immediately risks becoming more costly. At that point, it might be less costly to tolerate the theft than to risk condemnation by others for taking action against it.

What might be better referred to as, “The Metallica V. Napster Principle”.

One final issue I have with the paper is a semantic one: the authors label the act of player 2 taking money as cheating, which doesn’t fit my preferred definition (or, frankly, any definition of cheating I’ve ever seen). I favor the Tooby and Cosmides definition where a cheater is defined as “…an individual who accepts a benefit without satisfying the requirements that provision of that benefit was made contingent upon.” As there was no condition required for player 2 to be allowed to take money from player 1, it could hardly be considered an act of cheating. This seemingly minor issue, however, might actually hold some real significance, in the Freudian sense of the word.

To me, that choice of phrasing implies that the authors realize that, as I previously suggested, player 1s would really prefer if player 2s didn’t take any money from them; after all, why would they? More money is better than less. This highlights, for me, the very real and very likely possibility that what player 1s were actually punishing was having money taken from them, rather than the inequality, but they were only willing to punish in force when that punishment could more successfully be justified to others.

References: Raihani, N.J., & McAuliffe, K. (2012). Human punishment is motivated by inequity aversion, not a desire for reciprocity. Biology Letters. PMID: 22809719

Xiao, E., & Houser, D. (2010). When equality trumps reciprocity. Journal of Economic Psychology, 31, 456-470. DOI: 10.1016/j.joep.2010.02.001

Inequality Aversion Aversion

While I’ve touched on the issues surrounding the concept of “fairness” before, there’s one particular term that tends to follow the concept around like a bad case of fleas: inequality aversion. Following the proud tradition of most psychological research, the term manages to describe certain states of affairs (kind of) without so much as an iota of explanatory power, while at the same time placing the emphasis, conceptually, on the wrong variable. In order to better understand why (some) people (some of the time) behave “fairly” towards others, we’re going to need to address both of the problems with the term. So, let’s tear the thing down to the foundation and see what we’re working with.

“Be careful; this whole thing could collapse for, like, no reason”

Let’s start off with the former issue: when people talk about inequality aversion, what are they referring to? Unsurprisingly, the term would appear to refer to the fact that people tend to show some degree of concern for how resources are divided among multiple parties. We can use the classic dictator game as a good example: when given full power over the ability to divide some amount of money, dictator players often split the money equally (or near-equally) between themselves and another player. Further, the receivers in dictator games also tend to respond to equal offers with favorable remarks and to unequal offers with negative remarks (Ellingsen & Johannesson, 2008). The remaining issue, then, concerns how we are to interpret findings like this, and why we should interpret them in such a fashion.

Simply stating that people are averse to inequality is, at best, a restatement of those findings. At worst, it’s misleading, as people will readily tolerate inequality when it benefits them. Take the dictators in the example above: many of them (in fact, the majority of them) appear perfectly willing to make unequal offers so long as they’re on the side that’s benefiting from that inequality. This phenomenon is also illustrated by the fact that, when given access to asymmetrical knowledge, almost all people take advantage of that knowledge for their own benefit (Pillutla & Murnighan, 1995). As a final demonstration, take two groups of subjects, each subject given the task of assigning themselves and another subject to one of two tasks: the first task is described as allowing the subject a chance to win $30, while the other task has no reward and is described as being dull and boring.

In the first of these two groups, since subjects can assign themselves to whichever task they want, it’s perhaps unsurprising that 90% of the subjects assigned themselves to the more attractive task; that’s just simple, boring self-interest. Making money is certainly preferable to being bored out of your mind, but automatically assigning yourself to the positive task might not be considered the fairest option. The second group, however, flipped a coin in private first to determine how they would assign tasks, and following that flip made their assignment. In this group, since coins are impartial and all, it should not come as a surprise that…90% of the subjects again assigned themselves to the positive task when all was said and done (Batson, 2008). How very inequality-averse and fair of them.

“Heads I win; Tails I also win.”

A recent (and brief) paper by Houser and Xiao (2010) examined the extent to which people are apparently just fine with inequality, but from the opposite direction: taking money away instead of offering it. In their experiment, subjects first played a standard dictator game. The dictator had $10 to divide however they chose. Following this division, both the dictator and the receiver were given an additional $2. Finally, the receiver was given the opportunity to pay a fixed cost of $1 for the ability to reduce the dictator’s payoff by any amount. Another experimental group took part in the same task, except the dictator was passive; the division of the $10 was made at random by a computer program, representing simple chance factors.
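To make the payoff structure concrete, here’s a sketch using a hypothetical unequal division (the $8/$2 split is my own illustrative choice, not a figure from the paper):

```python
# Illustrative payoffs in the Houser & Xiao (2010) design, using a
# hypothetical $8/$2 division; the paper aggregates across divisions.

dictator_keeps, receiver_gets = 8, 2   # dictator divides the $10
bonus = 2                              # both then receive an extra $2

dictator = dictator_keeps + bonus      # dictator now holds $10
receiver = receiver_gets + bonus       # receiver now holds $4

receiver -= 1                          # receiver pays the $1 punishment fee
equalizing = dictator - receiver       # deducting $7 leaves both at $3

print(f"A purely inequality-averse receiver should deduct ${equalizing}; "
      f"deducting more yields advantageous inequality for the receiver.")
```

As the results below show, most punishers deducted well past that equalizing point.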

A general preference to avoid inequality would, one could predict, be relatively unconcerned with the nature of that inequality: whether it came about through chance factors or intentional behavior should be irrelevant. For instance, if I don’t like drinking coffee, I should be relatively averse to the idea whether I was randomly assigned to drink it or whether someone intentionally assigned me to drink it. However, when it came to the receivers deciding whether or not to “correct” the inequality, precisely how that inequality came about mattered: when the division was randomly determined, about 20% of subjects paid the $1 in order to reduce the other player’s payoff, as opposed to the 54% of subjects who paid the cost in the intentional condition (Note: both of these percentages refer to cases in which the receiver was given less than half of the dictator’s initial endowment). Further still, the subjects in the random treatment deducted less, on average, than the subjects in the intention treatment.

The other interesting part about this punishment, as it pertains to inequality aversion, is that most people who did punish did not just make the payoffs even; the receivers deducted money from the dictators to the point that the receivers ended up with more money overall in the end. Rather than seeking equality, the punishing receivers brought about inequality that favored themselves, to the tune of 73% of the punishers in the intentional treatment and 66% in the random treatment (which did not differ significantly). The authors conclude:

…[O]ur data suggest that people are more willing to tolerate inequality when it is caused by nature than when it is intentionally created by humans. Nevertheless, in both cases, a large majority of punishers attempt to achieve advantageous inequality. (p.22)

Now that the demolition is over, we can start rebuilding.

This punishment finding also sheds some conceptual light on why inequality aversion puts the emphasis on the wrong variable: people are not averse to inequality, per se, but rather seem to be averse to punishment and condemnation, and one way of avoiding punishment is to make equal offers (of the dictators that made an equal or better offer, only 4.5% were punished). This finding highlights the problem of assuming a preference based on an outcome: just because some subjects make equal offers in a dictator game, it does not follow that they have a genuine preference for making equal offers. Similarly, just because men and women (by mathematical definition) are going to have the same average number of opposite-sex sexual partners, it does not follow that this outcome was obtained because they desired the same number.

That is all, of course, not to say that preferences for equality don’t exist at all; it’s just that while people may have some motivations that incline them towards equality in some cases, those motivations come with some rather extreme caveats. People do not appear averse to inequality generally, but rather appear strategically interested in (at least) appearing fair. Then again, fairness really is a fuzzy concept, isn’t it?

References: Batson, C.D. (2008). Moral masquerades: Experimental exploration of the nature of moral motivation. Phenomenology and the Cognitive Sciences, 7, 51-66

Ellingsen, T., & Johannesson, M. (2008). Anticipated verbal feedback induces altruistic behavior. Evolution and Human Behavior. DOI: 10.1016/j.evolhumbehav.2007.11.001

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters. DOI: 10.1016/j.econlet.2010.07.008

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.

50 Shades Of Grey (When It Comes To Defining Rape)

For those of you who haven’t been following such things lately, Daniel Tosh recently catalyzed an internet firestorm of offense. The story goes something like this: at one of his shows, he was making some jokes or comments about rape. One woman in the audience was upset by whatever Daniel said and yelled out that rape jokes are never funny. In response to the heckler, Tosh either (a) made a comment about how the heckler had probably been raped herself, or (b) suggested it would be funny were the heckler to get raped, depending upon which story you favor. The ensuing outrage seems to have culminated in a petition to have Daniel Tosh fired from Comedy Central, which many people ironically suggested has nothing at all to do with censorship.

This whole issue has proved quite interesting to me for several reasons. First, it highlights some of the problems I recently discussed concerning third party coordination: namely that publicly observable signals aren’t much use to people who aren’t at least eye witnesses. We need to rely on what other people tell us, and that can be problematic in the face of conflicting stories. It also demonstrated the issues third parties face when it comes to inferring things like harm and intentions: the comments about the incident ranged from a heckler getting what they deserved through to the comment being construed as a rape threat. Words like “rape-apologist” then got thrown around a lot towards Tosh and his supporters.

Just like how whoever made this is probably an anti-Semite and a Nazi sympathizer

While reading perhaps the most widely-circulated article about the affair, I happened to come across another perceptual claim that I’d like to talk about today:

According to the CDC, one in four female college students report that they’ve been sexually assaulted (and when you consider how many rapes go unreported, because of the way we shame victims and trivialize rape, the actual number is almost certainly much higher).

Twenty-five percent would appear alarmingly high; perhaps too high, especially when placed in the context of a verbal mud-slinging match. A slightly involved example should demonstrate why this claim shouldn’t be taken at face value: in 2008, the United States population was roughly 300 million (rounding down). To make things simple, let’s assume (a) half the population is made up of women, (b) the average woman finishing college is around 22, and (c) any woman’s chances of being raped are equal, set at 25%. Now, in 2008, there were roughly 15 million women in the 18-24 age group; they are our first sample. If the 25% number were accurate, you’d expect that, among women ages 18-24, 3.75 million of them should have been raped at some point throughout their lives, or roughly 170,000 rape victims per year in that cohort (assuming rape rates are constant from birth to 24). In other words, each year, roughly 1.13% of the women who hadn’t previously been raped would be raped (and no one else would be).

Let’s compare that 1% number to the number of reported rapes in the entire US in 2008: thirty rapes per hundred-thousand people, or 0.03%. Even after doubling that number (assuming all reported rapes come from women, and women are half the population, so the reported number is out of fifty-thousand, not a hundred-thousand) we only make it to 0.06%. In order to make it to 1.13%, you would have to posit that for each reported rape there were about 19 unreported ones. For those who are following along with the math, that would mean that roughly 95% of rapes would never have been reported. While 95% unreported might seem like a plausible rate to some, it’s a bit difficult to verify.

Rape, of course, doesn’t have a cut-off point for age, so let’s expand our sample to include women ages 25-44. Using the same assumption of a roughly 1% yearly victimization rate, that would now mean that by age 44 almost half of all women would have experienced an instance of rape. We’re venturing farther into the realm of claims losing face value. Combining those two figures would also imply that a woman between 18 and 44 is getting raped in the US roughly every 30 seconds. So what gives: are things really that bad, are the assumptions wrong, is my math off, or is something else amiss?
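For those who want to check the chain of arithmetic, here’s the whole back-of-envelope calculation in one place. It uses the same rounded 2008 numbers and simplifying assumptions as above, and the figures land within rounding of the ones quoted (the extrapolation to age 44 ignores compounding, as the text’s round numbers do):

```python
# Back-of-envelope check of the figures above, using the same rounded
# 2008 numbers and simplifying assumptions (a constant yearly
# victimization rate, equal risk for all women, no compounding).

women_18_24 = 15_000_000
lifetime_by_22 = 0.25                          # the one-in-four figure

victims = women_18_24 * lifetime_by_22         # 3.75 million in the cohort
per_year = victims / 22                        # ~170,000 new victims/year
annual_rate = per_year / women_18_24           # ~1.13% per year

reported = 0.0003 * 2                          # 30 per 100k, doubled for women
total_per_reported = annual_rate / reported    # ~19 total rapes per report
unreported_share = 1 - 1 / total_per_reported  # ~95% unreported

by_44 = annual_rate * 44                       # ~50%: "almost half" by 44

print(f"annual rate: {annual_rate:.2%}; rapes per reported rape: "
      f"{total_per_reported:.0f}; unreported share: {unreported_share:.0%}; "
      f"cumulative by 44: {by_44:.0%}")
```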

Since I can’t really make heads or tails of any of this, I’m going with my math.

Some of the assumptions are, in fact, not likely to be accurate (such as a consistent rate of victimization across age groups), but there’s more to it than that. Another part of the issue stems from defining the term “rape” in the first place. As Koss (1993) notes, the way she defined rape in her own research – the research that arrived at that 25% figure – appeared to differ tremendously from the way her subjects did. The difference was so stark that roughly 75% of the participants that Koss had labeled as having experienced rape did not, themselves, consider the experience to be rape. This is somewhat concerning for two big reasons: first, the perceived rate of rape might be a low-ball estimate (we’ll call this the ignorance hypothesis); second, the rate of rape might be inflated rather dramatically by definitional issues (we’ll call this the arrogance hypothesis).

Depending on your point of view – what you perceive to be, or label as, rape – either one of these hypotheses could be true. What is not true is the notion that one in four college-aged women report that they’ve been sexually assaulted; they might report they’ve had unwanted sex, or have been coerced into having sex, but not that they were assaulted. As it turns out, that’s quite a valuable distinction to make.

Hamby and Koss (2003) expanded on this issue, using focus groups to help understand this discrepancy. Whereas one in four women might describe their first act of intercourse as something they went along with but was unwanted, only one in twenty-five report that it was forced (in, ironically, a forced-choice survey). Similarly, while one in four women might report that they gave in to having sex due to verbal or psychological pressure, only one in ten report that they engaged in sexual intercourse because of the use or threat of physical force. It would seem that there is a great deal of ambiguity surrounding words like coercion, force, voluntary, or unwanted when it comes to asking about sexual matters: was the mere fear of force, absent any explicit uses or threats, enough to count? If the woman didn’t want to have sex, but said yes to try and maintain a relationship, did that count as coercion? The focus groups had many questions, and I feel that means many researchers might be measuring a number of factors they hadn’t intended to, lumping all of them together under the umbrella of sexual assault.

The focus groups, unsurprisingly, made distinctions between wanting sex and voluntarily having sex; they also noted that it might often be difficult for people to distinguish between internal and external pressures to have sex. These are, frankly, good distinctions to make. I might not want to go into work, but that I show up there anyway doesn’t mean I was being made to work involuntarily. I might also not have any internal motivation to work, per se, but rather be motivated to make money; that I can only make money if I work doesn’t mean most people would agree that the person I work for is effectively forcing me to work.

No one makes me wear it; I just do because I think it’s got swag

When we include sex that was acquiesced to, but unwanted, in these figures – rather than what the women themselves consider rape – then you’ll no doubt find more rape. Which is fine, as far as definitional issues go; it just requires the people reporting these numbers to be specific as to what they’re reporting about. As concepts like wanting, forcing, and coercing are measured in degree rather than kind, one could, in principle, define rape in a seemingly endless number of ways. This puts the burden on researchers to be as specific as possible when formulating these questions and drawing their conclusions, as it can be difficult to accurately infer what subjects were thinking about when they were answering the questions.

References: Hamby, S.L., & Koss, M.P. (2003). Shades of gray: A qualitative study of terms used in the measurement of sexual victimization. Psychology of Women Quarterly. DOI: 10.1111/1471-6402.00104

Koss, M.P. (1993). Detecting the scope of rape: A review of prevalence research methods. Journal of Interpersonal Violence. DOI: 10.1177/088626093008002004

Some Research I Was Going To Do

I have a number of research projects lined up for my upcoming dissertation, and, as anyone familiar with my ideas can tell you, they’re all brilliant. You can imagine my disappointment, then, to find out that not only had one of my experiments been scooped by another author three years prior, but they found the precise patterns of results I had predicted. Adding insult to injury, the theoretical underpinnings of the work are all but non-existent (as is the case in the vast majority of the psychology research literature), meaning I got scooped by someone who doesn’t seem to have a good idea why they found the results they did. My consolation prize is that I get to write about it earlier than expected, so there’s that, I suppose…

What? No, I’m not crying; I just got something in my eye. Or allergies. Whatever.

The experiment itself (Morewedge, 2009) resembles a Turing test. Subjects come into the lab to play a series of ultimatum games. One person has to divide a pot of $3 one of three ways – $2.25/$0.75 (favoring either the divider or the receiver) or evenly ($1.50 for each) – and the receiver can either accept or reject these offers. The variable of interest, however, is not whether the receiver will accept the money; it’s whether the receiver perceives the offer to have been made by a real person or a computer program, as all the subjects were informed beforehand that the proposers they would encounter were drawn randomly from a pool of computer programs or real players. In essence, the experiment was examining whether or not participants perceived they were playing with an intentional agent (a human) or a non-intentional agent, representing chance (a computer), contingent on the outcome. A brilliant experiment that I thought of first, a mere three years after it had already been published.

Overall, the subjects were no more or less likely to suggest they were playing against a person or a computer, and were also no more likely to see a person or a computer as being responsible when they received an even split. However, this was not the case across the other two offers: when subjects received an unusually positive offer ($2.25 for themselves), they tended to attribute this offer to a computer; when the offer was negative ($0.75 for themselves), they overwhelmingly attributed it to a person. When causation was ambiguous, “harm” (used rather loosely here) seemed to trigger the perception of an intentional agent, whereas a benefit tended to have the opposite effect: a decrease in perceptions of intentionality.

This pattern of results could be explained by referencing some kind of self-interested motive: given what we know about people (they seem to prefer more money to less), an unusually altruistic offer is just that – unusual. It would seem more likely that selfish offers come from self-interested agents than that an altruistic offer comes from a selfish agent. In an attempt to remove such thinking from (conscious) consideration, a second experiment was run. This time, the person deciding how the pot would be split had no personal stake in the decision; that is to say, subjects were told that their partners would not receive any payment from the games. No matter whether the proposer offered $0.75 or $2.25, no matter whether the offer was accepted or rejected, proposers made no money. However, the results in this case still looked similar: selfish offers were seen as being made predominantly by people, and altruistic offers were seen as being made predominantly by computers.

But so help me if a computer has been screwing me over again…

In the next two experiments Morewedge (2009) got a bit more explicit about the role of chance. In the first case, subjects came into the experiment and met a confederate before both were taken into separate rooms and seated at computers. On the computer was a circle divided up into five sections with a spinning arrow in the middle. Each of the five sections of the circle was filled with one of two colors, with the numbers of each color varying from one to four across each spin (so the circle could have 4 blue and 1 yellow slots in one round, then 3 blue and 2 yellow in the next, and so on). Subjects chose one color for the experiment beforehand, and if the spinner landed on their color in any given round, they would win the amount of money listed on the screen; if it didn’t land on their color, they would lose the amount on the screen. These amounts varied between zero and one dollar in $0.25 increments.

The subjects were informed of two other things: first, that the confederate was in a similar position (the confederate had picked one of the two colors as well, so they might both have the same color or not), and second, that the confederate could control the outcome of five of the forty rounds that would be played. It was the subject’s job to report the extent to which they felt the confederate had used their influence on each round. When the subjects had lost money, they tended to attribute responsibility for the action more towards the confederate than chance, and this tendency was exacerbated as the monetary stakes got higher. This effect is all the more interesting when considering the fact that subjects couldn’t be sure whether the confederate had similar or divergent interests; if they had the same interests, the confederate would be hurting both of them by using his influence to purposely lose. Also of interest is that, when asked to estimate how much money they had ended up with – $5 in all cases; the initial endowment they were given – subjects underestimated, reporting that they had ended up with only $2.89 on average. They were either bad at math, or their memory was distorted towards recalling losses selectively.

Finally, the last experiment used a paradigm similar to that typically used for children: how long participants would spend looking at something. In this case, that something was information presented to the participant following a round with the spinner. The setup was similar, except the subjects were told a confederate could control half of the outcomes, and after each round the subject was told whether the confederate had controlled the round or not. Once presented with this information, subjects had to press a button to proceed to the next screen, and the amount of time they waited before pressing that button was used as a measure of the time participants were processing the information on the screen. Participants ended up spending more time looking at the screen when it was revealed that the confederate was responsible for their win, relative to being responsible for their loss, but looked equally long when chance was implicated as responsible. This result could tentatively suggest that participants found it surprising that the confederate was responsible for their wins, implying that the more automatic process might be one of blaming others for losses.

For example, I’d be surprised if he still has that suit and that prostitute after the dice land.

Now to the theory. I will give credit where credit is due: Morewedge (2009) does at least suggest there might have been evolutionary advantages to such a bias, but promptly fails to elaborate on them in any substantial way. The first possible explanation given is that this bias could be used to defer responsibility for negative outcomes from oneself to others, which is an odd explanation given that the subjects in this experiment had no responsibility to defer. The second possible explanation is that people might attribute negative outcomes to others in order to not feel sad, which is, frankly, silly. The fourth (going out of order) explanation is that such a bias might just represent a common, mild form of “disordered” thinking concerning a persecution complex, which is really no explanation at all. The third, least silly explanation, is that:

“By assuming the presence of an antagonist, one may better be able to avoid a quick repetition of the unpleasant event one has just experienced” (p. 543)

Here, though, I feel Morewedge is making the mistake of assuming past selection pressures resembled the conditions set up in the experiment. I’m not quite sure how else to read that section, nor do I feel that the experimental paradigm was particularly representative of past selection pressures or relevant environmental contexts.

Now, if Morewedge had placed his findings in some framework concerning how social actions are all partly a result of intentional and chance factors, how perpetrators tend to conceal or downplay their immoral actions or intentions, how victims need to convince third parties to punish others who haven’t wronged them directly, and how certain inputs (such as harm) might better allow victims to persuade others, he’d have a very nice paper indeed. Unfortunately, he misses the strategic, functional element to these biases. When taken out of context, such biases can look “disordered” indeed, in much the same way that, when put underwater, my car seems disordered in its use as a submarine.

References: Morewedge, C.K. (2009). Negativity bias in attribution of external agency. Journal of Experimental Psychology: General, 138(4), 535-545. PMID: 19883135

Social Banking

“Bankers have a limited amount of money, and must choose who to invest it in. Each choice is a gamble: taken together, they must ultimately yield a net profit, or the banker will go out of business. This set of incentives yield a common complaint about the banking system: that bankers will only lend money to individuals who don’t need it. The harsh irony of the banker’s paradox is this: just when individuals need the money most desperately, they are also the poorest credit risk and, therefore, the least likely to be selected to receive a loan” – Tooby & Cosmides (1996, p. 131)

While perhaps more of a set of unfortunate circumstances than an actual paradox (in true Alanis Morissette fashion), the banker’s paradox can be a useful metaphor for understanding social interactions. Specifically, it can help guide predictions as to how we would expect the victim/perpetrator/third-party dynamic to play itself out, and, more importantly, help explain why we would have such expectations. The time and energy we can invest in others socially – in terms of building and maintaining friendships – is a lot like money; we cannot spend it in two places at once. Given that we have a limited budget with which to build and maintain relationships, it’s of vital importance for some cognitive system to assess the probability of social returns on its investment; likewise, individuals have a vested interest in manipulating that assessment in others in order to further their goals.

And, for the record, reading my site will yield a large social return on your investment. Promise.

The first matter to touch on is why a third party would feel compelled to get involved in other people’s disputes. One reason might be the potential for the third party to gain accurate information about the likely behavior of others. If person A claims that person B is a liar, and it’s true, person C could potentially benefit from knowing that. Of course, if it’s not true, then person C would likely have been better off ignoring that information. Further, if the behavior of person B towards person A lacks predictive value of how person B will behave towards person C, then the usefulness of such information is again compromised. For instance, while an older sibling might physically dominate a younger sibling, it does not mean that older sibling will in turn dominate his other classmates or his friends. Given the twin possibilities of either receiving inaccurate information or accurate but useless information, it remains questionable as to how much third party involvement this hypothesis could explain.

Beyond information value, however, third parties may also get involved in others’ conflicts in the service of forming and maintaining valuable social alliances. Here, the accuracy of the information is less of an issue. Even if it’s true that person B is an unsavory character, he may also be a useful person to have as an ally (or at least, not have as an enemy; as the now famous quote goes, more or less: “He might be a son of a bitch, but he’s our son of a bitch”). As I touched on previously, the accuracy of our perceptions is only relevant to the extent that accuracy leads to useful outcomes; accuracy for its own sake is not something that could be selected for. This suggests that we shouldn’t expect our evaluations of victimhood claims to be objective or consistent; we should expect them to be useful and strategic. Our moral templates shouldn’t be automatically completed in all cases, as our visual templates are for the Kanizsa Triangle; in fact, we should expect inputs to often be erased from our moral templates – something of an automatic removal.

Let’s now return to the banker’s paradox. In the moral realm, our investments come with higher stakes than they do in the friendship realm. To side with one party in a moral judgment is not to simply invest your time in one person over another; it involves actively harming other potential investment partners, potentially alienating them directly and their allies indirectly (and harming them can bring with it associated retribution). That said, aligning yourself with someone making a moral claim can bring huge benefits, in the form of reciprocal social support and building alliances. As Tooby and Cosmides put it:

…[I]f you are unusually or uniquely valuable to someone else – for whatever reason – then that person has an uncommonly strong interest in your survival during times of difficulty. The interest they have in your survival makes them, therefore, highly valuable to you. (p.140)

So the question remains: In the context of claims to victimhood, how does someone make themselves appear valuable to others, in order to recruit their support?

Please say the answer involves trips to the red light district…

There are two distinct ways of doing this which come to mind: making yourself look like a better investment, and/or making others appear to be a worse investment. Victims face a tricky dilemma in regard to the first item: they need to make themselves appear to genuinely have been a victim while not making themselves look too easily victimizable. To make oneself look too victimizable is to make oneself look like a bad investment; one that will frequently need support and be relatively inept at returning the assistance. Going too far in the other direction, though, by making oneself out to be relatively unharmable, could have a negative effect on your ability to recruit support as well. This is because, as in the banker’s paradox, rich people don’t really need money, and, accordingly, people in strong positions socially are not generally viewed as needing help either; you rarely find people concerned with the plight of the rich and powerful. Those who don’t need help may not be the most grateful for it, nor the most likely to reciprocate it. Tooby and Cosmides (1996) recognized this issue, writing:

“…[I]f a person’s trouble is temporary or they can easily be returned to a position of full benefit-dispersing competence by feasible amounts of assistance…then personal troubles should not make someone a less attractive object of assistance. Indeed, a person who is in this kind of trouble might be a more attractive object of investment than one who is currently safe, because the same delivered investment will be valued more by the person in dire need.” (p.132)

Such a line of reasoning implies we should expect to find victims trying to manipulate (a) the perceptions others have of their need, (b) their eventual usefulness, and (c) the perceptions others have concerning the needs and usefulness of the perpetrator. Likewise, perpetrators should be engaging in counter-manipulation along precisely the same dimensions. We would also expect that victims and perpetrators might try to sway the cost/benefit analyses of third parties via the use of warnings and threats – implicit or explicit – about the consequences of siding with one party or another. Remember, third parties are not making these judgments in a vacuum; if the majority of third parties side with person A, third parties that sided with person B might now find themselves on the receiving end of social sanctions or ostracism.
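Before moving on, it may help to put some loose numbers on the victim’s dilemma described above. The sketch below is my own toy formalization – nothing in it comes from Tooby and Cosmides, and every parameter is invented for illustration:

```python
# Toy sketch of the banker's paradox tradeoff (all numbers invented):
# moderate need makes someone a better investment target; extreme need
# means they can't repay, and zero need means the help is worth little to them.

def investment_appeal(need, gratitude=2.0, competence_loss=3.0):
    """need runs from 0.0 (doing fine) to 1.0 (completely incapacitated)."""
    valuation = gratitude * need                   # the needy value help more
    repayment = 1.0 - competence_loss * need ** 2  # but the very needy can't repay
    return valuation * max(repayment, 0.0)

for need in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"need {need:.1f}: appeal {investment_appeal(need):.2f}")
# need 0.0: appeal 0.00
# need 0.2: appeal 0.35
# need 0.4: appeal 0.42
# need 0.6: appeal 0.00
# need 0.8: appeal 0.00
```

The inverted-U is the point: under these made-up assumptions, the most attractive investment targets are those in genuine but recoverable trouble – which is exactly the impression a savvy victim should try to project.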

DeScioli & Kurzban (2012), recognizing this problem of third parties ending up on opposite sides of a dispute, posit that the human mind contains adaptations for coordinating with other third parties over which side to take, so as to avoid the costs of potential despotism on the one hand and the costs of inter-alliance fighting on the other. If a publicly observable signal not tied to one’s individual identity is used for coordinating third party involvement – i.e., all third parties will align together against an actor for doing X (killing, lying, saying the wrong thing, etc.), no matter who does it – third parties can solve the problem of discoordination with one another. However, one notable problem with this approach is the informational hurdle I mentioned previously: most people are not witnesses to the vast majority of acts people engage in. Now, if person A suggests that person B has done something morally wrong, and person B denies it, provided the two are the only witnesses to the act, there’s not a whole lot to go on in terms of publicly observable signals. Without such signals (and even with them), the mind needs to use whatever information it has available to make such a judgment, and that information largely revolves around the identity of the actors in question.

And some people just aren’t very good actors.
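Returning to that coordination logic for a moment, a minimal sketch can show why an act-based rule beats identity-based side-taking. To be clear, this is my own toy illustration of the general idea, not DeScioli & Kurzban’s actual model, and all of its details are invented:

```python
import random

random.seed(1)
N = 100  # third parties observing a dispute between A and B

def identity_based(loyalties):
    """Each observer sides with whomever they privately happen to favor."""
    return ["A" if loyalty > 0.5 else "B" for loyalty in loyalties]

def action_based(act_by):
    """Every observer applies the same public rule: side against whoever did X."""
    return ["B" if act_by == "A" else "A"] * N

loyalties = [random.random() for _ in range(N)]

split = identity_based(loyalties)
print("identity-based:", split.count("A"), "vs", split.count("B"))
# Roughly 50/50 -- two large factions, i.e., costly inter-alliance fighting.

split = action_based(act_by="A")
print("action-based:  ", split.count("A"), "vs", split.count("B"))
# 0 vs 100 -- unanimous, so no observer lands in the losing coalition.
```

The catch, as noted above, is that the act-based rule only works when “who did X” is publicly observable – and much of the time, it isn’t.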

I’d like to return briefly to a finding I’ve discussed before: men and women agree that women tend to be more discriminated against than men, even in the face of contradictory evidence. This finding might arise because people are perceiving – accurately – that women tend to be objectively more victimized. It might also arise because certain classes of people – in this case, women, relative to men – are viewed as being better investments of limited social capital. For instance, in terms of future rewards, it might be a good idea for a man to align himself with a woman – or, at the very least, not align himself against her – even in the event she’s guilty; moral condemnation does not tend to get the romance flowing, from my limited understanding of human interaction.

It would follow, then, that the automatic completion vs. automatic deletion threshold for our moral templates should vary contingent on the actor in question: friends and family have a different threshold than strangers; possible romantic interests have a different threshold than those we find romantically repulsive. Alliances might even serve as potential tipping points for third parties. Let’s say persons A and B get involved in a dispute; even if person A is clearly in the wrong, if person A already has a large number of partial backers, the playing field is no longer level for third party involvement. Third party involvement can be driven by a large number of factors, and we shouldn’t expect all moral claims to be viewed equally, even in cases where the underlying logic is the same. The goal is usefulness, not consistency.

References: DeScioli, P., & Kurzban, R. (2012). A solution to the mysteries of morality. Psychological Bulletin. DOI: 10.1037/a0029065

Tooby, J., & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

The Myths That Never Were

I recently came across a post over at Psychology Today entitled “Six Myths About Female Sexuality and Why They’re Myths” by one Susan Whitbourne. I feel the need to discuss it here for two reasons. First, it’s a terrible piece; not only does Susan get a lot wrong, she gets it wrong badly while bad-mouthing my field. So that’s kind of annoying, but it’s not the main reason. The main reason is that, at the time I’m writing this, there are five comments on the article; there had been ten comments on it – before I left mine – the last time I checked. This means at least six comments (all the highly critical ones, I might add) have been deleted. This has, in turn, activated my moral template for automatic completion: rather than perceiving all those negative comments as having magically vanished into the internet, I find myself perceiving an incompetent writer trying to hide criticism instead of engaging with it.

“Just throw a rug over it and you’re good to go”

I’d like to first mention that the source Susan is drawing her information from is Terri Conley. You know, the one who suggested that sexual reproduction is a byproduct of sexual pleasure – that is to say, that reproduction was not what sex was selected for, which is kind of an odd claim to make. The paper itself was also discussed at some length here roughly a year ago, so what I’m doing is largely repetition; you know, standing on the shoulders of giants and all. Anyway, on to the matter of figuring out what institution is giving out psychology PhDs to people who clearly don’t deserve them (I’m looking at you…Columbia University? Really? Well, fancy that).

Susan – who, I should remind everyone again, says she has a PhD – suggests in her first point that men and women value the traits of status, youth, and attractiveness equally. Since this point is about preferences, it’s wrong on the grounds that massive amounts of survey evidence from around the world demonstrate robust sex differences in precisely those preferences. However, just because someone has certain preferences, it does not imply that their partner – should they eventually have one – will manifest any or all aspects of those preferences, as tradeoffs need to be made. If everyone expresses an interest in an attractive partner and there are only so many attractive partners to go around, someone’s going to be disappointed; many someones, in fact. Accordingly, it might not be such a good idea to invest a lot of energy in a long-shot, no matter how attractive the payoff might be (but more about speed dating below).

The second “myth” is that men and women desire (and have) different numbers of partners. In the realm of desire, men do indeed report desiring a greater number of partners than women. However, when, as Susan suggests, “appropriate statistical controls were used”, this difference goes away. In the current context, “appropriate” means “using the median instead of the mean”, or, as I might put it, ignoring all the inconvenient data. This is not the first time that a median, rather than a mean, has been used to ignore data that doesn’t fit preconceptions. Now, of course, men and women need to have the same mean number of opposite-sex partners; that’s just basic statistics. Despite this, men tend to claim more partners than women, so someone must be lying. In this case, the person lying is… the author, Susan. She reported that when hooked up to a fake lie detector, men adjusted their numbers down, which is peculiar, given that the study she’s talking about (Alexander & Fisher, 2003) found that men were consistent across groups; it was the women who were under-reporting. Way to bust myths, Susan.
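To see why swapping the mean for the median does so much work here, consider a quick sketch. These numbers are invented for illustration – they are not Conley’s or anyone else’s data:

```python
import statistics

# Hypothetical self-reported partner counts: a few respondents in one
# group report many partners, producing a long right tail.
men   = [1, 2, 2, 3, 3, 4, 5, 8, 15, 40]
women = [1, 1, 2, 2, 3, 3, 4, 5, 6, 7]

for label, counts in (("men", men), ("women", women)):
    print(label, "mean:", statistics.mean(counts),
          "median:", statistics.median(counts))
# men   mean: 8.3  median: 3.5
# women mean: 3.4  median: 3.0
```

The medians look nearly identical while the means differ substantially; switching to the median “controls away” exactly the long tail that produced the sex difference in the first place. Whether that tail is noise or the phenomenon of interest is precisely the question being begged.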

You clearly put in the effort instead of just bullshitting it.

The third myth is that men think about sex more than women. This myth is a myth, according to Susan, in that it’s true. Men do, in fact, think about sex more, according to Conley et al (2011); they also think about food and sleep more. So let’s examine the logic here: Men do X more than women. Men also do Y and Z more than women. Therefore, men don’t do X more. Sure, that might seem like a basic failure of reasoning abilities, since the conclusion in no way follows from the premises, but, bear in mind, this woman does have a PhD from Columbia, so she clearly must understand this problem better than those who pointed out this huge failing. Better to just delete the comments of people pointing this out, rather than risk Susan wasting her valuable time engaging in debate with them.

Myth four is another one of those true myths: women have orgasms less frequently than men. However, it’s not just a true myth; it’s also one of those things Susan lies about. Susan grants that this orgasm differential exists in hookups, but not in romantic relationships. Conley et al (2011) report on data showing that, during hookups, women orgasm 32% and 49% as frequently as men (in first and repeat hookups, respectively). However, in established relationships, women orgasmed 80% as often. Thus, according to Susan, a 20% gap in frequency amounts to no gap at all. Frankly, I’m surprised that Mythbusters hasn’t snatched Susan up yet, given her impressive logical and basic reading abilities.

Myth five is that, apparently, men like casual sex more than women. I know; I was shocked to hear people thought that too. I already linked to the discussion of the article Conley (2011) uses to support the notion that men and women like casual sex just as much, but here it is again. The long and short of the paper is that, when women and men considered casual sex offers from very attractive and famous people, there was no difference. Women were also just as likely as men to accept a casual sex offer from a close friend who they thought would provide a positive sexual experience. It might be worth pointing out that most people aren’t very attractive, famous, familiar, and skilled in bed, and that women tend to judge most men as lacking in these departments (given that 0% of the women accepted offers of casual sex in the classic Clark and Hatfield study), whereas the same dimensions don’t seem to matter nearly as much to men (given their roughly 75% acceptance rate). It might be worth pointing that out, that is, if you know what you’re talking about, which Susan and Conley clearly don’t.

Finally, we arrive at Myth Six: women are choosier than men. The Clark and Hatfield results, along with evidence from every culture across the globe and many species on the planet, might seem to confirm this myth. However, the results of a single speed-dating study – in which no sexual behavior actually took place and the sex difference was not even fully reversed – could apparently overturn it all. In this study, depending on who approached whom at a speed-dating event, there was a (relatively minor) effect on feelings of romantic desire, chemistry, and the desire to see the other partner again. That said, women tend not to approach men as often as men approach women in the world outside of speed-dating scenarios, and when women approached men in the Clark and Hatfield study the men overwhelmingly said “yes” (while the women universally said “no” when approached by a man), and the study didn’t track whether anything ever came of the speed-dating, and a certain type of person might be interested in speed-dating in the first place, and speed-dating might not be terribly ecologically valid, and…you get the idea.

But other than being a total failure, your article was a great success.

Susan caps off her article by asking whether these behaviors are genetically or environmentally based, demonstrating that she doesn’t understand that the nature/nurture debate ended long ago and that evolutionary psychologists reject such a dichotomy in the first place. It’s nice to see Susan come full circle from her introduction, where she made it clear she doesn’t understand that evolution doesn’t work to “keep the species afloat”. Finally, she asked why some people, who ought to know better, favor an evolutionary-based theory in their research. One can only wonder, Susan. I’ll leave it to people like you, who clearly know better, to lead the way. I just hope, for all of our sakes, that whatever path you end up leading us down doesn’t involve you having to read or understand anything.

References: Alexander, M.G., & Fisher, T.D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. Journal of Sex Research, 40(1), 27-35. PMID: 12806529

Conley, T.D., Moors, A.C., Matsick, J.L., Ziegler, A., & Valentine, B.A. (2011). Women, men, and the bedroom: Methodological and conceptual insights that narrow, reframe, and eliminate gender differences in sexuality. Current Directions in Psychological Science. DOI: 10.1177/0963721411418467

Kanizsa’s Morality

While we rely on our senses to navigate through life, there are certain quirks in the way our perception works that we often aren’t consciously aware of. It’s only when we encounter illusions, the most well-known of which tend to inhabit the visual domain, that certain inner workings of our perceptual modules become apparent. Take the checkerboard illusion as a good for-instance: given the proper context, our visual system is capable of perceiving two squares to be different colors despite the fact that they are the same color. On top of bringing certain facets of our visual modules into stark relief, the illusion demonstrates one other very important fact about our cognition: accuracy need not always be the goal. Our visual systems were only selected to be as good as they needed to be for us to do useful things, given the environments we tended to find ourselves in; they were not selected to be perfectly accurate in each and every situation they might encounter.

See, Criss Angel? It’s not that hard to do your job.

That endearing little figure is known as Kanizsa’s Triangle. While there is no actual triangle in the figure, some cognitive template is being automatically filled in given inputs from certain modules (probably ones designed for detecting edges and contrast), and the end result is that illusory perception; our mind automatically completes the picture, so to speak. This kind of automatic completion can have its uses, like allowing inferences to be drawn from a limited amount of information relatively quickly. Without such cognitive templates, tasks like learning language – or not walking into things – would be far more difficult, if not downright impossible. While picking up on recurrent and useful patterns of information in the world might lead to a perceptual quirk here and there, especially in highly abnormal and contrived scenarios like the previous two illusions, the occasional misfire is worth the associated gains.

Now let’s suppose that instead of detecting edges and contrasts we’re talking about detecting intentions and harm – the realm of morality. Might there be some input conditions that (to some extent) automatically result in a cognitive moral template being completed? Perhaps the most notable case came from Knobe (2003):

The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment, I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

When asked, most people in this case suggested that the negative outcome was brought about intentionally and that the chairman should be punished. When the word “harm” is replaced by “help”, people’s answers reverse: they now say the chairman wasn’t helping intentionally and deserves no praise.

Further research on the subject by Guglielmo & Malle (2010) found that the “I don’t care at all about…” statement in the preceding paragraph was indeed viewed differently by people depending on whether the person who said it was conforming to or violating a norm. When a norm was being violated, people tended to perceive some desire for that outcome in the violator, despite the violator stating they didn’t care one way or the other; when a norm was being conformed to, people didn’t perceive that same desire given the same statement of indifference. The violation of a norm, then, might serve as part of the input for automatically filling in a moral template concerning perceptions of the violator’s desires, much like the Kanizsa Triangle: it can cause people to perceive a desire, even if there is none. This finding is very similar to another input condition I recently touched on: the effect of a person’s perceived desires on their blameworthiness, contingent on their ability to benefit from others being harmed (even if the person in question didn’t directly or indirectly cause the harm).

“I don’t care at all about the NFL’s dress code!”

A recent paper by Grey et al (PDF here) builds upon this analogy rather explicitly. In it, the authors point out two important things: first, any moral judgment requires a victim and perpetrator dyad; this represents the basic cognitive template of moral interactions through which all moral judgments can be understood. Second, there need not actually be a real perpetrator or a real victim for a moral judgment to take place; all that’s required is the perception of the pair.

Let’s return briefly to vision: when it comes to seeing the physical world, it’s better to have an accurate picture of it. This is because you can’t do things like persuade a cliff it’s not actually a cliff, or tell gravity to not pull you down. Thankfully, since the physical environment isn’t trying to persuade us of anything in particular either, accurate pictures of the world are relatively easy to come by. The social world, however, is full of agents that might be (and thus, probably are) misrepresenting information for their own benefit.

Taken together with the work just reviewed, this suggests that the moral template can be automatically completed: people can be led to perceive victims or perpetrators where there are none (given that they already perceive one or the other), or fail to perceive victims and perpetrators that actually exist (given that they fail to perceive one or the other). Since accuracy isn’t the goal of these perceptions per se, whether the inputs given to the moral template are erased or cause it to be filled in will likely depend on their context; that is to say, people should strategically “see” or “fail to see” victims or perpetrators, quite unlike the Kanizsa Triangle, which people almost universally do see. Some of the possible reasons why people might fall in one direction or the other will be the topic of the next post.

References: Guglielmo, S., & Malle, B.F. (2010). Can unintended side effects be intentional? Resolving a controversy over intentionality and morality. Personality and Social Psychology Bulletin. DOI: 10.1177/0146167210386733

Knobe, J. (2003). Intentional action and side effects in ordinary language. Analysis. DOI: 10.1093/analys/63.3.190


Predictably Lacking

Due to a particularly engaging high school teacher, my undergraduate minor was in economics. Upon taking a number of college-level economics classes, I realized that most of the assumptions economists make about how people should be expected to behave were about as useful for understanding human behavior as most of my undergraduate psychology classes; that is to say, not very. It was through Dan Ariely’s books that I was initially exposed to behavioral economics, a field which seemed to take a stand against the nonsensical assumptions of traditional economics. Happy as I was to see that first step, my enthusiasm was dampened somewhat by the fact that behavioral economics was not evolutionary economics. Economists, behavioral or otherwise, were still dealing with the human mind, and they lacked a good theory for understanding how and why it works. On a related note, I just finished Dan Ariely’s latest offering, The (Honest) Truth About Dishonesty: How We Lie to Everyone – Especially Ourselves (2012).

There’s only room enough in my life for one book with parentheses in the title, and this is that book.

Due to a miscommunication with Amazon, I actually ended up getting my copy of this book for free, and, beyond simply saving money, I’m quite happy I did, for a simple reason: I don’t think Dan’s new book is really worth spending money on (the book jacket suggests a retail price of $27), especially if you’ve already read his first two offerings. In the (ostensibly selfless) interest of saving others time and money, here’s the main finding of the research presented in the book: given the opportunity to cheat, most people will cheat to some (relatively small) degree, with very few going all out and cheating as much as possible. Of course, the precise degree to which people cheat is flexible, and various contexts make it more or less likely that people will cheat. This might suggest that there are certain parts of the mind monitoring various environmental cues in an attempt to determine when cheating would be profitable, and to what extent one should cheat.
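As a way of picturing what such a cue-monitoring system might be computing, here is a toy sketch with invented numbers. It is not Ariely’s model, and the calculation needn’t be conscious – a point that matters below; it simply shows that an agent weighing gains against condemnation risk lands on “cheat a little” rather than “cheat maximally”:

```python
# Toy model (all numbers invented): an agent picks how much to cheat, c,
# trading gains against the risk of condemnation, which grows steeply
# with how flagrant the cheating is.

def expected_payoff(c, gain=1.0, sanction=3.0):
    detection = c ** 2              # flagrant cheating is easier to spot
    return gain * c - sanction * detection

def best_cheat_level(sanction):
    levels = [i / 100 for i in range(101)]   # candidate cheat levels, 0..1
    return max(levels, key=lambda c: expected_payoff(c, sanction=sanction))

for sanction in (0.6, 3.0, 10.0):
    print(f"sanction cost {sanction:>4}: cheat {best_cheat_level(sanction):.2f}")
# sanction cost  0.6: cheat 0.83
# sanction cost  3.0: cheat 0.17
# sanction cost 10.0: cheat 0.05
```

Under these assumptions, some cheating always beats total honesty, but maximal cheating never pays, and the optimal amount shifts with the perceived cost of getting caught – the same qualitative pattern Ariely reports.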

The research that Dan reviews cuts against what he calls the “Simple Model of Rational Crime”, in which people consciously think through the costs and benefits when it comes to deciding whether or not to commit a crime (or act immorally, more generally). Standard economic assumptions don’t seem to pan out well, and anyone familiar with Dan’s previous work will already know that. Unfortunately, Ariely replaces that simple model with his own – arguably simpler – model that goes like this:

In a nutshell, the central thesis is that our behavior is driven by two opposing motivations. On the one hand, we want to view ourselves as honest, honorable people. We want to be able to look at ourselves in the mirror and feel good about ourselves (psychologists call this ego motivation). On the other hand, we want to benefit from cheating and get as much money as possible. (p.27)

Right from the start, Ariely’s central thesis is deeply flawed. As others have pointed out (Kurzban, 2010), “feeling good” about ourselves is not a plausible function for any part of our psychology. Evolution is (metaphorically) blind to what organisms feel; it can only see what organisms do. An organism that feels terrible but does useful things would win out every single time against an organism that feels great but doesn’t do useful things. A quick example should demonstrate why. Let’s say feeling good about oneself is actually important in and of itself, and two organisms are presented with a potential benefit from cheating: the first organism cheats, but only a little bit, in order to maintain its positive sense that it’s an honest individual; after all, it didn’t cheat that much, and it wasn’t doing any real harm, so it’s probably still a morally upstanding creature. The second organism cheats as much as it can and feels pretty good about its cheating; it doesn’t need to justify its behavior in order to feel good, it just feels good about what it does generally.

You know what else feels good? Getting a perfect score.

That example should make the problem with Ariely’s central thesis stand out in stark relief: why should an organism care about seeing itself as a morally upstanding creature, and why should seeing itself as such hinge on its perception of its own integrity? By focusing on a conscience-centric model without making the function of such a perspective clear, Ariely misses the mark. As DeScioli & Kurzban (2009) suggest, we cannot understand the function of conscience without first examining condemnation. In a world where others judge our actions, and those judgments cause those others to behave in certain ways towards us, conscience can serve as a defense mechanism: rather than risk costly punishment and social sanction by behaving in ways others perceive as immoral, one can avoid potentially detrimental actions in the first place.

Now one might counter that, in the experiments Ariely reports on, there was no risk of subjects being caught or punished, and further that the subjects knew this; since any fear of punishment should have been effectively removed, concerns about condemnation can’t explain these results. To do so, however, would be to make the basic error of failing to understand the difference between adapted and adaptive. Just because someone might consciously report that they understood there was no real risk, it doesn’t mean other modules in their brain came to the same conclusion.

One final point I’d like to touch on is the chapter concerning self-control. Not to rely too heavily on Kurzban (2010) here, but self-control is not like a muscle, and thinking of it as such leads one to an incorrect model of the mind (for references, see here, here, and here). Since an incorrect model of the mind seems to be the central thesis of the book, it’s at least consistent in that regard. There are, no doubt, some interesting things to be learned from the research in Dan’s book; however, you’ll need to figure them out, more or less, on your own.

References: Ariely, D. (2012). The honest truth about dishonesty: How we lie to everyone else – especially ourselves. New York, NY: HarperCollins

DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281-299. PMID: 19505683

Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.