Which Ideas Are Ready To Go To Florida?

Recently, Edge.org posed its yearly question to a number of different thinkers, who were given 1,000 words or fewer to provide their answer. This year, the topic was, “What scientific idea is ready for retirement?” and responses were received from about 175 people. This question appeared to come on the heels of a quote by Max Planck, who suggested at one point that new ideas tend to gain their prominence not by actively convincing their opponents that they are correct, but rather when those who hold to alternative ideas die off themselves. Now Edge.org did not query my opinion on the matter (and the NBA has yet to draft me, for some unknowable reason), so I find myself relegated to the sidelines, engaging in the always-fun pastime of lobbing criticisms at others. Though I did not read through all the responses – as many of them fall outside my area of expertise and I’m already suffering from demands on my time that are a bit more pressing – I did have some general reactions to some of the answers people provided to this question.

Sticks and stones can break their bones? Good enough for me.

The first reaction I had is with respect to the question itself. Planck was likely onto something when he noted that ideas are not necessarily accepted by others owing to their truth value. As I have discussed a few times before, there are, in my mind anyway, some pretty compelling reasons for viewing human reasoning abilities as something other than truth-finders. First, people’s ability to successfully reason about a topic often hinges heavily on the domain in question: whereas people are skilled reasoners when it comes to social contracts, they are poor at reasoning about more content-neutral domains. In that regard, there doesn’t seem to be a general-purpose reasoning mechanism that works equally well in all scenarios. Second, people’s judgments of their performance on reasoning tasks are often relatively uncorrelated with their actual performance. Most people appear to rate their performance in line with how easy or difficult a task felt, and, in some cases, being wrong happens to feel a lot like being right. Third, people often ignore or find fault with evidence that doesn’t support their view, but will accept evidence that does fit with their beliefs much less critically. Importantly, this seems to hold even when the relative quality of the evidence in question is held constant: having a WEIRD sample might be flagged as a problem for a study that reaches a conclusion unpalatable to the person assessing it, but is unlikely to be mentioned if the results are more agreeable.

Finally, there are good theoretical reasons for thinking that reasoning can be better understood by positing that it functions to persuade others, rather than to seek truth per se. This owes itself to the fact that being right is not necessarily always the most useful thing to be. If I’m not actually going to end up being successful in the future, for instance, it still might pay for me to try and convince other people that my prospects are actually pretty good so they won’t abandon me like the poor investment I am. Similarly, if I happen to advocate a particular theory that most of my career is built upon, abandoning that idea because it’s wrong could mean doing severe damage to my reputation and job prospects. In other words, there are certain ways that people can capture benefits in the social world by convincing others of untrue things. While that’s all well and good, it would seem to frame the Edge question in a very peculiar light: we might expect that people – those who responded to the Edge’s question included – tend to advocate that certain ideas should be relinquished, but their motivations and reasons (conscious or otherwise) for making that suggestion are based on many things other than the idea’s truth value. As the old quote about evolution goes, “Let’s hope it is not true. But if it is true, let’s pray that it doesn’t become widely known”.

As an example, let’s consider the reply from Matt Ridley, who suggests that Malthus’ ideas on population growth were wrong. The basic idea that Malthus had was that resources are finite, and that populations, if unchecked, would continue to grow to the point that people would eventually live pretty unhappy lives, owing to the scarcity of resources relative to the size of the population. There would be more mouths wanting food than available food, which is a pretty unsatisfactory way to live. Matt states, in his words, that “Malthus and his followers were wrong, wrong, wrong”. Human ingenuity has come to the rescue, and people have become better and better at using the available resources in more efficient ways. The human population has continued to grow, largely unchecked by famine (at least in most first-world nations). If anything, many people have access to too much food, leading to widespread obesity. While all this is true enough, and Malthus appears to have been wrong with respect to certain specifics, one would be hard-pressed to say that the basic insights themselves are worthy of retirement. For starters, human population growth has often come at the expense of many other species, plant and animal alike; we’ve made more room for ourselves not just by getting better at using what resources we do have, but by ensuring other species can’t use them either.

As it turns out, dead things are pretty poor competition for us.

Not only has our expansion come at the expense of other species that find themselves suddenly faced with a variety of scarcities, but there’s also no denying that population growth will, at some point, be checked by resource availability. Given that humans are discovering new ways of doing things more efficiently than we used to, we might not have hit that point yet, and we might not hit it for some time. It does not follow, however, that such a point does not, at least in principle, exist. While there is no theoretical upper limit on the number of people who might exist, the ability of human ingenuity to continuously improve our planet’s capacity to support all those people is by no means guaranteed. While technology has improved markedly since the time of Malthus, there’s no telling how long such improvements will be sustained. Perhaps technology could continue to improve indefinitely, just as populations can grow if unconstrained, but I wouldn’t bet on it. While Malthus might have been wrong about some details, I would hesitate to find his underlying ideas a home on a golf course close to the beach.

Another reply which stood out to me came from Martin Nowak. I have been critical of his ideas about group selection before, and I’m equally critical of his answer to the Edge question. Nowak wants to prematurely retire the 50-year-old idea of inclusive fitness: the idea that genes can benefit themselves by benefiting various bodies that contain copies of them, discounted by the probability of those copies being in that other body. Nowak seems to want to retire the concept for two primary reasons: first, he suggests it’s mathematically inelegant. In the process, Nowak appears to insinuate that inclusive fitness represents some special kind of calculation that is both (a) mathematically impossible and (b) identical to calculations derived from standard evolutionary fitness calculations. On this account, Nowak seems to be confused: if the inclusive fitness calculations lead to the same outcome as standard fitness calculations, then there’s either something impossible about standard evolutionary theory (there isn’t), or inclusive fitness isn’t some special kind of calculation (it isn’t).

Charitable guy that Nowak is, he does mention that the inclusive fitness approach has generated a vast literature of theoretically and empirically useful findings. Again, this seems strange if we’re going to take him at his word that the idea is obviously wrong and one that should be retired; if it’s still doing useful work, retirement seems premature. Nowak doesn’t stop there, though: he claims that no one has empirically tested inclusive fitness theory because researchers haven’t been making precise fitness calculations in wild populations. This latter criticism is odd on a number of fronts. First, it seems to misunderstand the type of evidence that evolutionary researchers look for, which is evidence of special design; counting offspring directly is often not terribly useful in that regard. The second issue I see with that suggestion is that, perhaps ironically, Nowak’s favored alternative – group selection – has yet to make a single empirical prediction that could not also be made by an inclusive fitness approach (though inclusive fitness theorizing has successfully generated and supported many predictions which group selection cannot readily explain). Of all of Nowak’s work I have come across, I haven’t found an empirical test in any of his papers. Perhaps they exist, but if he is so sure that inclusive fitness theory doesn’t work (or is identical to other methods), then demonstrating so empirically should be a cakewalk for him. I’ll eagerly await his research on that front.

I’m sure they’ll be here any day now…

While this only scratches the surface of the responses to the question, I would caution against retiring many of the ideas that were singled out in the answer section. Just as a general rule, ideas in science should be retired, in my mind, when they can be demonstrated without (much of) a doubt to be wrong and to seriously lead people astray in their thinking. Even then, it might only require us to retire an underlying assumption, rather than the core of the idea itself. Saying that we should retire inclusive fitness “because no one ever really tested it as I would like” is a poor reason for retirement; retiring the ideas of Malthus because we aren’t starving in the streets at the moment also seems premature. Instead of talking about what ideas should be retired wholesale, a better question would be to ask, “What evidence would convince you that you’re mistaken, and why would that evidence do so?” Questions like that not only help ferret out problematic assumptions, but they might also generate useful forward momentum, both empirically and theoretically. Maybe the Edge could consider some variant of that question for next year.

What Makes Incest Morally Wrong?

There are many things that people generally tend to view as disgusting or otherwise unpleasant. Certain shows, like Fear Factor, capitalize on those aversions, offering people rewards if they can manage to suppress those feelings to a greater degree than their competitors. Of the people who watched the show, many would probably tell you that they would be personally unwilling to engage in such behaviors; what many do not seem to say, however, is that others should not be allowed to engage in those behaviors because they are morally wrong. Fear- or disgust-inducing, yes, but not behavior explicitly punishable by others. Well, most of the time, anyway; a stunt involving drinking donkey semen apparently made the network hesitant about airing it, likely owing to the idea that some moral condemnation would follow in its wake. So what might help us understand why some disgusting behaviors – like eating live cockroaches or submerging one’s arm in spiders – are not morally condemned, while others – like incest – tend to be?

Emphasis on the “tend to be” in that last sentence.

To begin our exploration of the issue, we could examine some research on the cognitive mechanisms for incest aversion. Now, in theory, incest should be an appealing strategy from a gene’s-eye perspective. This is due to the manner in which sexual reproduction works: by mating with a full sibling, your offspring would carry 75% of your genes in common by descent, rather than the 50% you’d expect if you mated with a stranger. If those hyper-related siblings in turn mated with one another, after a few generations you’d have people giving birth to infants that were essentially genetic clones. However, such inbreeding appears to carry a number of potentially harmful consequences. Without going into too much detail, here are two candidate explanations one might consider for why inbreeding isn’t a more popular strategy: first, it increases the chances that two harmful, but otherwise rare, recessive alleles will match up with one another. The result of this frequently involves all sorts of nasty developmental problems that don’t bode well for one’s fitness.
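That 75% figure falls out of simple transmission arithmetic, and it can be checked with a quick simulation. Here’s a minimal sketch (the function name, locus count, and seed are my own; it just estimates the expected fraction of a sib-mating offspring’s alleles that are identical by descent with one of its sibling-parents):

```python
import random

random.seed(1)

def sib_mating_relatedness(n_loci=100_000):
    """Estimate how much of one sibling's genome ends up, identical by
    descent, in the offspring of a full-sib mating."""
    shared = 0
    for _ in range(n_loci):
        # Unrelated grandparents each carry two uniquely labeled alleles.
        mother = ("M1", "M2")
        father = ("F1", "F2")
        # Two full siblings each inherit one allele from each parent.
        sib1 = (random.choice(mother), random.choice(father))
        sib2 = (random.choice(mother), random.choice(father))
        # Their offspring inherits one allele from each sibling.
        child = (random.choice(sib1), random.choice(sib2))
        # Count the child's alleles also present in sib1 ("you").
        shared += sum(allele in sib1 for allele in child)
    return shared / (2 * n_loci)

print(round(sib_mating_relatedness(), 2))  # ≈ 0.75, versus 0.5 with a stranger
```

The allele inherited from you is in you with certainty; the allele inherited from your sibling matches one of yours half the time, giving (1 + 0.5) / 2 = 0.75.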

A second potential issue involves what is called the Red Queen hypothesis. The basic idea here is that the asexual parasites that seek to exploit their host’s body reproduce far quicker than their hosts do. A bacterium can go through thousands of generations in the time humans go through one. If we were giving birth to genetically identical clones, then, the parasites would find themselves well-adapted to life inside their host’s offspring, and might quickly end up exploiting said offspring. The genetic variability introduced by sexual reproduction might help larger, longer-lived hosts keep up in the evolutionary race against their parasites. Though there may well be other viable hypotheses concerning why inbreeding is avoided in many species, the take-home point for our current purposes is that organisms often appear as if they are designed to avoid breeding with close relatives. This poses a problem for many species, however: how do you know who your close kin are? Barring some effective spatial dispersion, organisms will need some proximate cues that help them differentiate between kin and non-kin so as to determine which others are their best bets for reproductive success.

We’ll start with perhaps the most well-known of the research on incest avoidance in humans. The Westermarck effect refers to the idea that humans appear to become sexually disinterested in those with whom they spent most of their early life. The logic of this effect goes (roughly) as follows: your mother is likely to be investing heavily in you when you’re an infant, in no small part owing to the fact that she needs to breastfeed you (prior to the advent of alternative technologies). Those who spend a lot of time around you and your mother are, accordingly, more likely to be kin than those who spend less time in your proximity. That degree of proximity ought, in turn, to generate a kinship index that produces disinterest in sexual experiences with such individuals. While such an effect doesn’t lend itself nicely to controlled experiments, there are some natural contexts that can be examined as pseudo-experiments. One of these was the Israeli kibbutz, where children were predominantly raised in similarly-aged, mixed-sex peer groups. Of the approximately 3,000 children examined from these kibbutzim, there were only 14 cases of marriage between individuals from the same group, and almost all of them were between people introduced to the group after the age of 6 (Shepher, 1971).

Which is probably why this seemed like a good idea.

Being raised in such a context didn’t appear to provide all the cues required to trigger the full suite of incest-aversion mechanisms, however, as evidenced by some follow-up research by Shor & Simchai (2009). The pair carried out interviews with 60 members of the kibbutzim to examine the feelings these members had towards each other. A little more than half of the sample reported having either moderate or strong attractions towards other members of their cohort at some point; almost all the rest reported sexual indifference, as opposed to the typical kind of aversion or disgust people report in response to questions about sexual attraction towards their blood siblings. This finding, while interesting, needs to be considered in light of the fact that almost no sexual interactions occurred between members of the same peer group; it should also be considered in light of the fact that there did not appear to exist any strong moral prohibition against such behavior.

Something like a Westermarck effect might explain why people weren’t terribly inclined to have intercourse with their own kin, but it would not explain why people think that others having sex with close kin is morally wrong. Moral condemnation is not required for guiding one’s own behavior; it appears more suited to attempting to guide the behavior of others. When it comes to incest, a likely other whose behavior one might wish to guide would be one’s close kin. This is what led Lieberman et al (2003) to derive some predictions about what factors might drive people’s moral attitudes about incest: the presence of others who are liable to be your close kin, especially if those kin are of the opposite sex. If duration of co-residence during infancy is used as a proximate input cue for determining kinship, then that duration might also be used as an input condition for determining one’s moral views about the acceptability of incest. Accordingly, Lieberman et al (2003) surveyed 186 individuals about their history of co-residence with other family members and their attitudes towards how morally unacceptable incest is, along with a few other variables.

What the research uncovered was that duration of co-residence with an opposite-sex sibling predicted subjects’ moral judgments concerning incest. For women, total years of co-residence with a brother correlated with judgments of the wrongness of incest at about r = 0.23, and that held whether the period from ages 0 to 10 or 0 to 18 was under investigation; for men with a sister, a slightly higher correlation emerged from ages 0 to 10 (r = 0.29), and an even larger one was observed when the period was expanded to age 18 (r = 0.40). Further, such effects remained largely static even after the number of siblings, parental attitudes, sexual orientation, and the actual degree of relatedness between those individuals were controlled for. None of those factors managed to uniquely predict moral attitudes towards incest once duration of co-residence was controlled for, suggesting that it was the duration of co-residence itself driving these effects on moral judgments. So why did this effect not appear to show up in the case of the kibbutzim?

Perhaps the driving cues were too distracted?

If the cues to kinship are somewhat incomplete – as they likely were in the kibbutzim – then we ought to expect moral condemnation of such relationships to be incomplete as well. Unfortunately, there doesn’t exist much good data on that point that I am aware of, but, on the basis of Shor & Simchai’s (2009) account, there was no condemnation of such relationships in the kibbutzim that rivaled the kind seen in the case of actual families. What their account does suggest is that more cohesive groups experienced less sexual interest in their peers – a finding that dovetails with the results from Lieberman et al (2003): cohesive groups might well have spent more time together, resulting in less sexual attraction due to greater degrees of co-residence. Despite Shor & Simchai’s suggestion to the contrary, their results appear to be consistent with a Westermarck kind of effect, albeit an incomplete one. Though the duration of co-residence clearly seems to matter, the precise way in which it matters likely involves more than a single cue to kinship. What connection might exist between moral condemnation and active aversion to the idea of intercourse with those one grew up around is a matter I leave to you.

References: Lieberman, D., Tooby, J., & Cosmides, L. (2003). Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest. Proceedings of the Royal Society of London B, 270, 819-826.

Shepher, J. (1971). Mate Selection Among Second Generation Kibbutz Adolescents and Adults: Incest Avoidance and Negative Imprinting. Archives of Sexual Behavior, 1, 293-307.

Shor, E. & Simchai, D. (2009). Incest Avoidance, the Incest Taboo, and Social Cohesion: Revisiting Westermarck and the Case of the Israeli Kibbutzim. American Journal of Sociology, 114, 1803-1846.

Proximate And Ultimate Moral Culpability

Back in September, I floated an idea about our moral judgments: that intervening causes between an action and an outcome could serve to partially mitigate the severity of those judgments. This would owe itself to the potential that each intervening cause has for presenting a new target of moral responsibility and blame (i.e., “if only the parents had properly locked up their liquor cabinet, then their son wouldn’t have gotten drunk and wrecked their car”). As the number of these intervening causes increases, the potential number of blameable targets increases, which should be expected to diminish the ability of third-party condemners to achieve any kind of coordination in their decisions. Without coordination, enacting moral punishment becomes costlier, all else being equal, and thus we might expect people to condemn others less harshly in such situations. Well, as it turns out, some research was conducted on this topic a mere four decades ago that I was unaware of at the time. Someone call Cold Stone, because it seems I’ve been scooped again.

To get your mind off that stupid pun, here’s another.

One of these studies comes from Brickman et al (1975), and involved examining how people would assign responsibility for a car accident that had more than one potential cause. Since there are a number of comparisons and causes I’ll be discussing, I’ve labeled them for ease of following along. The first two of these causes were proximate in nature: internal alone (1. a man hit a tree because he wasn’t looking at the road) or external alone (2. a man hit a tree because his steering failed). However, there were also ultimate causes behind these proximate causes, leading to four additional conditions: both causes internal (3. a man hit a tree because he wasn’t looking at the road; he wasn’t looking at the road because he was daydreaming), both external (4. a man hit a tree because his steering failed; his steering failed because the mechanic had assembled it poorly when repairing it), or a mix of the two. The first of these mixes (5) was a man hitting a tree because his steering failed, but his steering failed because he had neglected to get it checked in over a year; the second (6) concerned a man hitting a tree because he wasn’t paying attention to the road, owing to someone on the side of the road yelling.

After the participants had read about one of these scenarios, they were asked to indicate how responsible the driver was for the accident, how foreseeable the accident was, and how much control the driver had in the situation. Internal causes for the accident resulted in higher scores on all these variables relative to external ones (1 vs. 2). There’s nothing too surprising there: people get blamed less for their steering failing than for not paying attention to the road. The next analysis compared one type of cause alone to that type of cause paired with an ultimate cause of the same type (1 vs. 3, and 2 vs. 4). When both proximate and ultimate causes were internal (1 vs. 3), no difference was observed in the judgments of responsibility. However, when both proximate and ultimate causes were external (2 vs. 4), moral condemnation appeared to be softened by the presence of an ultimate explanation. Two internal causes didn’t budge judgments relative to a single cause, but two external causes diminished perceptions of responsibility beyond a single one.

Next, Brickman et al (1975) turned to the matter of what happens when the proximate and ultimate causes were of different types (1 vs. 6 and 2 vs. 5). When the proximate cause was internal but the ultimate cause was external (1 vs. 6), there was a drop in judgments of moral responsibility (from 5.4 to 3.7 on a 0-to-6 scale), foreseeability (from 3.7 to 2.4), and control (from 3.4 to 2.7). The exact opposite trend was observed when the proximate cause was external but the ultimate cause was internal (2 vs. 5). In that case, there was an increase in judgments of responsibility (from 2.3 to 4.1), foreseeability (from 2.3 to 3.4), and control (from 2.6 to 3.4). As Brickman et al (1975) put it:

“…the nature of prior cause eliminated the effects of the immediate cause on attributions of foreseeability and control, although a main effect of immediate cause remained for attributions of responsibility.”

So that’s some pretty neat stuff, and, despite the research not being specifically about this topic, I think these findings might have some broader implications for understanding the opposition to evolutionary psychology more generally.

They’re so broad people with higher BMIs might call the suggestion insensitive.

As a fair warning, this section will contain a fair bit of speculation, since there doesn’t exist much data (that I know of, anyway) bearing on people’s opposition towards evolutionary explanations. That said, let’s talk about what anecdata we do have. The first curious thing that has struck me about the opposition to certain evolutionary hypotheses is that it tends to focus exclusively, or nearly exclusively, on topics that have some moral relevance. I’ve seen fairly common complaints about evolutionary explanations for hypotheses that concern moralized topics like violence, sexual behavior, sexual orientation, and male/female differences. What you don’t tend to see are complaints about research in areas that do not tend to be moralized, like vision, language, or taste preference. That’s not to say that such objections don’t ever crop up, of course; just that complaints about the latter do not appear to be as frequent or protracted as complaints about the former. Further, when objections to the latter topics do appear, it’s typically in the middle of some other moral issue surrounding the topic.

This piece of anecdata ties in with another, related piece: one of the more common complaints against evolutionary explanations is that people perceive evolutionary researchers as trying to justify some particular morally blameworthy behavior. The criticism, misguided as it is, tends to go something like this: “if [Behavior X] is the product of selection, then we can’t hold people accountable for what they do. Further, we can’t hope to do much to change people’s behavior, so why bother?” As the old saying goes, if some behavior is the product of selection, we might as well just lie back and think of England. Since people don’t want to just accept these behaviors (and because they note, correctly, that behavior is modifiable), they go on to suggest that it’s the ultimate explanation that must be wrong, rather than their assessment of its implications.

“Whatever; go ahead and kill people, I guess. I don’t care…”

The similarities between these criticisms of evolutionary hypotheses and the current study are particularly striking: if selection is responsible for people’s behavior, then the people themselves seem to be less responsible for, and less in control of, their behavior. Since people want to condemn others for this behavior, they have a strategic interest in downplaying the role of other causes in generating it. The fewer potential causes for a behavior there are, the more easily moral condemnation can be targeted, and the more likely others are to join in the punishment. It doesn’t hurt that the ultimate explanations that do get invoked – patriarchy being the most common, in my experience – are also things that these people are interested in morally condemning.

What’s interesting – and perhaps ironic – about the whole issue to me is that there are also parallels to the debates people have about free will and moral responsibility. Let’s grant that the aforementioned criticisms were accurate and that evolutionary explanations offer some kind of justification for things like murder, rape, and the like. It would seem, then, that such evolutionary explanations could justify the moral condemnation and punishment of those behaviors just as well. Surely, we possess adaptations for avoiding outcomes like being killed, and we also possess adaptations capable of condemning such behavior. We wouldn’t need to justify our condemnation of those acts any more than people would need to justify committing the acts themselves. If murder could be justified, then surely punishing murderers could be as well.

References: Brickman, P., Ryan, K., & Wortman, C. (1975). Causal chains: Attribution of responsibility as a function of immediate and prior causes. Journal of Personality and Social Psychology, 32, 1060-1067.

Begging Questions About Sexualization

There’s an old joke that goes something like this: if a man wants to make a woman happy, it’s really quite simple. All he has to do is be a chef, a carpenter, brave, a friend, a good listener, responsible, clean, warm, athletic, attractive, tender, strong, tolerant, understanding, stable, ambitious, and compassionate. Men should also not forget to compliment a woman frequently, give her attention while expecting little in return, give her freedom to do what she wants without asking too many questions, and love to go shopping with her, or at least support the habit. So long as a man does/is all those things, and he manages to never forget birthdays, anniversaries, or other important dates, he should easily be able to make a woman happy. Women, on the other hand, can make men happy with a few simple steps: show up naked and bring beer. (For the unabridged list, see here.) While this joke, like many great jokes, contains an exaggeration, it also manages to capture a certain truth: the qualities that make a man attractive to a woman seem to be a bit more varied than the qualities that make a woman attractive to a man.

“Yeah; he’s alright, I guess. Could be a bit taller and richer…”

Even if men did value the same number of traits in women that women value in men, the two sexes do not necessarily value the same kinds of traits, or value them to the same degree (though there is, of course, frequently some amount of overlap). Given that men and women tend to value different qualities in one another, what should this tell us about the signals that each sex sends to appeal to the other? The likely answer is that men and women might end up behaving or altering their appearance in different ways when it comes to appealing to the opposite sex. As a highly simplified example, men might tend to value looks and women might tend to value status. If a man is trying to appeal to women under such circumstances, it does him little good to signal his good looks, just as it does a woman no favors to try and signal her high status to men.

So when people start making claims about how one sex – typically women – is being “sexualized” to a much greater extent than the other, we should be very specific about what we mean by the term. A recent paper by Hatton & Trautner (2011) set out to examine (a) how sexualized men and women tend to be in American culture and (b) whether that sexualization has risen over time. The proxy measure they made use of for their analysis was about four decades’ worth of Rolling Stone covers, spanning from 1967 to 2009, as these covers contain pictures of various male and female cultural figures. The authors suggest that this research has value because of various other lines of research suggesting that these depictions might have negative effects on women’s body satisfaction and men’s attitudes about women, as well as threatening to increase the amount of sexual harassment that women face. Somewhat surprisingly, in the laundry list of references attesting to these negative effects on women, there is no explicit mention of any possible negative effects on men. I find that interesting. Anyway…

As for the research itself, Hatton & Trautner (2011) examined approximately 1000 covers of Rolling Stone, of which 720 focused on men and 380 focused on women. The pictures were coded with respect to (a) the degree of nudity, from unrevealing to naked on a 6-point scale, (b) whether there was touching, from none to explicitly sexual on a 4-point scale, (c) pose, from standing upright to explicitly sexual on a 3-point scale, (d) mouth…pose (I guess), from not sexual to sexual on a 3-point scale, (e) whether breasts/chest, genitals, or buttocks were exposed and/or the focal point of the image, all on 3-point scales, (f) whether the text on the cover line related to sex, (g) whether the shot focused on the head or the body, (h) whether the model was engaged in a sex act or not, and finally (i) whether hints of sexual role play were present. So, on the one hand, it seems like these pictures were analyzed thoroughly. On the other, however, consider this list of variables and compare it to the initial joke. By my count, all of them appear to fall more on the end of “what makes men happy” rather than “what makes women happy”.

Which might cause a problem in translation from one sex to the other

Images were considered “hypersexualized” if they scored 10 or more points (out of a possible 23), but merely “sexualized” if they scored from 5 to 9 points. In terms of sexualization, the authors found that it appeared to be increasing over time: in the ’60s, 11% of men and 44% of women were sexualized; by the ’00s, these figures had risen to 17% and 89%, respectively. So Hatton & Trautner (2011) concluded that men were being sexualized less than women overall, which is reasonable given their criteria. However, those percentages captured both the “sexualized” and “hypersexualized” pictures. Examining the two groups separately, the authors found that around 1-3% of men on the covers were hypersexualized in any given decade, whereas the comparable range for women was 6% to 61%. Not only did women tend to be sexualized more often, they also tended to be sexualized to a greater degree. The authors go so far as to suggest that the only appropriate label for such depictions of women was as sex objects.
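As a quick illustration of how that scoring translates into categories (a minimal sketch; the function name and example point values are my own, though the 23-point maximum and the 5-9/10+ cutoffs come from the paper):

```python
def classify_cover(score: int) -> str:
    """Classify a magazine cover by its summed sexualization score.

    Hatton & Trautner (2011) summed their coded variables into a total
    out of 23 possible points: 10+ = "hypersexualized", 5-9 =
    "sexualized", anything lower = not sexualized.
    """
    if score >= 10:
        return "hypersexualized"
    elif score >= 5:
        return "sexualized"
    return "not sexualized"

# Hypothetical example: a cover scoring high on nudity (5), pose (3),
# and touching (3) would total 11 points.
print(classify_cover(11))  # hypersexualized
print(classify_cover(6))   # sexualized
```

Note that every point in the rubric is awarded for visual display; nothing in the sum can register traits like warmth or status, which is the asymmetry the rest of this section turns on.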

The major interpretative problem that is left unaddressed by Hatton & Trautner (2011) and their “high-powered sociological lens”, of course, is that they fail to consider whether the same kinds of displays make men and women equally sexually appealing. As the initial joke might suggest, men are unlikely to win many brownie points with a prospective date if they showed up naked with beer; they might win a place on some sex-offender list, though, which falls short of the happy ending they would have liked. Indeed, many of the characteristics highlighted in the list of ways to make a woman happy – such as warmth, emotional stability, and listening skills – are not as easily captured by a picture as physical appearance is. To make matters even more challenging for the authors’ interpretation, there is the looming fact that men tend to be far more open to offers of casual sex in the first place. In other words, there might be about as much value in signaling that a man is “ready for sex” as there is in signaling that a starving child is “ready for food”. It’s something that is liable to be assumed already.

To put this study in context, imagine I were to run a similar analysis to the authors’, but started my study with the following rationale: “It’s well known that women tend to value the financial prospects of their sexual partners. Accordingly, we should be able to measure the degree of sexualization on Rolling Stone covers by assessing the net wealth of the people being photographed”. All I would have to do is add in some moralizing about how the depiction of rich men is bad for poorer men’s self-esteem and women’s preferences in relationships, and that paper would be a reasonable facsimile of the current one. If this analysis found that the depicted men tended to be wealthier than the depicted women, this would not necessarily indicate that the men, rather than the women, were being depicted as more attractive mates. This is due to the simple, aforementioned fact that we should expect an interaction between signalers and receivers. It doesn’t pay for a signaler to send a signal that the intended receiver is all but oblivious to; rather, we should expect signals to be tailored to the details of the receptive systems they are attempting to influence.

The sexualization of images like this might go otherwise unnoticed.

It seems that the assumptions made by the authors stacked the deck in favor of them finding what they thought they would. By defining sexualization in a particular way, they partially begged their way to their conclusion. If we instead defined sexualization in ways that considered variables beyond how much or what kind of skin was showing, we’d likely come to different conclusions about the degree of sexualization. That’s not to say that we would find an equal degree of it between the sexes, mind you, but rather that there are many factors that can go into making someone sexually attractive which are not always able to be captured in a photo. We’ve seen complaints of sexualization like these leveled against the costumes that superheroes of various sexes tend to wear, and the same oversight is present in them as well. Unless the initial joke would work just as well with the sexes reversed, these discussions will require more nuance concerning sexualization to be of much use.

References: Hatton E. & Trautner, M. (2011). Equal opportunity objectification? The sexualization of men and women on the cover of Rolling Stone. Sexuality and Culture, 15, 256-278.

Truth And Non-Consequences

A topic I’ve been giving some thought to lately concerns the following question: are our moral judgments consequentialist or nonconsequentialist? As the words might suggest, the question concerns the extent to which our moral judgments are based on the consequences that result from an action versus on the behavior per se that people engage in. We frequently see a healthy degree of inconsistency around the issue. Today I’d like to highlight a case I came across while rereading The Blank Slate, by Steven Pinker. Here’s part of what Steven had to say about whether any biological differences between groups could justify racism or sexism:

“So could discoveries in biology turn out to justify racism and sexism? Absolutely not! The case against bigotry is not a factual claim that humans are biologically indistinguishable. It is a moral stance that condemns judging an individual according to the average traits of certain groups to which the individual belongs.”

This seems like a reasonable statement, on the face of it. Differences between groups, on the whole, do not necessarily mean differences on the same trait between any two given individuals. If a job calls for a certain height, in other words, we should not discriminate against women just because men tend to be taller. That average difference does not mean that many men and women are not the same height, or that the reverse relationship never holds.

Even if it generally does…

Nevertheless, there is something not entirely satisfying about Steven’s position, namely that people are not generally content to say “discrimination is just wrong“. People like to try to justify their stance that it is wrong, lest the proposition be taken to be an arbitrary statement with no more intrinsic appeal than “trimming your beard is just wrong“. Steven, like the rest of us, thus tries to justify his moral stance on the issue of discrimination:

“Regardless of IQ or physical strength or any other trait that can vary, all humans can be assumed to have certain traits in common. No one likes being enslaved. No one likes being humiliated. No one likes being treated unfairly, that is, according to traits that the person cannot control. The revulsion we feel toward discrimination and slavery comes from a conviction that however much people vary on some traits, they do not vary on these.”

Here, Steven seems to be trying to have his nonconsequentialist cake and eat it too*. If the case against bigotry is “absolutely not” based on discoveries in biology or a claim that people are biologically indistinguishable, then it seems peculiar to reference biological facts concerning some universal traits to try to justify one’s stance. Would the discovery that certain people dislike being treated unfairly to different degrees justify doing so, all else being equal? If it would, the first quoted idea is wrong; if it would not, the second statement doesn’t make much sense. What is also notable about these two quotes is that they are not cherry-picked from different sections of the book; the second quote comes from the paragraph immediately following the first. I found their juxtaposition rather striking.

With respect to the consequentialism debate, the fact that people try to justify their moral stances in the first place seems strange from a nonconsequentialist perspective: if a behavior is just wrong, regardless of the consequences, then it needs no explanation or justification. Stealing, in that view, should be just wrong; it shouldn’t matter who stole from whom, or what the value of the stolen goods was. A child stealing a piece of candy from a corner store should be just as wrong as an adult stealing a TV from Best Buy; it shouldn’t matter that Robin Hood stole from the rich and gave to the poor, because stealing is wrong no matter the consequences, and he should be condemned for it. Many people would, I imagine, agree that not all acts of theft are created equal, though. On the topic of severity, many people would also agree that murder is generally worse than theft. Again, from a nonconsequentialist perspective, this should only be the case for arbitrary reasons, or at least reasons that have nothing at all to do with the fact that murder and theft have different consequences. I have tried to think of what those other, nonconsequentialist reasons might be, but I appear to suffer from a failure of imagination in that respect.

Might there be some findings that could ostensibly support the notion that moral judgments are, at least in certain respects, nonconsequentialist? Yes; in fact, there are. The first of these are a pair of related dilemmas known as the trolley and footbridge dilemmas. In both contexts, one life can be sacrificed so that five lives are saved. In the former dilemma, a train heading towards five hikers can be diverted to a side track where there is only a single hiker; in the latter, a train heading towards five hikers can be stopped by pushing a person in front of it. In both cases the welfare outcomes are identical (one dead; five not), so it seems that if moral judgments only track welfare outcomes, there should be no difference between these scenarios. Yet there is: about 90% of people will support diverting the train, while only about 10% support pushing (Mikhail, 2007). This would certainly be a problem for any theory of morality claiming that the function of moral judgments more broadly is to make people better off on the whole. Moral judgments that fail to maximize welfare would be indicative of poor design for such a function.

Like how this bathroom was poorly optimized for personal comfort.

There are concerns with the idea that this finding supports moral nonconsequentialism, however: namely, the judgments of moral wrongness for pushing or redirecting are not definitively nonconsequentialist. People oppose pushing others in front of trains, I would imagine, because of the costs that pushing inflicts on the individual being pushed. If the dilemma were reworded to one in which acting on a person would not harm them but would save the lives of others, you’d likely find very little opposition to it (i.e., pushing someone in front of a train in order to send a signal to the driver, but with enough time for the pushed individual to exit the track and escape harm safely). This relationship holds in the trolley dilemma: when an empty side track is available, redirection to said track is almost universally preferred, as might be expected (Huebner & Hauser, 2011). One who favors the nonconsequentialist account might suggest that such a manipulation is missing the point: after all, it’s not that pushing someone in front of a train is immoral, but rather that killing someone is immoral. This rejoinder would seem to blur the issue, as it suggests, somewhat confusingly, that people might judge certain consequences non-consequentially. Intentionally shooting someone in the head, in this line of reasoning, would be wrong not because it results in death, but because killing is wrong; death just so happens to be a necessary consequence of killing. Either I’m missing some crucial detail or the distinction is unhelpful, so I won’t spend any more time on it.

Another line of research touted as evidence of moral nonconsequentialism is the work on moral dumbfounding (Haidt et al., 2000). In brief, this research found that when presented with cases where objective harms are absent, many people continue to insist that certain acts are wrong. The most well-known of these involves a brother-sister case of consensual incest on a single occasion. The sister is using birth control and the brother wears a condom; they keep their behavior a secret and feel closer because of it. Many subjects (about 80%) insisted that the act was wrong. When pressed for an explanation, many initially referenced harms that might occur as a result, though these harms were always countered by the context (no pregnancy, no emotional harm, no social stigma, etc.). From this, it was concluded that conscious concerns about harm appear to represent post hoc justifications for an intuitive moral judgment.

One needs to be cautious in interpreting these results as evidence of moral nonconsequentialism, though, and a simple example explains why. Imagine that what was being asked about in that experiment was not whether the incest itself was wrong, but instead why the brother and sister pair had sex in the first place. Due to the dual contraceptive use, there was no probability of conception. Therefore, a similar interpretation might say, this shows that people are not consciously motivated to have sex because of children. While it is true enough that most acts of intercourse might not be motivated by the conscious desire for children, and while the part of the brain that’s talking might not have access to information concerning how other cognitive decision rules are enacted, it doesn’t mean the probability of conception plays no role in shaping the decision to engage in intercourse; despite what others have suggested, sexual pleasure per se is not adaptive. In fact, I would go so far as to say that the moral dumbfounding results are only particularly interesting because, most of the time, harm is expected to play a major role in our moral judgments. Pornography manages to “trick” our evolved sexual motivation systems by providing them with inputs similar to those that reliably correlate with the potential for conception; perhaps certain experimental designs – like the case of brother-sister incest – manage to similarly “trick” our evolved moral systems by providing them with inputs similar to those that reliably correlated with harm.

Or illusions; whatever your preferred term is.

In terms of making progress in the consequentialism debate, it seems useful to do away with the idea that moral condemnation functions to increase welfare in general: not only have such claims been clearly empirically falsified, they could only even be plausible in the realm of group selection, which is a topic we should have all stopped bothering with long ago. Just because moral judgments fail the test of group welfare improvement, however, it does not suddenly make the nonconsequentialist position tenable. There are more ways of being consequentialist than with respect to the total amount of welfare increase. It would be beneficial to turn our eye towards considering the strategic welfare consequences that are likely to accrue to actors, second parties, and third parties as a result of these behaviors. In fact, we should be able to use such considerations to predict contexts under which people will flip back and forth between consciously favoring consequentialist and nonconsequentialist kinds of moral reasoning. Evolution is a consequentialist process, and we should expect it to produce consequentialist mechanisms. To the extent that we are not finding them, the problem might owe more to a failure of our expectations about the shape of these consequences than to an actual nonconsequentialist mechanism.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

Huebner, B. & Hauser, M. (2011). Moral judgments about altruistic self-sacrifice: When philosophical and folk intuitions clash. Philosophical Psychology, 24, 73-94.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Science, 11, 143-151.

 

*Later, Steven writes:

“Acknowledging the naturalistic fallacy does not mean that facts about human nature are irrelevant to our choices…Acknowledging the naturalistic fallacy implies only that discoveries about human nature do not, by themselves, dictate our choices…”

I am certainly sympathetic to such arguments and, as usual, Steven’s views on the topic are more nuanced than these quotes alone are capable of displaying. Steven does, in fact, suggest that all good justifications for moral stances concern harms and benefits. Those two particular quotes are only used to highlight the frequent inconsistencies in people’s stated views.

Towards Understanding The Action-Omission Distinction

In moral psychology, one of the most well-known methods of parsing the reasons outcomes obtain involves the categories of actions and omissions. Actions are intuitively understandable: they are behaviors which bring about certain consequences directly. By contrast, omissions represent failures to act that result in certain consequences. As a quick example, a man who steals your wallet commits an act; a man who finds your lost wallet, keeps it for himself, and says nothing to you commits an omission. Though actions and omissions might result in precisely the same consequences (in that case, you end up with less money and the man ends up with more), they do not tend to be judged the same way. Specifically, actions tend to be judged as more morally wrong than comparable omissions, and more deserving of punishment. While this state of affairs might seem perfectly normal to you or me, a deeper understanding of it requires us to take a step back and consider why it is, in fact, rather strange.

And so long as I omit the intellectual source of that strategy, I sound more creative.

From an evolutionary standpoint, this action-omission distinction is strange for a clear reason: evolution is a consequentialist process. If I’m worse off because you stole from me or because you failed to return my wallet when you could have, I’m still worse off. Organisms should be expected to avoid costs, regardless of their origin. Importantly, costs need not only be conceptualized as what one might typically envision them to be, like inflictions of physical damage or stolen resources; they can also be understood as failures to deliver benefits. Consider a new mother: though the mother might not kill the child directly, if she fails to provision the infant with food, the infant will die all the same. From the perspective of the child, the mother’s failure to provide food could well be considered a cost inflicted by negligence. So, if someone could avoid harming me – or could provide me with some benefit – but does not, why should it matter whether that outcome obtained because of an action or an omission?

The first part of that answer concerns a concept I mentioned in my last post: the welfare tradeoff ratio. Omissions are, generally speaking, less indicative of one’s underlying WTR than acts. Let’s consider the wallet example again: when a wallet is stolen, this act expresses that one is willing to make me suffer a cost so they can benefit; when the wallet is found and not returned, this represents a failure of an individual to deliver a benefit to me at some cost to themselves (the time required to track me down and forgoing the money in my wallet). While the former expresses a negative WTR, the latter simply fails to express an overtly-positive one. To the extent that moral punishment is designed to recalibrate WTRs, then, acts provide us with more accurate estimates of WTRs, and might subsequently tend to recruit those cognitive moral systems to a greater degree. Unfortunately, this explanation is not entirely fulfilling yet, owing to the consequentialist facts of the matter: it can be as good, from my perspective, to increase the WTR of the thief towards me as it is to increase the omitter’s WTR. Doing either means I end up with more money than I otherwise would, which is a useful outcome. Costs and benefits, in this world, are tallied on the same scoreboard.

The second part of the answer, then, needs to invoke the costs inherent in enacting this modification of WTRs through moral punishment. Just as it’s good for me if others hold a high WTR with respect to me, it’s similarly good for others if I hold a high WTR with respect to them. This means that people, unsurprisingly, are often less-than-accommodating when it comes to giving up their welfare for another without the proper persuasion; persuasion which happens to take time and energy to enact, and comes with certain risks of retaliation. Accordingly, we ought to expect mechanisms that function to enact moral condemnation strategically: when the costs of doing so are sufficiently low or the benefits of doing so are sufficiently high. After all, every living person right now could, in principle, increase their WTR towards you, but trying to morally condemn every living person for not doing so is unlikely to be a productive strategy. Not only would such a strategy result in the condemner undertaking many endeavors that are unlikely to be successful relative to the invested effort, but someone increasing their WTR towards you requires that they lower their WTR towards someone else, and those someone elses would typically not be tickled by the prospect.

“You want my friend’s investment? Then come and take it, tough guy”

Given the costs involved in indiscriminate moral condemnation of non-maximal WTRs, we can focus the action-omission distinction down to the following question: what is it about punishing omissions that tends to be less productive than punishing actions? One possible explanation comes from DeScioli, Bruening, & Kurzban (2011). The trio posit that omissions are judged less harshly than actions because omissions tend to leave less overt evidence of wrongdoing. As punishment costs tend to decrease as the number of punishers increases, if third-party punishers make use of evidence in deciding whether or not to become involved, then material evidence should make punishment easier to enact. Unfortunately, the design the researchers used in their experiments does not appear to speak definitively to their hypothesis. Specifically, they found the effect they were looking for – namely, a reduction of the action-omission effect – but they only managed to do so by reframing an omission (failing to turn a train or stop a demolition) as an action (pressing a button that failed to turn a train or stop a demolition). It is not clear that such a manipulation solely varied the evidence available without fundamentally altering other morally-relevant factors.

There is another experiment that did manage to substantially reduce the action-omission effect without introducing such a confound, however: Haidt & Baron (1996). In this paper, the authors presented subjects with a story about a person selling his car. The seller knows there is a 1/3 chance the car contains a manufacturing defect that will cause it to fall apart soon; a potential defect specific to the year the car was made. When a buyer inquires about the year of the manufacturing defect, the seller either (a) lies about it or (b) fails to correct the buyer, who suddenly exclaims that he remembers which year it was, though he is incorrect. When asked how wrong it was for the seller to do (or fail to do) what he did, the action-omission effect was observed when the buyer was not personally known to the seller. However, if the seller happened to be good friends with the buyer, the size of the effect was reduced by almost half. In other words, when the buyer and seller were good friends, it mattered less whether the seller cheated the buyer through action or omission; both were deemed relatively unacceptable (and, interestingly, both were deemed more wrong overall as well). However, when the buyer and the seller were all but strangers, people rated the cheat via omission as relatively less wrong than the action. Moral judgments in close relationships appeared to become generally more consequentialist.

If evidence were the deciding factor in the action-omission distinction, then the closeness of the relationship between the actor or omitter and the target should not be expected to have any effect on moral judgments (as the nature of the relationship does not itself generate any additional observable evidence). While this finding does not rule out the role of evidence in the action-omission distinction altogether, it does suggest that evidence concerns alone are insufficient for understanding the distinction. The nature of the relationship between the actor and victim is, however, predicted to have an effect under the WTR model. We expect our friends, especially our close friends, to have relatively high WTRs with respect to us; we might even expect them to go out of their way to suffer costs to help us if necessary. Indications that they are unwilling to do so – whether through action or omission – represent betrayals of that friendship. Further, when a friend behaves in a manner indicating a negative WTR towards us, the gulf between the expected (highly positive) and actual (negative) WTR is far greater than if a stranger behaved comparably (as we might expect a neutral starting point for strangers).

“I hate when girls lie online about having a torso!”

Though this analysis does not provide a complete explanation of the action-omission distinction by any means, it does point us in the right direction. It would seem that actions actively advertise WTRs, whereas omissions do not necessarily do likewise. Morally condemning all those who do not display positive WTRs per se does not make much sense, as the costs involved in doing so are high enough to preclude efficiency. Further, those who simply fail to express a positive WTR towards you might be less liable to inflict future costs, relative to those who express a negative one (i.e., the man who fails to return your wallet is not necessarily as liable to hurt you in the future as the one who directly steals from you). Selectively directing that condemnation at those who display appreciably low or negative WTRs, then, appears to be a more viable strategy: it could help direct condemnation towards where it’s liable to do the most good. This basic premise should hold especially given a close relationship with the perpetrator: such relationships entail more frequent contact and, accordingly, more opportunities for one’s WTR towards you to matter.

References: DeScioli, P., Bruening, R., & Kurzban. R. (2011). The omission effect in moral cognition: Toward a functional explanation. Evolution and Human Behavior, 32, 204-215.

Haidt, J. & Baron, J. (1996). Social roles and the moral judgment of acts and omissions. European Journal of Social Psychology, 26, 201-218.

When Giving To Disaster Victims Is Morally Wrong

Here’s a curious story: Kim Kardashian recently decided to sell some personal items on eBay. She also mentioned that 10% of the proceeds would be donated to typhoon relief in the Philippines. On the face of it, there doesn’t appear to be anything morally objectionable going on here: Kim is selling items on eBay (not an immoral behavior) and then giving some of her money freely to charity (not immoral). Further, she made this information publicly available, so she’s not lying or being deceitful about how much money she intends to keep and how much she intends to give (also not immoral). If the coverage of the story and the comments about it are any indication, however, Kim has done something morally condemnable. To select a few choice quotes, Kim is, apparently, “the centre of all evil in the universe“, is “insulting” and “degrading” people, and is “greedy” and “vile“. She’s also a “horrible bitch”, and anyone who takes part in the auction is “retarded“. One of the authors expressed the hope that “…[the disaster victims] give you back your insulting ‘portion of the proceeds’ which is a measly 10% back to you so you can choke on it“. Yikes.

Just shred the money, add some chicken and ranch, and you’re good to go.

Now, one could wonder whether the victims of this disaster would actually care that some of the money being used to help them came from someone who only donated 10% of her eBay sales. Sure, I’d bet the victims would prefer to have more money donated from every donor (and non-donor), but I think just about everyone in the world would rather have more money than they currently do. Though I might be mistaken, I don’t think there are many victims who would insist that the money be sent back because there wasn’t enough of it. I would also guess that, in terms of the actual dollar amount provided, Kim’s auctions probably resulted in more giving than many or most other actual donors managed, and definitely more than anyone lambasting Kim who did not personally give (of which I assume there are many). Besides the elements of hypocrisy that are typical of disputes of this nature, there is one facet of this condemnation that really caught my attention: people are saying Kim is a bad person not because she did anything immoral per se, but because she failed to do something laudable to a great-enough degree. This is akin to suggesting someone should be punished for only holding a door open for five people, despite not being required to hold it open for anyone.

Now, one might suggest that what Kim did wasn’t actually praiseworthy because she made money off of it: Kim is self-interested and is using this tragedy to advance her personal interests, or so the argument goes. Perhaps Kim was banking on the idea that giving 10% to charity would result in people paying more for the items themselves, offsetting the cost. Even if that were the case, however, it still wouldn’t make what she was doing wrong, for two reasons: first, people profit from selling goods or services continuously, and, most of the time, people don’t deem those acts morally wrong. For instance, I just bought groceries, but I didn’t feel any moral outrage that the store I bought them from profited off me. Second, even if Kim did benefit by doing this, it’s a win-win situation for her and the typhoon victims. While mutual benefit may make gauging Kim’s altruistic intentions difficult, it would not make the act immoral per se. Furthermore, it’s not as if Kim’s charity auction coerced anyone into paying more than they otherwise would have; how much to pay was the decision of the buyers, whom Kim could not directly control. If Kim ended up making more money off these auctions than she otherwise would have, it’s only because other people willingly gave her more. So why are people attempting to morally condemn her? She wasn’t dishonest, she didn’t do anyone direct harm, she didn’t engage in any behavior that is typically deemed “immoral”, and the result of her actions was that people were better off. If one wants to locate the focal point of people’s moral outrage about Kim’s auction, then, it will involve digging a little deeper psychologically.

One promising avenue to begin our exploration of the matter is a chapter by Petersen, Sell, Tooby, & Cosmides (2010) that discussed our evolved intuitions about criminal justice. In it, they discuss the concept of a welfare tradeoff ratio (WTR). A WTR is, essentially, one person’s willingness to give up some amount of personal welfare to deliver some amount of welfare to another. For instance, if you were given the choice between $6 for yourself and $1 for someone else or $5 for both of you, choosing the latter would represent a higher WTR: you would be willing to forgo $1 so that another individual could have an additional $4. Obviously, it would be good for you if other people maintained a high WTR towards you, but others are not so willing to give up their own welfare without some persuasion. One way (among many) of persuading someone to put more stock in your welfare is to make yourself look like a good social investment. If benefiting you will benefit the giver in the long run – perhaps because you are currently experiencing the bad luck of a typhoon destroying your home, but you can return to being a productive associate in the future if you get help – then we should expect people to up-regulate their WTR towards you.
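The $6/$1 versus $5/$5 example above can be made concrete with a small sketch (the function name and numbers are mine, used purely for illustration; this is not a model from Petersen et al.):

```python
# Infer a lower bound on someone's welfare tradeoff ratio (WTR) from a
# binary choice, using the $6/$1 vs. $5/$5 example from the text.
# Choosing the generous option means forgoing $1 of personal welfare
# so that the other party gains $4.

def implied_wtr_lower_bound(selfish_option, generous_option):
    """Each option is a (self_payoff, other_payoff) tuple.

    Choosing the generous option implies the chooser values the other's
    welfare at a rate of at least (cost to self) / (gain to other).
    """
    cost_to_self = selfish_option[0] - generous_option[0]
    gain_to_other = generous_option[1] - selfish_option[1]
    return cost_to_self / gain_to_other

# Forgoing $1 so another person gains $4 implies a WTR of at least 0.25.
print(implied_wtr_lower_bound((6, 1), (5, 5)))  # → 0.25
```

The point of the arithmetic is simply that a single choice only reveals a lower bound: someone with a much higher WTR would make the same choice, which is part of why observers must rely on additional cues when judging how much others actually value their welfare.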

Some other pleas for assistance are less liable to net good payoffs.

The intuition that Kim’s moral detractors appear to be expressing, then, is not that Kim is wrong for displaying a mildly positive WTR per se, but that the WTR she displayed was not sufficiently high, given her relative wealth and the disaster victims’ relative need. This makes her appear to be a potentially-poor social investment, as she is relatively-unwilling to give up much of her own welfare to help others, even when they are in desperate need. Framing the discussion in this light is useful insomuch as it points us in the right direction, but it only gets us so far. We are left with the matter of figuring out why, for instance, most other people who were giving to charity were not condemned for not giving as much as they realistically could have, even if it meant them foregoing or giving up some personal items or pleasurable experiences themselves (i.e. “if you ate out less this week, or sold some of your clothing, you too could have contributed more to the aid efforts; you’re a greedy bitch for not doing so”).

It also doesn’t explain why anyone would suggest that it would have been better for Kim to have given nothing at all instead of what she did give. Though we see that kind of rejection of low offers in bargaining contexts – like ultimatum games – we typically don’t see as much of it in altruistic ones. This is because rejecting the money in bargaining contexts has an effect on the proposer’s payoff; in altruistic contexts, rejection has no negative effect on the giver and should affect their behavior far less. Even more curious, though: if the function of such moral condemnation is to increase one’s WTR towards others more generally, suggesting that Kim giving no amount would have been somehow better than what she did give is exceedingly counterproductive. If increasing WTRs was the primary function of moral condemnation, it seems like the more appropriate strategy would be to start with condemning those people – rich or not – who contributed nothing, rather than something (as those who give nothing, arguably, displayed a lower WTR towards the typhoon victims than Kim did). Despite that, I have yet to come across any articles berating specific individuals or groups for not giving at all; they might be out there, but if so, they generated much less publicity. We need something else to complete the account of why people seem to hate Kim Kardashian for not giving more.

Perhaps that something more is that the other people who did not donate were also not trying to suggest they were behaving altruistically; that is, they were not trying to reap the benefits of being known as an altruist, whereas Kim was, but only halfheartedly. This would mean Kim was sending a less-than-honest signal. A major complication with that account, however, is that Kim was, for all intents and purposes, acting altruistically; she could have been praised very little for what she did, rather than condemned. Thankfully, the condemnation towards Kim is not the only example of this we have to draw upon. These kinds of claims have been advanced before, as when Tucker Max tried to donate $500,000 to Planned Parenthood, only to be rejected because some people didn’t want to associate with him. The arguments being made against accepting that sizable donation centered around (a) the notion that he was giving for selfish reasons and (b) that others would stop supporting Planned Parenthood if Tucker became associated with them. My guess is that something similar is at play here. Celebrities can be polarizing figures (for reasons which I won’t speculate about here), drawing overly hostile or positive reactions from people who are not affected by them personally. For whatever reason, there are many people who dislike Kim and would like to either avoid being associated with her altogether and/or see her fall from her current position in society. This no doubt has an effect on how they view her behavior. If Kim wasn’t Kim, there’s a good chance no one would care about this kind of charity-involving auction.

Much better; now giving only 10% is laudable.

As I mentioned in my last post, children appear to condone harming others with whom they do not share a common interest. The same behavior – in this case, giving 10% of your sales to help others – is likely to be judged substantially differently contingent on who precisely is enacting the behavior. Understanding why people express moral outrage at welfare-increasing behaviors requires a deeper examination of their personal strategic interests in the matter. We should expect that state of affairs for a simple reason: benefiting others more generally is not universally useful, in the evolutionary sense of the word. Sometimes it’s good for you if certain other people are worse off (though this argument is seldom made explicitly). Now, of course, that does mean that people will, at times, ostensibly advocate for helping a group of needy people, but then shun help, even substantial amounts of help, when it comes from the “wrong” sources. They likely do what they do because such condemnation will either harm those “wrong” sources directly or because allowing the association could harm the condemner in some way. Yes; that does mean the behavior of these condemners has a self-interested component; the very thing they criticized Kim for. Without considerations of these strategic, self-interested motivations, we’d be at a loss for understanding why giving to typhoon victims is sometimes morally wrong.

References: Petersen, M.B., Sell, A., Tooby, J., & Cosmides, L. (2010). Evolutionary psychology and criminal justice: A recalibration theory of punishment and reconciliation. In H. Høgh-Olesen (Ed.), Human Morality & Sociality: Evolutionary & Comparative Perspectives. New York: Palgrave Macmillan.

The Enemy Of My Dissimilar-Other Isn’t My Enemy

Some time ago, I alluded to a very real moral problem: Observed behavior, on its own, does not necessarily give you much insight into the moral value of the action. While people can generally agree in the abstract that killing is morally wrong, there appear to be some unspoken assumptions that go into such a thought. Without such additional assumptions, there would be no understanding of why killing in self-defense is frequently morally excused or occasionally even praised, despite the general prohibition. In short: when “bad” things happen to “bad” people, that is often assessed as a “good” state of affairs. The reference point for statements like “killing is wrong”, then, seems to be that killing is bad, given that it has happened to someone who was undeserving. Similarly, while most of us would balk at the idea of forcibly removing someone from their home and confining them against their will to dangerous areas in small rooms, we also would not advocate for people to stop being arrested and jailed, despite the latter being a fairly accurate description of the former.

It’s a travesty and all, but it makes for really good TV.

Figuring out the various contextual factors affecting our judgments concerning who does or does not deserve blame and punishment helps keep researchers like me busy (preferably in a paying context, fun as recreational arguing can be. A big wink to the NSF). Some new research on that front comes to us from Hamlin et al (2013), who were examining preverbal children’s responses to harm-doing and help-giving. Given that these young children aren’t very keen on filling out surveys, researchers need alternative methods of determining what’s going on inside their minds. Towards that end, Hamlin et al (2013) settled on an infant-choice style of task: when infants are presented with a choice between items, which one they select is thought to correlate with the child’s liking of, or preference for, that item. Accordingly, if these items are puppets that infants perceive as acting, then their selections ought to be a decent – if less-than-precise – index of whether the infants approve or disapprove of the actions the puppet took.

In the first stage of the experiment, 9- and 14-month-old children were given a choice between green beans and graham crackers (somewhat surprisingly, appreciable percentages of the children chose the green beans). Once a child had made their choice, they then observed two puppets trying each of the foods: one puppet was shown to like the food the child picked and dislike the unselected item, while the second puppet liked and disliked the opposite foods. In the next stage, the child observed one of the two puppets playing with a ball. This ball was being bounced off the wall, and eventually ended up by one of two puppet dogs by accident. The dog with the ball either took it and ran away (harming), or picked up the ball and brought it back (helping). Finally, children were provided with a choice between the two dog puppets.

Which dog puppet the infant preferred depended on the expressed food preferences of the first puppet: if the puppet expressed the same food preferences as the child, then the child preferred the helping dog (75% of the 9-month-olds and 100% of the 14-month-olds); if the puppet expressed the opposite food preference, then the child preferred the harming dog (81% of 9-month-olds and 100% of 14-month-olds). The children seemed to overwhelmingly prefer dogs that helped those similar to themselves or did not help those who were dissimilar. This finding potentially echoes the problem I raised at the beginning of this post: whether or not an act is deemed morally wrong depends, in part, on the person towards whom the act is directed. It’s not that children universally preferred puppets who were harmful or helpful; the target of that harm or help matters. It would seem that, in the case of children, at least, something as trivial as food preferences is apparently capable of generating a dramatic shift in perceptions concerning what behavior is acceptable.

In her defense, she did say she didn’t want broccoli…

The effect was then mostly replicated in a second experiment. The setup remained largely the same with the addition of a neutral dog puppet that did not act in any way. Again, 14-month-old children preferred the puppet that harmed the dissimilar other over the puppet that did nothing (94%), and preferred the puppet that did nothing over the puppet that helped (81%). These effects were reversed in the similar-other condition, with 75% preferring the dog that helped the similar other over the neutral dog, and preferring the neutral over the harmful puppet 69% of the time. 9-month-olds did not quite show the same pattern in the second experiment, however. While none of the results went in the opposite direction to the predicted pattern, the trends that did exist generally failed to reach significance. This is in some accordance with the first experiment, where 9-month-olds exhibited the tendency to a lesser degree than the 14-month-olds.

So this is a pretty neat research paradigm. Admittedly, one needs to make certain assumptions about what was going on in the infants’ heads to make any sense of the results, but assumptions will always be required when dealing with individuals who can’t tell you much about what they’re thinking or feeling (and even with the ones who can). Assuming that the infants’ selections indicate something about their willingness to condemn or condone helpful or harmful behavior, we again return to the initial point: the same action can be potentially condemned or not, depending on the target of that action. While this might sound trivially true (as opposed to other psychological research, which is often perceived to be trivially false), it is important to bear in mind that our psychology need not be that way: we could have been designed to punish anyone who committed a particular act, regardless of target. For instance, the infants could have displayed a preference towards helping dogs, regardless of whether they were helping someone similar or dissimilar to them, or we could view murder as always wrong, even in cases of self-defense.

While such a preference might sound appealing to many people (it would be pretty nice of us to always prefer to help helpful individuals), it is important to note that such a preference might also not end up doing anything evolutionarily-useful. That state of affairs would owe itself to the fact that help directed towards one individual is, essentially, help not directed at any other individual. Provided that help directed towards some people (such as individuals who do not share your preferences) is less likely to pay off in the long run than help directed towards others (such as individuals who do share your preferences), we ought to expect people to direct their investments and condemnations strategically. Unfortunately, this is where empirical matters can become complicated, as strategic interests often differ on an individual-to-individual, or even day-to-day, basis, even if there is some degree of overlap between some broad groups within a population over time.

At least we can all come together to destroy a mutual enemy.

Finally, I see plenty of room for expanding this kind of research. In the current experiments, the infants knew nothing about the preferences of the helper or harmer dogs. Accordingly, it would be interesting to see a simple variant of the present research: it would involve children observing the preferences of the helper and harmer puppets, but not the preferences of the target of that help or harm. Would children still “approve” of the actions of the puppet with similar tastes and “disapprove” of the puppet with dissimilar tastes, regardless of what action they took, relative to a neutral puppet? While it would be ideal to have conditions in which children knew about the preferences of all the puppets involved as well, the risks of getting messy data from more complicated designs might be exacerbated in young children. Thankfully, this research need not (and should not) stick to young children.

References: Hamlin, J., Mahajan, N., Liberman, Z., & Wynn, K. (2013). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science.

What Predicts Religiosity: Cooperation Or Sex?

When trying to explain the evolutionary function of religious belief, there’s a popular story that goes something like this: individuals who believe in a deity that monitors our behavior and punishes or rewards us accordingly might be less likely to transgress against others. In other words, religious beliefs function to make people unusually cooperative. There are two big conceptual problems with such a suggestion: the first is that, to the extent that these rewards and punishments occur after death (heaven, hell, or some form of reincarnation as a “lower” animal, for instance), they would have no impact on reproductive fitness in the current world. With no impact on reproduction, no selection for such beliefs would be possible, even were they true. The second major problem is that, in the event that such beliefs are false, they would not lead to better fitness outcomes. This is due to the simple fact that incorrect representations of our world do not generally tend to lead to better decisions and outcomes than accurate representations. For example, if you believe, incorrectly, that you can win a fight you actually cannot, you’re liable to suffer the costs of being beaten up; conversely, if you incorrectly believe you cannot win a fight you actually can, you might back down too soon and miss out on some resource. False beliefs don’t often help you make good decisions.

“I don’t care what you believe, R. Kelly; there’s no way this will end well”

So if one believes they are being constantly observed by an agent that will punish them for behaving selfishly and that belief happens to be wrong, they will tend to make worse decisions, from a reproductive fitness standpoint, than an individual without such beliefs. On top of those conceptual problems, there is now an even larger problem for the religion-encouraging-cooperation idea: a massive data set doesn’t really support it. When I say massive, I do mean massive: the data set examined by Weeden & Kurzban (2013) comprised approximately 300,000 people from all across the globe. Of interest from the data set were 14 questions relating to religious behavior (such as the belief in God and frequency of attendance at religious services), 13 questions relating to cooperative morals (like avoiding paying a fare on public transport and lying in one’s own interests), and 7 questions relating to sexual morals (such as the acceptability of casual sex or prostitution). The analysis concerned how well the latter two variable sets uniquely predicted the former one.

When considered in isolation in a regression analysis, the cooperative morals were slightly predictive of the variability in religious beliefs: the standardized beta values for the cooperative variables ranged from a low of 0.034 to a high of 0.104. So a one standard deviation increase in cooperative morals predicted, approximately, one-twentieth of a standard deviation increase in religious behavior. On the other hand, the sexual morality questions did substantially better: the standardized betas there ranged from a low of 0.143 to a high of 0.38. Considering these variables in isolation only gives us so much of the picture, however, and the case got even bleaker for the cooperative variables once they were entered into the regression model at the same time as the sexual ones. While the betas on the sexual variables remained relatively unchanged (if anything, they got a little higher, ranging from 0.144 to 0.392), the betas on the cooperative variables dropped substantially, often into the negatives (ranging from -0.045 to 0.13). In non-statistical terms, this means that the more one endorsed conservative sexual morals, the more religious one tended to be; the more one endorsed cooperative morals, the less religious one tended to be, though this latter tendency was very slight.
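For readers unfamiliar with standardized betas, the pattern described above can be illustrated with a small simulation (synthetic data, not the Weeden & Kurzban dataset; the variable names and effect sizes are mine). A predictor that looks predictive on its own can have its beta collapse once a correlated predictor enters the same model:

```python
import numpy as np

# Toy illustration of standardized regression betas: z-score everything,
# fit ordinary least squares, and the coefficients become "standard
# deviations of y per standard deviation of x".
rng = np.random.default_rng(0)
n = 10_000
sexual = rng.normal(size=n)                 # hypothetical "sexual morals" score
coop = 0.5 * sexual + rng.normal(size=n)    # "cooperative morals", correlated with it
religiosity = 0.35 * sexual + rng.normal(size=n)  # driven by sexual morals only

def standardized_betas(y, *predictors):
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

# Alone, coop looks mildly predictive (it borrows sexual's signal)...
print(standardized_betas(religiosity, coop))
# ...but once sexual enters the model, coop's beta collapses toward zero.
print(standardized_betas(religiosity, coop, sexual))
```

This mirrors the paper's result in miniature: the cooperative variables' apparent predictive power largely evaporated once the sexual variables were controlled for.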

This evidence appears to directly contradict the cooperative account: religious beliefs don’t seem to result in more cooperative behaviors or moral stances (if anything, they result in slightly fewer of them once you take sex into account). Rather than dealing with loving their neighbor, religious beliefs appeared to deal more with whom and how their neighbor loved. This connection between religious beliefs and sexual morals, while consistently positive across all regions sampled, did vary in strength from place to place, being about four times as strong in wealthy areas compared to poorer ones. The reasons for this are not discussed at any length within the paper itself and I don’t feel I have anything to add on that point which wouldn’t be purely speculative.

“My stance on speculation stated, let’s speculate about something else…”

This leaves open the question of why religious beliefs would be associated with a more-monogamous mating style in particular. After all, it seems plausible that a community of people relatively interested in promoting a more long-term mating strategy and condemning short-term strategies need not come with the prerequisite of believing in a deity. People apparently don’t need a deity to condemn people for lying, stealing, or killing, so what would make sexual strategy any different? Perhaps the fact that sexual morals show substantially more variation than morals regarding, say, killing. Here’s what Weeden & Kurzban (2013) suggest:

We view expressed religious beliefs as potentially serving a number of functions, including not just the guidance of believers’ own behaviors, but as markers of group affiliation or as part of self-presentational efforts to claim higher authority or deflect the attribution of self-interested motives when it comes to imposing contested moral restrictions on those outside of the religious group. (p.2, emphasis mine)

As for whether or not belief in a deity might serve as a group marker, well, it certainly seems to be a potential candidate. Of course, so is pretty much anything else, from style of dress, to musical taste, to tattoos or other ornaments. In terms of displaying group membership, belief in God doesn’t seem particularly special compared to any other candidate. Perhaps belief in God simply ended up being the most common ornament of choice for groups of people who, among other things, wanted to restrict the sexuality of others. Such an argument would need to account for the fact that belief in God and sexual morals seem to correlate in groups all over the world, meaning either that they all stumbled upon that marker independently time and again (unlikely), that such a marker has a common origin in a time before humans began to migrate over the globe (possible, but hard to confirm), or some third option. In any case, while belief in God might serve such a group-marking function, it doesn’t seem to explain the connection with sexuality per se.

The other posited function – of invoking a higher moral authority – raises some additional questions: First, if the long-term maters are adopting beliefs in God so as to try and speak from a position of higher (or impartial) authority, this raises the question of why other parties, presumably ones who don’t share such a belief, would be persuaded by that claim in any way. Were I to advance the claim that I was speaking on behalf of God, I get the distinct sense that other people would dismiss my claims in most cases. Though I might be benefited if they believed me, I would also be benefited if people just started handing me money; that there doesn’t seem to be a benefit for other parties in doing these things, however, suggests to me that I shouldn’t expect such treatment. Unless people already believe in said higher power, claiming impartiality in its name doesn’t seem like it should carry much persuasive weight.

Second, even were we to grant that such statements would be believed and have the desired effect, why wouldn’t the more promiscuous maters also adopt a belief in a deity that just so happens to smile on, or at least not care about, promiscuous mating? Even if we grant that the more promiscuous individuals were not trying to condemn people for being monogamous (and so have no self-interested motives to deflect), having a deity on your side seems like a pretty reasonable way to strengthen your defense against people trying to condemn your mating style. At the very least, it would seem to weaken the moralizer’s offensive abilities. Now perhaps that’s along the lines of what atheism represents; rather than suggesting that there is a separate deity that likes what one prefers, people might simply suggest there is no deity in order to remove some of the moral force from the argument. Without a deity, one could not deflect the self-interest argument as readily. This, however, again returns us to the previous point: unless there’s some reason to assume that third parties would be impressed by the claims of a God initially, it’s questionable as to whether such claims would carry any force that needed to be undermined.

Some gods are a bit more lax about the whole “infidelity” thing.

Of course, it is possible that such beliefs are just byproducts of something else that ties in with sexual strategy. Unfortunately, byproduct claims don’t tend to make much in the way of textured predictions as to what design features we ought to expect to find, so that suggestion, while plausible, doesn’t appear to lend itself to much empirical analysis. Though this leaves us without a great deal of satisfaction in explaining why religious belief and regulation of sexuality appear to be linked, it does provide us with the knowledge that religious belief does not primarily seem to concern itself with cooperation more generally. Whatever the function, or lack thereof, of religious belief, it is unlikely to be promoting morality in general.

References: Weeden, J., & Kurzban, R. (2013). What predicts religiosity? A multinational analysis of reproductive and cooperative morals. Evolution and Human Behavior, 34, 440-445.

Pay No Attention To The Calories Behind The Curtain

Obesity is a touchy issue for many, as a recent Twitter debacle demonstrated. However, there is little denying that the average body composition in the US has been changing in the past few decades: this helpful data and interactive map from the CDC shows the average BMI increasing substantially from year to year. In 1985, there was no state in which the percentage of residents with a BMI over 30 exceeded 14%; by 2010, there was no state for which that percentage was below 20%, and several for which it was over 30%. One can, of course, have debates over whether BMI is a good measure of obesity or health; at 6’1″ and 190 pounds, my BMI is approximately 25, nudging me ever so-slightly into the “overweight” category, though I am by no stretch of the imagination fat or unhealthy. Nevertheless, these increases in BMI are indicative of something; unless that something is people putting on substantially more muscle relative to their height in recent decades – a doubtful proposition – the clear explanation is that people have been getting fatter.
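For anyone wanting to check figures like the one above, BMI is just weight in kilograms divided by height in meters squared, or equivalently 703 × pounds ÷ inches² in imperial units:

```python
# BMI = weight (kg) / height (m)^2; in imperial units, 703 * lb / in^2.
def bmi(pounds, inches):
    return 703 * pounds / inches ** 2

# 6'1" is 73 inches; at 190 pounds this lands just over the
# conventional "overweight" cutoff of 25, as described in the text.
print(round(bmi(190, 73), 1))  # → 25.1
```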

Poorly-Marketed: The Self-Esteem-Destroying Scale

This steep rise in body mass in the recent years requires an explanation, and some explanations are more plausible than others. Trying to nominate genetic factors isn’t terribly helpful for a few reasons: first, we’re talking about drastic changes over the span of about a generation, which typically isn’t enough time for much appreciable genetic change, barring very extreme selection pressures. Second, saying that some trait or behavior has a “genetic component” is all but meaningless, since all traits are products of genetic and environmental interactions. Saying a trait has a genetic component is like saying that the area of a rectangle is related to its width; true, but unhelpful. Even if genetics were helpful as an explanation, however, referencing genetic factors would only help explain the increased weight in younger individuals, as the genetics of already-existing people haven’t been changing substantially over the period of BMI growth. You would need to reference some existing genetic susceptibility to some new environmental change.

Other voices have suggested that the causes of obesity are complex, unable to be expressed by a simple “calories-in/calories-out” formula. This idea is a bit more pernicious, as the former half of that sentence is true, but the latter half does not follow from it. Like the point about genetic components, this explanation also runs up against the fact that the formula for determining weight gain or loss is particularly unlikely to have become substantially more complicated in the span of a single generation. There is little doubt that the calories-in/calories-out formula is a complicated one, with many psychological and biological factors playing various roles, but its logic is undeniable: you cannot put on weight without an excess of incoming energy (or a backpack); that’s basic physics. No matter how many factors affect this caloric formula, they must ultimately have their effect through a modification of how many calories come in and go out. Thus, if you are capable of monitoring and restricting the number of calories you take in, you ought to have a fail-proof method of weight management (albeit a less-than-ideal one in terms of the pleasure people derive from eating).
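The bookkeeping behind calories-in/calories-out can be sketched in a few lines. The commonly cited ~3500 kcal per pound of body fat is a rough rule of thumb rather than an exact constant, and real-world weight change is noisier than this, but the direction of the arithmetic is the point:

```python
# A minimal energy-balance ledger. The 3500 kcal/pound figure is a rough
# approximation, but the underlying logic is the one the text describes:
# weight gain requires a caloric surplus, however that surplus arises.
KCAL_PER_POUND = 3500  # approximate energy content of a pound of body fat

def weight_change_lbs(intake_kcal_per_day, expenditure_kcal_per_day, days):
    surplus = (intake_kcal_per_day - expenditure_kcal_per_day) * days
    return surplus / KCAL_PER_POUND

# A sustained 500 kcal/day deficit over 30 days: roughly 4.3 pounds lost.
print(round(weight_change_lbs(1700, 2200, 30), 1))  # → -4.3
```

Whatever psychological or metabolic factors are in play, they can only act by shifting the intake or expenditure terms of this ledger.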

For some people, however, this method seems flawed: they will report restricted-calorie diets, but they don’t lose weight. In fact, some might even end up gaining. The fail-proof method fails. This means either something is wrong with physics, or there’s something wrong with the reports. A natural starting point for examining why people have difficulty managing their weight, even when they report calorically-restrictive diets, then, might be to examine whether people are accurately monitoring and reporting their intakes and outputs. After all, people do, occasionally, make incorrect self-reports. Towards this end, Lichtman et al (1992) recruited a sample of 10 diet-resistant individuals (those who reported eating under 1200 calories a day for some time and did not lose weight) and 80 control participants (all had BMIs of 27 or higher). The 10 subjects in the first group and 6 from the second were evaluated for reported intake, physical activity, body composition, and energy expenditure over two weeks. Metabolic rate was also measured for all the subjects in the diet-resistant group and for 75 of the controls.

Predicting the winner between physics and human estimation shouldn’t be hard.

First, we could consider the data from the metabolic rate: the daily estimated metabolic rate relative to fat-free body mass did not differ between the groups, and deviations of more than 10% from the group’s mean metabolic rate were rare. While there was clearly variation there, it wasn’t systematically favoring either group. Further, the total energy expenditure by fat-free body mass did not differ between the two groups either. When it came to losing weight, the diet-resistant individuals did not seem to be experiencing problems because they used more or less energy. So what about intake? Well, the diet-resistant individuals reported taking in an average of 1028 calories a day. This is somewhat odd, on account of them actually taking in around 2081 calories a day. The control group wasn’t exactly accurate either, reporting 1694 calories in a day when they actually took in 2386. In terms of percentages, however, these differences are stark: the diet-resistant sample’s underestimates were about 150% as large as the controls’.

In terms of estimates of energy expenditure, the picture was no brighter: diet-resistant individuals reported expending 1022 calories through physical activity each day, on average, when they actually exerted 771; the control group thought they expended 1006, when they actually exerted 877. This means the diet-resistant sample were overestimating by almost twice as much as the controls. Despite this, those in the diet-resistant group also held more strongly to the belief that their obesity was caused by genetic and metabolic factors, and not their overeating, relative to controls. Now it’s likely that these subjects aren’t lying; they’re just not accurate in their estimates, though they earnestly believe them. Indeed, Lichtman et al (1992) reported that many of the subjects were distressed when they were presented with these results. I can only imagine what it must feel like to report having tried dieting 20 times or more only to be confronted with the knowledge that you likely weren’t doing so effectively. It sounds upsetting.
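The "150% as large" and "almost twice as much" figures above follow directly from the group means reported in the text:

```python
# Working through the Lichtman et al. (1992) group means quoted above
# (calories per day, averaged within each group).
reported_intake = {"diet_resistant": 1028, "control": 1694}
actual_intake   = {"diet_resistant": 2081, "control": 2386}
reported_output = {"diet_resistant": 1022, "control": 1006}
actual_output   = {"diet_resistant": 771,  "control": 877}

under = {g: actual_intake[g] - reported_intake[g] for g in actual_intake}
over  = {g: reported_output[g] - actual_output[g] for g in actual_output}

print(under)  # intake underestimates: {'diet_resistant': 1053, 'control': 692}
print(over)   # output overestimates:  {'diet_resistant': 251, 'control': 129}
print(round(under["diet_resistant"] / under["control"], 2))  # → 1.52 (~150% as large)
print(round(over["diet_resistant"] / over["control"], 2))    # → 1.95 (almost twice)
```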

Now while that’s all well and good, one might object to these results on the basis of sample size: a sample size of about 10 per group clearly leaves a lot to be desired. Accordingly, a brief consideration of a new report examining people’s reported intakes is in order. Archer, Hand, and Blair (2013) examined people’s self-reports of intake relative to their estimated output across 40 years of U.S. nutritional data. The authors were examining what percentage of people were reporting biologically-implausible caloric intakes. As they put it:

“it is highly unlikely that any normal, healthy free-living person could habitually exist at a PAL [i.e., TEE/BMR] of less than 1.35”

Despite that minor complication of not being able to perpetually exist below a certain intake-to-expenditure ratio, people of all BMIs appeared to be offering unrealistic estimates of their caloric intake; in fact, the majority of subjects reported values that were biologically implausible, and the problem got worse as BMI increased. Normal-weight BMI women, for instance, offered up biologically-plausible values around 32-50% of the time; obese women reported plausible values around 12-31% of the time. In terms of calories, it was estimated that obese men and women tended to underreport by about 700 to 850 calories, on average (which is comparable to the estimates obtained from the previous study), whereas the overall sample underestimated by around 280-360. People just seemed fairly inaccurate at estimating their intake all around.
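The plausibility cutoff itself is simple to express. A sketch of the logic, assuming (as Archer, Hand, and Blair do) that at energy balance a person’s habitual intake stands in for their total energy expenditure; the function name and the example BMR figure are hypothetical, chosen only for illustration:

```python
# PAL (physical activity level) = TEE / BMR. Reported intakes implying a
# sustained PAL below ~1.35 are treated as biologically implausible.
MIN_PLAUSIBLE_PAL = 1.35

def is_plausible(reported_intake_kcal: float, bmr_kcal: float) -> bool:
    """At weight stability, reported intake proxies for TEE, so
    reported_intake / BMR estimates the person's PAL."""
    return reported_intake_kcal / bmr_kcal >= MIN_PLAUSIBLE_PAL

bmr = 1500  # hypothetical basal metabolic rate, kcal/day
print(is_plausible(1028, bmr))  # False: 1028/1500 ≈ 0.69, far below 1.35
print(is_plausible(2081, bmr))  # True:  2081/1500 ≈ 1.39
```

Against a BMR anywhere near that example figure, the diet-resistant group’s reported 1028 calories a day fails the test, while their measured 2081 passes — which is the whole point of the screen.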

“I’d estimate there are about 30 jellybeans in the picture…”

Now it’s not particularly odd that people underestimate how many calories they eat in general; I’d imagine there was never much selective pressure for great accuracy in calorie-counting over human evolutionary history. What might need more of an explanation is why obese individuals, especially those who reported resistance to dieting, tended to underreport substantially more than non-obese ones. Were I to offer my speculation on the matter, it would have something to do with (likely non-conscious) attempts to avoid the negative social consequences associated with obesity (obese people probably aren’t lying; they’re just not perceiving their world accurately in this respect). Regardless of whether one feels those social consequences associated with obesity are deserved or not, they do exist, and one method of reducing consequences of that nature is to nominate alternative causal agents for the situation, especially ones – like genetics – that many people feel you can’t do much about, even if you tried. As people become more obese, then, they might face increased negative social pressures of that nature, making them more liable to learn, and subsequently reference, the socially-acceptable responses and behaviors (i.e. “it’s due to my genetics”, or, “I only ate 1000 calories today”; a speculation echoed by Archer, Hand, and Blair (2013)). Such an explanation is at least biologically plausible, unlike most people’s estimates of their diets.

References: Archer, E., Hand, G., & Blair, S. (2013). Validity of U.S. national surveillance: National health and nutrition examination survey caloric energy intake data, 1971-2010. PLoS ONE, 8, e76632. doi:10.1371/journal.pone.0076632.

Lichtman, S., et al. (1992). Discrepancy between self-reported and actual caloric intake and exercise in obese subjects. The New England Journal of Medicine, 327, 1893-1898.