The Enemy Of My Dissimilar-Other Isn’t My Enemy

Some time ago, I alluded to a very real moral problem: observed behavior, on its own, does not necessarily give you much insight into the moral value of an action. While people can generally agree in the abstract that killing is morally wrong, there appear to be some unspoken assumptions that go into such a thought. Without those additional assumptions, there would be no way to understand why killing in self-defense is frequently morally excused, or occasionally even praised, despite the general prohibition. In short: when “bad” things happen to “bad” people, that is often assessed as a “good” state of affairs. The reference point for statements like “killing is wrong”, then, seems to be that killing is bad, given that it has happened to someone who was undeserving. Similarly, while most of us would balk at the idea of forcibly removing someone from their home and confining them against their will in small rooms in dangerous areas, we also would not advocate for people to stop being arrested and jailed, despite the former being a fairly accurate description of the latter.

It’s a travesty and all, but it makes for really good TV.

Figuring out the various contextual factors affecting our judgments concerning who does or does not deserve blame and punishment helps keep researchers like me busy (preferably in a paying context, fun as recreational arguing can be. A big wink to the NSF). Some new research on that front comes to us from Hamlin et al (2013), who examined preverbal children’s responses to harm-doing and help-giving. Given that these young children aren’t very keen on filling out surveys, researchers need alternative methods of determining what’s going on inside their minds. Towards that end, Hamlin et al (2013) settled on an infant-choice style of task: when infants are presented with a choice between items, the one they select is thought to correlate with the child’s liking of, or preference for, that item. Accordingly, if these items are puppets that infants perceive as acting, then their selections ought to be a decent – if less-than-precise – index of whether the infants approve or disapprove of the actions the puppet took.

In the first stage of the experiment, 9- and 14-month-old children were given a choice between green beans and graham crackers (somewhat surprisingly, appreciable percentages of the children chose the green beans). Once a child had made their choice, they then observed two puppets trying each of the foods: one puppet was shown to like the food the child picked and dislike the unselected item, while the second puppet liked and disliked the opposite foods. In the next stage, the child observed one of the two puppets playing with a ball. This ball was bounced off a wall until it accidentally ended up next to one of two puppet dogs. The dog with the ball either took it and ran away (harming) or picked up the ball and brought it back (helping). Finally, children were offered a choice between the two dog puppets.

Which dog puppet the infant preferred depended on the expressed food preferences of the first puppet: if the puppet expressed the same food preferences as the child, then the child preferred the helping dog (75% of the 9-month-olds and 100% of the 14-month-olds); if the puppet expressed the opposite food preference, then the child preferred the harming dog (81% of 9-month-olds and 100% of 14-month-olds). The children seemed to overwhelmingly prefer dogs that helped those similar to themselves or did not help those who were dissimilar. This finding potentially echoes the problem I raised at the beginning of this post: whether an act is deemed morally wrong depends, in part, on the person towards whom the act is directed. It’s not that children universally preferred puppets who were harmful or helpful; the target of that harm or help matters. It would seem that, in the case of children at least, something as trivial as food preferences is apparently capable of generating a dramatic shift in perceptions concerning what behavior is acceptable.

In her defense, she did say she didn’t want broccoli…

The effect was then mostly replicated in a second experiment. The setup remained largely the same, with the addition of a neutral dog puppet that did not act in any way. Again, 14-month-old children preferred the puppet that harmed the dissimilar other over the puppet that did nothing (94%), and preferred the puppet that did nothing over the puppet that helped (81%). These effects were reversed in the similar-other condition, with 75% preferring the dog that helped the similar other over the neutral dog, and 69% preferring the neutral puppet over the harmful one. The 9-month-olds did not quite show the same pattern in the second experiment, however. While none of the results went in the opposite direction to the predicted pattern, the trends that did emerge generally failed to reach significance. This is in some accordance with the first experiment, where 9-month-olds exhibited the tendency to a lesser degree than the 14-month-olds.

So this is a pretty neat research paradigm. Admittedly, one needs to make certain assumptions about what was going on in the infants’ heads to make any sense of the results, but assumptions will always be required when dealing with individuals who can’t tell you much about what they’re thinking or feeling (and even with the ones who can). Assuming that the infants’ selections indicate something about their willingness to condemn or condone helpful or harmful behavior, we again return to the initial point: the same action can potentially be condemned or not, depending on the target of that action. While this might sound trivially true (as opposed to other psychological research, which is often perceived to be trivially false), it is important to bear in mind that our psychology need not be that way: we could have been designed to punish anyone who committed a particular act, regardless of target. For instance, the infants could have displayed a preference for helping dogs regardless of whether they were helping someone similar or dissimilar to them, or we could view murder as always wrong, even in cases of self-defense.

While such a preference might sound appealing to many people (it would be pretty nice of us to always prefer to help helpful individuals), it is important to note that such a preference might also not end up doing anything evolutionarily useful. That state of affairs would owe itself to the fact that help directed towards one individual is, essentially, help not directed at any other individual. Provided that help directed towards some people (such as individuals who do not share your preferences) is less likely to pay off in the long run relative to help directed towards others (such as individuals who do share your preferences), we ought to expect people to direct their investments and condemnations strategically. Unfortunately, this is where empirical matters can become complicated, as strategic interests often differ on an individual-to-individual, or even day-to-day, basis, regardless of there being some degree of overlap between some broad groups within a population over time.

At least we can all come together to destroy a mutual enemy.

Finally, I see plenty of room for expanding this kind of research. In the current experiments, the infants knew nothing about the preferences of the helper or harmer dogs. Accordingly, it would be interesting to see a simple variant of the present research: it would involve children observing the preferences of the helper and harmer puppets, but not the preferences of the target of that help or harm. Would children still “approve” of the actions of the puppet with similar tastes and “disapprove” of the puppet with dissimilar tastes, regardless of what action they took, relative to a neutral puppet? While it would be ideal to have conditions in which children knew about the preferences of all the puppets involved as well, the risks of getting messy data from more complicated designs might be exacerbated in young children. Thankfully, this research need not (and should not) stick to young children.

References: Hamlin, J., Mahajan, N., Liberman, Z., & Wynn, K. (2013). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science.

What Predicts Religiosity: Cooperation Or Sex?

When trying to explain the evolutionary function of religious belief, there’s a popular story that goes something like this: individuals who believe in a deity that monitors our behavior and punishes or rewards us accordingly might be less likely to transgress against others. In other words, religious beliefs function to make people unusually cooperative. There are two big conceptual problems with such a suggestion: the first is that, to the extent that these rewards and punishments occur after death (heaven, hell, or some form of reincarnation as a “lower” animal, for instance), they would have no impact on reproductive fitness in the current world. With no impact on reproduction, no selection for such beliefs would be possible, even were they true. The second major problem is that, in the event such beliefs are false, they would not lead to better fitness outcomes. This is due to the simple fact that incorrect representations of our world do not generally tend to lead to better decisions and outcomes than accurate representations. For example, if you believe, incorrectly, that you can win a fight you actually cannot, you’re liable to suffer the costs of being beaten up; conversely, if you incorrectly believe you cannot win a fight you actually can, you might back down too soon and miss out on some resource. False beliefs don’t often help you make good decisions.

“I don’t care what you believe, R. Kelly; there’s no way this will end well”

So if one believes they are being constantly observed by an agent that will punish them for behaving selfishly, and that belief happens to be wrong, they will tend to make worse decisions, from a reproductive fitness standpoint, than an individual without such beliefs. On top of those conceptual problems, there is now an even larger problem for the religion-encouraging-cooperation idea: a massive data set doesn’t really support it. When I say massive, I do mean massive: the data set examined by Weeden & Kurzban (2013) comprised approximately 300,000 people from all across the globe. Of interest from the data set were 14 questions relating to religious behavior (such as belief in God and frequency of attendance at religious services), 13 questions relating to cooperative morals (like avoiding paying a fare on public transport and lying in one’s own interests), and 7 questions relating to sexual morals (such as the acceptability of casual sex or prostitution). The analysis concerned how well the latter two variable sets uniquely predicted the former one.

When considered in isolation in a regression analysis, the cooperative morals were slightly predictive of the variability in religious beliefs: the standardized beta values for the cooperative variables ranged from a low of 0.034 to a high of 0.104. So a one-standard-deviation increase in cooperative morals predicted, approximately, one-twentieth of a standard deviation increase in religious behavior. On the other hand, the sexual morality questions did substantially better: the standardized betas there ranged from a low of 0.143 to a high of 0.38. Considering these variables in isolation only gives us so much of the picture, however, and the case got even bleaker for the cooperative variables once they were entered into the regression model at the same time as the sexual ones. While the betas on the sexual variables remained relatively unchanged (if anything, they got a little higher, ranging from 0.144 to 0.392), the betas on the cooperative variables dropped substantially, often into the negatives (ranging from -0.045 to 0.13). In non-statistical terms, this means that the more one endorsed conservative sexual morals, the more religious one tended to be; the more one endorsed cooperative morals, the less religious one tended to be, though this latter tendency was very slight.
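For readers less familiar with standardized betas, here is a minimal sketch of how coefficients like these are computed and read. The data, effect sizes, and variable names below are invented purely for illustration; they are not Weeden & Kurzban’s.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Invented illustrative data: two moral-attitude predictors and religiosity
sexual_morals = rng.normal(size=n)
coop_morals = 0.3 * sexual_morals + rng.normal(size=n)  # modestly correlated predictors
religiosity = 0.35 * sexual_morals + 0.05 * coop_morals + rng.normal(size=n)

def z(x):
    return (x - x.mean()) / x.std()

# Standardize everything, then fit ordinary least squares;
# the resulting coefficients are standardized betas.
X = np.column_stack([z(sexual_morals), z(coop_morals)])
betas, *_ = np.linalg.lstsq(X, z(religiosity), rcond=None)

# Each beta: standard deviations of change in religiosity per
# one-standard-deviation change in that predictor, holding the other constant.
print(f"sexual morals beta: {betas[0]:.3f}")
print(f"cooperative morals beta: {betas[1]:.3f}")
```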

This evidence appears to directly contradict the cooperative account: religious beliefs don’t seem to result in more cooperative behaviors or moral stances (if anything, they result in slightly fewer of them once you take sex into account). Rather than dealing with loving their neighbor, religious beliefs appeared to deal more with who and how their neighbor loved. This connection between religious beliefs and sexual morals, while consistently positive across all regions sampled, did vary in strength from place to place, being about four times stronger in wealthy areas compared to poorer ones. The reasons for this are not discussed at any length within the paper itself, and I don’t feel I have anything to add on that point which wouldn’t be purely speculative.

“My stance on speculation stated, let’s speculate about something else…”

This leaves open the question of why religious beliefs would be associated with a more-monogamous mating style in particular. After all, it seems plausible that a community of people relatively interested in promoting a more long-term mating strategy and condemning short-term strategies need not come with the prerequisite of believing in a deity. People apparently don’t need a deity to condemn others for lying, stealing, or killing, so what would make sexual strategy any different? Perhaps it’s the fact that sexual morals show substantially more variation than morals regarding, say, killing. Here’s what Weeden & Kurzban (2013) suggest:

We view expressed religious beliefs as potentially serving a number of functions, including not just the guidance of believers’ own behaviors, but as markers of group affiliation or as part of self-presentational efforts to claim higher authority or deflect the attribution of self-interested motives when it comes to imposing contested moral restrictions on those outside of the religious group. (p.2, emphasis mine)

As for whether or not belief in a deity might serve as a group marker, well, it certainly seems to be a potential candidate. Of course, so is pretty much anything else, from style of dress, to musical taste, to tattoos or other ornaments. In terms of displaying group membership, belief in God doesn’t seem particularly special compared to any other candidate. Perhaps belief in God simply ended up being the most common ornament of choice for groups of people who, among other things, wanted to restrict the sexuality of others. Such an argument would need to account for the fact that belief in God and sexual morals seem to correlate in groups all over the world, meaning either that these groups all stumbled upon that marker independently time and again (unlikely), that such a marker has a common origin in a time before humans began to migrate over the globe (possible, but hard to confirm), or some third option. In any case, while belief in God might serve such a group-marking function, it doesn’t seem to explain the connection with sexuality per se.

The other posited function – invoking a higher moral authority – raises some additional questions. First, if the long-term maters are adopting beliefs in God so as to try and speak from a position of higher (or impartial) authority, this raises the question of why other parties, presumably ones who don’t share such a belief, would be persuaded by that claim in any way. Were I to advance the claim that I was speaking on behalf of God, I get the distinct sense that other people would dismiss my claims in most cases. Though I might be benefited if they believed me, I would also be benefited if people just started handing me money; that there doesn’t seem to be a benefit for other parties in doing these things, however, suggests to me that I shouldn’t expect such treatment. Unless people already believe in said higher power, claiming impartiality in its name doesn’t seem like it should hold much persuasive water.

Second, even were we to grant that such statements would be believed and have the desired effect, why wouldn’t the more promiscuous maters also adopt a belief in a deity that just so happens to smile on, or at least not care about, promiscuous mating? Even if we grant that the more promiscuous individuals were not trying to condemn people for being monogamous (and so have no self-interested motives to deflect), having a deity on your side seems like a pretty reasonable way to strengthen your defense against people trying to condemn your mating style. At the very least, it would seem to weaken the moralizer’s offensive abilities. Now perhaps that’s along the lines of what atheism represents; rather than suggesting that there is a separate deity that likes what one prefers, people might simply suggest there is no deity in order to remove some of the moral force from the argument. Without a deity, one could not deflect the self-interest argument as readily. This, however, again returns us to the previous point: unless there’s some reason to assume that third parties would be impressed by the claims of a God initially, it’s questionable whether such claims would carry any force that needed to be undermined.

Some gods are a bit more lax about the whole “infidelity” thing.

Of course, it is possible that such beliefs are just byproducts of something else that ties in with sexual strategy. Unfortunately, byproduct claims don’t tend to make much in the way of textured predictions as to what design features we ought to expect to find, so that suggestion, while plausible, doesn’t appear to lend itself to much empirical analysis. Though this leaves us without a great deal of satisfaction in explaining why religious belief and the regulation of sexuality appear to be linked, it does provide us with the knowledge that religious belief does not primarily seem to concern itself with cooperation more generally. Whatever the function, or lack thereof, of religious belief, it is unlikely to be promoting morality in general.

References: Weeden, J., & Kurzban, R. (2013). What predicts religiosity? A multinational analysis of reproductive and cooperative morals. Evolution and Human Behavior, 34, 440-445.

Pay No Attention To The Calories Behind The Curtain

Obesity is a touchy issue for many, as a recent Twitter debacle demonstrated. However, there is little denying that the average body composition in the US has been changing in the past few decades: helpful data and an interactive map from the CDC show the average BMI increasing substantially from year to year. In 1985, there was no state in which the percentage of residents with a BMI over 30 exceeded 14%; by 2010, there was no state for which that percentage was below 20%, and several for which it was over 30%. One can, of course, have debates over whether BMI is a good measure of obesity or health; at 6’1″ and 190 pounds, my BMI is approximately 25, nudging me ever so slightly into the “overweight” category, though I am by no stretch of the imagination fat or unhealthy. Nevertheless, these increases in BMI are indicative of something; unless that something is people putting on substantially more muscle relative to their height in recent decades – a doubtful proposition – the clear explanation is that people have been getting fatter.
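For anyone wanting to check that arithmetic, BMI is simply weight in kilograms divided by the square of height in meters; a quick sketch verifying the figure above:

```python
def bmi(weight_lb, height_in):
    """Body mass index (kg/m^2), converting from US units."""
    kg = weight_lb * 0.453592
    m = height_in * 0.0254
    return kg / m ** 2

# 6'1" (73 inches) and 190 pounds, as in the text
print(round(bmi(190, 73), 1))  # ~25.1, just over the conventional "overweight" cutoff of 25
```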

Poorly-Marketed: The Self-Esteem-Destroying Scale

This steep rise in body mass in recent years requires an explanation, and some explanations are more plausible than others. Trying to nominate genetic factors isn’t terribly helpful for a few reasons: first, we’re talking about drastic changes over the span of about a generation, which typically isn’t enough time for much appreciable genetic change, barring very extreme selection pressures. Second, saying that some trait or behavior has a “genetic component” is all but meaningless, since all traits are products of genetic and environmental interactions. Saying a trait has a genetic component is like saying that the area of a rectangle is related to its width: true, but unhelpful. Even if genetics were helpful as an explanation, however, referencing genetic factors would only help explain the increased weight of younger individuals, as the genetics of already-existing people haven’t been changing substantially over the period of BMI growth. You would need to reference some existing genetic susceptibility to some new environmental change.

Other voices have suggested that the causes of obesity are complex, unable to be expressed by a simple “calories-in/calories-out” formula. This idea is a bit more pernicious, as the former half of that sentence is true, but the latter half does not follow from it. Like the point about genetic components, this explanation also runs into the problem that it’s particularly unlikely the formula for determining weight gain or loss has become substantially more complicated in the span of a single generation. There is little doubt that the calories-in/calories-out formula is a complicated one, with many psychological and biological factors playing various roles, but its logic is undeniable: you cannot put on weight without an excess of incoming energy (or a backpack); that’s basic physics. No matter how many factors affect this caloric formula, they must ultimately have their effect through a modification of how many calories come in and go out. Thus, if you are capable of monitoring and restricting the number of calories you take in, you ought to have a fail-proof method of weight management (albeit a less-than-ideal one in terms of the pleasure people derive from eating).
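To illustrate that bookkeeping (and only the bookkeeping), here is a minimal sketch built on the rough, oft-cited heuristic of about 3,500 calories per pound of body fat. The exact conversion is contested; the point is just that the direction of weight change follows the sign of intake minus expenditure:

```python
def weight_change_lb(calories_in, calories_out, days):
    """Approximate change in fat mass (pounds) from a sustained daily
    energy imbalance, using the rough 3,500-calories-per-pound heuristic.
    Real dynamics are messier, but a surplus always means gain and a
    deficit always means loss."""
    return (calories_in - calories_out) * days / 3500.0

# A modest 250-calorie daily surplus, maintained for a year:
print(round(weight_change_lb(2500, 2250, 365), 1))  # ~26.1 pounds gained
```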

For some people, however, this method seems flawed: they will report restricted-calorie diets, but they don’t lose weight. In fact, some might even end up gaining. The fail-proof method fails. This means either something is wrong with physics, or something is wrong with the reports. A natural starting point for examining why people have difficulty managing their weight, even when they report calorically-restrictive diets, then, might be to examine whether people are accurately monitoring and reporting their intakes and outputs. After all, people do, occasionally, make incorrect self-reports. Towards this end, Lichtman et al (1992) recruited a sample of 10 diet-resistant individuals (those who reported eating under 1200 calories a day for some time without losing weight) and 80 control participants (all had BMIs of 27 or higher). The 10 subjects in the first group and 6 from the second were evaluated for reported intake, physical activity, body composition, and energy expenditure over two weeks. Metabolic rate was also measured for all the subjects in the diet-resistant group and for 75 of the controls.

Predicting the winner between physics and human estimation shouldn’t be hard.

First, we could consider the data on metabolic rate: the daily estimated metabolic rate relative to fat-free body mass did not differ between the groups, and deviations of more than 10% from the group’s mean metabolic rate were rare. While there was clearly variation there, it wasn’t systematically favoring either group. Further, the total energy expenditure by fat-free body mass did not differ between the two groups either. When it came to losing weight, the diet-resistant individuals did not seem to be experiencing problems because they used more or less energy. So what about intake? Well, the diet-resistant individuals reported taking in an average of 1028 calories a day. This is somewhat odd, on account of them actually taking in around 2081 calories a day. The control group wasn’t exactly accurate either, reporting 1694 calories a day when they actually took in 2386. In terms of percentages, however, these differences are stark: the diet-resistant sample’s underestimates were about 150% as large as the controls’.
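That “about 150%” figure falls straight out of the numbers just given; a quick check of the arithmetic:

```python
# Intake figures from Lichtman et al. (1992), as reported above
dr_reported, dr_actual = 1028, 2081    # diet-resistant group
ctl_reported, ctl_actual = 1694, 2386  # control group

dr_gap = dr_actual - dr_reported     # 1053 calories/day unreported
ctl_gap = ctl_actual - ctl_reported  # 692 calories/day unreported

# Diet-resistant underestimates relative to the controls':
print(f"{dr_gap / ctl_gap:.0%}")  # ~152%

# As a fraction of actual intake: roughly half vs. under a third unreported
print(f"{dr_gap / dr_actual:.0%} vs. {ctl_gap / ctl_actual:.0%}")  # 51% vs. 29%
```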

In terms of estimates of energy expenditure, the picture was no brighter: diet-resistant individuals reported expending 1022 calories through physical activity each day, on average, when they actually exerted 771; the control group thought they expended 1006, when they actually exerted 877. This means the diet-resistant sample was overestimating by almost twice as much as the controls. Despite this, those in the diet-resistant group also held more strongly to the belief that their obesity was caused by genetic and metabolic factors, and not their overeating, relative to controls. Now it’s likely that these subjects weren’t lying; they were just inaccurate in their estimates, though they earnestly believed them. Indeed, Lichtman et al (1992) reported that many of the subjects were distressed when they were presented with these results. I can only imagine what it must feel like to report having tried dieting 20 times or more only to be confronted with the knowledge that you likely weren’t doing so effectively. It sounds upsetting.

Now while that’s all well and good, one might object to these results on the basis of sample size: a sample of about 10 per group clearly leaves a lot to be desired. Accordingly, a brief consideration of a new report examining people’s reported intakes is in order. Archer, Hand, and Blair (2013) examined people’s self-reports of intake relative to their estimated output across 40 years of U.S. nutritional data. The authors were examining what percentage of people were reporting biologically-implausible caloric intakes. As they put it:

“it is highly unlikely that any normal, healthy free-living person could habitually exist at a PAL [i.e., TEE/BMR] of less than 1.35”

Despite that minor complication of not being able to perpetually exist below a certain intake/output ratio, people of all BMIs appeared to be offering unrealistic estimates of their caloric intake; in fact, the majority of subjects reported values that were biologically implausible, and the problem got worse as BMI increased. Normal-weight women, for instance, offered up biologically-plausible values around 32-50% of the time; obese women reported plausible values around 12-31% of the time. In terms of calories, it was estimated that obese men and women tended to underreport by about 700 to 850 calories, on average (which is comparable to the estimates obtained from the previous study), whereas the overall sample underreported by around 280-360. People just seemed fairly inaccurate at estimating their intake all around.
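To see how that cutoff screens out impossible reports, here is a small sketch; the BMR figure is hypothetical, chosen only for illustration:

```python
def plausible_report(reported_intake, bmr, cutoff=1.35):
    """For a weight-stable person, reported intake stands in for total
    energy expenditure (TEE), so intake / BMR approximates PAL.
    Habitual values below ~1.35 are biologically implausible."""
    pal = reported_intake / bmr
    return pal, pal >= cutoff

# Hypothetical person with a measured BMR of 1,500 calories/day
print(plausible_report(1200, 1500))  # (0.8, False): not a sustainable habitual intake
print(plausible_report(2400, 1500))  # (1.6, True): plausible
```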

“I’d estimate there are about 30 jellybeans in the picture…”

Now it’s not particularly odd that people underestimate how many calories they eat in general; I’d imagine there was never much selective pressure for great accuracy in calorie-counting over human evolutionary history. What might need more of an explanation is why obese individuals, especially those who reported resistance to dieting, tended to underreport substantially more than non-obese ones. Were I to offer my speculation on the matter, it would have something to do with (likely non-conscious) attempts to avoid the negative social consequences associated with obesity (obese people probably aren’t lying; they’re just not perceiving their world accurately in this respect). Regardless of whether one feels those social consequences associated with obesity are deserved or not, they do exist, and one method of reducing consequences of that nature is to nominate alternative causal agents for the situation, especially ones – like genetics – that many people feel you can’t do much about, even if you tried. As one becomes more obese, then, one might face increased negative social pressures of that nature, resulting in being more liable to learn, and subsequently reference, the socially-acceptable responses and behaviors (i.e. “it’s due to my genetics”, or, “I only ate 1000 calories today”; a speculation echoed by Archer, Hand, and Blair (2013)). Such an explanation is at least biologically plausible, unlike most people’s estimates of their diets.

References: Archer, E., Hand, G., & Blair, S. (2013). Validity of U.S. nutritional surveillance: National health and nutrition examination survey caloric energy intake data, 1971-2010. PLoS ONE, 8, e76632. doi:10.1371/journal.pone.0076632.

Lichtman et al. (1992). Discrepancy between self-reported and actual caloric intake and exercise in obese subjects. The New England Journal of Medicine, 327, 1893-1898.

 

A Curious Case Of Vegan Moral Hypocrisy

I’ve decided to take one of my always-fun breaks from discussing strictly academic matters to instead examine a case of moral hypocrisy I came across recently involving a vegan: one Piper Hoffman, over at Our Hen House. Piper, as you can no doubt guess, frowns upon at least certain aspects of the lifestyles of almost every American (and probably most people in the world as well). In her words, most of us are “arrogant flesh eaters” who are condemnable moral hypocrites for tending to do things like both love our pets and eat other animals. There are so many interesting ideas found in that sentence that it’s hard to know where to start. First is the matter of why people tend to nurture members of other species in a manner resembling the way we nurture our own children. There’s also the matter of why someone like Piper would adopt a moral stance that involves protecting non-human animals. Sure, such a motivation might be intuitively understood when it’s people doing the protecting, but the same cannot be said of non-human species. That is, it would appear particularly strange if you found, say, a lion that simply refused to eat meat on moral grounds.

She might be a flesh-eater, but at least she’s not all arrogant about it.

The third thing I find interesting about Piper’s particular moral stance is that it’s severely unpopular: less than 1% of the US population would identify as vegan and, in practice, even self-reported vegetarians were more likely to have eaten meat in the last 24 hours than not to have done so. Now while diet might often be the primary focus when people think of the word ‘vegan’, Piper assures us that there’s more to being a vegan than what you put in your mouth. Here is Piper’s preferred definition:

“Veganism is a way of living which excludes all forms of exploitation of, and cruelty to, the animal kingdom, and includes a reverence for life. It applies to the practice of living on the products of the plant kingdom to the exclusion of flesh, fish, fowl, eggs, honey, animal milk and its derivatives, and encourages the use of alternatives for all commodities derived wholly or in part from animals.”

Accordingly, not only should vegans avoid eating animal-related foods, they should also not do things like wear fur, leather, wool, or silk, as all involve the suffering or exploitation of the animal kingdom. Bear the word “silk” in mind, as we’ll be returning to it shortly.

Taken together, what emerges is the following picture: a member of species X has begun to adopt all sorts of personally-costly behaviors (like avoiding certain types of food or tools) in an attempt to avoid reducing the welfare of pretty much any other living organism, irrespective of its identity. Further still, that member of species X is not content with just personally behaving in such a manner: she has also taken it upon herself to try to regulate the behavior of others around her to do the same, morally condemning them if they do not. That latter factor is especially curious, given that most other members of her species are not so inclined. This means her moral stance could potentially threaten otherwise-valuable social ties, and is unlikely to receive the broad social support capable of reducing the costs inherent in moral condemnation. I would like to stress again how absolutely bizarre such behavior would seem if we observed it in pretty much any other species.

Without venturing a tentative explanation for what cognitive systems might be generating such stances at the present time, I would like to consider another post Piper made on October 21st of this year. While in her apartment, Piper heard some strange sounds and, upon investigation, discovered that a colony of ants had taken over her bedroom. Being a vegan who avoids all forms of cruelty to and exploitation of animals, Piper did what one might expect from someone who displays a reverence for life: she bought some canisters of insect poison, personally gassed thousands of the ants herself, then called in professionals to finish the job and kill the rest of them that were living in the walls. Now one might, as Piper did, suggest that it’s unclear whether insects feel pain; perhaps they do, and perhaps they don’t. What is clear, however, is that Piper previously stated a moral rule against wearing products made from silk. Apparently silk production is exploitative in a way mass murder is not. In any case, the comments on Piper’s blog are what one might expect from a vegan crowd that condemns cruelty and reveres life: unanimous agreement that mass killing was an appropriate response because, after all, people, even vegans, aren’t perfect.

“If it’s any consolation, I felt bad afterwards. I mean, c’mon; no one’s perfect”

This situation raises plenty of debatable and valuable questions. One is the matter of the hypocrisy itself: why didn’t Piper’s conscience stop her from acting? Another is the matter of those who commented on the article: why was Piper supported by other (presumed) vegans, rather than condemned for a clear act of selfish cruelty? A third is that Piper clearly did not reduce or prevent animal suffering in any way in the story, so is she, and the vegan code of conduct more generally, truly designed/attempting to reduce suffering per se? If the answer to that last question is “yes”, then one might ask whether the vegan lifestyle encourages people to engage in the proper behaviors capable of doing more to reduce suffering. While these are worthwhile questions that can shed light on all sorts of curious aspects of human psychology, I would like to focus on the last point.

Consider the following proposition: humans should exterminate all carnivorous species. This act might seem reasonable from a standpoint of reducing suffering. Why? By their very nature, carnivorous species require that other animals suffer and die so the carnivore can continue living. Since these murder-hungry species are unlikely to respond affirmatively to our polite requests that they kindly stop killing things, we could stop them from doing so, now and forever. Provided one wishes to reduce the suffering in the world, then, there are really only three answers to the question regarding whether we should exterminate all meat-eating species: “Yes”, because they cause more suffering than they offset (however that’s measured); “No”, because they offset more suffering than they cause; or “I don’t know” because we can’t calculate such things for sure.

Though I would find either of the first two answers acceptable from a consistency perspective, I have yet to find anyone who advocates for either of those options. What I have come across are people who posit the third answer with some frequency. I will of course grant that such things are incredibly difficult to calculate, especially with a high degree of accuracy, but this clearly does not pose a problem in all cases. Refusing to wear silk clothes, for instance, seemed to be easy enough for Piper to calculate: it’s morally wrong because it involves animal suffering and/or exploitation. Similarly, I imagine most of us would not refrain from judging someone who slowly tortured our pet dog simply because we can’t be 100% sure that their actions were, on the whole, causing more suffering than they offset. If we cannot calculate welfare tradeoffs in situations like these with some certainty, then any argument for veganism built on the foundation of reducing animal suffering crumbles, as such a goal would be completely ineffective in guiding our actions.

Still having trouble calculating welfare impacts?

All the previous examples do is make people confront a simple fact: they’re often not all that interested in actually “minimizing suffering”. While it sounds like a noble goal – since most people don’t like suffering in the abstract – it’s too broadly phrased to be of any use. This should be expected for a number of reasons, namely that “reducing suffering per se” is a biologically-implausible function for any cognitive mechanism and, even if reducing suffering is the proximate goal in question, there’s pretty much always something else one could do to reduce it. Despite the latter fact, many people, like Piper, effectively give up on the idea once it becomes too much of a personal burden; they’re interested in reducing suffering, just so long as it’s not terribly inconvenient. But if people are not interested in minimizing suffering per se, what is actually motivating that stated interest? Presumably, it has something to do with the signal one sends by taking such a moral stance. I won’t discuss the precise nature of that signal at the present time, but feel free to offer speculations in the comments section.

Might Doesn’t Make Right, But It Helps

There’s no denying the importance and ubiquity of violence and aggression. Despite the suggestion of the owner of the swamp castle in Monty Python and the Holy Grail, people continue to “bicker and argue about who killed who”. Given that anger is often a key motivator of aggression, developing a satisfying account of anger can go a long way towards understanding and predicting when people will be likely to aggress against others. While there has been a great deal of focus placed on reducing violence, there tends to be somewhat less attention paid to understanding the functions and uses of anger. The American Psychological Association, for instance, notes that anger can be a good thing because “it can give you a way to express negative feelings…or motivate you to find solutions to problems”. They also warn that anger can “get out of hand”. While such suggestions sound plausible (minus the idea that “expressing” an emotion is good in and of itself), they tend to lack the ability to deliver suitably textured predictions about the correlates or shape of anger, much less qualify what counts as “getting out of hand”.

Seems like he had that situation completely under control to me.

Of course, that’s not to suggest that anger is always going to be useful in precisely the same measure as it gets delivered; just that we ought to be interested in attempting to understand the emotion before trying to diagnose the problems with it (in much the same fashion, one might wish to understand the function of, say, a fever before figuring out whether we should try to reduce it). Towards that end, I would like to turn to a paper by Sell, Tooby, & Cosmides (2009), who posit an altogether more specific and biologically-plausible function for anger: the regulation and modification of welfare-tradeoff ratios (WTRs). These ratios essentially represent how much of your own welfare you’re willing to give up to improve the welfare of another. To use a simple economics example, imagine choosing between two options: $6 for yourself and $1 for someone else, or $5 for yourself and $5 for someone else. One’s WTR towards that someone else could be approximated, at least in some sense, by their choice in that and other dilemmas. This propensity to suffer losses to benefit others varies considerably across individuals.
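To make the example concrete: taking the second option means giving up $1 of your own payoff to deliver an extra $4 to the other person, which – under the simple (assumed) framing that the chooser maximizes their own payoff plus WTR times the other’s – reveals a WTR of at least 1/4. A sketch of that inference:

```python
def min_wtr(chosen, rejected):
    """Lower bound on the welfare-tradeoff ratio implied by picking the
    (self, other) payoff pair `chosen` over `rejected`, assuming the
    chooser maximizes self + WTR * other. A single choice only
    identifies a bound, not the exact ratio."""
    self_cost = rejected[0] - chosen[0]   # payoff the chooser gave up
    other_gain = chosen[1] - rejected[1]  # payoff the other party gained
    return self_cost / other_gain

# Taking ($5, $5) over ($6, $1): sacrificed $1 so the other gains $4
print(min_wtr(chosen=(5, 5), rejected=(6, 1)))  # 0.25
```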

This basic concept can be readily expanded to the wider social world: everything we do tends to have an effect on others and ourselves, and we would be better off, on the whole, if other people were relatively more willing to take our welfare into account when they acted. Sometimes that works out favorably for both parties, as is often the case in kin relationships (shared genetic interests tend to increase the willingness to trade off your own welfare for another’s); other times, it won’t work out so nicely. Since everyone would be better off if they could increase the WTRs others hold towards them, and not everyone can possibly achieve that goal at once, WTRs tend to be aligned in non-optimal ways from at least someone’s perspective, if not most people’s. So let’s say someone isn’t taking my welfare into account in a way I deem acceptable when they act; what’s a guy to do? One available option is to attempt to “renegotiate” their WTR towards me through the threat of inflicting costs or withdrawing benefits; the kinds of behaviors that anger helps motivate. Anger, then, might serve the function of attempting to regulate other people’s WTRs towards you (or your allies, and you by extension) by signalling the intention to inflict costs after behavior indicative of an unacceptably-low WTR.

This function immediately suggests some design features we ought to expect to find in the cognitive systems regulating anger, because not everyone is equally capable of inflicting costs on others. Accordingly, someone in a better position to inflict costs on others ought to be more readily roused to anger. One obvious indicator of that capacity would be physical formidability: physically stronger males should be more capable of inflicting costs on others, and thus more willing to do so in order to modify the WTRs held by said others. This prediction was borne out well in the data Sell et al (2009) collected: across various measures of men’s strength, the correlations between physical formidability and proneness to anger, history of fighting, sense of entitlement, and the perceived usefulness of violence were all high, ranging from approximately r = 0.3 to 0.5; for women, the same correlations were around 0.05 to 0.1. It was only physical formidability in men that proved to be a good predictor of aggression and anger, which makes a good deal of sense in light of the fact that women tend to be substantially less physically formidable in general.

A relationship that holds even when measured in Hulks.

Women are not without power, though, even if they typically fall behind men in physical strength. Perhaps owing to their ability to recruit the physical strength of others, or to leverage some other social capital, attractive women might also be especially prone to anger. This set of predictions was also confirmed: women who perceived themselves to be attractive – like strong men – were more prone to anger, felt greater entitlement, were more successful in conflicts, and found violence to be more useful, after controlling for physical strength. Attractiveness, however, did not predict history of fighting in women, as was expected. While attractive men also tended to feel a greater sense of entitlement and reported more success in conflicts, the variables relating to fighting ability did not reliably correlate with attractiveness once the effect of physical formidability was partialed out. In other words, in relation to anger, what physical strength was for men, attractiveness was for women.
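For those unfamiliar with the technique, “partialing out” amounts to regressing each variable of interest on the control variable and correlating what is left over. A minimal sketch with invented data (not Sell et al’s) in which strength drives both attractiveness and fighting history:

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation between x and y after removing the linear influence
    of `control` from each (i.e., correlating the residuals)."""
    def residuals(v):
        slope, intercept = np.polyfit(control, v, 1)
        return v - (slope * control + intercept)
    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# Invented data: strength drives both variables, so their raw
# correlation is spurious and vanishes once strength is removed.
rng = np.random.default_rng(1)
strength = rng.normal(size=500)
attractiveness = 0.5 * strength + rng.normal(size=500)
fighting = 0.6 * strength + rng.normal(size=500)

print(round(np.corrcoef(attractiveness, fighting)[0, 1], 2))       # raw: clearly positive
print(round(partial_corr(attractiveness, fighting, strength), 2))  # near zero
```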

It should also be noted that neither attractiveness nor physical strength correlated well with how long people tended to ruminate when angry. It wasn’t simply the case that strong men and attractive women were angrier for longer periods of time. We ought to expect anger to be roused strategically and contextually in order to solve specific problems; not just generally, as that is liable to cause more problems than it solves. These results also cut against some popular misconceptions, like the idea that people become angry to compensate for a lack of physical strength or attractiveness, as the people who lacked those qualities tended to be less prone to anger. These data would also cut against the suggestions from the APA that I initially mentioned: unless there’s some compelling reason to predict that physically strong males and attractive females are particularly likely to be prone to anger in order to “express their emotions” or “solve problems” more generally, we can see that those ostensible functions for anger are clearly lacking in some regards. They fail to deliver good predictions or satisfyingly account for the existing data.

These findings do raise some questions bearing deeper examination. The first of these concerns the often ambiguous nature of causal arrows: do men become more prone to anger and aggression as they become physically stronger, or might there be some developmental window at which point aggressive tendencies become relatively canalized (i.e. does current physical strength matter, or does one’s strength at, say, age 16 matter more)? What role does social influence – in the form of larger groups of allies – play? Are well-liked but physically-weak men less or more likely to become angry easily? Does it matter whether one’s friends are physically imposing? How about if the target of one’s anger is more formidable than the one experiencing the anger? Admittedly, these are tricky questions to answer, owing largely to potential logistical issues in conducting the research in an ecologically-valid context, but they’re certainly worth considering.

“Experimental Recruitment: Please bring a dozen close friends”

Returning to the initial point about when anger gets “out of control”, we can see the question becomes a significantly more nuanced one. For starters, “out of control” will clearly depend on who you ask: while the angry individual might feel that they are not being treated appropriately by others in their social world, the targets of that anger might insist that the angry individual is being unreasonable in their requests for some particular treatment. Further, “out of control” for one individual does not necessarily equal the same amount of aggression for any other, at least in terms of the adaptive value of the behavior. One might also consider, at least at times, a lack of aggression and anger to be unsuitable behavior, such as when meek children are told to stand up to their bullies. The key point here is that we ought to expect all these considerations to vary strategically, rather than as a function of someone needing to “express their emotions” by “venting” them. If Sell et al (2009) are correct, anger can likely be reduced by altering these WTRs in non-aggressive fashions. Once the expected WTR for one party has been reached, the anger systems ought to be deactivated. Whether such methods are likely to be practically feasible is another matter entirely.

References: Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-15078.