Imagine Psychology Without People

In 1971, John Lennon released the now-iconic song “Imagine”. In the song, Lennon invites us to imagine a world without religion, countries, or personal possessions, where everyone coexists in peace with one another. Now, of course, this is not the world in which we exist. In fact, Lennon apparently preferred to keep this kind of world in the realm of imagination himself, using his substantial personal wealth to live a life well beyond his needs; a fact which Elton John once poked fun at, rewriting the lyrics of “Imagine” to begin: “Imagine six apartments; it isn’t hard to do. One’s full of fur coats; the other’s full of shoes”. While Lennon’s song might appear to have an uplifting message (at least superficially; I doubt many of us would really want to live in that kind of world if given the opportunity), the song does not invite us to understand the world as it is: we are asked to imagine another world, not to figure out why our world bears little resemblance to that one.

My imaginations may differ a bit from John’s, but to each their own.

Having recently returned from the SPSP conference (Society for Personality and Social Psychology), I would like to offer my personal reflections on the general state of psychological research from my brief overview of what I saw at the conference. For the sake of full disclosure, I did not attend many of the talks, and I only casually browsed most of the posters that I saw. The reason for this state of affairs, however, is what I would like to focus on today. After all, it’s not that I’m a habitual talk-avoider: at last year’s HBES conference (Human Behavior and Evolution Society), I found myself attending talks around the clock; in fact, I was actually disappointed that I didn’t get to attend more of them (owing in part to the fact that pools tend to conceal how much you’ve been drinking). So what accounted for the differences in my academic attendance at these two conferences? There are two particular factors I would like to draw attention to, which I think paint a good picture of my general impressions of the field of psychology.

The first of these factors was the organization of the two conferences. At HBES, the talks were organized, more or less, by topics: one room had talks on morality, another on life history, the next on cooperation, and so on. At SPSP, the talks were organized, as far as I could tell, anyway, with no particular theme. The talks at SPSP seemed to be organized around whatever people putting various symposiums together wanted to talk about, and that topic tended to be, at least from what I saw, rather narrow in its focus. This brings me to the first big difference between the two conferences, then: the degree of consilience each evidenced. At HBES, almost all the speakers and researchers seemed to share a broader, common theoretical foundation: evolutionary theory. This common understanding was then applied to different sub-fields, but managed to connect all of them into some larger whole. The talks on cooperation played by the same rules, so to speak, as the talks on aggression. By contrast, the psychologists at SPSP did not seem to be working under any common framework. The result of this lack of common grounding is that most of these talks were islands unto themselves, and attending one of them probably wouldn’t tell you much about any others. That is to say that a talk at SPSP might give you a piece of evidence concerning a particular topic, but it wouldn’t help you understand how to think about psychology (or even that topic) more generally. The talks on self-affirmation probably wouldn’t tell you anything about the talks on self-regulation, which in turn bear little resemblance to talks on sexism.

The second big issue is related to the first, and it is where our tie-in to John Lennon’s song arises. I want you to imagine a world in which psychology was not, by and large, the study of human psychology and behavior in particular, but rather the study of psychology among life in general. In this world we’re imagining, humans, as a species, don’t exist as far as psychological research is concerned. Admittedly, such a suggestion might not lend itself as well to song as Lennon’s “Imagine”, but unlike Lennon’s song, this imagination actually leads us to a potentially useful insight. In this new world – psychology without people – I only anticipate that one of these two conferences would actually exist: HBES. The theoretical framework of the researchers at HBES can help us understand things like cooperation, the importance of kinship, signaling, and aggression regardless of what species we happen to be talking about. Again, there’s consilience when using evolutionary theory to study psychology. But what about the SPSP conference? If we weren’t talking about humans, would anyone seriously try to use concepts like the “glass ceiling”, “self-affirmation”, “stereotypes”, or “sexism” to explain the behavior of any non-human organisms? Perhaps; I’ve just never seen it happen.

“Methods: We exposed birds to a stereotype threat condition…”

Now, sure; plenty of you might be thinking something along the lines of, “but humans are special and unique; we don’t play by the same rules that all other life on this planet does. Besides, what can the behavior of mosquitoes, or the testicle size of apes, tell us about human psychology anyway?” Such a sentiment appears to be fairly common. What’s interesting to note about that thought, however, is not only that it seems to confirm that psychology suffers from a lack of consilience, but, more importantly, that it is markedly mistaken. Yes; humans are a unique species, but then so is every other species on the planet. It doesn’t follow from our uniqueness that we’re not still playing the same game, so to speak, and being governed by the same rules. For instance, all species, unique as they are, are still subject to gravitational forces. By understanding gravity we can understand the behavior of many different falling objects; we don’t need separate fields of inquiry as to how one set of objects falls uniquely from the others. Insisting that humans are special in this regard would be a bit like an ornithologist insisting that the laws of gravity don’t apply to most bird species because they don’t fall like rocks tend to. Similarly, all life plays by the rules of evolution. By understanding a few key evolutionary principles, we can explain a remarkable amount of the variance in the way organisms behave without needing disparate fields for each species (or, in the case of psychology, disparate fields for every topic).

Let’s continue to imagine a bit more: if psychology had to go forward without studying people, how often do you think you would find people advocating suggestions like this:

If our university community opposes racism, sexism, and heterosexism, why should we put up with research that counters our goals simply in the name of “academic freedom”?…When an academic community observes research promoting or justifying oppression, it should ensure that this research does not continue.

Maybe in our imaginary world of psychological research without people there would be some who seriously suggested that we should not put up with certain lines of research. Maybe research on, say, the psychology of mating in rabbits should not be tolerated, not because it’s inaccurate, mind you, but rather because the results of it might be opposed to the predetermined conclusions of anti-rabbit-heterosexism-oppression groups. Perhaps research on how malaria seems to affect the behavior of mosquitoes shouldn’t be tolerated because it might be used to oppress mosquitoes with seemingly “deviant” or extreme preferences for human blood. Perhaps these criticisms might come up, but I don’t imagine such opposition would be terribly common when the topic wasn’t humans.

“Methods: We threatened the elephant seal’s masculinity…”

So why didn’t I attend as many talks at SPSP as I did at HBES? First, there was the lack of consilience: without the use or consideration of evolutionary theory explicitly, a lot of the abstracts for research at SPSP sounded as if they would represent more of an intellectual spinning of wheels rather than a forwarding of our knowledge. This perception, I would add, doesn’t appear to be unique to me; certain psychological concepts seem to have a nasty habit of decaying in popularity over time. I would chalk that up to their lack of being anchored to or drawn from some underlying theoretical concept, but I don’t have the data on hand to back that up empirically at the moment. The second reason I didn’t attend as many talks at SPSP was because some of them left me with the distinct sense that the research was being conducted with some social or political goal in mind. While that’s not to say it necessarily disqualifies the research from being valuable, it does immediately make me skeptical (for instance, if you’re researching “stereotypes”, you might want to test their accuracy before you write them off as a sign of bias. This was not done at the talks I saw).

Now all of this is not simply said in the service of being a contrarian (fun as that can be) nor am I saying that every piece of research to come out of an evolutionary paradigm is good; I have attended many low- to mid-quality talks and posters at the evolutionary conferences I’ve been to. Rather, I say all this because I think there’s a lot of potential for psychological research in general to improve, and the improvement itself wouldn’t be terribly burdensome to achieve. The tools are already at our disposal. If we can collectively manage to stop thinking of human behavior as something requiring a special set of explanations and start seeing it within a larger evolutionary perspective, a substantial amount of the battle will already be won. It just takes a little imagination.

Does Grief Help Recalibrate Behavior?

Here’s a story which might sound familiar to all of you: one day, a young child is wandering around in the kitchen while his parents are cooking. This child, having never encountered a hot stove before, reaches up and brushes his hand against the hot metal. Naturally, the child experiences physical pain and withdraws his hand. In order to recalibrate his behavior so as to avoid future harms, then, the child spends the next week unable to get out of bed – owing to persistent low energy – and repeatedly thinks about touching the hot stove and how sad it made him feel. For the next year, the child returns to the spot where he burned his hand, leaving flowers, and cries for a time in remembrance. OK; so maybe that story doesn’t sound familiar at all. In fact, the story seems absurd on the face of it: why would the child go through all that grief in order to recalibrate his stove-touching behavior when he could, it seems, simply avoid touching the hot stove again? What good would all that additional costly grief and depression do? Excellent question.

Unfortunately, chain emails do not offer learning trials for recalibration.

In the case of the hot stove, we could conclude that grief would likely not add a whole lot to the child’s ability to recalibrate his behavior away from stove-touching. It doesn’t seem like a very efficient way of doing so, and the fit between the design features of grief and the demands of recalibration seems more than a bit mismatched. I bring these questions up in response to a suggestion I recently came across by Tooby & Cosmides, with whom I generally find myself in agreement (it’s not a new suggestion; I just happened to come across it now). The pair, in discussing emotions, have this to say about grief:

Paradoxically, grief provoked by death may be a byproduct of mechanisms designed to take imagined situations as input: it may be intense so that, if triggered by imagination in advance, it is properly deterrent. Alternatively – or additionally – grief may be intense in order to recalibrate weightings in the decision rules that governed choices prior to the death. If your child died because you made an incorrect choice (and given the absence of a controlled study with alternative realities, a bad outcome always raises the probability that you made an incorrect choice), then experiencing grief will recalibrate you for subsequent choices. Death may involve guilt, grief, and depression because of the problem of recalibration of weights on courses of action. One may be haunted by guilt, meaning that courses of action retrospectively judged to be erroneous may be replayed in imagination over and over again, until the reweighting is accomplished.

So Tooby and Cosmides posit two possible functions for grief here: (1) there isn’t a function per se; it’s just a byproduct of a mechanism designed to use imagined stimuli to guide future behavior, and (2) grief might help recalibrate behavior so as to avoid outcomes that previously have carried negative fitness consequences. I want to focus on the second possibility because, as I initially hinted at, I’m having a difficult time seeing the logic in it.

One issue I seem to be having concerns the suggestion that people might cognitively replay traumatic or grief-inducing events over and over in order to better learn from them. Much like the explanation often on offer for depression, then, grief might function to help people make better decisions in the future. That seems to be the suggestion Tooby & Cosmides are getting at, anyway. As I’ve written before, I don’t think this explanation is plausible on the face of it. At least in terms of depression, there’s very little evidence that depression actually helps people make better decisions. Even if it did, however, it would raise the question as to why people don’t always make use of this strategy: if people could learn better by replaying events over and over, why would we ever perform worse when we could be performing better? In order to avoid making what I nicknamed the Dire Straits fallacy (from their lyric “money for nothing and the chicks for free”), the answer to that question would inevitably involve referencing some costs to replaying events over and over again. If there were no such costs to replay, and replay led to better outcomes, replay should be universal, which it isn’t; at least not to nearly the same degree. Accordingly, any explanation for why people use grief as a mechanism for improved learning outcomes would need to make some reference as to why grief-struck individuals are better able to bear those costs for the benefits continuous replay provides. Perhaps such an explanation exists, but it’s not present here.

One might also wonder how replaying some tragic event over and over would help one learn from it. That is, does replaying the event actually help one extract additional useful information from the memory? As we can see from the initial example, rumination is often not required to quickly and efficiently learn connections between behaviors and outcomes. To use the Tooby & Cosmides example, if your child died because you made an incorrect choice, why would ruminating for weeks or longer help you avoid making that choice again? The answer to that question should also explain why rumination would not be required for effective learning in the case of touching the hot stove.

It should only be a few more weeks of this until she figures out that babies need food.

One might also suggest that once the useful behavioral-recalibration-related information has been extracted from the situation, continuing to replay the grief-inducing event would be wasted time, so the grief should stop. Tooby & Cosmides make this suggestion, writing:

After the 6-18 month period, the unbidden images suddenly stop, in a way that is sometimes described as “like a fever breaking”: this would be the point at which the calibration is either done or there is no more to be learned from the experience

The issue I see with that idea, however, is that unless one is positing that it can take weeks, months, or even years to extract the useful information from the event, it seems unlikely that much of that replay involves helping people learn and extract information. Importantly, to the extent that decisions like these (i.e., “what were you doing that led to your child’s death that you shouldn’t do again?”) were historically recurrent and posed adaptive problems, we should expect evolved cognitive decision-making modules to learn from them quickly and efficiently. A mechanism that takes weeks, months, or even years to learn from an event by playing it over and over again should be at a massive disadvantage relative to a mechanism that can make those same learning gains in seconds or minutes. A child that needed months to learn not to touch a hot stove might be at risk of touching the stove again; if the child immediately learned not to do so, there’s little need to go on grieving about it for months following the initial encounter. Slow learning is, on the whole, a bad thing which carries fitness costs; not a benefit. Unless there’s something special about grief-related learning that requires it take so long – some particularly computationally-demanding problem – the length of grief seems like a peculiar design feature for recalibrating one’s own behavior.

This, of course, all presumes that the grief-recalibration learning mechanisms know how to recalibrate behavior in the first place. If your child died because of a decision you made, there are likely very many decisions you made which might or might not have contributed to that outcome. Accordingly, there are very many ways in which you might potentially recalibrate your behavior to avoid such an outcome in the future, very few of which will actually be of any use. So your grief mechanism would need to know, at a minimum, which decisions to focus on. Further still, the mechanism would need to know whether recalibration was even possible in the first place. In the case of a spouse dying from something related to old age, or a child dying from an illness or accident, all the grieving in the world wouldn’t necessarily be able to effect any useful change the next time around. So we might predict that people should only tend to grieve selectively: when doing so might help avoid such outcomes in the future. This means people shouldn’t tend to grieve when they’re older (since they have less time to potentially change anything) or about negative outcomes beyond their control (since no recalibration would help). As far as I know (which, admittedly, isn’t terribly far in this domain), this isn’t the case. Perhaps an astute reader could direct me to research where predictions like these have been tested.

Finally, humans are far from the only species which might need to recalibrate their behavior. Now, it’s difficult to say precisely what other species feel, since you can’t just ask them, but do other species feel grief the same way humans do? The grief-as-recalibration model might predict that they should. Again, the depth of my knowledge on grief is minimal, so I’m forced to ask these questions out of genuine curiosity: do other species evidence grief-related behaviors? If so, in what contexts are these behaviors common, and why might those contexts be expected to require more behavioral recalibration than non-grief-inducing situations? If animals do not show any evidence of grief-related behaviors, why not? These are all matters which would need to be sorted out. To avoid the risk of being critical without offering any alternative insight, I would propose an alternative function for grief similar to what Ed Hagen proposed for depression: grief functions to credibly signal one’s social need.

“Aww. Looks like someone needs a hug”

Events that induce grief – like the loss of close social others or other major fitness costs – might tend to leave the griever in a weakened social position. The loss of mates, allies, or access to resources poses major problems for species like us. In order to entice investment from others to help remedy these problems, however, you need to convince those others that you actually do have a legitimate need. If your need is not legitimate, then investment in you might be less likely to pay off. The costly, extended periods of grief, then, might help signal to others that one’s need is legitimate, and make one appear to be a better target of subsequent investment. The adaptive value of grief in this account lies not in what it makes the griever do per se; what the griever is doing is likely maladaptive in and of itself. However, that personally-maladaptive behavior can have an effect on others, leading them to provide benefits to the grieving individual in an adaptive fashion. In other words, grief doesn’t serve to recalibrate the griever’s behavior so much as it serves to recalibrate the behavior of social others who might invest in the griever.

Of Pathogens And Social Support

Though I’m usually consistent with updating about once a week, this last week and a half has found me out of sorts. Apparently, some infection managed to get the better of my body for a while, and most of the available time I had went into managing my sickness and taking care of the most important tasks. Unfortunately, that also meant taking time away from writing, but now that I’m back on my feet I would like to offer some reflections on that rather grueling experience. One rather interesting – or annoying, if you’re me – facet of this last infection was the level of emotional intensity I found myself experiencing: I felt as if I wanted to be around other people while I was sick, which is something of an unusual experience for me; I found myself experiencing a greater degree of empathy with other people’s experiences than usual; I also found myself feeling, for lack of a better word, lonely, and a bit on the anxious side. Being the psychologist that I am, I couldn’t help but wonder what the ultimate function of these emotional experiences was. They certainly seemed to be driving me towards spending time around other people, but why?

And don’t you dare tell me it’s because company is pleasant; we all know that’s a lie.

Specifically, my question was whether these feelings of wanting to spend more time around others were being driven primarily by some psychological mechanism of mine functioning in my own fitness interests, or whether they were being driven by whatever parasite had colonized parts of my body. A case could be made for either option, though the case for parasite manipulation is admittedly more speculative, so let’s start with the idea that my increased desire for human contact might have been the result of the proper functioning of my psychology. Though I do not have any research on hand that directly examines the link between sickness and the desire for social closeness with others, I happen to have what is, perhaps, the next best thing: a paper by Aaroe & Petersen (2013) examining what effects hunger has on people’s willingness to advocate for resource-sharing behavior. Since the underlying theory behind the sickness-induced emotionality on my part and the hunger-induced resource sharing are broadly similar, examining the latter can help us understand the former.

Aaroe & Petersen (2013) begin with a relatively basic suggestion: resource acquisition posed an adaptive problem for ancestral human populations. We all need caloric resources to build and maintain our bodies, as well as to do all the reproductively-useful things that organisms which move about their environment do. One way of solving this problem, of course, is to go out hunting or foraging for food oneself. However, this strategy can, at times, be unsuccessful. Every now and again, people will come home empty-handed and hungry. If one happens to be a member of a social species, like us, that’s not the only game in town, though: if you’re particularly cunning, you can manipulate successful others into sharing some of their resources with you. Accordingly, Aaroe & Petersen (2013) further suggest that humans might have evolved some cognitive mechanism that responds to bodily signals of energy scarcity by attempting to persuade others to share more. Specifically, if your blood glucose level is low, you might be inclined to advocate for social policies that encourage others to share their resources with you.

As an initial test of this idea, the researchers had 104 undergraduates fast for four hours prior to the experiment. As if not eating for four hours wasn’t already a lot to ask, upon their arrival at the experiment all the participants had their blood glucose levels measured in a process I can only assume (unfortunately for them) involved a needle. After the initial measurement, the subjects were given either a sugar-rich drink (Sprite) or a sugarless drink (Sprite Zero). Ten minutes after the drink, their blood glucose levels were measured again (and a third time as they were leaving, which is a lot of pokes), and participants were asked about their support for various social redistribution policies. They were also asked to play a dictator game and divide approximately $350 between themselves and another participant, with one set of participants actually getting the money from that division. So the first test was designed to see whether participants would advocate for more sharing behavior when they were hungry, whereas the second test was designed to see whether participants would actually demonstrate more generous behavior themselves.

Way to really earn your required undergrad research credits.

The results showed that the participants who had consumed the sugar-rich drink had higher blood glucose levels than the control group, and were also approximately 10% less supportive of social-welfare policies than those in the sugar-free condition. This lends some support to the idea that our current hunger level, at least as assessed by blood glucose levels, helps determine how much we are willing to advocate that other people share with one another: hungry individuals wanted more sharing, whereas less-hungry individuals wanted less. What about their actual sharing behavior, though? As it turns out, those who support social-welfare policies are more likely to share with others, but those who had low blood-glucose were less likely to do so. These two effects ended up washing out, with the result being that blood glucose had no effect on how much the participants actually decided to divide a potential resource themselves. While hungry individuals advocated that other people should share, then, they were no more likely to share themselves. They wanted others to be more generous without paying the costs of such generosity personally.

So perhaps my sickness-induced emotionality reflected something along those same lines: sick individuals find themselves unable to complete all sorts of tasks – such as resource acquisition or defense – as effectively as non-sick individuals. Our caloric resources are likely being devoted to other tasks, such as revving up our immune response. Thus, I might have desired that other people, in essence, take care of me while I was sick, with those emotions – such as increased loneliness or empathy – providing the proximate motivation to seek out such investment. If the current results are any indication, however, I would be unlikely to practice what I preach; I would want people to take care of me without my helping them any more than usual. How very selfish of me and my emotions. So that covers the idea that my behavior was driven by some personal fitness benefits, but what about the alternative? The pathogens that were exploiting my body have their own set of fitness interests, after all, and part of those interests involves finding new hosts to exploit and in which to reproduce. It follows, at least in theory, that the pathogens might be able to increase their own fitness by manipulating my mind in such a way as to encourage me to seek out other conspecifics in my environment.

The more time I spent around other individuals, the greater the chance I would spread the infection, especially given how much I was coughing. If the pathogens affect my desire to be around others by making me feel lonely or anxious, then, they can increase their own fitness. This idea is by no means far-fetched. There are many known instances of pathogens influencing their host’s behavior, and I’ve written a little bit before about one of them: the psychological effects that malaria can have on the behavior of its host mosquitoes. Mosquitoes which are infected with malaria seem to preferentially feed from humans, whereas mosquitoes not so infected do not show any evidence of such preferential behavior. This likely owes to the malaria benefiting itself by manipulating the behavior of its mosquito host. The malaria wants to get from human to human, but it needs to do so via mosquito bites. If the malaria can make its host preferentially try to feed from humans, the malaria can reproduce more quickly and effectively. There are also some plausible theoretical reasons for suspecting that some pathogen(s) might play a role in the maintenance of human homosexual orientations, at least in males. The idea that pathogens can affect our psychologies more generally, then, is far from an impossibility.

“We hope you don’t mind us making your life miserable for this next week too much, because we’re doing it anyway.”

The question of interest, however, is whether the pathogens were directly responsible for my behavior or not. As promised, I don’t have an answer to the question. I don’t know what I was infected with specifically, much less what compounds it was or wasn’t releasing into my body, or what effect they might have had on my behavior. Further, if I already possessed some adaptations for seeking out social support when sick, there would be less of a selective pressure for the pathogens to encourage my doing so; I would already be spreading the pathogen incidentally through my behavior. The real point of this question is not necessarily to answer it, however, as much as it’s to get us thinking about how our psychology might not, at least at times, be our own, so to speak. There are countless other organisms living within (and outside of) our bodies that have their own sets of fitness interests which they might prefer we indulge, even at the expense of our own. As for me, I’m just happy to be healthy again, and to feel like my head is screwed back on to where it used to be.

References: Aaroe, L. & Petersen, M. (2013). Hunger games: Fluctuations in blood glucose levels influence support for social welfare. Psychological Science, 24, 2550-2556.

Why Parents Affect Children Less Than Many People Assume

Despite what a small handful of detractors have had to say, inclusive fitness theory has proved to be one of the most valuable ideas we have for understanding much of the altruism we observe in both human and non-human species. The basic logic of inclusive fitness theory is simple: genes can increase their reproductive fitness by benefiting other bodies that contain copies of them. So, since you happen to share 50% of your genes in common by descent with a full sibling, you can, to some extent, increase your own reproductive fitness by increasing theirs. This logic is captured by the deceptively-tiny formula of rb > c. In English, rather than math, the formula states that altruism will be favored so long as the benefit delivered to the receiver, discounted by the degree of relatedness between the two, is greater than the cost to the giver. To use the sibling example again, altruism toward a full sibling would be favored by selection if the benefit you provided increased their reproductive success by more than twice as much as it cost you to give, even if there was zero reciprocation.

“You scratch my back, and then you scratch my back again”

While this equation highlights why a lot of “good/nice” behaviors are observed – like childcare – there’s a darker side to it as well. By dividing each side of the inclusive fitness equation by r, you get this: b > c/r. What this new equation highlights is the selfish nature of these interactions: relatives can be selected to benefit themselves by inflicting costs on their kin. In the case of full siblings, I should be expected to value my own benefit twice as much as theirs; for half siblings, I should value myself four times as much, and so on. Let’s stick to full siblings for now, just to stay consistent. Each sibling within a family should, all else being equal, be expected to value itself twice as much as it values any other sibling. The parents of these siblings, however, see things very differently: from the perspective of the parent, each of these siblings is equally related to them, so, in theory, they should value each of these offspring equally (again, all else being equal. All else is almost never equal, but let’s assume it is to keep the math easy).

This means that parents should prefer that their children act in a particular way: specifically, parents should prefer their children to help each other whenever the benefit to one outweighs the cost to the other, or b > c. The children, on the other hand, should only wish to behave that way when the benefit to their sibling is at least twice the cost to themselves, or b > 2c. This yields the following conclusion: how parents would like their children to behave does not necessarily correspond to what is in the child’s best fitness interests. Parents hoping to maximize their own fitness have different best interests from children hoping to maximize theirs. Children who behaved as their parents preferred would be at a reproductive disadvantage, then, relative to children who were resistant to such parental expectations. This insight was formalized by Trivers (1974) when he wrote:

  “…an important feature of the argument presented here is that offspring cannot rely on parents for disinterested guidance. One expects the offspring to be pre-programmed to resist some parental teachings while being open to other forms. This is particularly true, as argued below, for parental teaching that affects the altruistic and egoistic tendencies of the offspring.” (p. 258)

While parents might feel as if they are only acting in the best interests of their children, the logic of inclusive fitness suggests strongly that this feeling might represent an attempt at manipulating others, rather than a statement of fact. To avoid sounding one-sided, this argument cuts in the other direction as well: children might experience their parents’ treatment of them as being less fair than it actually is, as each child would like to receive twice the investment that parents should naturally be willing to give. The take-home message of this point, however, is simply that children who were readily molded by their parents should be expected to have reproduced less, relative to children who were not so affected, leaving fewer copies of those malleable tendencies behind. In some regards, children should be expected to actively disregard what their parents want for them.
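For the quantitatively inclined, the parent-offspring conflict just described can be sketched in a few lines of Python. This is my own illustration, not anything from Trivers: the actor helps when rb > c, while the parent prefers the help whenever b > c, leaving a zone of disagreement between the two.

```python
# A minimal sketch of the zone of parent-offspring conflict. Helping a
# relative is favored by the helper when r*b > c (equivalently, b > c/r);
# the parent, equally related to each offspring, prefers the help
# whenever b > c.

def actor_favors_helping(b, c, r):
    """Hamilton's rule from the helper's perspective."""
    return r * b > c

def parent_favors_helping(b, c):
    """The parent's preferred rule: help whenever benefit exceeds cost."""
    return b > c

# Full siblings (r = 0.5): a benefit of 1.5 at a cost of 1 falls squarely
# in the conflict zone (c < b < c/r) -- the parent favors helping, but
# the child does not.
b, c, r = 1.5, 1.0, 0.5
print(parent_favors_helping(b, c))    # True
print(actor_favors_helping(b, c, r))  # False: 0.5 * 1.5 = 0.75 < 1
```

Once the benefit reaches twice the cost (b > 2c), both parties agree that helping pays, which is just the b > 2c threshold for full siblings stated above.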

“My parents want me to brush my teeth. They’re such fascists sometimes.”

There are other reasons to expect that parents should not tend to leave lasting impressions on their children’s eventual personalities. One very good reason also has to do with the inclusive fitness logic laid out initially: because parents tend to be 50% genetically related to their children, parents should be expected to invest in their children fairly heavily, relative to non-children at least. The corollary to this idea is that non-parents should be expected to treat the child substantially differently than their parents do. This means that a child should be relatively unable to learn, from interactions with their parents, what counts as appropriate behavior towards others more generally. Just because a proud parent has hung their child’s scribbled artwork on the household refrigerator, it doesn’t mean that anyone else will come to think of the child as a great artist. A relationship with your parents is different from a relationship with your friends, which is different again from a sexual relationship, in a great many ways. Even within these broad classes of relationships, you might behave differently with one friend than you do with another.

We should expect our behavior around these different individuals to be context-specific. What you learn about one relationship might not readily transfer to any other. Though a child might be unable to physically dominate their parents, they might be able to dominate their peers; some jokes might be appropriate amongst friends, but not with your boss. Though some of what you learn about how to behave around your parents might transfer to other situations (such as the language you speak, if your parents happen to be speakers of the native tongue), it also may not. When it does not transfer, we should expect children to discard what they learned about how to behave around their parents in favor of more context-appropriate behaviors (indeed, when children find that their parents speak a different language than their peers, the child will predominantly learn to speak as their peers do; not as their parents do). While a parent’s behavior should be expected to influence how that child behaves around that parent, we should not necessarily expect it to influence the child’s behavior around anyone else.

It should come as little surprise, then, that being raised by the same parents doesn’t actually tend to make children any more similar with respect to their personality than being raised by different ones. Tellegen et al. (1988) compared 44 identical twin (MZ) pairs raised apart with 217 MZ pairs reared together, along with 27 fraternal twin (DZ) pairs reared apart and 114 reared together. In terms of their personality measures, the MZ twins were far more alike than the DZ twins, as one would expect from their shared genetics. When it came to the effect of rearing, however, MZ twins reared together were more highly correlated on 7 of the measures, while those reared apart were more highly correlated on 6 of them. In terms of the DZ twins, those reared together were higher on 9 of the variables, whereas those reared apart were higher on the remaining 5. The size of these differences, when they did exist, was often exceedingly small, typically amounting to a correlation difference of about 0.1 between the pairs, or 1% of the variance.
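That last bit of arithmetic – turning a correlation of 0.1 into “1% of the variance” – is just squaring the correlation coefficient. A quick sketch, to make the step explicit (the function name is mine):

```python
# The proportion of variance one variable accounts for in another is the
# square of their correlation (r-squared), so a correlation difference on
# the order of r = 0.1 corresponds to only about 1% of the variance.

def variance_explained(r):
    """Proportion of variance accounted for by a correlation r."""
    return r ** 2

print(round(variance_explained(0.1), 4))  # 0.01, i.e., 1% of the variance
```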

Pick the one you want to keep. I’d recommend the cuter one.

Even if twins reared together had ended up being substantially more similar than twins reared apart – which they didn’t – this would still not demonstrate that parenting was the cause of that similarity. After all, twins reared together tend to share more than their parents; they also tend to share various aspects of their wider social life, such as extended families, peer groups, and other social settings. There are good empirical and theoretical reasons for thinking that parents have less of a lasting effect on their children than many often suppose. That’s not to say that parents don’t have any effects on their children, mind you; just that the effects they do have ought to be largely limited to their particular relationship with the child in question, barring the infliction of any serious injuries or other such issues that will transfer from one context to another. Parents can certainly make their children more or less happy when they’re in each other’s presence, but so can friends and more intimate partners. In terms of shaping their children’s later personality, it truly takes a village.

References: Tellegen et al. (1988). Personality similarity in twins reared apart and together. Journal of Personality and Social Psychology, 54, 1031-1039.

Trivers, R. (1974). Parent-offspring conflict. American Zoologist, 14, 249-264.

What Makes Incest Morally Wrong?

There are many things that people generally tend to view as disgusting or otherwise unpleasant. Certain shows, like Fear Factor, capitalize on those aversions, offering people rewards if they can manage to suppress those feelings to a greater degree than their competitors. Of the people who watched the show, many would probably tell you that they would be personally unwilling to engage in such behaviors; what many do not seem to say, however, is that others should not be allowed to engage in those behaviors because they are morally wrong. Fear- or disgust-inducing, yes, but not behavior explicitly punishable by others. Well, most of the time, anyway; a stunt involving drinking donkey semen apparently made the network hesitant about airing it, likely owing to the idea that some moral condemnation would follow in its wake. So what might help us understand why some disgusting behaviors – like eating live cockroaches or submerging one’s arm in spiders – are not morally condemned, while others – like incest – tend to be?

Emphasis on the “tend to be” in that last sentence.

To begin our exploration of the issue, we can examine some research on the cognitive mechanisms underlying incest aversion. Now, in theory, incest should be an appealing strategy from a gene’s-eye perspective. This is due to the manner in which sexual reproduction works: by mating with a full sibling, your offspring would carry 75% of your genes in common by descent, rather than the 50% you’d expect if you mated with a stranger. If those hyper-related siblings in turn mated with one another, after a few generations you’d have people giving birth to infants that were essentially genetic clones. However, such inbreeding appears to carry a number of potentially harmful consequences. Without going into too much detail, here are two candidate explanations one might consider for why inbreeding isn’t a more popular strategy: first, it increases the chances that two harmful, but otherwise rare, recessive alleles will match up with one another. The result frequently involves all sorts of nasty developmental problems that don’t bode well for one’s fitness.
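Where does that 75% figure come from? You pass on half the genome through your own gametes, and your mate’s half-contribution is related to you by however related you are to the mate. A rough sketch of that arithmetic (the function name is my own):

```python
# Relatedness to your offspring: your own gametic contribution (0.5)
# plus the mate's contribution (0.5), discounted by your relatedness to
# the mate. With an unrelated mate the second term is zero; with a full
# sibling (r = 0.5) it adds another 0.25, giving the 75% figure.

def relatedness_to_offspring(r_with_mate):
    """Expected relatedness to an offspring, given relatedness to the mate."""
    return 0.5 + 0.5 * r_with_mate

print(relatedness_to_offspring(0.0))  # 0.5  (unrelated mate)
print(relatedness_to_offspring(0.5))  # 0.75 (full-sibling mate)
```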

A second potential issue involves what is called the Red Queen hypothesis. The basic idea here is that the asexual parasites that seek to exploit their host’s body reproduce far more quickly than their hosts do. A bacterium can go through thousands of generations in the time a human goes through one. If we were giving birth to genetically-identical clones, then, the parasites would find themselves well-adapted to life inside their host’s offspring, and might quickly end up exploiting said offspring. The genetic variability introduced by sexual reproduction might help larger, longer-lived hosts keep up in the evolutionary race against their parasites. Though there may well be other viable hypotheses concerning why inbreeding is avoided in many species, the take-home point for our current purposes is that organisms often appear as if they are designed to avoid breeding with close relatives. This poses a problem that many species need to solve, however: how do you know who your close kin are? Barring some effective spatial dispersion, organisms will need some proximate cues that help them differentiate between their kin and non-kin so as to determine which others are their best bets for reproductive success.

We’ll start with perhaps the most well-known piece of research on incest avoidance in humans. The Westermarck effect refers to the idea that humans appear to become sexually disinterested in those with whom they spent most of their early life. The logic of this effect goes (roughly) as follows: your mother is likely to be investing heavily in you when you’re an infant, in no small part owing to the fact that she needs to breastfeed you (prior to the advent of alternative technologies). Those who spend a lot of time around you and your mother are, accordingly, more likely to be kin than those who spend less time in your proximity. That degree of proximity ought, in turn, to generate a kinship index with others that produces disinterest in sexual contact with such individuals. While such an effect doesn’t lend itself nicely to controlled experiments, there are some natural contexts that can be examined as pseudo-experiments. One of these was the Israeli Kibbutzim, where children were predominantly raised in similarly-aged, mixed-sex peer groups. Of the approximately 3000 children examined from these Kibbutzim, there were only 14 cases of marriage between individuals from the same group, and almost all of those were between people introduced to the group after the age of 6 (Shepher, 1971).

Which is probably why this seemed like a good idea.

The effect of being raised in such a context didn’t appear to provide all the cues required to trigger the full suite of incest-aversion mechanisms, however, as evidenced by some follow-up research by Shor & Simchai (2009). The pair carried out interviews with 60 members of the Kibbutzim to examine the feelings these members had towards each other. A little more than half of the sample reported having either moderate or strong attractions towards other members of their cohort at some point; almost all the rest reported sexual indifference, as opposed to the typical kind of aversion or disgust people report in response to questions about sexual attraction towards their blood siblings. This finding, while interesting, needs to be considered in light of the fact that almost no sexual interactions occurred between members of the same peer group, and in light of the fact that there did not appear to be any strong moral prohibition against such behavior.

Something like a Westermarck effect might explain why people weren’t terribly inclined to have intercourse with their own kin, but it would not explain why people think that others having sex with close kin is morally wrong. Moral condemnation is not required for guiding one’s own behavior; it appears better suited for attempting to guide the behavior of others. When it comes to incest, the likely others whose behavior one might wish to guide are one’s close kin. This is what led Lieberman et al (2003) to derive some predictions about what factors might drive people’s moral attitudes about incest: the presence of others who are liable to be your close kin, especially if those kin are of the opposite sex. If duration of co-residence during infancy is used as a proximate cue for determining kinship, then that duration might also be used as an input condition for determining one’s moral views about the acceptability of incest. Accordingly, Lieberman et al (2003) surveyed 186 individuals about their history of co-residence with other family members and their attitudes towards how morally unacceptable incest is, along with a few other variables.

What the research uncovered was that duration of co-residence with an opposite-sex sibling predicted the subjects’ moral judgments concerning incest. For women, the total years of co-residence with a brother correlated with judgments of the wrongness of incest at about r = 0.23, and that held whether the period from 0 to 10 or 0 to 18 years was under investigation; for men with a sister, a slightly higher correlation emerged from 0 to 10 years (r = 0.29), and an even larger one when the period was expanded to age 18 (r = 0.40). Further, such effects remained largely unchanged even after the number of siblings, parental attitudes, sexual orientation, and the actual degree of relatedness between those individuals were controlled for. None of those factors managed to uniquely predict moral attitudes towards incest once duration of co-residence was controlled for, suggesting that it was the duration of co-residence itself driving these effects on moral judgments. So why did this effect not appear to show up in the case of the Kibbutzim?

Perhaps the driving cues were too distracted?

If the cues to kinship are somewhat incomplete – as they likely were in the Kibbutzim – then we ought to expect moral condemnation of such relationships to be incomplete as well. Unfortunately, there doesn’t exist much good data on that point that I am aware of but, on the basis of Shor & Simchai’s (2009) account, there was no condemnation of such relationships in the Kibbutzim that rivaled the kind seen in the case of actual families. What their account does suggest is that more cohesive groups experienced less sexual interest in their peers; a finding that dovetails with the results from Lieberman et al (2003): cohesive groups might well have spent more time together, resulting in less sexual attraction due to greater degrees of co-residence. Despite Shor & Simchai’s suggestion to the contrary, their results appear to be consistent with a Westermarck kind of effect, albeit an incomplete one. Though the duration of co-residence clearly seems to matter, the precise way in which it matters likely involves more than a single cue to kinship. What connection might exist between moral condemnation and active aversion to the idea of intercourse with those one grew up around is a matter I leave to you.

References: Lieberman, D., Tooby, J., & Cosmides, L. (2003). Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest. Proceedings of the Royal Society of London B, 270, 819-826.

Shepher, J. (1971). Mate selection among second generation kibbutz adolescents and adults: Incest avoidance and negative imprinting. Archives of Sexual Behavior, 1, 293-307.

Shor, E. & Simchai, D. (2009). Incest avoidance, the incest taboo, and social cohesion: Revisiting Westermarck and the case of the Israeli Kibbutzim. American Journal of Sociology, 114, 1803-1846.

Proximate And Ultimate Moral Culpability

Back in September, I floated an idea about our moral judgments: that intervening causes between an action and its outcome could serve to partially mitigate the severity of those judgments. This would owe itself to the potential that each intervening cause has for presenting a new target of moral responsibility and blame (i.e., “if only the parents had properly locked up their liquor cabinet, then their son wouldn’t have gotten drunk and wrecked their car”). As the number of these intervening causes increases, the number of potentially blameable targets increases, which should be expected to diminish the ability of third-party condemners to achieve any kind of coordination in their decisions. Without coordination, enacting moral punishment becomes costlier, all else being equal, and thus we might expect people to condemn others less harshly in such situations. Well, as it turns out, some research was conducted on this topic a mere four decades ago that I was unaware of at the time. Someone call Cold Stone, because it seems I’ve been scooped again.

To get your mind off that stupid pun, here’s another.

One of these studies comes from Brickman et al (1975), and involved examining how people assign responsibility for a car accident that had more than one potential cause. Since there are a number of comparisons and causes I’ll be discussing, I’ve labeled them for ease of following along. The first of these causes were proximate in nature: internal alone (1. a man hit a tree because he wasn’t looking at the road) or external alone (2. a man hit a tree because his steering failed). However, there were also ultimate causes for each of these proximate causes, leading to four additional sets: two internal (3. a man hit a tree because he wasn’t looking at the road; he wasn’t looking at the road because he was daydreaming), two external (4. a man hit a tree because his steering failed; his steering failed because the mechanic had assembled it poorly when repairing it), or a mix of the two. The first of these mixes (5) was a man hitting a tree because his steering failed, but his steering failed because he had neglected to get it checked in over a year; the second (6) concerned a man hitting a tree because he wasn’t paying attention to the road, owing to someone on the side of the road yelling.

After the participants had read one of these scenarios, they were asked to indicate how responsible the driver was for the accident, how foreseeable the accident was, and how much control the driver had in the situation. Internal causes for the accident resulted in higher scores on all these variables relative to external ones (1 vs. 2). There’s nothing too surprising there: people get blamed less for their steering failing than for not paying attention to the road. The next analysis compared the presence of one type of cause alone to that type of cause paired with an ultimate cause of the same type (1 vs. 3, and 2 vs. 4). When both proximate and ultimate causes were internal (1 vs. 3), no difference was observed in the judgments of responsibility. However, when both proximate and ultimate causes were external (2 vs. 4), moral condemnation appeared to be softened by the presence of an ultimate explanation. Two internal causes didn’t budge judgments relative to a single cause, but two external causes diminished perceptions of responsibility beyond a single one.

Next, Brickman et al (1975) turned to the matter of what happens when the proximate and ultimate causes were of different types (1 vs. 6 and 2 vs. 5). When the proximate cause was internal but the ultimate cause was external (1 vs. 6), there was a drop in judgments of moral responsibility (from 5.4 to 3.7 on a 0-to-6 scale), foreseeability (from 3.7 to 2.4), and control (from 3.4 to 2.7). The exact opposite trend was observed when the proximate cause was external but the ultimate cause was internal (2 vs. 5). In that case, there was an increase in judgments of responsibility (from 2.3 to 4.1), foreseeability (from 2.3 to 3.4), and control (from 2.6 to 3.4). As Brickman et al (1975) put it:

“…the nature of prior cause eliminated the effects of the immediate cause on attributions of foreseeability and control, although a main effect of immediate cause remained for attributions of responsibility,”

So that’s some pretty neat stuff and, despite the research not being specifically about the topic, I think these findings might have some broader implications for understanding the opposition to evolutionary psychology more generally.

They’re so broad people with higher BMIs might call the suggestion insensitive.

As a fair warning, this section will contain a fair bit of speculation, since there doesn’t exist much data (that I know of, anyway) bearing on people’s opposition towards evolutionary explanations. That said, let’s talk about what anecdata we do have. The first curious thing that has struck me about the opposition to certain evolutionary hypotheses is that it tends to focus exclusively, or nearly exclusively, on topics that have some moral relevance. I’ve seen fairly common complaints about evolutionary hypotheses that concern moralized topics like violence, sexual behavior, sexual orientation, and male/female differences. What you don’t tend to see are complaints about research in areas that do not tend to be moralized, like vision, language, or taste preference. That’s not to say that such objections never crop up, of course; just that complaints about the latter do not appear to be as frequent or protracted as complaints about the former. Further, when objections to the latter topics do appear, it’s typically in the middle of some other moral issue surrounding the topic.

This piece of anecdata ties in with another, related piece: one of the more common complaints against evolutionary explanations is that people perceive evolutionary researchers as trying to justify some particular morally-blameworthy behavior. The criticism, misguided as it is, tends to go something like this: “if [Behavior X] is the product of selection, then we can’t hold people accountable for what they do. Further, we can’t hope to do much to change people’s behavior, so why bother?”. As the old saying goes, if some behavior is the product of selection, we might as well just lie back and think of England. Since people don’t want to simply accept these behaviors (and because they note, correctly, that behavior is modifiable), they go on to suggest that it’s the ultimate explanation that must be wrong, rather than their assessment of its implications.

“Whatever; go ahead and kill people, I guess. I don’t care…”

The similarities between these criticisms of evolutionary hypotheses and the current study are particularly striking: if selection is responsible for people’s behavior, then the people themselves seem to be less responsible for, and less in control of, their behavior. Since people want to condemn others for this behavior, they have a strategic interest in downplaying the role of other causes in generating it. The fewer potential causes for a behavior there are, the more easily moral condemnation can be targeted, and the more likely others are to join in the punishment. It doesn’t hurt that the ultimate explanations that do get invoked – patriarchy being the most common, in my experience – are also things that these people are interested in morally condemning.

What’s interesting – and perhaps ironic – about the whole issue to me is that there are also parallels to the debates people have about free will and moral responsibility. Let’s grant that the aforementioned criticisms were accurate and evolutionary explanations offered some kind of justification for things like murder, rape, and the like. It would seem, then, that such evolutionary explanations could justify the moral condemnation and punishment of those behaviors just as well. Surely, there are adaptations we possess for avoiding outcomes like being killed, and we also possess adaptations capable of condemning such behavior. We wouldn’t need to justify our condemnation of those acts any more than people would need to justify committing the acts themselves. If murder could be justified, then surely punishing murderers could be as well.

References: Brickman, P., Ryan, K., & Wortman, C. (1975). Causal chains: Attribution of responsibility as a function of immediate and prior causes. Journal of Personality and Social Psychology, 32, 1060-1067.

Begging Questions About Sexualization

There’s an old joke that goes something like this: If a man wants to make a woman happy, it’s really quite simple. All he has to do is be a chef, a carpenter, brave, a friend, a good listener, responsible, clean, warm, athletic, attractive, tender, strong, tolerant, understanding, stable, ambitious, and compassionate. Men should also not forget to compliment a woman frequently, give her attention while expecting little in return, give her freedom to do what she wants without asking too many questions, and love to go shopping with her, or at least support the habit. So long as a man does/is all those things, and manages never to forget birthdays, anniversaries, or other important dates, he should easily be able to make a woman happy. Women, on the other hand, can make men happy with a few simple steps: show up naked and bring beer. (For the unabridged list, see here). While this joke, like many great jokes, contains an exaggeration, it also manages to capture a certain truth: the qualities that make a man attractive to a woman seem to be a bit more varied than the qualities that make a woman attractive to a man.

“Yeah; he’s alright, I guess. Could be a bit taller and richer…”

Even if men did value the same number of traits in women that women value in men, the two sexes do not necessarily value the same kinds of traits, or value them to the same degree (though there is, of course, frequently some amount of overlap). Given that men and women tend to value different qualities in one another, what should this tell us about the signals that each sex sends to appeal to the other? The likely answer is that men and women might end up behaving or altering their appearance in different ways when it comes to appealing to the opposite sex. As a highly-simplified example, men might tend to value looks and women might tend to value status. If a man is trying to appeal to women under such circumstances, it does him little good to signal his good looks, just as it does a woman no favors to try and signal her high status to men.

So when people start making claims about how one sex – typically women – is being “sexualized” to a much greater extent than the other, we should be very specific about what we mean by the term. A recent paper by Hatton & Trautner (2011) set out to examine (a) how sexualized men and women tend to be in American culture and (b) whether that sexualization has risen over time. The proxy measure they made use of for their analysis was about four decades’ worth of Rolling Stone covers, spanning from 1967 to 2009, as these covers contain pictures of various male and female cultural figures. The authors suggest that this research has value because of various other lines of research suggesting that such depictions might have negative effects on women’s body satisfaction and men’s attitudes about women, as well as threatening to increase the amount of sexual harassment that women face. Somewhat surprisingly, in the laundry list of references attesting to these negative effects on women, there is no explicit mention of any possible negative effects on men. I find that interesting. Anyway…

As for the research itself, Hatton & Trautner (2011) examined approximately 1000 covers of Rolling Stone, of which 720 focused on men and 380 focused on women. The pictures were coded with respect to (a) the degree of nudity, from unrevealing to naked, on a 6-point scale, (b) whether there was touching, from none to explicitly sexual, on a 4-point scale, (c) pose, from standing upright to explicitly sexual, on a 3-point scale, (d) mouth…pose (I guess), from not sexual to sexual, on a 3-point scale, (e) whether breasts/chest, genitals, or buttocks were exposed and/or the focal point of the image, all on 3-point scales, (f) whether the text on the cover line related to sex, (g) whether the shot focused on the head or the body, (h) whether the model was engaged in a sex act or not, and finally (i) whether sexual role play was hinted at. So, on the one hand, it seems like these pictures were analyzed thoroughly. On the other, however, consider the list of variables being assessed and compare them to the initial joke. By my count, all of them fall more on the end of “what makes men happy” rather than “what makes women happy”.

Which might cause a problem in translation from one sex to the other

Images were considered “hypersexualized” if they scored 10 or more points (out of a possible 23), but merely “sexualized” if they scored from 5 to 9 points. In terms of sexualization, the authors found that it appeared to be increasing over time: in the ’60s, 11% of men and 44% of women were sexualized; by the ’00s these figures had risen to 17% and 89%, respectively. So Hatton & Trautner (2011) concluded that men were being sexualized less than women overall, which is reasonable given their criteria. However, those percentages captured both the “sexualized” and “hypersexualized” pictures. Examining the two groups separately, the authors found that around 1-3% of men on the covers were hypersexualized in any given decade, whereas the comparable range for women was 6% to 61%. Not only did women tend to be sexualized more often, they also tended to be sexualized to a greater degree. The authors go so far as to suggest that the only appropriate label for such depictions of women was as sex objects.
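The binning just described amounts to a simple classifier over the summed coding points. A sketch of my own, for concreteness: the cutoffs (10+ and 5-9) come from the paper as described above, but the label for the bottom bin is my placeholder, not necessarily the authors’ term.

```python
# A sketch of the Hatton & Trautner (2011) binning scheme: each cover is
# coded on several ordinal variables, the points are summed (23 possible),
# and the total is binned. Cutoffs are from the text; the bottom-bin
# label is my own placeholder.

def classify_cover(score):
    """Bin a cover's total coding points into sexualization categories."""
    if score >= 10:
        return "hypersexualized"
    if score >= 5:
        return "sexualized"
    return "not sexualized"

print(classify_cover(12))  # hypersexualized
print(classify_cover(7))   # sexualized
print(classify_cover(3))   # not sexualized
```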

The major interpretative problem left unaddressed by Hatton & Trautner (2011) and their “high-powered sociological lens”, of course, is that they fail to consider whether the same kinds of displays make men and women equally sexually appealing. As the initial joke might suggest, men are unlikely to win many brownie points with a prospective date by showing up naked with beer; they might win a place on some sex-offender list, though, which falls short of the happy ending they would have liked. Indeed, many of the characteristics highlighted in the list of ways to make a woman happy – such as warmth, emotional stability, and listening skills – are not as easily captured by a picture, relative to physical appearance. To make matters even more challenging for the authors’ interpretation, there is the looming fact that men tend to be far more open to offers of casual sex in the first place. In other words, there might be about as much value in signaling that a man is “ready for sex” as there is in signaling that a starving child is “ready for food”. It’s something that is liable to be assumed already.

To put this study in context, imagine I were to run an analysis similar to the authors’, but started my study with the following rationale: “It’s well known that women tend to value the financial prospects of their sexual partners. Accordingly, we should be able to measure the degree of sexualization on Rolling Stone covers by assessing the net wealth of the people being photographed”. All I would have to do is add in some moralizing about how the depiction of rich men is bad for poorer men’s self-esteem and women’s preferences in relationships, and that paper would be a reasonable facsimile of the current one. If this analysis found that the depicted men tended to be wealthier than the depicted women, this would not necessarily indicate that the men, rather than the women, were being depicted as more attractive mates. This is due to the simple, aforementioned fact that we should expect an interaction between signalers and receivers. It doesn’t pay for a signaler to send a signal that the intended receiver is all but oblivious to: rather, we should expect signals to be tailored to the details of the receptive systems they are attempting to influence.

The sexualization of images like this might go otherwise unnoticed.

It seems that the assumptions made by the authors stacked the deck in favor of their finding what they thought they would. By defining sexualization in a particular way, they partially begged their way to their conclusion. If we instead defined sexualization in other ways that considered variables beyond how much or what kind of skin was showing, we’d likely come to different conclusions about the degree of sexualization. That’s not to say we would find an equal degree of it between the sexes, mind you, but it would be a recognition that many factors can go into making someone sexually attractive which cannot always be captured in a photo. We’ve seen complaints of sexualization like these leveled against the costumes that superheroes of various sexes tend to wear, and the same oversight is present in them as well. Unless the initial joke would work just as well with the sexes reversed, these discussions will require a more nuanced treatment of sexualization to be of much use.

References: Hatton, E. & Trautner, M. (2011). Equal opportunity objectification? The sexualization of men and women on the cover of Rolling Stone. Sexuality and Culture, 15, 256-278.

Towards Understanding The Action-Omission Distinction

In moral psychology, one of the most well-known methods of parsing the reasons outcomes obtain involves the categories of actions and omissions. Actions are intuitively understandable: they are behaviors which bring about certain consequences directly. By contrast, omissions represent failures to act that result in certain consequences. As a quick example, a man who steals your wallet commits an act; a man who finds your lost wallet, keeps it for himself, and says nothing to you commits an omission. Though actions and omissions might result in precisely the same consequences (in that case, you end up with less money and the man ends up with more), they do not tend to be judged the same way. Specifically, actions tend to be judged as more morally wrong than comparable omissions and more deserving of punishment. While this state of affairs might seem perfectly normal to you or me, a deeper understanding of it requires us to take a step back and consider why it is, in fact, rather strange.

And so long as I omit the intellectual source of that strategy, I sound more creative.

From an evolutionary standpoint this action-omission distinction is strange for a clear reason: evolution is a consequentialist process. If I’m worse off because you stole from me or because you failed to return my wallet when you could have, I’m still worse off. Organisms should be expected to avoid costs, regardless of their origin. Importantly, costs need not only be conceptualized as what one might typically envision them to be, like inflictions of physical damage or stealing resources; they can also be understood as failures to deliver benefits. Consider a new mother: though the mother might not kill the child directly, if she fails to provision the infant with food, the infant will die all the same. From the perspective of the child, the failure of the mother to provide food could well be considered a cost inflicted by negligence. So, if someone could avoid harming me – or could provide me with some benefit – but does not, why should it matter whether that outcome obtained because of an action or an omission?

The first part of that answer concerns a concept I mentioned in my last post: the welfare tradeoff ratio. Omissions are, generally speaking, less indicative of one’s underlying WTR than acts. Let’s consider the wallet example again: when a wallet is stolen, this act expresses that one is willing to make me suffer a cost so they can benefit; when the wallet is found and not returned, this represents a failure of an individual to deliver a benefit to me at some cost to themselves (the time required to track me down and forgoing the money in my wallet). While the former expresses a negative WTR, the latter simply fails to express an overtly-positive one. To the extent that moral punishment is designed to recalibrate WTRs, then, acts provide us with more accurate estimates of WTRs, and might subsequently tend to recruit those cognitive moral systems to a greater degree. Unfortunately, this explanation is not entirely fulfilling yet, owing to the consequentialist facts of the matter: it can be as good, from my perspective, to increase the WTR of the thief towards me as it is to increase the omitter’s WTR. Doing either means I end up with more money than I otherwise would, which is a useful outcome. Costs and benefits, in this world, are tallied on the same scoreboard.

The second part of the answer, then, needs to invoke the costs inherent in enacting this modification of WTRs through moral punishment. Just as it’s good for me if others hold a high WTR with respect to me, it’s similarly good for others if I hold a high WTR with respect to them. This means that people, unsurprisingly, are often less-than-accommodating when it comes to giving up their welfare for another without the proper persuasion; persuasion which happens to take time and energy to enact, and comes with certain risks of retaliation. Accordingly, we ought to expect mechanisms that function to enact moral condemnation strategically: when the costs of doing so are sufficiently low or the benefits of doing so are sufficiently high. After all, it’s the case that every living person right now could, in principle, increase their WTR towards you, but trying to morally condemn every living person for not doing so is unlikely to be a productive strategy. Not only would such a strategy result in the condemner undertaking many endeavors that are unlikely to be successful relative to the invested effort, but someone increasing their WTR towards you requires that they lower their WTR towards someone else, and those someone elses would typically not be tickled by the prospect.

“You want my friend’s investment? Then come and take it, tough guy”

Given the costs involved in indiscriminate moral condemnation of non-maximal WTRs, we can focus the considerations of the action-omission distinction down to the following question: what is it about punishing omissions that tends to be less productive than punishing actions? One possible explanation comes from DeScioli, Bruening, & Kurzban (2011). The trio posit that omissions are judged less harshly than actions because omissions tend to leave less overt evidence of wrongdoing. As punishment costs tend to decrease as the number of punishers increases, if third-party punishers make use of evidence in deciding whether or not to become involved, then material evidence should make punishment easier to enact. Unfortunately, the design that the researchers used in their experiments does not appear to definitively speak to their hypothesis. Specifically, they found the effect they were looking for – namely, the reduction of the action-omission effect – but they only managed to do so via reframing an omission (failing to turn a train or stop a demolition) into an action (pressing a button that failed to turn a train or stop a demolition). It is not clear that such a manipulation solely varied the evidence available without fundamentally altering other morally-relevant factors.

There is another experiment that did manage to substantially reduce the action-omission effect without introducing such a confound, however: Haidt & Baron (1996). In this paper, the authors presented subjects with a story about a person selling his car. The seller knows there is a 1/3 chance the car contains a manufacturing defect that will cause it to fall apart soon; a potential defect specific to the year the car was made. When the buyer inquires about the year of manufacture, the seller either (a) lies about it or (b) fails to correct the buyer, who suddenly exclaims that he remembers which year it was, though he is mistaken. When asked how wrong it was for the seller to do (or fail to do) what he did, the action-omission effect was observed when the buyer was not personally known to the seller. However, if the seller happened to be good friends with the buyer, the degree of the effect was reduced by almost half. In other words, when the buyer and seller were good friends, it mattered less whether the seller cheated the buyer through action or omission; both were deemed relatively unacceptable (and, interestingly, both were deemed more wrong overall as well). However, when the buyer and seller were all but strangers, people rated the cheat via omission as relatively less wrong than the action. Moral judgments in close relationships appeared to become generally more consequentialist.

If evidence were the deciding factor in the action-omission distinction, then the closeness of the relationship between the actor or omitter and the target should not be expected to have any effect on moral judgments (as the nature of the relationship does not itself generate any additional observable evidence). While this finding does not rule out the role of evidence in the action-omission distinction altogether, it does suggest that evidence concerns alone are insufficient for understanding the distinction. The nature of the relationship between the actor and victim is, however, predicted to have an effect when considering the WTR model. We expect our friends, especially our close friends, to have relatively high WTRs with respect to us; we might even expect them to go out of their way to suffer costs to help us if necessary. Indications that they are unwilling to do so – whether through action or omission – represent betrayals of that friendship. Further, when a friend behaves in a manner indicating a negative WTR towards us, the gulf between the expected (highly positive) and actual (negative) WTR is far greater than if a stranger behaved comparably (as we might expect a neutral starting point for strangers).

“I hate when girls lie online about having a torso!”

Though this analysis does not provide a complete explanation of the action-omission distinction by any means, it does point us in the right direction. It would seem that actions actively advertise WTRs, whereas omissions do not necessarily do likewise. Morally condemning all those who do not display positive WTRs per se does not make much sense, as the costs involved in doing so are so high as to preclude efficiency. Further, those who simply fail to express a positive WTR towards you might be less liable to inflict future costs, relative to those who express a negative one (i.e. the man who fails to return your wallet is not necessarily as liable to hurt you in the future as the one who directly steals from you). Selectively directing that condemnation at those who display appreciably low or negative WTRs, then, appears to be a more viable strategy: it could help direct condemnation towards where it’s liable to do the most good. This basic premise should hold especially given a close relationship with the perpetrator: such relationships entail more frequent contact and, accordingly, more opportunities for one’s WTR towards you to matter.

References: DeScioli, P., Bruening, R., & Kurzban, R. (2011). The omission effect in moral cognition: Toward a functional explanation. Evolution and Human Behavior, 32, 204-215.

Haidt, J. & Baron, J. (1996). Social roles and the moral judgment of acts and omissions. European Journal of Social Psychology, 26, 201-218.

When Giving To Disaster Victims Is Morally Wrong

Here’s a curious story: Kim Kardashian recently decided to sell some personal items on eBay. She also mentioned that 10% of the proceeds would be donated to typhoon relief in the Philippines. On the face of it, there doesn’t appear to be anything morally objectionable going on here: Kim is selling items on eBay (not an immoral behavior) and then giving some of her money freely to charity (not immoral). Further, she made this information publicly available, so she’s not lying or being deceitful about how much money she intends to keep and how much she intends to give (also not immoral). If the coverage of the story and the comments about it are any indication, however, Kim has done something morally condemnable. To select a few choice quotes, Kim is, apparently, “the centre of all evil in the universe”, is “insulting” and “degrading” people, is “greedy” and “vile”. She’s also a “horrible bitch” and anyone who takes part in the auction is “retarded”. One of the authors expressed the hope that “…[the disaster victims] give you back your insulting ‘portion of the proceeds’ which is a measly 10% back to you so you can choke on it”. Yikes.

Just shred the money, add some chicken and ranch, and you’re good to go.

Now one could wonder whether the victims of this disaster would actually care that some of the money being used to help them came from someone who only donated 10% of her eBay sales. Sure; I’d bet the victims would prefer that more money be donated from every donor (and non-donor), but I think just about everyone in the world would rather have more money than they currently do. Though I might be mistaken, I don’t think there are many victims who would insist that the money be sent back because there wasn’t enough of it. I would also guess that, in terms of the actual dollar amount provided, Kim’s auctions probably resulted in more giving than many or most other actual donors, and definitely more than anyone lambasting Kim who did not personally give (of which I assume there are many). Beyond the elements of hypocrisy typical of disputes of this nature, there is one facet of this condemnation that really caught my attention: people are saying Kim is a bad person for doing this not because she did anything immoral per se, but because she failed to do something laudable to a great-enough degree. This is akin to suggesting someone should be punished for only holding a door open for five people, despite their not being required to hold it open for anyone.

Now one might suggest that what Kim did wasn’t actually praiseworthy because she made money off of it: Kim is self-interested and is using this tragedy to advance her personal interests, or so the argument goes. Perhaps Kim was banking on the idea that giving 10% to charity would result in people paying more for the items themselves, offsetting the cost. Even if that were the case, however, it still wouldn’t make what she was doing wrong, for two reasons: first, people profit from selling goods or services all the time and, most of the time, we don’t deem those acts morally wrong. For instance, I just bought groceries, but I felt no moral outrage that the store I bought them from profited off me. Second, even if Kim did benefit by doing this, it’s a win-win situation for her and the typhoon victims. While mutual benefit may make gauging Kim’s altruistic intentions difficult, it would not make the act immoral per se. Furthermore, it’s not as if Kim’s charity auction coerced anyone into paying more than they otherwise would have; how much to pay was the decision of the buyers, whom Kim could not directly control. If Kim ended up making more money off these auctions than she otherwise would have, it’s only because other people willingly gave her more. So why are people attempting to morally condemn her? She wasn’t dishonest, she didn’t do anyone direct harm, she didn’t engage in any behavior typically deemed “immoral”, and the result of her actions was that people were better off. If one wants to locate the focal point of people’s moral outrage about Kim’s auction, then, it will involve digging a little deeper psychologically.

One promising avenue to begin our exploration of the matter is a chapter by Petersen, Sell, Tooby, & Cosmides (2010) that discussed our evolved intuitions about criminal justice. In it, they discuss the concept of a welfare tradeoff ratio (WTR). A WTR is, essentially, one person’s willingness to give up some amount of personal welfare to deliver some amount of welfare to another. For instance, if you were given the choice between $6 for yourself and $1 for someone else or $5 for both of you, choosing the latter would represent a higher WTR: you would be willing to forgo $1 so that another individual could have an additional $4. Obviously, it would be good for you if other people maintained a high WTR towards you, but others are not so willing to give up their own welfare without some persuasion. One way (among many) of persuading someone to put more stock in your welfare is to make yourself look like a good social investment. If benefiting you will benefit the giver in the long run – perhaps because you are currently experiencing the bad luck of a typhoon destroying your home, but you can return to being a productive associate in the future if you get help – then we should expect people to up-regulate their WTR towards you.
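The dollar example above can be made explicit with a small sketch (my own illustration, not from Petersen et al., 2010): choosing the generous option reveals a lower bound on the chooser’s WTR, since they demonstrably valued the other person’s welfare at least at that rate. The function name is a stand-in of mine.

```python
def implied_wtr_bound(selfish, generous):
    """Return the minimum WTR implied by picking `generous` over `selfish`.

    Each option is a (payoff_to_self, payoff_to_other) pair. The chooser
    forgoes (selfish_self - generous_self) so that the other person gains
    (generous_other - selfish_other); the ratio of cost borne to benefit
    delivered is a lower bound on the chooser's WTR toward that person.
    """
    cost_to_self = selfish[0] - generous[0]
    benefit_to_other = generous[1] - selfish[1]
    return cost_to_self / benefit_to_other

# The example from the text: $6/$1 for self/other versus $5 for both.
# Forgoing $1 so the other gains $4 implies a WTR of at least 1/4.
print(implied_wtr_bound(selfish=(6, 1), generous=(5, 5)))  # 0.25
```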

Some other pleas for assistance are less liable to net good payoffs.

The intuition that Kim’s moral detractors appear to be expressing, then, is not that Kim is wrong for displaying a mildly positive WTR per se, but that the WTR she displayed was not sufficiently high, given her relative wealth and the disaster victims’ relative need. This makes her appear to be a potentially-poor social investment, as she is relatively unwilling to give up much of her own welfare to help others, even when they are in desperate need. Framing the discussion in this light is useful insofar as it points us in the right direction, but it only gets us so far. We are left with the matter of figuring out why, for instance, most other people who were giving to charity were not condemned for not giving as much as they realistically could have, even if that meant giving up some personal items or pleasurable experiences themselves (i.e. “if you ate out less this week, or sold some of your clothing, you too could have contributed more to the aid efforts; you’re a greedy bitch for not doing so”).

It also doesn’t explain why anyone would suggest that it would have been better for Kim to have given nothing at all instead of what she did give. Though we see that kind of rejection of low offers in bargaining contexts – like ultimatum games – we typically don’t see as much of it in altruistic ones. This is because rejecting the money in bargaining contexts has an effect on the proposer’s payoff; in altruistic contexts, rejection has no negative effect on the giver and should affect their behavior far less. Even more curious, though: if the function of such moral condemnation is to increase one’s WTR towards others more generally, suggesting that Kim giving nothing would have been somehow better than what she did give is exceedingly counterproductive. If increasing WTRs were the primary function of moral condemnation, it seems the more appropriate strategy would be to start with condemning those people – rich or not – who contributed nothing, rather than something (as those who give nothing, arguably, displayed a lower WTR towards the typhoon victims than Kim did). Despite that, I have yet to come across any articles berating specific individuals or groups for not giving at all; they might be out there, but they generated much less publicity if so. We need something else to complete the account of why people seem to hate Kim Kardashian for not giving more.

Perhaps that something more is that the other people who did not donate were also not trying to suggest they were behaving altruistically; that is, they were not trying to reap the benefits of being known as altruists, whereas Kim was, but only halfheartedly. This would mean Kim was sending a less-than-honest signal. A major complication with that account, however, is that Kim was, for all intents and purposes, acting altruistically; she could have been praised very little for what she did, rather than condemned. Thankfully, the condemnation of Kim is not the only example of this we have to draw upon. These kinds of claims have been advanced before, as when Tucker Max tried to donate $500,000 to Planned Parenthood, only to be rejected because some people didn’t want to associate with him. The arguments against accepting that sizable donation centered around (a) the notion that he was giving for selfish reasons and (b) the claim that others would stop supporting Planned Parenthood if Tucker became associated with them. My guess is that something similar is at play here. Celebrities can be polarizing figures (for reasons which I won’t speculate about here), drawing overly hostile or positive reactions from people who are not affected by them personally. For whatever reason, there are many people who dislike Kim and would like to either avoid being associated with her altogether and/or see her fall from her current position in society. This no doubt has an effect on how they view her behavior. If Kim wasn’t Kim, there’s a good chance no one would care about this kind of charity-involving auction.

Much better; now giving only 10% is laudable.

As I mentioned in my last post, children appear to condone harming others with whom they do not share a common interest. The same behavior – in this case, giving 10% of your sales to help others – is likely to be judged substantially differently contingent on who precisely is enacting the behavior. Understanding why people express moral outrage at welfare-increasing behaviors requires a deeper examination of their personal strategic interests in the matter. We should expect that state of affairs for a simple reason: benefiting others more generally is not universally useful, in the evolutionary sense of the word. Sometimes it’s good for you if certain other people are worse off (though this argument is seldom made explicitly). Now, of course, that does mean that people will, at times, ostensibly advocate for helping a group of needy people, but then shun help, even substantial amounts of help, when it comes from the “wrong” sources. They likely do what they do because such condemnation will either harm those “wrong” sources directly or because allowing the association could harm the condemner in some way. Yes; that does mean the behavior of these condemners has a self-interested component; the very thing they criticized Kim for. Without considerations of these strategic, self-interested motivations, we’d be at a loss for understanding why giving to typhoon victims is sometimes morally wrong.

References: Petersen, M.B., Sell, A., Tooby, J., & Cosmides, L. (2010). Evolutionary psychology and criminal justice: A recalibration theory of punishment and reconciliation. In Human Morality & Sociality: Evolutionary & Comparative Perspectives, edited by Høgh-Olesen, H., Palgrave Macmillan, New York.

What Predicts Religiosity: Cooperation Or Sex?

When trying to explain the evolutionary function of religious belief, there’s a popular story that goes something like this: individuals who believe in a deity that monitors our behavior and punishes or rewards us accordingly might be less likely to transgress against others. In other words, religious beliefs function to make people unusually cooperative. There are two big conceptual problems with such a suggestion: the first is that, to the extent that these rewards and punishments occur after death (heaven, hell, or some form of reincarnation as a “lower” animal, for instance), they would have no impact on reproductive fitness in the current world. With no impact on reproduction, no selection for such beliefs would be possible, even were they true. The second major problem is that, in the event that such beliefs are false, they would not lead to better fitness outcomes. This is due to the simple fact that incorrect representations of our world do not generally lead to better decisions and outcomes than accurate representations. For example, if you believe, incorrectly, that you can win a fight you actually cannot, you’re liable to suffer the costs of being beaten up; conversely, if you incorrectly believe you cannot win a fight you actually can, you might back down too soon and miss out on some resource. False beliefs don’t often help you make good decisions.

“I don’t care what you believe, R. Kelly; there’s no way this will end well”

So if one believes they are being constantly observed by an agent that will punish them for behaving selfishly and that belief happens to be wrong, they will tend to make worse decisions, from a reproductive fitness standpoint, than an individual without such beliefs. On top of those conceptual problems, there is now an even larger problem for the religion-encouraging-cooperation idea: a massive data set doesn’t really support it. When I say massive, I do mean massive: the data set examined by Weeden & Kurzban (2013) comprised approximately 300,000 people from all across the globe. Of interest from the data set were 14 questions relating to religious behavior (such as belief in God and frequency of attendance at religious services), 13 questions relating to cooperative morals (like avoiding paying a fare on public transport and lying in one’s own interests), and 7 questions relating to sexual morals (such as the acceptability of casual sex or prostitution). The analysis concerned how well the latter two variable sets uniquely predicted the former one.

When considered in isolation in a regression analysis, the cooperative morals were slightly predictive of the variability in religious beliefs: the standardized beta values for the cooperative variables ranged from a low of 0.034 to a high of 0.104. So a one standard deviation increase in cooperative morals predicted, approximately, one-twentieth of a standard deviation increase in religious behavior. On the other hand, the sexual morality questions did substantially better: the standardized betas there ranged from a low of 0.143 to a high of 0.38. Considering these variables in isolation only gives us so much of the picture, however, and the case got even bleaker for the cooperative variables once they were entered into the regression model alongside the sexual ones. While the betas on the sexual variables remained relatively unchanged (if anything, they got a little higher, ranging from 0.144 to 0.392), the betas on the cooperative variables dropped substantially, often into the negatives (ranging from -0.045 to 0.13). In non-statistical terms, this means that the more one endorsed conservative sexual morals, the more religious one tended to be; the more one endorsed cooperative morals, the less religious one tended to be, though this latter tendency was very slight.
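The statistical pattern here (a predictor that looks mildly predictive on its own shrinking toward zero once a correlated, stronger predictor enters the model) can be demonstrated with a toy simulation. This is my own illustration, not the authors’ analysis, and every number in it is made up: religiosity is constructed to depend only on sexual morals, while cooperative morals merely correlate with sexual morals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

sexual_morals = rng.standard_normal(n)
# Cooperative morals correlate modestly with sexual morals...
cooperative_morals = 0.4 * sexual_morals + rng.standard_normal(n)
# ...but religiosity is driven (here, by construction) only by sexual morals.
religiosity = 0.35 * sexual_morals + rng.standard_normal(n)

def standardized_betas(y, *predictors):
    """OLS coefficients after z-scoring the outcome and every predictor."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(p) for p in predictors])
    return np.linalg.lstsq(X, z(y), rcond=None)[0]

# Cooperative morals alone pick up a small, spurious association...
print(standardized_betas(religiosity, cooperative_morals))
# ...which largely vanishes once sexual morals enter the same model.
print(standardized_betas(religiosity, cooperative_morals, sexual_morals))
```

Running this shows the cooperative beta dropping from roughly 0.12 to roughly zero once the sexual-morals variable is controlled for, mirroring the shape (though not the exact values) of the published result.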

This evidence appears to directly contradict the cooperative account: religious beliefs don’t seem to result in more cooperative behaviors or moral stances (if anything, slightly fewer of them, once you take sex into account). Rather than dealing with loving their neighbor, religious beliefs appeared to deal more with whom and how their neighbor loved. This connection between religious beliefs and sexual morals, while consistently positive across all regions sampled, did vary in strength from place to place, being about four times stronger in wealthy areas compared to poorer ones. The reasons for this are not discussed at any length within the paper itself, and I don’t feel I have anything to add on that point which wouldn’t be purely speculative.

“My stance on speculation stated, let’s speculate about something else…”

This leaves open the question of why religious beliefs would be associated with a more-monogamous mating style in particular. After all, it seems plausible that a community of people relatively interested in promoting a long-term mating strategy and condemning short-term strategies need not come with the prerequisite of believing in a deity. People apparently don’t need a deity to condemn others for lying, stealing, or killing, so what would make sexual strategy any different? Perhaps the fact that sexual morals show substantially more variation than morals regarding, say, killing. Here’s what Weeden & Kurzban (2013) suggest:

We view expressed religious beliefs as potentially serving a number of functions, including not just the guidance of believers’ own behaviors, but as markers of group affiliation or as part of self-presentational efforts to claim higher authority or deflect the attribution of self-interested motives when it comes to imposing contested moral restrictions on those outside of the religious group. (p.2, emphasis mine)

As for whether or not belief in a deity might serve as a group marker, well, it certainly seems to be a potential candidate. Of course, so is pretty much anything else, from style of dress, to musical taste, to tattoos or other ornaments. In terms of displaying group membership, belief in God doesn’t seem particularly special compared to any other candidate. Perhaps belief in God simply ended up being the most common ornament of choice for groups of people who, among other things, wanted to restrict the sexuality of others. Such an argument would need to account for the fact that belief in God and sexual morals seem to correlate in groups all over the world, meaning they all stumbled upon that marker independently time and again (unlikely), that such a marker has a common origin in a time before humans began to migrate over the globe (possible, but hard to confirm), or posit some third option. In any case, while belief in God might serve such a group-marking function, it doesn’t seem to explain the connection with sexuality per se.

The other posited function – of invoking a higher moral authority – raises some additional questions. First, if the long-term maters are adopting beliefs in God so as to try to speak from a position of higher (or impartial) authority, this raises the question of why other parties, presumably ones who don’t share such a belief, would be persuaded by that claim in any way. Were I to advance the claim that I was speaking on behalf of God, I get the distinct sense that other people would dismiss my claims in most cases. Though I might benefit if they believed me, I would also benefit if people just started handing me money; that there doesn’t seem to be a benefit for other parties in doing these things, however, suggests to me that I shouldn’t expect such treatment. Unless people already believe in said higher power, claiming impartiality in its name doesn’t seem likely to hold much persuasive water.

Second, even were we to grant that such statements would be believed and have the desired effect, why wouldn’t the more promiscuous maters also adopt a belief in a deity that just so happens to smile on, or at least not care about, promiscuous mating? Even if we grant that the more promiscuous individuals were not trying to condemn people for being monogamous (and so have no self-interested motives to deflect), having a deity on your side seems like a pretty reasonable way to strengthen your defense against people trying to condemn your mating style. At the very least, it would seem to weaken the moralizer’s offensive abilities. Now perhaps that’s along the lines of what atheism represents; rather than suggesting that there is a separate deity that likes what one prefers, people might simply suggest there is no deity in order to remove some of the moral force from the argument. Without a deity, one could not deflect the self-interest argument as readily. This, however, again returns us to the previous point: unless there’s some reason to assume that third parties would be impressed by claims of a God initially, it’s questionable whether such claims would carry any force that needed to be undermined.

Some gods are a bit more lax about the whole “infidelity” thing.

Of course, it is possible that such beliefs are just byproducts of something else that ties in with sexual strategy. Unfortunately, byproduct claims don’t tend to make much in the way of textured predictions as to what design features we ought to expect to find, so that suggestion, while plausible, doesn’t appear to lend itself to much empirical analysis. Though this leaves us without a great deal of satisfaction in explaining why religious belief and the regulation of sexuality appear to be linked, it does provide us with the knowledge that religious belief does not primarily seem to concern itself with cooperation more generally. Whatever the function, or lack thereof, of religious belief, it is unlikely to be promoting morality in general.

References: Weeden, J., & Kurzban, R. (2013). What predicts religiosity? A multinational analysis of reproductive and cooperative morals. Evolution and Human Behavior, 34, 440-445.