When Are Equivalent Acts Not Equal?

There’s been an ongoing debate in the philosophical literature on morality for some time. That debate focuses on whether the morality of an act should be determined on the basis of (a) the act’s outcome, in terms of its net effects on people’s welfare, or (b) something else entirely: intuitions, feelings, or what have you (i.e., “Incest is just wrong, even if nothing but good were to come of it”). These stances can be called the consequentialist and nonconsequentialist stances, respectively, and it’s a topic I’ve touched upon before. When I did, I had this to say:

There are more ways of being consequentialist than with respect to the total amount of welfare increase. It would be beneficial to turn our eye towards considering strategic welfare consequences that are likely to accrue to actors, second parties, and third parties as a result of these behaviors.

In other words, moral judgments might focus not only on the acts per se (the nonconsequentialist aspects) or their net welfare outcomes (the consequences), but also on the distribution of those consequences. Well, I’m happy to report that some very new, very cool research speaks to that issue and appears to confirm my intuition. I happen to know the authors of this paper personally, and let me tell you this: the only thing about the authors that is more noteworthy than their good looks and charm is how humble one of them happens to be.

Guess which of us is the humble one?

The research (Marczyk & Marks, in press) examined responses to the classic trolley dilemma and a variant of it. For those not well-versed in the trolley dilemma, here’s the setup: there’s an out-of-control train heading towards five hikers who cannot get out of the way in time. If the train continues on its path, then all five will surely die. However, there’s a lever which can be pulled to redirect the train onto a side track where a single hiker is stuck. If the lever is pulled, the five will live, but the one will die. Typically, when asked whether it would be acceptable for someone to pull the switch, the majority of people will say that it is. However, in past research examining the issue, the person pulling the switch has been a third party; that is, the puller was not directly involved in the situation, and didn’t stand to personally benefit or suffer because of the decision. But what would happen if the person pulling the switch was one of the hikers on one of the tracks, either on the side track (self-sacrifice) or the main track (self-saving)? Would it make a difference in terms of people’s moral judgments?

Well, the nonconsequentialist account would say, “no; it shouldn’t matter”, because the behavior itself (redirecting a train onto a side track where it will kill one) remains constant; the welfare-maximizing consequentialist account would also say, “no; it shouldn’t matter”, because the welfare calculations haven’t changed (five live; one dies). However, this is not what we observed. When asked how immoral it was for the puller to redirect the train, ratings were lowest in the self-sacrifice condition (M = 1.40/1.16 on a 1-to-5 scale in international and US samples, respectively), in the middle for the standard third-party context (M = 2.02/1.95), and highest in the self-saving condition (M = 2.52/2.10). Similar judgments cropped up in terms of whether it was morally acceptable to redirect the train: the percentage of US participants who said it was acceptable dropped as self-interested reasons began to enter into the question (the international sample wasn’t asked this question). Judgments of acceptability were highest in the self-sacrifice condition (98%), followed by the third-party condition (84%), with the self-saving condition lowest (77%).
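
For readers who prefer the numbers side by side, here is a minimal sketch in Python that just tabulates the values reported above (the values are transcribed from the paragraph; the condition labels and variable names are mine):

```python
# Immorality ratings (1-5 scale; international/US samples) and the percentage
# of US participants calling the act acceptable, as reported in the text.
results = {
    "self-sacrifice": {"immorality": (1.40, 1.16), "acceptable_us": 98},
    "third-party":    {"immorality": (2.02, 1.95), "acceptable_us": 84},
    "self-saving":    {"immorality": (2.52, 2.10), "acceptable_us": 77},
}

for condition, scores in results.items():
    intl, us = scores["immorality"]
    print(f"{condition:>14}: immorality {intl:.2f} intl / {us:.2f} US, "
          f"acceptable to {scores['acceptable_us']}% (US)")
```

The ordering is the point: as the puller’s self-interest grows, immorality ratings rise and acceptability falls, even though the act and the body count never change.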

Participants also viewed the intentions of the pullers differently, contingent on their location in the dilemma: specifically, the more one could benefit him or herself by pulling, the more people assumed that was the motivation for doing so (and, correspondingly, the more pullers could help themselves, the less they were viewed as intending to help others). Now that might seem unsurprising: “of course people should be motivated to help themselves”, you might say. However, nothing in the dilemma itself spoke directly to the puller’s intentions. For instance, we could consider the case where a puller just happens to be saving their own life by redirecting the train away from others. From that act alone, we learn nothing about whether or not they would sacrifice their own life to save the lives of others. That is, one’s position in the self-beneficial context might simply be incidental; their primary motivation might have been to save the largest number of lives, and that just so happens to mean saving their own in the process. However, this was not the conclusion people seemed to be drawing.

Side effects of saving yourself include increased moral condemnation.

Next, we examined a variant of the trolley dilemma that contained three tracks: again, there were five people on the main track and one person on each side track. As before, we varied who was pulling the switch: either the hiker on the main track (self-saving) or the hiker on the side track. However, we now varied what the options of the hiker on the side track were: specifically, he could direct the train away from the five on the main track, but either send the train towards or away from himself (the self-sacrifice and other-killing conditions, respectively). The intentions of the hiker on the side track, now, should have been disambiguated to some degree: if he intended to save the lives of others with no regard for his own, he would send the train towards himself; if he intended to save the lives of the hikers on the main track while not harming himself, he would send the train towards another individual. The intentions of the hiker on the main track, by contrast, should be just as ambiguous as before; we shouldn’t know whether that hiker would or would not sacrifice himself, given the chance.

What is particularly interesting about the results from this experiment is how closely the ratings of the self-saving and other-killing actors matched up. Whether in terms of how immoral it was to direct the train, whether the puller should be punished, how much they should be punished, or how much they intended to help themselves and others, ratings were similar across the board in both US and international samples. Even more curious is that the self-saving puller – the one whose intentions should be the most ambiguous – was typically rated as behaving more immorally and self-interestedly – not less – though this difference wasn’t often significant. Being in a position to benefit yourself from acting in this context seems to do people no favors in terms of escaping moral condemnation, even if alternative courses of action aren’t available and the act is morally acceptable otherwise.

One final very interesting result of this experiment concerned the responses participants gave to the open-ended questions, “How many people [died/lived] because the lever was pulled?” On a factual level, these answers should be “1” and “5”, respectively. However, our participants had a somewhat different sense of things. In the self-saving condition, 35% of the international sample and 12% of the US sample suggested that only 4 people were saved (in the other-killing condition, these percentages were 1% and 9%, and in the self-sacrifice condition they were 1.9% and 0%, respectively). Other people said 6 lives had been saved: 23% and 50% in the self-sacrifice condition, 1.7% and 36% in the self-saving condition, and 13% and 31% in the other-killing condition (international and US samples, respectively). Finally, a minority of participants suggested that 0 people died because the train was redirected (13% and 11%), and these responses were almost exclusively found in the self-sacrifice condition. These results suggest that our participants were treating the welfare of the puller in a distinct manner from the welfare of others in the dilemma. The consequences of acting, it would seem, were not judged to be equivalent across scenarios, even though the same number of people actually lived and died in each.
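
Since those percentages are easy to lose track of in prose, here is a hedged restatement in Python (values copied from the text; the layout, labels, and the structure of the nested dictionary are mine):

```python
# Percentage of participants giving each non-literal answer to "How many
# people lived/died because the lever was pulled?" Tuples are
# (international sample, US sample), as reported in the text.
answers = {
    "4 lived (puller's life discounted)": {
        "self-saving": (35, 12), "other-killing": (1, 9), "self-sacrifice": (1.9, 0),
    },
    "6 lived (puller's life added in)": {
        "self-sacrifice": (23, 50), "self-saving": (1.7, 36), "other-killing": (13, 31),
    },
    "0 died (almost exclusively self-sacrifice)": {
        "self-sacrifice": (13, 11),
    },
}

for answer, by_condition in answers.items():
    print(answer)
    for condition, (intl, us) in by_condition.items():
        print(f"  {condition:>14}: {intl}% intl / {us}% US")
```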

“Thanks to the guy who was hit by the train, no one had to die!”

In sum, the experiments seem to demonstrate that questions of morality are not limited to considerations of actions and net consequences alone: to whom those consequences accrue seems to matter as well. Phrased more simply, in terms of moral judgments, the identity of actors seems to matter: my benefiting myself at someone else’s expense seems to have a much different moral feel than someone else benefiting me by doing exactly the same thing. Additionally, the inferences we draw about why people did what they did – what their intentions were – appear to be strongly affected by whether that person is perceived to have benefited as a result of their actions. Importantly, this appears to be true regardless of whether that person even had any alternative courses of action available to them. That latter finding is particularly noteworthy, as it might imply that moral judgments are, at least occasionally, driving judgments of intentions, rather than the typically-assumed reverse (that intentions determine moral judgments). Now if only there were a humble and certainly not self-promoting psychologist who would propose some theory for figuring out how and why the identity of actors and victims tends to matter…

References: Marczyk, J. & Marks, M. (in press). Does it matter who pulls the switch? Perceptions of intentions in the trolley dilemma. Evolution and Human Behavior.

Imagine Psychology Without People

In 1971, John Lennon released the now-iconic song “Imagine”. In the song, Lennon invites us to imagine a world without religion, countries, or personal possessions, where everyone coexists in peace with one another. Now, of course, this is not the world in which we exist. In fact, Lennon apparently preferred to keep this kind of world in the realm of imagination himself, using his substantial personal wealth to live a life well beyond his needs; a fact which Elton John once poked fun at, rewriting the lyrics to “Imagine” to begin: “Imagine six apartments; it isn’t hard to do. One’s full of fur coats; the other’s full of shoes”. While Lennon’s song might appear to have an uplifting message (at least superficially; I doubt many of us would really want to live in that kind of world if given the opportunity), the message of the song does not invite us to understand the world as it is: we are asked to imagine another world; not to figure out why our world bears little resemblance to that one.

My imaginations may differ a bit from John’s, but to each their own.

Having recently returned from the SPSP conference (Society of Personality and Social Psychology), I would like to offer my personal reflections on the general state of psychological research from my brief overview of what I saw at the conference. For the sake of full disclosure, I did not attend many of the talks, and I only casually browsed over most of the posters that I saw. The reason for this state of affairs, however, is what I would like to focus on today. After all, it’s not that I’m a habitual talk-avoider: at last year’s HBES conference (Human Behavior and Evolution Society), I found myself attending talks around the clock; in fact, I was actually disappointed that I didn’t get to attend more of them (owing in part to the fact that pools tend to conceal how much you’ve been drinking). So what accounted for the differences in my academic attendance at these two conferences? There are two particular factors I would like to draw attention to, which I think paint a good picture of my general impressions of the field of psychology.

The first of these factors was the organization of the two conferences. At HBES, the talks were organized, more or less, by topic: one room had talks on morality, another on life history, the next on cooperation, and so on. At SPSP, as far as I could tell, the talks were organized around no particular theme: they seemed to be grouped by whatever the people putting the various symposia together wanted to talk about, and those topics tended to be, at least from what I saw, rather narrow in focus. This brings me to the first big difference between the two conferences, then: the degree of consilience each evidenced. At HBES, almost all the speakers and researchers seemed to share a broader, common theoretical foundation: evolutionary theory. This common understanding was then applied to different sub-fields, but managed to connect all of them into some larger whole. The talks on cooperation played by the same rules, so to speak, as the talks on aggression. By contrast, the psychologists at SPSP did not seem to be working under any common framework. The result of this lack of common grounding is that most of these talks were islands unto themselves, and attending one of them probably wouldn’t tell you much about any others. That is to say, a talk at SPSP might give you a piece of evidence concerning a particular topic, but it wouldn’t help you understand how to think about psychology (or even that topic) more generally. The talks on self-affirmation probably wouldn’t tell you anything about the talks on self-regulation, which in turn bear little resemblance to talks on sexism.

The second big issue is related to the first, and it is where our tie-in to John Lennon’s song arises. I want you to imagine a world in which psychology was not, by and large, the study of human psychology and behavior in particular, but rather the study of psychology among life in general. In this world we’re imagining, humans, as a species, don’t exist as far as psychological research is concerned. Admittedly, such a suggestion might not lend itself as well to song as Lennon’s “Imagine”, but unlike Lennon’s song, this imagining actually leads us to a potentially useful insight. In this new world – psychology without people – I only anticipate that one of these two conferences would actually exist: HBES. The theoretical framework of the researchers at HBES can help us understand things like cooperation, the importance of kinship, signaling, and aggression regardless of what species we happen to be talking about. Again, there’s consilience when using evolutionary theory to study psychology. But what about the SPSP conference? If we weren’t talking about humans, would anyone seriously try to use concepts like the “glass ceiling”, “self-affirmation”, “stereotypes”, or “sexism” to explain the behavior of any non-human organisms? Perhaps; I’ve just never seen it happen.

“Methods: We exposed birds to a stereotype threat condition…”

Now, sure; plenty of you might be thinking something along the lines of, “but humans are special and unique; we don’t play by the same rules that all other life on this planet does. Besides, what can the behavior of mosquitoes, or the testicle size of apes, tell us about human psychology anyway?” Such a sentiment appears to be fairly common. What’s interesting to note about that thought, however, is not only that it seems to confirm that psychology suffers from a lack of consilience but, more importantly, that it is markedly mistaken. Yes; humans are a unique species, but then so is every other species on the planet. It doesn’t follow from our uniqueness that we’re not still playing the same game, so to speak, and being governed by the same rules. For instance, all species, unique as they are, are still subject to gravitational forces. By understanding gravity we can understand the behavior of many different falling objects; we don’t need separate fields of inquiry as to how one set of objects falls uniquely from the others. Insisting that humans are special in this regard would be a bit like an ornithologist insisting that the laws of gravity don’t apply to most bird species because they don’t fall like rocks tend to. Similarly, all life plays by the rules of evolution. By understanding a few key evolutionary principles, we can explain a remarkable amount of the variance in the way organisms behave without needing disparate fields for each species (or, in the case of psychology, disparate fields for every topic).

Let’s continue to imagine a bit more: if psychology had to go forward without studying people, how often do you think you would find people advocating suggestions like this:

If our university community opposes racism, sexism, and heterosexism, why should we put up with research that counters our goals simply in the name of “academic freedom”?…When an academic community observes research promoting or justifying oppression, it should ensure that this research does not continue.

Maybe in our imaginary world of psychological research without people there would be some who seriously suggested that we should not put up with certain lines of research. Maybe research on, say, the psychology of mating in rabbits should not be tolerated, not because it’s inaccurate, mind you, but rather because the results of it might be opposed to the predetermined conclusions of anti-rabbit-heterosexism-oppression groups. Perhaps research on how malaria seems to affect the behavior of mosquitoes shouldn’t be tolerated because it might be used to oppress mosquitoes with seemingly “deviant” or extreme preferences for human blood. Perhaps these criticisms might come up, but I don’t imagine such opposition would be terribly common when the topic wasn’t humans.

“Methods: We threatened the elephant seal’s masculinity…”

So why didn’t I attend as many talks at SPSP as I did at HBES? First, there was the lack of consilience: without the explicit use or consideration of evolutionary theory, a lot of the abstracts for research at SPSP sounded as if they would represent more of an intellectual spinning of wheels than a forwarding of our knowledge. This perception, I would add, doesn’t appear to be unique to me; certain psychological concepts seem to have a nasty habit of decaying in popularity over time. I would chalk that up to their not being anchored to or drawn from some underlying theoretical concept, but I don’t have the data on hand to back that up empirically at the moment. The second reason I didn’t attend as many talks at SPSP was that some of them left me with the distinct sense that the research was being conducted with some social or political goal in mind. While that doesn’t necessarily disqualify the research from being valuable, it does immediately make me skeptical (for instance, if you’re researching “stereotypes”, you might want to test their accuracy before you write them off as a sign of bias; this was not done at the talks I saw).

Now, all of this is not said simply in the service of being a contrarian (fun as that can be), nor am I saying that every piece of research to come out of an evolutionary paradigm is good; I have attended many low- to mid-quality talks and posters at the evolutionary conferences I’ve been to. Rather, I say all this because I think there’s a lot of potential for psychological research in general to improve, and the improvement itself wouldn’t be terribly burdensome to achieve. The tools are already at our disposal. If we can collectively manage to stop thinking of human behavior as something requiring a special set of explanations and start seeing it within a larger evolutionary perspective, a substantial amount of the battle will already be won. It just takes a little imagination.

Does Grief Help Recalibrate Behavior?

Here’s a story which might sound familiar to all of you: one day, a young child is wandering around the kitchen while his parents are cooking. This child, having never encountered a hot stove before, reaches up and brushes his hand against the hot metal. Naturally, the child experiences physical pain and withdraws his hand. In order to recalibrate his behavior so as to avoid future harms, then, the child spends the next week unable to get out of bed – owing to persistent low energy – and repeatedly thinks about touching the hot stove and how sad it made him feel. For the next year, the child returns to the spot where he burned his hand, leaving flowers and crying for a time in remembrance. OK; so maybe that story doesn’t sound familiar at all. In fact, the story seems absurd on the face of it: why would the child go through all that grief in order to recalibrate his stove-touching behavior when he could, it seems, simply avoid touching the hot stove again? What good would all that additional costly grief and depression do? Excellent question.

Unfortunately, chain emails do not offer learning trials for recalibration.

In the case of the hot stove, we could conclude that grief would likely not add a whole lot to the child’s ability to recalibrate his behavior away from stove-touching. It doesn’t seem like a very efficient way of doing so, and the fit between the design features of grief and the demands of recalibration seems more than a bit mismatched. I bring these questions up in response to a suggestion I recently came across by Tooby & Cosmides, with whom I generally find myself in agreement (it’s not a new suggestion; I just happened to come across it now). The pair, in discussing emotions, have this to say about grief:

Paradoxically, grief provoked by death may be a byproduct of mechanisms designed to take imagined situations as input: it may be intense so that, if triggered by imagination in advance, it is properly deterrent. Alternatively – or additionally – grief may be intense in order to recalibrate weightings in the decision rules that governed choices prior to the death. If your child died because you made an incorrect choice (and given the absence of a controlled study with alternative realities, a bad outcome always raises the probability that you made an incorrect choice), then experiencing grief will recalibrate you for subsequent choices. Death may involve guilt, grief, and depression because of the problem of recalibration of weights on courses of action. One may be haunted by guilt, meaning that courses of action retrospectively judged to be erroneous may be replayed in imagination over and over again, until the reweighting is accomplished.

So Tooby and Cosmides posit two possible functions for grief here: (1) there isn’t a function per se; it’s just a byproduct of a mechanism designed to use imagined stimuli to guide future behavior, and (2) grief might help recalibrate behavior so as to avoid outcomes that previously have carried negative fitness consequences. I want to focus on the second possibility because, as I initially hinted at, I’m having a difficult time seeing the logic in it.

One issue I seem to be having concerns the suggestion that people might cognitively replay traumatic or grief-inducing events over and over in order to better learn from them. Much like the explanation often on offer for depression, then, grief might function to help people make better decisions in the future. That seems to be the suggestion Tooby & Cosmides are getting at, anyway. As I’ve written before, I don’t think this explanation is plausible on the face of it. At least in terms of depression, there’s very little evidence that depression actually helps people make better decisions. Even if it did, however, it would raise the question as to why people don’t always make use of this strategy. Presumably, if people could learn better by replaying events over and over, one might wonder why we don’t always do that; why would we ever perform worse, when we could be performing better? In order to avoid making what I’ve nicknamed the Dire Straits fallacy (from their lyric “money for nothing and the chicks for free”), the answer to that question would inevitably involve referencing some costs to replaying events over and over again. If there were no such costs to replay, and replay led to better outcomes, replay should be universal, which it isn’t; at least not to nearly the same degree. Accordingly, any explanation for why people use grief as a mechanism for improved learning outcomes would need to make some reference as to why grief-struck individuals are better able to suffer those costs for the benefits continuous replay provides. Perhaps such an explanation exists, but it’s not present here.

One might also wonder what replaying some tragic event over and over would help one learn. That is, does replaying the event actually help one extract additional useful information from the memory? As we can see from the initial example, rumination is often not required to quickly and efficiently learn connections between behaviors and outcomes. To use the Tooby & Cosmides example, if your child died because you made an incorrect choice, why would ruminating for weeks or longer help you avoid making that choice again? The answer to that question should also explain why rumination is not required for effective learning in the case of touching the hot stove.

It should only be a few more weeks of this until she figures out that babies need food.

One might also suggest that once the useful behavioral-recalibration-related information has been extracted from the situation, replaying the grief-inducing event would seem to be wasted time, so the grief should stop. Tooby & Cosmides make this suggestion, writing:

After the 6-18 month period, the unbidden images suddenly stop, in a way that is sometimes described as “like a fever breaking”: this would be the point at which the calibration is either done or there is no more to be learned from the experience.

The issue I see with that idea, however, is that unless one is positing that it can take weeks, months, or even years to extract the useful information from the event, it seems unlikely that much of that replay involves helping people learn and extract information. Importantly, to the extent that decisions like these (i.e., “what were you doing that led to your child’s death that you shouldn’t do again?”) were historically recurrent and posed adaptive problems, we should expect evolved cognitive decision-making modules to learn from them quickly and efficiently. A mechanism that takes weeks, months, or even years to learn from an event by playing it over and over again should be at a massive disadvantage, relative to a mechanism that can make those same learning gains in seconds or minutes. A child that needed months to learn not to touch a hot stove might be at risk of touching the stove again; if the child immediately learned not to do so, there’s little need to go on grieving about it for months following the initial encounter. Slow learning is, on the whole, a bad thing which carries fitness costs; not a benefit. Unless there’s something special about grief-related learning that requires that it take so long – some particularly computationally-demanding problem – the length of grief seems like a peculiar design feature for recalibrating one’s own behavior.

This, of course, all presumes that the grief-recalibration learning mechanisms know how to recalibrate behavior in the first place. If your child died because of a decision you made, there are likely very many decisions you made which might or might not have contributed to that outcome. Accordingly, there are very many ways in which you might potentially recalibrate your behavior to avoid such an outcome in the future, very few of which will actually be of any use. So your grief mechanism would need to know which decisions to focus on, at a minimum. Further still, the mechanism would need to know whether recalibration was even possible in the first place. In the case of a spouse dying from something related to old age, or a child dying from an illness or accident, all the grieving in the world wouldn’t necessarily be able to effect any useful change the next time around. So we might predict that people should only tend to grieve selectively: when doing so might help avoid such outcomes in the future. This means people shouldn’t tend to grieve when they’re older (since they have less time to potentially change anything) or about negative outcomes beyond their control (since no recalibration would help). As far as I know (which, admittedly, isn’t terribly far in this domain), this isn’t the case. Perhaps an astute reader could direct me to research where predictions like these have been tested.

Finally, humans are far from the only species which might need to recalibrate their behavior. Now, it’s difficult to say precisely what other species feel, since you can’t just ask them, but do other species feel grief the same way humans do? The grief-as-recalibration model might predict that they should. Again, the depth of my knowledge on grief is minimal, so I’m forced to ask these questions out of genuine curiosity: do other species evidence grief-related behaviors? If so, in what contexts are these behaviors common, and why might those contexts be expected to require more behavioral recalibration than non-grief-inducing situations? If animals do not show any evidence of grief-related behaviors, why not? These are all matters which would need to be sorted out. To avoid the risk of being critical without offering any alternative insight, I would propose an alternative function for grief similar to what Ed Hagen proposed for depression: grief functions to credibly signal one’s social need.

“Aww. Looks like someone needs a hug”

Events that induce grief – like the loss of close social others or other major fitness costs – might tend to leave the griever in a weakened social position. The loss of mates, allies, or access to resources poses major problems to species like ours. In order to entice investment from others to help remedy these problems, however, you need to convince those others that you actually do have a legitimate need. If your need is not legitimate, then investment in you might be less liable to pay off. Costly, extended periods of grief, then, might help signal to others that one’s need is legitimate, and make one appear to be a better target of subsequent investment. The adaptive value of grief in this account lies not in what it makes the griever do per se; what the griever is doing is likely maladaptive in and of itself. However, that personally-maladaptive behavior can have an effect on others, leading them to provide benefits to the grieving individual in an adaptive fashion. In other words, grief doesn’t serve to recalibrate the griever’s behavior so much as it serves to recalibrate the behavior of the social others who might invest in the griever.

Of Pathogens And Social Support

Though I’m usually consistent about updating once a week, this last week and a half has found me out of sorts. Apparently, some infection managed to get the better of my body for a while, and most of the available time I had went into managing my sickness and taking care of the most important tasks. Unfortunately, that also meant taking time away from writing, but now that I’m back on my feet I would like to offer some reflections on that rather grueling experience. One rather interesting – or annoying, if you’re me – facet of this last infection was the level of emotional intensity I found myself experiencing: I felt as if I wanted to be around other people while I was sick, which is something of an unusual experience for me; I found myself experiencing a greater degree of empathy with other people’s experiences than usual; and I found myself feeling, for lack of a better word, lonely, and a bit on the anxious side. Being the psychologist that I am, I couldn’t help but wonder what the ultimate function of these emotional experiences was. They certainly seemed to be driving me towards spending time around other people, but why?

And don’t you dare tell me it’s because company is pleasant; we all know that’s a lie.

Specifically, my question was whether these feelings of wanting to spend more time around others were being driven primarily by some psychological mechanism of mine functioning in my own fitness interests, or whether they were being driven by whatever parasite had colonized parts of my body. A case could be made for either option, though the case for parasite manipulation is admittedly more speculative, so let’s start with the idea that my increased desire for human contact might have been the result of the proper functioning of my own psychology. Though I do not have any research on hand that directly examines the link between sickness and the desire for social closeness with others, I happen to have what is, perhaps, the next best thing: a paper by Aaroe & Petersen (2013) examining what effects hunger has on people’s willingness to advocate for resource-sharing behavior. Since the underlying theory behind the sickness-induced emotionality on my part and the hunger-induced resource sharing are broadly similar, examining the latter can help us understand the former.

Aaroe & Petersen (2013) begin with a relatively basic suggestion: resource acquisition posed an adaptive problem to ancestral human populations. We all need caloric resources to build and maintain our bodies, as well as to do all the reproductively-useful things that organisms which move about their environment do. One way of solving this problem, of course, is to go out hunting or foraging for food oneself. However, this strategy can, at times, be unsuccessful. Every now and again, people will come home empty-handed and hungry. If one happens to be a member of a social species, like us, that’s not the only game in town, though: if you’re particularly cunning, you can manipulate successful others into sharing some of their resources with you. Accordingly, Aaroe & Petersen (2013) further suggest that humans might have evolved some cognitive mechanisms that respond to bodily signals of energy scarcity by attempting to persuade others to share more. Specifically, if your blood glucose level is low, you might be inclined to advocate for social policies that encourage others to share their resources with you.

As an initial test of this idea, the researchers had 104 undergraduates fast for four hours prior to the experiment. As if not eating for four hours wasn’t already a lot to ask, upon their arrival at the experiment all the participants had their blood glucose levels measured, in a process I can only assume (unfortunately for them) involved a needle. After the initial measurement, half the subjects were given a sugar-rich drink (Sprite) and the other half a sugarless drink (Sprite Zero). Ten minutes after the drink, blood glucose levels were measured again (and a third time as participants were leaving, which is a lot of pokes), and participants were asked about their support for various social redistribution policies. They were also asked to play a dictator game and divide approximately $350 between themselves and another participant, with one set of participants actually getting the money in that division. So the first test was designed to see whether participants would advocate for more sharing behavior when they were hungry, whereas the second test was designed to see whether participants would actually demonstrate more generous behavior themselves.

Way to really earn your required undergrad research credits.

The results showed that the participants who had consumed the sugar-rich drink had higher blood glucose levels than the control group, and were also approximately 10% less supportive of social-welfare policies than those in the sugar-free condition. This lends some support to the idea that our current hunger level, at least as assessed by blood glucose, helps determine how much we are willing to advocate that other people share with one another: hungry individuals wanted more sharing, whereas less-hungry individuals wanted less. What about their actual sharing behavior, though? As it turns out, those who support social-welfare policies are more likely to share with others, but those who had low blood glucose were less likely to do so. These two effects ended up washing out, with the result being that blood glucose had no effect on how much of a potential resource participants actually decided to share. While hungry individuals advocated that other people should share, then, they were no more likely to share themselves. They wanted others to be more generous without paying the costs of such generosity personally.
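
That “washing out” is a textbook case of two opposing pathways canceling in a total effect. Here is a toy simulation of that pattern – the coefficients and structure are entirely made up for illustration, not the authors’ data or analysis – showing how a variable can push an outcome up through one route and down through another, leaving roughly no net correlation:

```python
import random

random.seed(1)

n = 10_000
glucose_levels, shares = [], []
for _ in range(n):
    glucose = random.gauss(0, 1)              # low glucose = hungry
    # Hungry people support welfare policies more (negative path)...
    support = -0.5 * glucose + random.gauss(0, 1)
    # ...and supporters share more, but hunger directly reduces sharing.
    share = 0.5 * support + 0.25 * glucose + random.gauss(0, 1)
    glucose_levels.append(glucose)
    shares.append(share)

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Indirect path (0.5 * -0.5 = -0.25) cancels the direct path (+0.25),
# so the total glucose-sharing correlation comes out near zero.
print(f"glucose vs. sharing: r = {corr(glucose_levels, shares):.3f}")
```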

So perhaps my sickness-induced emotionality reflected something along those same lines: sick individuals find themselves unable to complete all sorts of tasks – such as resource acquisition or defense – as effectively as non-sick individuals. Our caloric resources are likely being devoted to other tasks, such as revving up our immune response. Thus, I might have desired that other people, in essence, take care of me while I was sick, with those emotions – such as increased loneliness or empathy – providing the proximate motivation to seek out such investment. If the current results are any indication, however, I would be unlikely to practice what I preach; I would want people to take care of me without my helping them any more than usual. How very selfish of me and my emotions. So that covers the idea that my behavior was driven by some personal fitness benefit, but what about the alternative? The pathogens that were exploiting my body have their own set of fitness interests, after all, and part of those interests involves finding new hosts to exploit and reproduce in. It follows, at least in theory, that the pathogens might be able to increase their own fitness by manipulating my mind in such a way as to encourage me to seek out other conspecifics in my environment.

The more time I spent around other individuals, the greater the chance I would spread the infection, especially given how much I was coughing. If the pathogens could affect my desire to be around others by making me feel lonely or anxious, then, they could increase their own fitness. This idea is by no means far-fetched. There are many known instances of pathogens influencing their host’s behavior, and I’ve written a little bit before about one of them: the psychological effects that malaria can have on the behavior of its host mosquitoes. Mosquitoes which are infected with malaria seem to preferentially feed from humans, whereas mosquitoes not so infected show no evidence of such preferential behavior. This likely owes to the malaria benefiting itself by manipulating the behavior of its mosquito host. The malaria wants to get from human to human, but it needs to do so via mosquito bites. If the malaria can make its host preferentially try to feed from humans, the malaria can reproduce more quickly and effectively. There are also some plausible theoretical reasons for suspecting that some pathogen(s) might play a role in the maintenance of human homosexual orientations, at least in males. The idea that pathogens can affect our psychologies more generally, then, is far from an impossibility.

“We hope you don’t mind us making your life miserable for this next week too much, because we’re doing it anyway.”

The question of interest, however, is whether the pathogens were directly responsible for my behavior or not. As promised, I don’t have an answer to that question. I don’t know what I was infected with specifically, much less what compounds it was or wasn’t releasing into my body, or what effect they might have had on my behavior. Further, if I already possessed some adaptations for seeking out social support when sick, there would be less of a selective pressure for the pathogens to encourage my doing so; I would already be spreading the pathogen incidentally through my behavior. The real point of this question is not necessarily to answer it, however, as much as it is to get us thinking about how our psychology might not, at least at times, be entirely our own, so to speak. There are countless other organisms living within (and outside of) our bodies that have their own sets of fitness interests, which they might prefer we indulge, even at the expense of our own. As for me, I’m just happy to be healthy again, and to feel like my head is screwed back on where it used to be.

References: Aaroe, L. & Petersen, M. (2013). Hunger games: Fluctuations in blood glucose levels influence support for social welfare. Psychological Science, 24, 2550-2556.

Why Parents Affect Children Less Than Many People Assume

Despite what a small handful of detractors have had to say, inclusive fitness theory has proved to be one of the most valuable ideas we have for understanding much of the altruism we observe in both human and non-human species. The basic logic of inclusive fitness theory is simple: genes can increase their reproductive fitness by benefiting other bodies that contain copies of them. So, since you happen to share 50% of your genes in common by descent with a full sibling, you can, to some extent, increase your own reproductive fitness by increasing theirs. This logic is captured by the deceptively-tiny formula rb > c. In English, rather than math, the formula states that altruism will be favored so long as the benefit delivered to the receiver, discounted by the degree of relatedness between the two, is greater than the cost to the giver. To use the sibling example again, altruism would be favored by selection if the benefit you provided to a full sibling increased their reproductive success by at least twice as much as it cost you to give, even if there was zero reciprocation.

“You scratch my back, and then you scratch my back again”

While this equation highlights why a lot of “good/nice” behaviors – like childcare – are observed, there’s a darker side to it as well. By dividing each side of the inclusive fitness equation by r, you get this: b > c/r. What this new equation highlights is the selfish nature of these interactions: relatives can be selected to benefit themselves by inflicting costs on their kin. In the case of full siblings, I should be expected to value benefits to myself twice as much as benefits to them; for half siblings, I should value myself four times as much, and so on. Let’s stick to full siblings for now, just to stay consistent. Each sibling within a family should, all else being equal, be expected to value itself twice as much as it values any other sibling. The parents of these siblings, however, see things very differently: from the perspective of the parent, each of these siblings is equally related to them, so, in theory, they should value each of these offspring equally (again, all else being equal; all else is almost never equal, but let’s assume it is to keep the math easy).
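
To make the algebra explicit, here is the same manipulation written out in full (a restatement of the formulas already in the text, with r = 1/2 for full siblings; nothing beyond Hamilton’s rule is assumed):

```latex
% Hamilton's rule: altruism is favored when the relatedness-discounted
% benefit exceeds the cost.
\[
  rb > c \quad\Longleftrightarrow\quad b > \frac{c}{r}
\]
% For full siblings, r = 1/2, so an actor should only help a sibling when
% the sibling's benefit is at least twice the actor's cost:
\[
  \tfrac{1}{2}b > c \quad\Longleftrightarrow\quad b > 2c
\]
% A parent, equally related to both offspring, prefers help whenever b > c.
```

This is the gap the next paragraph turns on: parents favor sibling-helping whenever b > c, while each child holds out for b > 2c.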

This means that parents should prefer that their children act in a particular way: specifically, parents should prefer their children to help each other whenever the benefit to one outweighs the cost to the other, or b > c. The children, on the other hand, should only wish to behave that way when the benefit to their sibling is at least twice the cost to themselves, or b > 2c. This yields the following conclusion: how parents would like their children to behave does not necessarily correspond to what is in the child’s best fitness interests. Parents hoping to maximize their own fitness have different best interests from the children hoping to maximize theirs. Children who behave as their parents would prefer would be at a reproductive disadvantage, then, relative to children who were resistant to such parental expectations. This insight was formalized by Trivers (1974) when he wrote:

  “…an important feature of the argument presented here is that offspring cannot rely on parents for disinterested guidance. One expects the offspring to be pre-programmed to resist some parental teachings while being open to other forms. This is particularly true, as argued below, for parental teachings that affects the altruistic and egoistic tendencies of the offspring.” (p. 258)

While parents might feel as if they are only acting in the best interests of their children, the logic of inclusive fitness strongly suggests that this feeling might represent an attempt at manipulation, rather than a statement of fact. To avoid the risk of sounding one-sided, this argument cuts in the other direction as well: children might experience their parents’ treatment of them as being less fair than it actually is, as each child would like to receive twice the investment that parents should naturally be willing to give. The take-home message of this point, however, is simply that children who were readily molded by their parents should be expected to have reproduced less – passing on those malleable tendencies less often – relative to children who were not so affected. In some regards, children should be expected to actively disregard what their parents want for them.

“My parents want me to brush my teeth. They’re such fascists sometimes.”

There are other reasons to expect that parents should not tend to leave lasting impressions on their children’s eventual personalities. One of those reasons also has to do with the inclusive fitness logic laid out initially: because parents tend to be 50% genetically related to their children, parents should be expected to invest in their children fairly heavily, relative to non-children at least. The corollary to this idea is that non-parents should be expected to treat the child substantially differently than the parents do. This means that a child should be relatively unable to learn what counts as appropriate behavior towards others more generally from interactions with their parents. Just because a proud parent has hung their child’s scribbled artwork on the household refrigerator, it doesn’t mean that anyone else will come to think of the child as a great artist. A relationship with your parents is different from a relationship with your friends, which is different from a sexual relationship, in a great many ways. Even within these broad classes of relationships, you might behave differently with one friend than you do with another.

We should expect our behavior around these different individuals to be context-specific. What you learn about one relationship might not readily transfer to any other. Though a child might be unable to physically dominate their parents, they might be able to dominate their peers; some jokes might be appropriate amongst friends, but not with your boss. Though some of what you learn about how to behave around your parents might transfer to other situations (such as the language you speak, if your parents happen to be speakers of the native tongue), it also may not. When it does not transfer, we should expect children to discard what they learned about how to behave around their parents in favor of more context-appropriate behaviors (indeed, when children find that their parents speak a different language than their peers, the child will predominantly learn to speak as their peers do; not as their parents do). While a parent’s behavior should be expected to influence how that child behaves around that parent, we should not necessarily expect it to influence the child’s behavior around anyone else.

It should come as little surprise, then, that being raised by the same parents doesn’t actually tend to make children any more similar with respect to their personality than being raised by different ones. Tellegen et al. (1988) compared 44 identical twin (MZ) pairs raised apart with 217 identical twin pairs reared together, along with 27 fraternal twin (DZ) pairs reared apart and 114 reared together. The MZ twins were far more alike than the DZ twins, as one would expect from their shared genetics. When it came to the personality measures, however, MZ twins reared together were more highly correlated on 7 of the measures, while those reared apart were more highly correlated on 6 of them. In terms of the DZ twins, those reared together were higher on 9 of the variables, whereas those reared apart were higher on the remaining 5. The size of these differences, when they did exist, was often exceedingly small, typically amounting to a correlation difference of about 0.1 between the pairs, or 1% of the variance.
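
As a rough illustration of how such comparisons are read, here is a sketch using made-up correlations (not Tellegen et al.’s actual values, though the reported together/apart gaps averaged around 0.1): if genes alone drove similarity, MZ twins reared apart should correlate about as highly as those reared together, and the together-minus-apart gap is a naive upper bound on what growing up in the same household contributes:

```python
# Hypothetical correlations chosen purely for illustration.
r_mz_together = 0.50   # identical twins reared in the same home
r_mz_apart    = 0.45   # identical twins reared in different homes

# Naive estimate of the shared (household) environment's contribution:
shared_env_gap = r_mz_together - r_mz_apart
print(f"together-minus-apart gap: {shared_env_gap:.2f}")

# The author's variance framing: a correlation of r accounts for r^2 of
# the variance, so a 0.1 correlation difference is on the order of 1%.
r_diff = 0.10
print(f"r = {r_diff} corresponds to r^2 = {r_diff**2:.2%} of the variance")
```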

Pick the one you want to keep. I’d recommend the cuter one.

Even if twins reared together ended up being substantially more similar than twins reared apart – which they didn’t – this would still not demonstrate that parenting was the cause of that similarity. After all, twins reared together tend to share more than their parents; they also tend to share various aspects of their wider social life, such as extended families, peer groups, and other social settings. There are good empirical and theoretical reasons for thinking that parents have less of a lasting effect on their children than many often suppose. That’s not to say that parents don’t have any effects on their children, mind you; just that the effects they have ought to be largely limited to their particular relationship with the child in question, barring the infliction of any serious injuries or other such issues that will transfer from one context to another. Parents can certainly make their children more or less happy when they’re in each other’s presence, but so can friends and more intimate partners. In terms of shaping their children’s later personality, it truly takes a village.

References: Tellegen et al. (1988). Personality similarity in twins reared apart and together. Journal of Personality and Social Psychology, 54, 1031-1039.

Trivers, R. (1974). Parent-offspring conflict. American Zoologist, 14, 249-264.

Which Ideas Are Ready To Go To Florida?

Recently, Edge.org posed its yearly question to a number of different thinkers, who were given 1,000 words or less to provide their answers. This year, the topic was, “What scientific idea is ready for retirement?”, and responses were received from about 175 people. This question appeared to come on the heels of a quote from Max Planck, who suggested that new ideas tend to gain their prominence not by actively convincing their opponents that they are correct, but rather when those who hold to alternative ideas die off themselves. Now, Edge.org did not query my opinion on the matter (and the NBA has yet to draft me, for some unknowable reason), so I find myself relegated to the sidelines, engaging in the always-fun pastime of lobbing criticisms at others. Though I did not read through all the responses – as many of them fall outside my area of expertise, and I’m already suffering from many more pressing demands on my time – I did have some general reactions to some of the answers people provided.

Sticks and stones can break their bones? Good enough for me.

The first reaction I had is with respect to the question itself. Planck was likely onto something when he noted that ideas are not necessarily accepted by others owing to their truth value. As I have discussed a few times before, there are, in my mind anyway, some pretty compelling reasons for viewing human reasoning abilities as something other than truth-finders: first, people’s ability to successfully reason about a topic often hinges heavily on the domain in question. Whereas people are skilled reasoners when it comes to social contracts, they are poor at reasoning about more content-neutral domains. In that regard, there doesn’t seem to be a general-purpose reasoning mechanism that works equally well in all scenarios. Second, people’s judgments of their performance on reasoning tasks are often relatively uncorrelated with their actual performance. Most people appear to rate their performance in line with how easy or difficult a task felt, and, in some cases, being wrong happens to feel a lot like being right. Third, people are often found to ignore or find fault with evidence that doesn’t support their view, but will accept evidence that does fit with their beliefs much less critically. Importantly, this seems to hold when the relative quality of the evidence in question is held constant. Having a WEIRD sample might be flagged as a problem for a study that reaches a conclusion unpalatable to the person assessing it, but is unlikely to be mentioned if the results are more agreeable.

Finally, there are good theoretical reasons for thinking that reasoning can be better understood by positing that it functions to persuade others, rather than to seek truth per se. This owes to the fact that being right is not necessarily always the most useful thing to be. If I’m not actually going to end up being successful in the future, for instance, it still might pay for me to try to convince other people that my prospects are pretty good, so they won’t abandon me like the poor investment I am. Similarly, if I happen to advocate a particular theory that most of my career is built upon, abandoning that idea because it’s wrong could mean doing severe damage to my reputation and job prospects. In other words, there are certain ways that people can capture benefits in the social world by convincing others of untrue things. While that’s all well and good, it would seem to frame the Edge question in a very peculiar light: we might expect that people – those who responded to the Edge’s question included – tend to advocate that certain ideas should be relinquished, but their motivations and reasons (conscious or otherwise) for making that suggestion are based in many things other than the idea’s truth value. As the old quote about evolution goes, “Let’s hope it is not true. But if it is true, let’s pray that it doesn’t become widely known”.

As an example, let’s consider the reply from Matt Ridley, who suggests that Malthus’ ideas on population growth were wrong. The basic idea Malthus had was that resources are finite, and that populations, if unchecked, would continue to grow to the point that people would eventually live pretty unhappy lives, owing to the scarcity of resources relative to the size of the population. There would be more mouths wanting food than available food, which is a pretty unsatisfactory way to live. Matt states, in his words, that “Malthus and his followers were wrong, wrong, wrong”. Human ingenuity has come to the rescue, and people have become better and better at using the available resources in more efficient ways. The human population has continued to grow, often unchecked by famine (at least in most first-world nations). If anything, many people have access to too much food, leading to widespread obesity. While all this is true enough, and Malthus appears to have been wrong with respect to certain specifics, one would be hard-pressed to say that the basic insights themselves are worthy of retirement. For starters, human population growth has often come at the expense of many other species, plant and animal alike; we’ve made more room for ourselves not just by getting better at using the resources we do have, but by ensuring other species can’t use them either.

As it turns out, dead things are pretty poor competition for us.

Not only has our expansion come at the expense of other species that find themselves suddenly faced with a variety of scarcities, but there’s also no denying that population growth will, at some point, be checked by resource availability. Given that humans are discovering new ways of doing things more efficiently than we used to, we might not have hit that point yet, and we might not hit it for some time. It does not follow, however, that such a point does not, at least in principle, exist. While there is no theoretical upper limit on the number of people who might exist, the ability of human ingenuity to continuously improve our planet’s capacity to support all those people is by no means a guarantee. While technology has improved markedly since the time of Malthus, there’s no telling how long such improvements will be sustained. Perhaps technology could continue to improve indefinitely, just as populations can grow if unconstrained, but I wouldn’t bet on it. While Malthus might have been wrong about some details, I would hesitate to find his underlying ideas a home on a golf course close to the beach.

Another reply which stood out to me came from Martin Nowak. I have been critical of his ideas about group selection before, and I’m equally critical of his answer to the Edge question. Nowak wants to prematurely retire the 50-year-old idea of inclusive fitness: the idea that genes can benefit themselves by benefiting various bodies that contain copies of them, discounted by the probability of their being in that other body. Nowak seems to want to retire the concept for two primary reasons: first, he suggests it’s mathematically inelegant. In the process of doing so, Nowak appears to insinuate that inclusive fitness represents some special kind of calculation that is both (a) mathematically impossible and (b) identical to calculations derived from standard evolutionary fitness calculations. On this account, Nowak seems to be confused: if the inclusive fitness calculations lead to the same outcome as standard fitness calculations, then either there’s something impossible about standard evolutionary theory (there isn’t), or inclusive fitness isn’t some special kind of calculation (it isn’t).

Charitable guy that Nowak is, he does mention that the inclusive fitness approach has generated a vast literature of theoretically and empirically useful findings. Again, this seems strange if we’re going to take him at his word that the idea is obviously wrong and should be retired. If it’s still doing useful work, retirement seems premature. Nowak doesn’t stop there, though: he claims that no one has empirically tested inclusive fitness theory, because researchers haven’t been making precise fitness calculations in wild populations. This latter criticism is odd on a number of fronts. First, it seems to misunderstand the type of evidence that evolutionary researchers look for, which is evidence of special design; counting offspring directly is often not terribly useful in that regard. The second issue I see with that suggestion is that, perhaps ironically, Nowak’s favored alternative – group selection – has yet to make a single empirical prediction that could not also be made by an inclusive fitness approach (though inclusive fitness theorizing has successfully generated and supported many predictions which group selection cannot readily explain). Of all of Nowak’s work I have come across, I haven’t found an empirical test in any of his papers. Perhaps they exist but, if he is so sure that inclusive fitness theory doesn’t work (or is identical to other methods), then demonstrating so empirically should be a cakewalk for him. I’ll eagerly await his research on that front.

I’m sure they’ll be here any day now…

While this only scratches the surface of the responses to the question, I would caution against retiring many of the ideas that were singled out in the answers. Just as a general rule, ideas in science should be retired, in my mind, when they can be demonstrated without (much of) a doubt to be wrong and to seriously lead people astray in their thinking. Even then, it might only require us to retire an underlying assumption, rather than the core of the idea itself. Saying that we should retire inclusive fitness “because no one ever really tested it as I would like” is a poor reason for retirement; retiring the ideas of Malthus because we aren’t starving in the streets at the moment also seems premature. Instead of asking what ideas should be retired wholesale, a better question would be, “What evidence would convince you that you’re mistaken, and why would that evidence do so?” Questions like that not only help ferret out problematic assumptions, but they might also generate useful forward momentum, both empirically and theoretically. Maybe the Edge could consider some variant of that question for next year.

Truth And Non-Consequences

A topic I’ve been giving some thought to lately concerns the following question: are our moral judgments consequentialist or nonconsequentialist? As the words might suggest, the question concerns the extent to which our moral judgments are based on the consequences that result from an action or on the behavior per se that people engage in. We frequently see a healthy degree of inconsistency around the issue. Today I’d like to highlight a case I came across while rereading The Blank Slate, by Steven Pinker. Here’s part of what Steven had to say about whether any biological differences between groups could justify racism or sexism:

“So could discoveries in biology turn out to justify racism and sexism? Absolutely not! The case against bigotry is not a factual claim that humans are biologically indistinguishable. It is a moral stance that condemns judging an individual according to the average traits of certain groups to which the individual belongs.”

This seems like a reasonable statement, on the face of it. Differences between groups, on the whole, do not necessarily mean differences on the same trait between any two given individuals. If a job calls for a certain height, in other words, we should not discriminate against women just because men tend to be taller. That average difference does not mean that many men and women are not the same height, or that the reverse relationship never holds.

Even if it generally does…

Nevertheless, there is something not entirely satisfying about Steven’s position, namely that people are not generally content to say “discrimination is just wrong“. People like to try to justify their stance that it is wrong, lest the proposition be taken to be an arbitrary statement with no more intrinsic appeal than “trimming your beard is just wrong“. Steven, like the rest of us, thus tries to justify his moral stance on the issue of discrimination:

“Regardless of IQ or physical strength or any other trait that can vary, all humans can be assumed to have certain traits in common. No one likes being enslaved. No one likes being humiliated. No one likes being treated unfairly, that is, according to traits that the person cannot control. The revulsion we feel toward discrimination and slavery comes from a conviction that however much people vary on some traits, they do not vary on these.”

Here, Steven seems to be trying to have his nonconsequentialist cake and eat it too*. If the case against bigotry is “absolutely not” based on discoveries in biology or a claim that people are biologically indistinguishable, then it seems peculiar to reference biological facts concerning some universal traits to try and justify one’s stance. Would the discovery that certain people dislike being treated unfairly to different degrees justify treating them so, all else being equal? If it would, the first quoted idea is wrong; if it would not, the second statement doesn’t make much sense. What is also notable about these two quotes is that they are not cherry-picked from different sections of the book; the second quote comes from the paragraph immediately following the first. I found their juxtaposition rather striking.

With respect to the consequentialism debate, the fact that people try to justify their moral stances in the first place seems strange from a nonconsequentialist perspective: if a behavior is just wrong, regardless of the consequences, then it needs no explanation or justification. Stealing, in that view, should be just wrong; it shouldn’t matter who stole from whom, or the value of the stolen goods. A child stealing a piece of candy from a corner store should be just as wrong as an adult stealing a TV from Best Buy; it shouldn’t matter that Robin Hood stole from the rich and gave to the poor, because stealing is wrong no matter the consequences, and he should be condemned for it. Many people would, I imagine, agree that not all acts of theft are created equal, though. On the topic of severity, many people would also agree that murder is generally worse than theft. Again, from a nonconsequentialist perspective, this should only be the case for arbitrary reasons, or at least reasons that have nothing at all to do with the fact that murder and theft have different consequences. I have tried to think of what those other, nonconsequentialist reasons might be, but I appear to suffer from a failure of imagination in that respect.

Might there be some findings that ostensibly support the notion that moral judgments are, at least in certain respects, nonconsequentialist? Yes; in fact, there are. The first of these comes from a pair of related dilemmas known as the trolley and footbridge dilemmas. In both contexts, one life can be sacrificed so that five lives are saved. In the former dilemma, a train heading towards five hikers can be diverted to a side track where there is only a single hiker; in the latter, a train heading towards five hikers can be stopped by pushing a person in front of it. In both cases the welfare outcomes are identical (one dead; five not), so if moral judgments only track welfare outcomes, there should be no difference between these scenarios. Yet there is: about 90% of people will support diverting the train, while only about 10% tend to support pushing (Mikhail, 2007). This would certainly be a problem for any theory of morality claiming that the function of moral judgments more broadly is to make people better off on the whole; moral judgments that fail to maximize welfare would be indicative of poor design for such a function.

Like how this bathroom was poorly optimized for personal comfort.

There are concerns with the idea that this finding supports moral nonconsequentialism, however: namely, the judgments of moral wrongness for pushing or redirecting are not definitively nonconsequentialist. People oppose pushing others in front of trains, I would imagine, because of the costs that pushing inflicts on the individual being pushed. If the dilemma were reworded to one in which acting on a person would not harm them but would save the lives of others, you’d likely find very little opposition to it (i.e., pushing someone in front of a train in order to send a signal to the driver, but with enough time for the pushed individual to exit the track and escape harm safely). This relationship holds in the trolley dilemma: when an empty side track is available, redirection to said track is almost universally preferred, as might be expected (Huebner & Hauser, 2011). One who favors the nonconsequentialist account might suggest that such a manipulation is missing the point: after all, it’s not that pushing someone in front of a train is immoral, but rather that killing someone is immoral. This rejoinder would seem to blur the issue, as it suggests, somewhat confusingly, that people might judge certain consequences nonconsequentially. Intentionally shooting someone in the head, in this line of reasoning, would be wrong not because it results in death, but because killing is wrong; death just so happens to be a necessary consequence of killing. Either I’m missing some crucial detail or the distinction is unhelpful, so I won’t spend any more time on it.

Another piece of evidence touted as support for moral nonconsequentialism is the research done on moral dumbfounding (Haidt et al., 2000). In brief, this research found that when presented with cases where objective harms are absent, many people continue to insist that certain acts are wrong. The most well-known of these involves a brother-sister case of consensual incest on a single occasion. The sister is using birth control and the brother wears a condom; they keep their behavior a secret and feel closer because of it. Many subjects (about 80%) insisted that the act was wrong. When pressed for an explanation, many initially referenced harms that might occur as a result, though these harms were always countered by the context (no pregnancy, no emotional harm, no social stigma, etc.). From this, it was concluded that conscious concerns for harm appear to represent post hoc justifications for a moral intuition.

One needs to be cautious in interpreting these results as evidence of moral nonconsequentialism, though, and a simple example explains why. Imagine that what was being asked about in that experiment was not whether the incest itself was wrong, but instead why the brother and sister pair had sex in the first place. Due to the dual contraceptive use, there was no probability of conception. Therefore, a similar interpretation might conclude, this shows that people are not consciously motivated to have sex because of children. While it is true enough that most acts of intercourse might not be motivated by the conscious desire for children, and while the part of the brain that’s talking might not have access to information concerning how other cognitive decision rules are enacted, it doesn’t mean the probability of conception plays no role in shaping the decision to engage in intercourse; despite what others have suggested, sexual pleasure per se is not adaptive. In fact, I would go so far as to say that the moral dumbfounding results are only particularly interesting because, most of the time, harm is expected to play a major role in our moral judgments. Pornography manages to “trick” our evolved sexual motivation systems by providing them with inputs similar to those that reliably correlate with the potential for conception; perhaps certain experimental designs – like the case of brother-sister incest – manage to similarly “trick” our evolved moral systems by providing them with inputs similar to those that reliably correlated with harm.

Or illusions; whatever your preferred term is.

In terms of making progress in the consequentialism debate, it seems useful to do away with the idea that moral condemnation functions to increase welfare in general: not only are such claims clearly falsified empirically, they could only even be plausible in the realm of group selection, a topic we should all have stopped bothering with long ago. Just because moral judgments fail the test of group welfare improvement, however, it does not suddenly make the nonconsequentialist position tenable. There are more ways of being consequentialist than with respect to the total amount of welfare increase. It would be beneficial to turn our eye towards considering the strategic welfare consequences that are likely to accrue to actors, second parties, and third parties as a result of these behaviors. In fact, we should be able to use such considerations to predict contexts under which people will flip back and forth between consciously favoring consequentialist and nonconsequentialist kinds of moral reasoning. Evolution is a consequentialist process, and we should expect it to produce consequentialist mechanisms. To the extent we are not finding them, the problem might owe more to a failure of our expectations about the shape of those consequences than to any actual nonconsequentialist mechanism.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

Huebner, B. & Hauser, M. (2011). Moral judgments about altruistic self-sacrifice: When philosophical and folk intuitions clash. Philosophical Psychology, 24, 73-94.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Sciences, 11, 143-151.

 

*Later, Steven writes:

“Acknowledging the naturalistic fallacy does not mean that facts about human nature are irrelevant to our choices…Acknowledging the naturalistic fallacy implies only that discoveries about human nature do not, by themselves, dictate our choices…”

I am certainly sympathetic to such arguments and, as usual, Steven’s views on the topic are more nuanced than these quotes alone can convey. Steven does, in fact, suggest that all good justifications for moral stances concern harms and benefits. Those two particular quotes are only used to highlight the frequent inconsistencies between people’s stated views.

Classic Research In Evolutionary Psychology: Learning

Let’s say I were to give you a problem to solve: I want you to design a tool that is good at cutting. Despite the apparent generality of the function, this is actually a pretty vague request. For instance, one might want to know more about the material to be cut: a sword might work if your job is cutting human flesh, but it would be unwieldy to keep around the kitchen for preparing dinner (I’m also not entirely sure swords are dishwasher-safe, provided you managed to fit a katana into your machine in the first place). So let’s narrow the request down to some kind of kitchen utensil. Even that request, however, is a bit vague, as evidenced by Wikipedia naming about a dozen different kinds of utensil-style knives (and about 51 different kinds of knives overall). That list doesn’t even capture other kinds of cutting-related kitchen utensils, like egg-slicers, mandolines, peelers, and graters. Why do we see so much variety, even in the kitchen, and why can’t one simple knife be good enough? Simple: when different tasks have non-overlapping sets of best design solutions, functional specificity tends to yield efficiency in one realm, but not in others.

“You have my bow! And my axe! And my sword-themed skillet!”

The same basic logic has been applied to the design features of living organisms as well, including aspects of our cognition, as I argued in the last post: the part of the mind that functions to logically reason about cheaters in the social environment does not appear to be able to reason with similar ease about other, even closely-related topics. Today, we’re going to expand on that idea, but shift our focus towards the realm of learning. Generally speaking, learning can be conceived of as some change to an organism’s preexisting cognitive structure due to some experience (typically unrelated to physical trauma). As with most things related to biological changes, however, random alterations are unlikely to result in improvement; to modify a Richard Dawkins quote ever so slightly, “However many ways there may be of [learning something useful], it is certain that there are vastly more ways of [learning something that isn’t]”. For this reason, along with some personal experience, no sane academic has ever suggested that our learning occurs randomly. Learning needs to be a highly-structured process in order to be of any use.

Precisely what “highly-structured” entails is a bit of a sticky issue, though. There are undoubtedly still some who would suggest that some general type of reinforcement-style learning might be good enough for learning all sorts of neat and useful things. It’s a simple rule: if [action] is followed by [reward], then increase the probability of [action]; if [action] is followed by [punishment], then decrease the probability of [action]. There are a number of problems with such a simple rule, and they return us to our knife example: the learning rule itself is under-specified for the demands of the various learning problems organisms face. Let’s begin with an analysis of what is known as conditioned taste aversion. Organisms, especially omnivorous ones, often need to learn which things in their environment are safe to eat and which are toxic and to be avoided. One problem in learning which potential foods are toxic is that the action (eating) is often divorced from the outcome (sickness) by a span of minutes to hours, and plenty of intervening actions take place in the interim. On top of that, this is not the type of lesson you want to require repeated exposures to learn, as, and this should go without saying, eating poisonous foods is bad for you. In order to learn the connection between the food and the sickness, then, a learning mechanism would seem to need to “know” that the sickness is related to the food, and not to other, intervening variables, as well as being related in some specific temporal fashion. Events that conform more closely to this anticipated pattern should be more readily learnable.
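To make that under-specification concrete, here is a minimal sketch of the bare reinforcement rule (my own illustration, in Python; nothing of the sort appears in the papers discussed here). Notice everything it leaves unanswered: which of the many preceding actions should get the credit when the outcome arrives hours later, and whether some action-outcome pairings should be privileged over others.

```python
def update(action_probs, action, outcome, step=0.1):
    """The bare domain-general rule: reward raises the probability of
    the action that preceded it; punishment lowers it."""
    if outcome == "reward":
        action_probs[action] = min(1.0, action_probs[action] + step)
    elif outcome == "punishment":
        action_probs[action] = max(0.0, action_probs[action] - step)
    return action_probs

# The rule is silent on credit assignment: if nausea arrives hours
# after eating, with grooming and drinking in between, which entry
# gets the punishment? The rule itself cannot say.
actions = {"eat_berry": 0.5, "groom": 0.5, "drink": 0.5}
print(update(actions, "eat_berry", "punishment"))
```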

The first study we’ll consider, then, is by Garcia & Koelling (1966), who were examining taste conditioning in rats. The experimenters created conditions in which rats were exposed to “bright, noisy” water and “tasty” water. The former condition was created by hooking a drinking apparatus up to a circuit that connected to a lamp and a clicking mechanism, so when the rats drank, they were provided with visual and auditory stimuli. The tasty condition was created by flavoring the water. Garcia & Koelling (1966) then attempted to pair the waters with either nausea or electric shocks, and subsequently measured how the rats’ preferences for the beverages changed. After the conditioning phase, during the post-test period, a rather interesting set of results emerged: while rats readily learned to pair nausea with taste, they did not draw the connection between nausea and audiovisual cues. When it came to the shocks, however, the reverse pattern emerged: rats could pair shocks with audiovisual cues well, but could not manage to pair taste and shock. This result makes a good deal of sense in light of a more domain-specific learning mechanism: things which produce certain kinds of audiovisual cues (like predators) also have a habit of inflicting certain kinds of shock-like harms (such as with teeth or claws). On the other hand, predators don’t tend to cause nausea; toxins in food tend to do so, and these toxins also tend to come paired with distinct tastes. An all-purpose learning mechanism, by contrast, should be able to pair all these kinds of stimuli and outcomes equally well; it shouldn’t matter whether the conditioning comes in the form of nausea or shocks.
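One way to capture this pattern, offered purely as an illustrative sketch rather than as Garcia & Koelling’s own model, is to let the learning rate depend on the particular cue-outcome pairing, so that taste-nausea and audiovisual-shock links form readily while the crossed pairings barely form at all:

```python
# Hypothetical "preparedness" table: learning rates that depend on
# which cue is paired with which outcome, not on reward/punishment
# per se. The values are made up for illustration.
ASSOCIABILITY = {
    ("taste", "nausea"): 0.9,         # readily learned
    ("audiovisual", "shock"): 0.9,    # readily learned
    ("taste", "shock"): 0.02,         # barely learned
    ("audiovisual", "nausea"): 0.02,  # barely learned
}

def association_strength(cue, outcome, trials=10):
    """Delta-rule learning toward an asymptote of 1.0, with a
    pairing-specific learning rate."""
    strength = 0.0
    rate = ASSOCIABILITY[(cue, outcome)]
    for _ in range(trials):
        strength += rate * (1.0 - strength)
    return strength

for pairing in ASSOCIABILITY:
    print(pairing, round(association_strength(*pairing), 2))
```

A truly all-purpose mechanism would be the special case in which all four rates are equal, which is precisely what the rats’ behavior fails to show.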

Turns out that shocks are useful for extracting information, as well as communicating it.

The second experiment to consider on the subject of learning, like the previous one, also involves rats, and actually pre-dates it. This paper, by Petrinovich & Bolles (1954), examined whether different deprivation states have qualitatively different effects on behavior. In this case, the two deprivation states under consideration were hunger and thirst. Two samples of rats were deprived of either food or water, then placed in a standard T-maze (which looks precisely how you might imagine it would). The relevant reward – food for the hungry rats and water for the thirsty ones – was placed in one arm of the maze. The first trial was always rewarded, no matter which side the rat chose. Following that initial choice, the reward was placed on the side of the maze the rat did not choose on the previous trial. For instance, if the rat went ‘right’ on the first trial, the reward was placed in the ‘left’ arm on the second trial. Whether the rat chose correctly or incorrectly didn’t matter; the reward was always placed on the side opposite its previous choice. Did it matter whether the reward was food or water?

Yes; it mattered a great deal. The hungry rats averaged substantially fewer errors in reaching the reward than the thirsty ones (approximately 13 errors over 34 trials, relative to 28 errors, respectively). The rats were further tested until they managed to perform 10 out of 12 trials correctly. The hungry rats met the criterion substantially sooner, requiring a median of 23 total trials to reach that mark. By contrast, 7 of the 10 thirsty rats failed to reach the criterion at all, and the three that did required approximately 30 trials on average. Petrinovich & Bolles (1954) suggested that these results can be understood in the following light: hunger makes the rat’s behavior more variable, while thirst makes its behavior more stereotyped. Why? The most likely candidate explanation is the nature of the stimuli themselves, as they tend to appear in the world. Food sources tend to be distributed semi-unpredictably throughout the environment, and where there is food today, there might not be food tomorrow. By contrast, the location of water tends to be substantially more fixed (where there is a river today, there will probably be a river tomorrow), so returning to the last place you found water would be the more secure bet. To continue to drive this point home: a domain-general learning mechanism should do both tasks equally well, so a more general account would seem to struggle to explain these findings.
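To see why this protocol punishes stereotypy, here is a minimal simulation of the reward-placement rule (my construction, not Petrinovich & Bolles’) with two caricatured strategies: an ‘alternating’ rat that shifts away from its previous choice, and a ‘stereotyped’ rat that returns to it. Because the reward is always placed opposite the previous choice, the alternator is correct on every trial after the first, while the repeater is wrong on every one.

```python
import random

def run_trials(strategy, n_trials=34):
    """Reward always goes to the side opposite the rat's previous
    choice; the first trial is rewarded regardless."""
    errors = 0
    previous = random.choice(["left", "right"])
    for _ in range(n_trials - 1):
        reward_side = "left" if previous == "right" else "right"
        choice = strategy(previous)
        if choice != reward_side:
            errors += 1
        previous = choice
    return errors

alternate = lambda prev: "left" if prev == "right" else "right"
stereotyped = lambda prev: prev  # return to the last-chosen side

print("alternating rat errors:", run_trials(alternate))    # 0
print("stereotyped rat errors:", run_trials(stereotyped))  # 33
```

The real rats, of course, fell between these extremes; the point is only that the task structure rewards the variable, food-appropriate strategy and punishes the fixed, water-appropriate one.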

Shifting gears away from rats, the final study for consideration is one I’ve touched on before, and it involves the fear responses of monkeys. As I’ve already discussed the experiment (Cook & Mineka, 1989), I’ll offer only a brief recap of the paper. Lab-reared monkeys show no intrinsic fear responses to snakes or flowers. However, social creatures that they are, these lab-reared monkeys can readily develop fear responses to snakes after observing a conspecific reacting fearfully to them. This is, quite literally, a case of monkey see, monkey do. Does the same reaction hold when conspecifics react fearfully to a flower? Not at all. Despite the lab-reared monkeys being exposed to stimuli they had never seen before in their lives (snakes and flowers), paired with a fear reaction in both cases, it seems that the monkeys are prepared to learn to fear snakes, but not similarly prepared to learn a fear of flowers. Of note is that this isn’t just a fear reaction in response to living organisms in general: while monkeys can learn a fear of crocodiles, they do not learn to fear rabbits under the same conditions.

An effect noted by Python (1975)

When it comes to learning, it does not appear that we are dealing with some kind of domain-general learning mechanism, equally capable of learning all types of contingencies. This shouldn’t be entirely surprising, as organisms don’t face all kinds of contingencies with equivalent frequencies: predators that cause nausea are substantially less common than toxic compounds which do. Don’t misunderstand this argument: humans and nonhumans alike are certainly capable of learning many phylogenetically novel things. That said, this learning is constrained and directed in ways we are often wholly unaware of. The specific content area of the learning is of prime importance in determining how quickly something can be learned, how lasting the learning is likely to be, and which things are learned (or learnable) at all. The take-home message of all this research, then, can be phrased as such: learning is not the end point of an explanation; it’s a phenomenon which itself requires an explanation. We want to know why an organism learns what it does, not simply that it learns.

References: Cook, M. & Mineka, S. (1989). Observational conditioning of fear to fear-relevant versus fear-irrelevant stimuli in rhesus monkeys. Journal of Abnormal Psychology, 98(4), 448-459. PMID: 2592680

Garcia, J. & Koelling, R. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124.

Petrinovich, L. & Bolles, R. (1954). Deprivation states and behavioral attributes. Journal of Comparative and Physiological Psychology, 47, 450-453.

Evolutionary Psychology: Tying Psychology Together

Every now and again – perhaps more frequently than many would prefer – someone who apparently fails to understand one or more aspects of the evolutionary perspective in psychology goes on to make rather public proclamations about what it is and what it can and cannot do for us. Notable instances are not particularly difficult to find. The most recent of these to cross my desk comes from Gregg Henriques, and it takes a substantially less nasty tone than I have come to expect. In it, he claims that evolutionary psychology does not provide us with a viable metatheory for understanding psychology, and he bases his argument on three main points: (1) evolutionary psychology is overly committed to the domain-specificity concept, (2) the field fails to have the correct map of complexity, and (3) it hasn’t done much for people in a clinical setting. In the course of making these arguments, I feel he stumbles badly on several points, so I’d like to take a little time to point out those errors. Thankfully, given the relative consistency of such errors, doing so is becoming more a routine than anything else.

So feel free to change the channel if you’ve seen this before.

Gregg begins with the natural starting point for many people in criticizing EP: while we have been focusing on how organisms solve specific adaptive problems, there might be more general adaptive problems out there. As Gregg put it:

The EP founders also overlooked the fact that there really is a domain general behavioral problem, which can be characterized as the problem of behavioral investment

There are a number of things to say about such a suggestion. Thankfully, I have said them before, so this is a relatively easy task. To start off, these ostensibly domain-general problems are, in fact, not all that general. To use a simple example, consider one raised by Gregg in his discussion of behavioral investment theory: organisms need to solve the problem of obtaining more energy than they spend in order to keep doing things like being alive and mating. That seems like an awfully general problem but, stated in such a manner, the means by which that general problem is, or can be, solved are massively unspecified. How does an organism calculate its current caloric state? How does an organism decide which things to eat to obtain energy? How does an organism decide when to stop foraging for food in one area and pursue a new one? How is the return on energy calculated and compared against the expenditure? As one can quickly appreciate, the larger, domain-general problem (obtain more energy than one expends) is actually composed of very many smaller problems, and things get complicated quickly. Pursuing mating rather than food, for instance, is unlikely to result in an organism obtaining more energy than it expends. This leaves the behavioral investment problem – broadly phrased – wanting in terms of any predictive power: why do organisms pursue goals other than gaining energy, and under what conditions do they do so? The issue here, then, is not so much that domain-general problems aren’t being accounted for by evolutionary psychology, but rather that the problems themselves are being poorly formulated by the critics.
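To give a sense of how much machinery even one of those smaller problems conceals, consider the patch-leaving question alone. The classic treatment is Charnov’s marginal value theorem, which I’m adding here purely as an illustration (it is not something Gregg’s piece invokes): a rate-maximizing forager should leave its current patch when the instantaneous rate of energetic gain there drops to the average rate for the habitat as a whole, i.e., at the time $t^*$ satisfying

$$ \left.\frac{dg(t)}{dt}\right|_{t = t^*} = \frac{g(t^*)}{t^* + \tau} $$

where $g(t)$ is the cumulative energy gained after $t$ units of time in the patch and $\tau$ is the average travel time between patches. Solving even this single sub-problem requires mechanisms for tracking gain rates and travel times; a rule that simply says “invest behavior where it pays” specifies none of that.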

The next area in this criticism that Gregg stumbles on is the level of analysis that evolutionary psychology tends to work with. Gregg considers associative learning a domain-general system but, again, it’s trivial to demonstrate that it is not all that general. There are many things that associative learning systems do not do: regulate homeostatic processes, like breathing and heart rate; perceive anything, like light, sound, pleasure, or pain; generate emotions; store memories; and so on. In terms of their function, associative learning systems only really seem to do one thing: make behavior followed by reward more likely than behavior followed by discomfort, and that’s only after other systems have decided what is rewarding and what is not. That this system can apply the same function to many different inputs doesn’t make it a domain-general one. The distinction that Gregg appears to miss, then, is that functional specificity is not the same as input specificity. Calling learning a domain-general system is a bit like calling a knife a domain-general tool because it can be used to cut many different objects. Try to use a knife to weld metal, and you’ll quickly appreciate how domain-specific the function of a knife is.

On top of that, there is also the issue that some associations are learned far more readily than others. To quote Dawkins, “However many ways there may be of being alive, it is certain that there are vastly more ways of being dead”. A similar logic applies to learning: there are many more potentially incorrect and useless things to learn than there are useful ones. This is why learning ends up being a rather constrained process: rats can learn to associate light and sound with shocks, but do not tend to make the association between taste and shock, despite the unpleasantness of the shock itself. Conversely, associations between taste and nausea can be readily learned, but not between light and nausea. To continue beating this point to death, a domain-general account of associative learning has a rather difficult time explaining why some connections are readily learned and others are not. In order to generate more textured predictions, you need to start focusing on the more-specific sub-problems that make up the more general one.

And if doing so is not enough of a pain-in-the-ass, you’re probably doing it wrong.

On a topic somewhat related to learning, the helpful link provided by Gregg concerning behavioral investment theory has several passages that, I think, are rather diagnostic of his perspective on evolutionary psychology:

Finally, because [behavioral investment/shutdown theory] is an evolutionary model, it also readily accounts for the fact that there is a substantial genetic component associated with depression (p.61)…there is much debate on the relative amount of genetic constraint versus experiential plasticity in various domains of mental functioning (p.70).

The problem here is that evolutionary psychology concerns itself with far more than genetic components. In the primer on evolutionary psychology, a focus on genetic components in particular is deemed nonsensical in the first place, as the dichotomy between genetic and environmental is itself a false one. Gregg appears to be conflating “evolutionary” with “genetic” for whatever reason, and possibly both with “fixed”, when he writes:

In contrast to the static model suggested by evolutionary psychologists, The Origin of Minds describes a mind that is dynamic and ever-changing, redesigning itself with each life experience

As far as I know, no evolutionary psychologist has ever suggested a static model of the mind; not one. Given that “evolutionary psychologists” is pluralized in that sentence, I can only assume the error is supposed to have been made by at least several of them, but to whom “them” refers is a mystery to me. Indeed, this passage by Gregg plays by the rules articulated in the pop anti-evolutionary psychology game nearly perfectly:

The second part of the game should be obvious. Once you’ve baldly asserted what evolutionary psychologists believe – and you lose points if, breaking tradition, you provide some evidence for what evolutionary psychologists have actually claimed in print and accurately portray their view – point out the blindingly obvious opposite of the view you’ve hung on evolutionary psychology. Here, anything vacuous but true works. Development matters. People learn. Behavior is flexible. Brains change over time. Not all traits are adaptations. The world has changed. People differ across cultures. Two plus two equals four. Whatever.

The example is so by-the-book that little more really needs to be said about it. Somewhat ironically, Gregg suggests that the evolutionary perspective creates a straw man of other perspectives, like learning and cultural ones. I’ll leave that suggestion without further comment.

The next point Gregg raises, concerning complexity, I have a difficult time understanding. If I’m parsing his meaning correctly, he’s saying that culture adds a level of complexity to analyses of human behavior. Indeed, local environmental conditions can certainly shape how adaptations develop and are activated, whether due to culture or not, but I’m not sure precisely how that is supposed to be a criticism of evolutionary psychology. As I mentioned before, I’m not sure a single contemporary evolutionary psychologist has ever been caught seriously suggesting something to the contrary. Gregg also criticizes evolutionary psychology for not defining psychology as he would prefer. Again, I’m not quite sure I catch his intended meaning here, but I fail to see how that is a criticism of the perspective. Gregg suggests that we need a psychology that can apply to non-humans as well, but I don’t see how an evolutionary framework fails that test. No examples are given for further consideration, so there’s not much more to say on that front.

Gregg’s final criticism amounts to a single line, suggesting that an evolutionary perspective has yet to unify the various approaches people take in psychotherapy. Not being an expert on psychotherapy myself, I’ll plead ignorance as to the success that an evolutionary framework has had in that realm, and no evidence of any kind is provided for assessment. I fail to see why such a claim has any bearing on whether an evolutionary perspective could do so; I just wanted to note that the criticism has been heard, though perhaps not formulated in a more appreciable fashion.

Final verdict: the prosecution seems confused.

Criticisms of an evolutionary perspective like these are unfortunately common and consistently misguided. Why they continue to abound, despite having been answered time and again since the field’s origins, is curious. Now, in all fairness, Gregg doesn’t appear hostile to the field, and deems it “essential” for understanding psychology. Thankfully, the pop anti-evolutionary psychology game captures this sentiment as well, so I’ll leave it on that note:

The third part of the game is not always followed perfectly, and it is the hardest part. Now that you’ve shown how you are in full command of the way science is conducted or some truth about human behavior that evolutionary psychologists have missed, it’s important to assert that you absolutely acknowledge that of course humans are the product of evolution, and of course humans aren’t exempt from the principles of biology.

Look, you have to say, I’m not opposed to applying evolutionary ideas to humans in principle. This is key, as it gives you a kind of ecumenical gravitas. Yes, you continue, I’m all for the unity of science and cross-pollination and making the social sciences better, and so on. But, you have to add – and writing plaintively, if you can, helps here – I just want things to be done properly. If only evolutionary psychologists would (police themselves, consider development, acknowledge learning, study neuroscience, run experiments, etc…), then I would be just perfectly happy with the discipline.

The “Side-Effect Effect” And Curious Language

You keep using that word. I do not think it means what you think it means.

That now-famous quote was uttered by the character Inigo Montoya in the movie The Princess Bride. In recent years, the phrase has been co-opted for its apparent usefulness in mocking people during online debates. While I enjoy a good internet argument as much as the next person, I do try to stay out of them these days due to time constraints, though I used to be something of a chronic debater. (As an aside, I started this blog, at least in part, for reasons owing to balancing my enjoyment of debates with those time constraints. It’s worked pretty well so far.) As any seasoned internet (or non-internet) debater can tell you, one of the underlying reasons debates tend to go on so long is that people often argue past one another. While there are many factors that explain why people do so, the one I would like to highlight today is semantic in nature: definitional obscurity. There are instances where people will use different words to allude to the same concept, or use the same word to allude to different concepts. Needless to say, this makes agreement hard to reach.

But what’s the point of arguing if we’ll eventually agree on something?

This brings us to the question of intentions. Defined by various dictionaries, intentions are aims, plans, or goals. By contrast, the definition of a side effect is just the opposite: an unintended outcome. Were these terms used consistently, then, one could never say a side effect was intended; foreseen, maybe, but not intended. Consistency, however, is rarely humanity’s strongest suit – as we ought to expect it not to be – since consistency does not necessarily translate into usefulness: there are many cases in which I would be better off if I could both do X and stop other people from doing X (fill in ‘X’ however you see fit: stealing, having affairs, murder, etc.). So what about intentions? There are two facts about intentions which make them prime candidates for expected inconsistency: (1) intentionally-committed acts tend to receive a greater degree of moral condemnation than unintentional ones, and (2) intentions are not readily observable, but rather need to be inferred.

This means that if you want to stop someone else from doing X, it is in your best interests to convince others that, when someone did X, X was intended, so as to make punishment less costly and more effective (as more people might be interested in punishing, sharing the costs). Conversely, if you committed X, it is in your best interests to convince others that you did not intend X. It is on the former aspect – condemnation of others – that we’ll focus here. In the now-classic study by Knobe (2003), 39 people were given the following story:

The vice-president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

When asked whether the chairman intentionally harmed the environment, 82% of the participants agreed that he had. However, when the word “harm” was replaced with “help”, 77% of the subjects now said that the benefits to the environment were unintentional (this effect was also replicated using a military context instead). Now, strictly speaking, the only stated intention the chairman had was to make money; whether that harmed or helped the environment should be irrelevant, as both effects would be side effects of that primary intention. Yet that’s not how people rated them.

Related to the point about moral condemnation, it was also found that participants said the chairman who brought about the negative side effect deserved substantially more punishment (4.8 on a 0-to-6 scale) than the chairman who brought about the positive impact deserved praise (1.4), and those ratings correlated pretty well with the extent to which participants thought the chairman had brought about the effect intentionally. This tendency to asymmetrically see intentions behind negative, but not positive, side effects was dubbed “the side-effect effect”. There exists the possibility, however, that this label is not entirely accurate. Specifically, the effect might not be exclusive to side effects of actions; it might also hold for the means by which an effect is achieved. You know; the things that were actually intended.

Just like how this was probably planned by some evil corporation.

The paper that raised this possibility (Cova & Naar, 2012) began by replicating Knobe’s basic effect with different contexts (unintended targets being killed by a terrorist bombing as the negative side effect, and an orphanage expanding due to the terrorist bombing as the positive side effect). Again, negative side effects were judged more intentional and blameworthy than positive side effects were judged intentional and praiseworthy. The interesting twist came when participants were asked about the following scenario:

A man named André tells his wife: “My father decided to leave his immense fortune to only one of his children. To be his heir, I must find a way to become his favorite child. But I can’t figure how.” His wife answers: “Your father always hated his neighbors and has declared war on them. You could do something that would really annoy them, even if you don’t care.” André decides to set fire to the neighbors’ car.

Unsurprisingly, many people here (about 80% of them) said that André had intentionally harmed his neighbors. He planned to harm them, because doing so would further another one of his goals (getting the money). A similar scenario was also presented, however, in which, instead of burning the neighbors’ car, André donates to a humanitarian-aid society because his father would have liked that. In that case, only 20% of subjects reported that André had intended to give money to the charity.

Now that answer is a bit peculiar. Surely André intended to donate the money, even if his reason for doing so involved getting money from his father. While that might not be the most high-minded reason to donate, it ought not make the donating itself any less intentional (though perhaps a bit grudging). Cova & Naar (2012) raise the following alternative explanation: the way philosophers tend to use the word “intention” is not the only game in town. There are other possible conceptions that people might have of the word, based on the context in which it’s found, such as “something done knowingly for which an agent deserves praise or blame“. Indeed, taking these results at face value, we would need something else beyond the dictionary definitions of intention and side effect, since they don’t seem to be applying here.

This returns us to my initial point about intentions themselves. While this is an empirical matter (albeit a potentially difficult one), there are at least two distinct possibilities: (a) people mean something different by “intention” in moral and nonmoral contexts (we’ll call this the semantic account), or (b) people mean the same thing in both cases, but they actually perceive intentions differently (the perceptual account). As I mentioned before, intentions are not the kinds of things which are readily observable; they need to be inferred, or perceived. What was not previously mentioned, however, is that people do not have only a single intention at any given time: given the modularity of the mind, and the various goals one might be attempting to achieve, it is perfectly possible, at least conceptually, for people to have a variety of different intentions at once – even ones that pull in opposite directions. We’re all intimately familiar with the sensation of having conflicting intentions when we find ourselves stuck between two appealing but mutually-exclusive options: a doctor may intend to do no harm, intend to save people’s lives, and find himself in a position where he can’t do both.

Simple solution: do neither.

For whatever it’s worth, of the two options, I favor the perceptual account over the semantic account for the following reason: there doesn’t seem to be a readily-apparent reason for definitions to change strategically, though there are reasons for perceptions to change. Let’s return to the André case to see why. One could say that André had at least two intentions: get the inheritance, and complete the act required to achieve the inheritance. Depending on whether one wants to praise or condemn André for that act, one might choose to highlight different intentions, while in both cases keeping the definition of intention the same. In the event you want to condemn André for setting the car on fire, you can highlight the fact that he intended to do so; if you don’t feel like praising him for his ostensibly charitable donation, you can choose instead to highlight the fact that (you perceive) his primary intention was to get money – not give it. However, the point of that perceptual change would be to convince others that André ought to be punished; simply changing the definition of “intention” when talking with others about the matter wouldn’t accomplish that goal quite as well, as it would require the other speaker to share your definition.

References: Cova, F., & Naar, H. (2012). Side-effect effect without side effect: Revisiting Knobe’s asymmetry. Philosophical Psychology, 25, 837-854.

Knobe, J. (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63, 190-193 DOI: 10.1093/analys/63.3.190