The “Side-Effect Effect” And Curious Language

You keep using that word. I do not think it means what you think it means.

That now-famous quote was uttered by the character Inigo Montoya in the movie The Princess Bride. In recent years, the phrase has been co-opted for its apparent usefulness in mocking people during online debates. While I enjoy a good internet argument as much as the next person, I do try to stay out of them these days due to time constraints, though I used to be something of a chronic debater. (As an aside, I started this blog, at least in part, as a way of balancing my enjoyment of debates with those time constraints. It’s worked pretty well so far). As any seasoned internet (or non-internet) debater can tell you, one of the underlying reasons debates tend to go on so long is that people often argue past one another. While there are many factors that explain why people do so, the one I would like to highlight today is semantic in nature: definitional obscurity. There are instances where people will use different words to allude to the same concept, or use the same word to allude to different concepts. Needless to say, this makes agreement hard to reach.

But what’s the point of arguing if it means we’ll eventually agree on something?

This brings us to the question of intentions. Defined by various dictionaries, intentions are aims, plans, or goals. By contrast, the definition of a side effect is just the opposite: an unintended outcome. Were these terms used consistently, then, one could never say a side effect was intended; foreseen, maybe, but not intended. Consistency, however, is rarely humanity’s strongest suit – as we ought to expect it not to be – since consistency does not necessarily translate into “useful”: there are many cases in which I would be better off if I could both do X and stop other people from doing X (fill in ‘X’ however you see fit: stealing, having affairs, murder, etc.). So what about intentions? There are two facts about intentions which make them prime candidates for expected inconsistency: (1) intentionally-committed acts tend to receive a greater degree of moral condemnation than unintentional ones, and (2) intentions are not readily observable, but rather need to be inferred.

This means that if you want to stop someone else from doing X, it is in your best interests to convince others that, when someone did X, they did so intentionally, so as to make punishment less costly and more effective (as more people might be interested in punishing, sharing the costs). Conversely, if you committed X, it is in your best interests to convince others that you did not intend X. It is the former aspect – condemnation of others – that we’ll focus on here. In the now-classic study by Knobe (2003), 39 people were given the following story:

The vice-president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

When asked whether the chairman intentionally harmed the environment, 82% of the participants agreed that he had. However, when the word “harm” was replaced with “help”, 77% of the subjects now said that the benefits to the environment were unintentional (this effect was also replicated using a military context instead). Now, strictly speaking, the only stated intention the chairman had was to make money; whether that harmed or helped the environment should be irrelevant, as both effects would be side effects of that primary intention. Yet that’s not how people rated them.

Related to the point about moral condemnation, it was also found that participants said the chairman who brought about the negative side effect deserved substantially more punishment (4.8 on a 0 to 6 scale) than the chairman who brought about the positive side effect deserved praise (1.4), and those ratings correlated pretty well with the extent to which the participants thought the chairman had brought about the effect intentionally. This tendency to asymmetrically see intentions behind negative, but not positive, side effects was dubbed “the side-effect effect”. There exists the possibility, however, that this label is not entirely accurate. Specifically, the effect might not be exclusive to side effects of actions; it might also hold for the means by which an effect is achieved. You know: the things that were actually intended.

Just like how this was probably planned by some evil corporation.

The paper that raised this possibility (Cova & Naar, 2012) began by replicating Knobe’s basic effect with different contexts (unintended targets being killed by a terrorist bombing as the negative side effect, and an orphanage expanding due to the terrorist bombing as the positive side effect). Again, negative side effects were seen as more intentional and more blameworthy than positive side effects were rated as intentional and praiseworthy. The interesting twist came when participants were asked about the following scenario:

A man named André tells his wife: “My father decided to leave his immense fortune to only one of his children. To be his heir, I must find a way to become his favorite child. But I can’t figure out how.” His wife answers: “Your father always hated his neighbors and has declared war on them. You could do something that would really annoy them, even if you don’t care.” André decides to set fire to the neighbors’ car.

Unsurprisingly, many people here (about 80% of them) said that Andre had intentionally harmed his neighbors. He planned to harm them, because doing so would further another one of his goals (getting money). A similar situation was also presented, however, where instead of burning the neighbors’ car, Andre donates to a humanitarian-aid society because his father would have liked that. In that case, only 20% of subjects reported that Andre had intended to give money to the charity.

Now that answer is a bit peculiar. Surely, Andre intended to donate the money, even if his reason for doing so involved getting money from his father. While that might not be the most high-minded reason to donate, it ought not make the donating itself any less intentional (though perhaps it seems a bit grudging). Cova & Naar (2012) raise the following alternative explanation: the way philosophers tend to use the word “intention” is not the only game in town. There are other possible conceptions that people might have of the word based on the context in which it’s found, such as “something done knowingly for which an agent deserves praise or blame”. Indeed, taking these results at face value, we would need something else beyond the dictionary definitions of intention and side effect, since they don’t seem to apply here.

This returns us to my initial point about intentions themselves. While this is an empirical matter (albeit a potentially difficult one), there are at least two distinct possibilities: (a) people mean something different by “intention” in moral and nonmoral contexts (we’ll call this the semantic account), or (b) people mean the same thing in both cases, but they do actually perceive it differently (the perceptual account). As I mentioned before, intentions are not the kinds of things which are readily observable, but rather need to be inferred, or perceived. What was not previously mentioned, however, is that it is not as if people only have a single intention at any given time; given the modularity of the mind, and the various goals one might be attempting to achieve, it is perfectly possible, at least conceptually, for people to have a variety of different intentions at once – even ones that pull in opposite directions. We’re all intimately familiar with the sensation of having conflicting intentions when we find ourselves stuck between two appealing, but mutually-exclusive options: a doctor may intend to do no harm, intend to save people’s lives, and find himself in a position where he can’t do both.

Simple solution: do neither.

For whatever it’s worth, of the two options, I favor the perceptual account over the semantic account for the following reason: there doesn’t seem to be a readily-apparent reason for definitions to change strategically, though there are reasons for perceptions to change. Let’s return to the Andre case to see why. One could say that Andre had at least two intentions: get the inheritance, and complete act X required to achieve the inheritance. Depending on whether one wants to praise or condemn Andre for doing X, one might choose to highlight different intentions, though in both cases keeping the definition of intention the same. In the event you want to condemn Andre for setting the car on fire, you can highlight the fact that he intended to do so; if you don’t feel like praising him for his ostensibly charitable donation, you can choose instead to highlight the fact that (you perceive) his primary intention was to get money – not give it. However, the point of that perceptual change would be to convince others that Andre ought to be punished; simply changing the definition of “intention” when talking with others about the matter wouldn’t seem to accomplish that goal quite as well, as it would require the other speaker to share your definition.

References: Cova, F., & Naar, H. (2012). Side-Effect Effect Without Side Effect: Revisiting Knobe’s Asymmetry. Philosophical Psychology, 25, 837-854

Knobe, J. (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63, 190-193 DOI: 10.1093/analys/63.3.190

Can Rube Goldberg Help Us Understand Moral Judgments?

Though many people might be unfamiliar with Rube Goldberg, they are often not unfamiliar with Rube Goldberg machines: anyone who has ever seen the commercial for the game “Mouse Trap” is at least passingly familiar with them. Admittedly, that commercial is about two decades old at this point, so maybe a more timely reference is in order: OK Go’s music video for “This Too Shall Pass” is a fine demonstration (or Mythbusters, if that’s more your cup of tea). The general principle behind a Rube Goldberg machine is that it completes an incredibly simple task in an overly-complicated manner. For instance, one might design one of these machines to turn on a light switch, but that end state will only be achieved after 200 intervening steps and hours of tedious setup. While these machines provide a great deal of novelty when they work (and that is a rather large “when”, since there is the possibility of error in each step), there might be a non-obvious lesson they can also teach us concerning our cognitive systems designed for moral condemnation.

Or maybe they can’t; either way, it’ll be fun to watch and should kill some time.

In the literature on morality, there is a concept known as the doctrine of double effect. The principle states that actions with harmful consequences can be morally acceptable provided a number of conditions are met: (1) the act itself needs to be morally neutral or better, (2) the actor intends to achieve some positive end through acting, not the harmful consequence, (3) the bad effect is not a means to the good effect, and (4) the positive effects outweigh the negative ones sufficiently. While that might all seem rather abstract, two concrete and popular examples can demonstrate the principle easily: the trolley dilemma and the footbridge dilemma. Taking these in order, the trolley problem involves the following scenario:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person.

In this dilemma, most people who have been surveyed (about 90% of them) suggest that it is morally acceptable to pull the lever, diverting the train onto the side track. It also fits the principle of double effect nicely: (1) the act (redirecting the train) is not itself immoral, (2) the actor intends a positive consequence (saving the 5) and not the negative one (1 dies), (3) the bad consequence (the death) is not a means of achieving the outcome, but rather a byproduct of the action (redirecting the train), and (4) the lives saved substantially outweigh the lives lost.

The footbridge dilemma is very similar in setup, but different in a key detail: in the footbridge dilemma, rather than redirecting the train to a side track, a person is pushed in front of it. While the person dies, that causes the train to stop before hitting the five people, saving their lives. In this case, only about 10% of people say it’s morally acceptable to push the man. We can see how double effect fails in this case: (1) the act (pushing the man) is relatively on the immoral side of things, (2) the death of the person being pushed is intended, and (3) the bad consequence (the man dying) is the means by which the good consequence is achieved; the fact that the positive consequences outweigh the negative ones in terms of lives saved is not enough. But why should this be the case? Why do consequences alone not dictate our actions, and why can factors as simple as redirecting a train versus pushing a person make such tremendous differences in our moral judgments?

As I suggested recently, the answer to both of those questions can be understood by beginning our analysis of morality with an analysis of condemnation. These questions can be rephrased in that light to the following forms: “Why might people wish to morally condemn someone for achieving an outcome that is, on the whole, good?” and, “Why might people be less inclined to condemn certain outcomes, contingent on how they’re brought about?” The answer to the first question is fairly straightforward: I might wish to morally condemn someone because their actions might have some direct costs on me (or because failing to morally condemn them might), even if those actions benefit others. For instance, I might wish to condemn someone for their behavior in the trolley or footbridge problem if it’s my friend dying, rather than a stranger. That some generally morally positive outcome was achieved is irrelevant to me if it was costly from my perspective. Natural selection doesn’t design adaptations for the good of the group, so the fact that the group’s welfare is increased seems beside the point. Of course, a cost is a cost is a cost, so why should it matter to me at all if my friend was killed by being pushed or by having the train sent towards him?

“DR. TEDDY! NOOOO!”

Part of that answer depends on what other people are willing to condemn. Trying to punish someone for their actions is not always cheap or easy: there’s always a chance of retaliation by the punished party or their allies. After all, a cost is a cost is a cost, to both me and them. This social variable means that attempting to punish others without additional support might be completely ineffective (or at least substantially less effective) at times. Provided that other parties are less likely to punish negative byproducts, relative to negative intended outcomes, this puts pressure on you to attempt to persuade others that the person you want to punish acted with intent, whereas it puts the reverse pressure on the actor: to convince others they did not intend that bad outcome. This brings us back to Rube Goldberg, the footbridge dilemma, and a slight addition to the doctrine of double effect.

There are some who argue that the doctrine of double effect isn’t quite complete. Specifically, there is an unappreciated third type of action: one in which a person acts because a negative outcome will obtain, but they do not intend that outcome (what is known as “triple effect”). This distinction is a bit trickier to grasp, so another example will help. Say that we’re again talking about the footbridge dilemma: there is a man standing on the bridge over the tracks, with the oncoming train scheduled to hit the five people. However, we can pull a lever which will drop the man onto the track, where he will be hit, thus stopping the train and saving the five. This is basically identical to the standard footbridge problem, and most people would deem it unacceptable to pull the lever. But now let’s consider another case: again, the man is standing on the bridge, but the mechanism that will drop him off the bridge is a light sensor. If light reflects off the train onto the sensor, the bridge will drop the man, he will die, and the five will be saved. Seeing the oncoming train, someone, Rube-Goldberg style, shines a spotlight on the train, illuminating it; the illumination hits the sensor, dropping the man onto the track, killing him and saving the five people.

There are some (Otsuka, 2008) who argue there is no meaningful difference between these two cases, but in order to make that claim, they need to infer something about the actor’s intentions in both cases, and precisely what one infers affects the subsequent shape of the analysis. Were one to infer that there is really only one problem to be solved – the train that is going to kill five people – then the intentions of the person pulling the lever to illuminate the train and pulling the lever to drop the man are equivalent and equally condemnable. However, there is another inference one could make in the light case, as there are multiple facets to the problem: the train will kill five people, and the train isn’t illuminated. If one intends to solve the latter problem (so now there will be an illuminated train about to kill five people), one also, as a byproduct of solving that problem, causes both the problem of five people getting killed to be solved and the death of the man who got dropped onto the track. Now one could argue, as Otsuka (2008) does, that such an example fails because people could not plausibly be motivated to solve the non-illuminated part of the problem, but that seems largely a matter of perspective. The addition of the light variable introduces, even if only to some small degree, plausible deniability capable of shifting the perception of an outcome from intended to byproduct. Someone pulling the lever could have been doing so in order to illuminate the train or to drop the man onto the track, and it’s not entirely clear which is the case.

“Well how was I supposed to know I was doing something dangerous?”

The light case is also a relatively simple one: there are only 3 steps (shine the light on the train, the light trips the sensor, the sensor drops the man who stops the train), and perfect knowledge is assumed (the person shining the light knew this would happen). Changing either of these variables would likely have the effect of altering the blame of the actor: if the actor didn’t know about the light sensor or the man on the footbridge, condemnation would likely decrease; if the action involved 10 steps, rather than 3, this could potentially introduce further plausible deniability, especially if any of those steps involved the actions of other people. It would thus be in the actor’s best interests to deny their knowledge of the outcome, or to separate the outcome from their initial action as broadly as possible. Conversely, someone looking to condemn the actor would need to do the reverse.

Now maybe this all sounds terribly abstract, but there are real-life cases to which similar kinds of analysis can apply. Consider cases where a child is bullied at school and later commits suicide. Depending on one’s perspective in these kinds of cases, one might condemn or fail to condemn the bullies for the suicide (though one might still blame them for the bullying); one might also, however, condemn the parents for not being there for the child as they should have been, or one might blame no one but the suicide victim themselves. As one thinks about ways in which the suicide could have been prevented, there are countless potential Rube-Goldberg kinds of variables in the causal chain to point to (violent media, the parents, the bullies, the friends, their diet, the suicide victim, the school, etc.), the modification of any of which might have prevented the negative outcome. This gives condemners (who may wish to condemn people for initially-unrelated reasons) a wide array of plausible targets. However, each of these potential sources also gives the other sources some way of mitigating and avoiding blame. While such strategic considerations tend to make a mess of normative moral theories, they do provide us the required tools to actually begin to understand morality itself.

References: Otsuka, M. (2008). Double Effect, Triple Effect and the Trolley Problem: Squaring the Circle in Looping Cases. Utilitas, 20, 92-110 DOI: 10.1017/S0953820807002932

Conscience Does Not Explain Morality

“We may now state the minimum conception: Morality is, at the very least, the effort to guide one’s conduct by reason…while giving equal weight to the interests of each individual affected by one’s decision” (emphasis mine).

The above quote comes to us from Rachels and Rachels’ (2010) introductory chapter, entitled “What is morality?” It is readily apparent that their account of what morality is happens to be a conscience-centric one, focusing on self-regulatory behaviors (i.e. what you, personally, ought to do). These conscience-based accounts are exceedingly popular among many people, academics and non-academics alike, perhaps owing to their intuitive appeal: it certainly feels like we don’t do certain things because they feel morally wrong, so understanding morality through conscience seems like the natural starting point. With all due respect to the philosopher pair and the intuitions of people everywhere, they seem to have begun their analysis of morality on entirely the wrong foot.

So close to the record too…

Now, without a doubt, understanding conscience can help us more fully understand morality, and no account of morality would be complete without explaining conscience; it’s just not an ideal starting point for beginning our analysis (DeScioli & Kurzban, 2009; 2013). This is because moral conscience does not, in and of itself, explain our moral intuitions well. Specifically, it fails to highlight the difference between what we might consider ‘preferences’ and ‘moral rules’. To better understand this distinction, consider the following two statements: (1) “I have no interest in having homosexual intercourse”, and (2) “Homosexual intercourse is immoral”. These two statements are distinct utterances, aimed at expressing different thoughts. The first expresses a preference, and that preference would appear sufficient for guiding one’s behavior, all else being equal; the latter statement, however, appears to express a different sentiment altogether. That second sentiment appears to imply that others ought not to have homosexual intercourse, regardless of whether you (or they) want to engage in the act.

This is the key distinction, then: moral conscience (regulating one’s own behavior) does not appear to straightforwardly explain moral condemnation (regulating the behavior of others). Despite this, almost every expressed moral rule or law involves punishing others for how they behave – at least implicitly. While the specifics of what gets punished and how much punishment is warranted vary to some degree from individual to individual, the general form of moral rules does not. Were I to say I do not wish to have homosexual intercourse, I’m only expressing a preference, a bit like stating whether or not I would like my sandwich on white or wheat bread. Were I to say homosexuality is immoral, I’m expressing the idea that those who engage in the act ought to be condemned for doing so. By contrast, I would not be interested in punishing people for making the ‘wrong’ choice about bread, even if I think they could have made a better choice.

While we cannot necessarily learn much about moral condemnation via moral conscience, the reverse is not true: we can understand moral conscience quite well through moral condemnation. Provided that there are groups of people who will tend to punish you for doing something, this provides ample motivation to avoid engaging in that act, even if you otherwise highly desire to do so. Murder is a simple example here: there tend to be some benefits to removing specific conspecifics from one’s world. Whether because those others inflict costs on you or prevent the acquisition of benefits, there is little question that murder might occasionally be adaptive. If, however, the would-be target of your homicidal intentions happens to have friends and family members that would rather not see them dead, thank you very much, the potential costs those allies might inflict need to be taken into account. Provided those costs are appreciably great, and certain actions are punished with sufficient frequency over time, a system for representing those condemned behaviors and their potential costs – so as to avoid engaging in them – could easily evolve.

“Upon further consideration, maybe I was wrong about trying to kill your mom…”

That is likely what our moral conscience represents. To the extent that behaviors like stealing from or physically harming others tended to be condemned and punished, we ought to expect a cognitive system to have evolved to represent that fact. Now perhaps that all seems a bit perverse. After all, many of us simply experience the sensation that an act is morally wrong or not; we don’t necessarily think about our actions in terms of the likelihood and severity of punishment (we do think such things some of the time, but that’s typically not what appears to be responsible for our feeling of “that’s morally wrong”; people think things are morally wrong regardless of whether anyone is caught doing them). That all may be true enough, but remember, the point is to explain why we experience those feelings of moral wrongness; not to just note that we do experience them and that they seem to have some effect on our behavior. While our behavior might be proximately motivated by those feelings of moral wrongness, those feelings came to exist because they were useful in guiding our behavior in the face of punishment. That does raise a rather important question, though: why do we still feel certain acts are immoral even when the probability of detection or punishment is rather close to zero?

There are two ways of answering that question, neither of which is mutually exclusive with the other. The first is that the cognitive systems which compute things like the probability of being detected, and estimate the likely punishment that will ensue, are always working under conditions of uncertainty. Because of this uncertainty, it is inevitable that the system will, on occasion, make mistakes: sometimes one could get away without repercussions when behaving immorally, and one would be better off taking those chances than not. One also needs to consider the reverse error, though: if you assess that you will not be caught or punished when you actually will be, you would have been better off not behaving immorally. Provided the costs of punishment are sufficiently high (the loss of social allies, abandonment by sexual partners, the potential loss of your life, etc.), it might pay in some situations to still avoid behaving in morally unacceptable ways even when you’re almost positive you could get away with it (Delton et al., 2011). The point here is that it doesn’t just matter whether you’re right or wrong about whether you’re likely to be punished: the costs of making each mistake need to be factored into the cognitive equation as well, and those costs are often asymmetric.
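
To make that asymmetry concrete, here is a minimal back-of-the-envelope sketch using made-up numbers (the benefit and cost figures are purely illustrative assumptions of mine, not values drawn from Delton et al.): when the payoff for getting away with an act is small relative to the cost of being punished for it, the expected value of acting turns negative at surprisingly low detection probabilities, and adding uncertainty about your own estimate of that probability only makes restraint look better.

```python
# Toy expected-value calculation for the asymmetry described above.
# BENEFIT and COST are made-up, illustrative numbers.
BENEFIT = 10.0   # assumed payoff if the act goes unpunished
COST = 200.0     # assumed cost if caught (lost allies, partners, etc.)

def expected_value_of_acting(p_detect):
    """Expected payoff of committing the act, given a detection probability."""
    return (1 - p_detect) * BENEFIT - p_detect * COST

for p in (0.01, 0.05, 0.10, 0.25):
    ev = expected_value_of_acting(p)
    verdict = "acting pays" if ev > 0 else "refraining pays"
    print(f"p(detection) = {p:.2f}: expected value = {ev:+.1f} ({verdict})")

# Break-even point: BENEFIT / (BENEFIT + COST) ≈ 0.048 here, so anything more
# than about a 1-in-20 chance of being caught already favors restraint.
```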

The second way of approaching that question is to suggest that the conscience system is just one cognitive system among many, and these systems don’t always need to agree with one another. That is, a conscience system might still represent an act as morally unacceptable while other systems (those designed to get certain benefits and assess costs) might output an incompatible behavioral choice (e.g., cheating on your committed partner despite knowing that doing so is morally condemned, because the potential benefits are perceived as being greater than the costs). To the extent that these systems are independent, then, it is possible for each to hold opposing representations about what to do at the same time. Examples of this happening in other domains are not hard to find: the checkerboard illusion, for instance, allows us to hold both the representation that A and B are different colors and the representation that A and B are the same color in our mind at once. We need not be of one mind about all such matters because our mind is not one thing.

“Well, shoot; I’ll get the glue gun…”

Now, to be sure, there are plenty of instances where people will behave in ways deemed to be immoral by others (or even by themselves, at different times) without feeling the slightest sensation of their conscience telling them “what you’re doing is wrong”. Understanding how the conscience develops, and the various input conditions likely to trigger it – or fail to do so – are interesting matters. In order to make better progress on researching them, however, it would benefit researchers to begin with an understanding of why moral conscience exists. Once the function of conscience – avoiding condemnation – has been determined, figuring out what questions to ask about conscience becomes an altogether easier task. We might expect, for instance, that moral conscience is less likely to be triggered when others (the target and their allies) are perceived to be incapable of effective retaliation. While such a prediction might appear eminently sensible when beginning with condemnation, it is not entirely clear how one could deliver such a prediction if they began their analysis with conscience instead.

References: Delton, A., Krasnow, M., Cosmides, L., & Tooby, J. (2011). Evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters. Proceedings of the National Academy of Sciences, 108, 13335-13340.

DeScioli P, & Kurzban R (2009). Mysteries of morality. Cognition, 112 (2), 281-99 PMID: 19505683

DeScioli P, & Kurzban R (2013). A solution to the mysteries of morality. Psychological Bulletin, 139 (2), 477-96 PMID: 22747563

Rachels, J. & Rachels, S. (2010). The Elements of Moral Philosophy. New York, NY: McGraw Hill.

Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often penetrates into lay discussions of psychology, and has been the subject of many popular books, such as the Predictably Irrational series. Part of the reason that people might give these conflicting views of human intelligence – either in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb which focus only on limited sets of information when making decisions. A simple, perhaps hypothetical example of a heuristic might be something like a “beauty heuristic”. This heuristic might go something along the lines of: when deciding whom to get into a relationship with, pick the most physically attractive available option; other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one’s goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. Past a certain point, the benefits of collecting additional bits of information are outweighed by the costs of doing so, and there are many, many potential sources of information to choose from. So even to the extent that additional information helps one make a better choice, making the best objective choice is often a practical impossibility. In this view, heuristics trade off accuracy with effort, leading to ‘good-enough’ decisions. A related, but somewhat more nuanced, benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it’s drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you select 20 random men and women from your block to measure, your estimate will likely not be precisely 5 inches; it might be lower or higher, and the degree of that error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you’re only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) than when you’re sampling 20, or 50, or a million. Importantly, the issue of sampling error crops up for each source of information you’re using. So unless you’re sampling large enough quantities of information to balance that error out across all the information sources you’re using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by the heuristics might well be less predictively troublesome than the error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
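
A quick simulation makes the point about sample size and error concrete. The numbers below are assumptions chosen purely for illustration (a true 5-inch difference and a 3-inch within-sex standard deviation), not data from any study:

```python
# How far does a sample estimate of the male-female height difference tend to
# stray from the true value as the sample gets smaller? (Illustrative numbers only.)
import random
import statistics

random.seed(1)

TRUE_DIFF = 5.0   # assumed true average difference, in inches
SD = 3.0          # assumed within-sex standard deviation, in inches

def estimate_difference(n_per_sex):
    """Estimate the average height difference from n men and n women."""
    men = [random.gauss(69.0 + TRUE_DIFF / 2, SD) for _ in range(n_per_sex)]
    women = [random.gauss(69.0 - TRUE_DIFF / 2, SD) for _ in range(n_per_sex)]
    return statistics.mean(men) - statistics.mean(women)

for n in (2, 20, 50, 1000):
    # Average absolute miss across many repeated samples of this size.
    misses = [abs(estimate_difference(n) - TRUE_DIFF) for _ in range(2000)]
    print(f"{n:4d} people per sex: typical error of about {statistics.mean(misses):.2f} inches")
```

With only a couple of people per sex, the typical miss is a sizable fraction of the true difference; with a thousand, it all but vanishes. The same inflation of error applies to every additional cue a decision-maker tries to estimate from limited data.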

While that seems to be all well and good, the astute reader will have noticed the boundary conditions required for heuristics to be of value: the user needs to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you play an optimization strategy and have sufficient amounts of information about each source, you’ll make the best possible prediction. In the face of limited information, a heuristic strategy can do better, provided you know you don’t have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically attend to at random, though, you’d end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn’t have sufficient amounts of information when you actually did, you’d also make a worse prediction than the optimizer 100% of the time.
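
The sketch below simulates roughly that scenario, with all of the particulars being assumptions of mine rather than anything from Gigerenzer’s paper: five cues with made-up weights (one strong, four weak), and an “optimizer” operationalized as an ordinary least-squares regression on all five cues. With only a handful of observations to learn from, the single-cue heuristic that happens to track the strong cue tends to out-predict the full regression, a heuristic locked onto one of the weak cues does considerably worse, and once data are plentiful the full regression comes out ahead:

```python
# One strong cue (weight 1.0) and four weak cues (weight 0.1) predict an outcome.
# Compare an all-cue regression ("optimizer") against single-cue heuristics when
# training data are scarce versus plentiful. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
WEIGHTS = np.array([1.0, 0.1, 0.1, 0.1, 0.1])  # assumed true cue weights
NOISE_SD = 1.0                                 # assumed noise in the outcome

def make_data(n):
    """Generate n observations of the five cues and the outcome they predict."""
    X = rng.normal(size=(n, 5))
    y = X @ WEIGHTS + rng.normal(scale=NOISE_SD, size=n)
    return X, y

def mse(predicted, actual):
    return float(np.mean((predicted - actual) ** 2))

def one_trial(n_train, n_test=2000):
    X_train, y_train = make_data(n_train)
    X_test, y_test = make_data(n_test)
    # "Optimizer": least-squares regression using all five cues at once.
    beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    optimizer_error = mse(X_test @ beta, y_test)
    # Single-cue heuristics: fit a weight for one cue and ignore the other four.
    cue_errors = []
    for cue in range(5):
        x = X_train[:, cue]
        b = np.dot(x, y_train) / np.dot(x, x)
        cue_errors.append(mse(X_test[:, cue] * b, y_test))
    return optimizer_error, cue_errors[0], float(np.mean(cue_errors[1:]))

for n_train in (8, 500):
    results = np.array([one_trial(n_train) for _ in range(300)])
    opt, strong, weak = results.mean(axis=0)
    print(f"training n = {n_train:3d}: all-cue regression {opt:.2f}, "
          f"strong-cue heuristic {strong:.2f}, average weak-cue heuristic {weak:.2f} (test MSE)")
```

Which of those outcomes you get, in other words, depends entirely on knowing whether you are data-poor and which cue is the strong one: exactly the boundary conditions described above.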

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast and frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you’re trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of the heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of being an organ donor between countries: while only 4.3% of Danish people are donors, nearly everyone in Sweden is (approximately 85%). Since the explicit attitudes about the willingness to be a donor don’t seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an ‘opt-in’ policy for being a donor, whereas the Swedes have an ‘opt-out’ one. The default option appears to be responsible for driving most of the variance in rates of organ donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) than an explanation of it (why does it matter, and under what circumstances might we expect it not to?). Though I have no data on this, I imagine that if you brought subjects into the lab and presented them with an option to give the experimenter $5 or have the experimenter give them $5, but highlighted the first option as the default, you would probably find very few people following that default. Why, then, might the default heuristic be so persuasive at getting people to be or fail to be organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer’s hypothesized function for the default heuristic – group coordination – doesn’t help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven’t explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other “one-word explanations” (like ‘culture’, ‘norms’, ‘learning’, ‘the situation’, or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (what Gigerenzer suggested formed the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do), without telling us anything more. Such descriptions, it seems, could even drop the word ‘heuristic’ altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it’s unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each of which dabbles in a different source of information that the others ignore, seem to resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove to be a fast and frugal way of stating results, it ends up being a poor method of explaining them or yielding much in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral Satisficing: Rethinking Moral Behavior as Bounded Rationality. Topics in Cognitive Science, 2, 528-554 DOI: 10.1111/j.1756-8765.2010.01094.x

Do People Try To Dishonestly Signal Fairness?

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.” – Louis CK

This quote appeared in a post of mine around the middle of last month, in which I wanted to draw attention to the fact that a great deal of caution is warranted in inferring preferences for fairness per se from the end-states of economic games. Just because people behaved in ways that resulted in inequality being reduced, it does not necessarily follow that people were consciously acting in those ways to reduce inequality, or that humans have cognitive adaptations designed to do so; that is, to achieve fairness. In fact, I have the opposite take on the matter: since achieving equality per se doesn’t necessarily do anything useful, we should not expect to find cognitive mechanisms designed for achieving that end state. In this view, concerns for fairness are byproducts of cognitive systems designed to do other useful things. Fairness, after all, would be – and indeed can only be – an additional restriction to tack onto the range of possible, consequentially-useful outcomes. As the Louis CK quote makes clear, concerns for fairness might involve doing things which are actively detrimental, like destroying someone else’s property to maintain some kind of equal distribution of resources. As his quote also makes clear, people are, in fact, sometimes willing to do just that.

Which brings us nicely to the topic of fairness and children.

There has been some research on children in which an apparent preference for fairness (the Louis CK kind) has been observed. In the first study in a paper by Shaw et al (2013), children, ages 6 to 8, were asked a series of questions so as to be rewarded with colorful erasers (a valued resource for children). The experimenter also told the child that another, non-present child had finished a similar task and, as such, had also earned some erasers. Initially, the experimenter divided four erasers equally between the two, then left the room to retrieve a final eraser they had ostensibly forgotten. The experimenter returned to the room and asked the child what they should do with the fifth eraser: should the child themselves get it, should the non-present child get it, or should it be thrown away? A remarkable 80% of children suggested that the eraser should be thrown away, rather than taking it for themselves or giving it to the other child. The first thing worth noting here is that children appeared willing to achieve equality through welfare destruction; equality made no one better off here, and at least one person worse off. This is what I meant when I said that achieving equality only limits the possible range of behaviors. The more interesting finding, though, is what happened when children had the option of behaving unfairly in a non-transparent way.

The two other conditions in this first study tracked the possibility that children only wanted to appear fair, without actually being fair. In these conditions, the erasers were placed inside envelopes, so as to be concealed from view. In the first of these two conditions, the child was given one eraser while the other, non-present child was given two. When the experimenter left the room to retrieve the last eraser, a confederate came in, placed an additional eraser inside the child’s envelope, and told the child to keep it secret. Then, the experimenter returned with the final eraser and asked the child what they should do with it. In this condition, only 25% of children said the eraser should be thrown away, with the rest opting instead to keep it for themselves – an unfair distribution. The second version of this condition was the same, except it was the non-present child who got one eraser initially, with the confederate adding the same secret eraser to the non-present child’s envelope. In that condition, 60% of children suggested the experimenter should throw away the last eraser, with the remaining 40% keeping it for themselves (making them appear indifferent between a fair distribution and a selfish, unfair one).

So, just to recap, children will publicly attempt to achieve a fair outcome, even though doing so results in worse consequentialist outcomes (there is no material benefit to either child in throwing away an otherwise-valued eraser). However, privately, children are perfectly content to behave unfairly. The proffered explanation for these findings is that children wanted to send a signal of fairness to others publicly, but actually preferred to behave unfairly, and when they had some way of obscuring that they were doing so, they would make use of it. Indeed, findings along these same lines have been demonstrated across a variety of studies in adults as well – appearing publicly fair while behaving privately selfishly – so the patterns of behavior appear sound. While I think there is certainly something to the signaling model proposed by Shaw et al (2013), I also think the signaling explanation requires some semantic and conceptual tweaking in order to make it work since, as it stands, it doesn’t make good sense. These alterations focus on two main areas: first, the nature of communication itself and the necessary conditions for signals to evolve; and second, how to precisely conceptualize what signal is – or rather isn’t – being sent, as well as why we ought to expect that state of affairs. Let’s begin by talking about honesty.

Liar, Liar, I’m bad at poetry and you have third-degree burns now.

The first issue with the signaling explanation involves a basic conceptual point about communication more generally: in order for a receiver to care about a signal from a sender in the first place, the signal needs to (generally) be honest. If I publicly proclaim that I’m a millionaire when I’m actually not, it would behoove listeners to discount what it is I have to say. A dishonest signal is of no value to the receiver. The same logic holds throughout the animal kingdom, which is why ornaments that signal an animal’s state – like the classic peacock tail – are generally very costly to grow, maintain, and survive with. These handicaps ensure the signal’s honesty and make it worth the peahen’s while to respond to. If, on the other hand, the peacocks could display a train without actually being in better condition, the signal value of the trait is lost, and we should expect peahens to eventually evolve in the direction of no longer caring about the signal. The fairness signaling explanation, then, seems to be phrased rather awkwardly: in essence, it would appear to say that, “though people are not actually fair, they try to signal that they are because other people will believe them”. This requires positing that the signal itself is a dishonest one and that receivers care about it. That’s a conceptual problem.

The second issue is that, even if one was actually fair in terms of resource distribution both publicly and privately, it seems unclear to me that one would benefit in any way by signaling that fact about themselves. Understanding why should be fairly easy: partial friends – ones who are distinctly and deeply interested in promoting your welfare specifically – are more desirable allies than impartial ones. Someone who would treat all people equally, regardless of preexisting social ties, appears to offer no distinct benefits as an association partner. Imagine, for instance, how desirable a romantic partner would be who is just as interested in taking you out for dinner as they are in taking anyone else out. If they don’t treat you as special in any way, investing your time in them would be a waste. Similarly, a best friend who is indifferent between spending time with you or someone they just met works as well for the purposes of this example. Signaling you’re truly fair, then, is signaling that you’re not a good social investment. Further, as this experiment demonstrated, achieving fairness can often mean worse outcomes for many. Since the requirement of fairness is a restriction on the range of possible behaviors one can engage in, fairness per se cannot lead to better utilitarian outcomes. Not only would signaling true fairness make you seem like a poor friend, it would also tell others that you’re the type of person who will tend to make worse decisions, overall. This doesn’t paint a pretty picture of fair individuals.

So what are we to make of the children’s curious behavior of throwing out an eraser? My intuition is that children weren’t trying to send a signal of fairness so much as they were trying to avoid sending a signal of partiality. This is a valuable distinction to make, as it makes the signaling explanation immediately more plausible: now, instead of a dishonest signal that needs to be believed, we’re left with the lack of a distinct signal that need not be considered either honest or dishonest. The signal is what’s important, but the children’s goal is to avoid letting signals leak, rather than to actively send them. This raises the somewhat-obvious question of why we might expect people to sometimes forgo benefits to themselves or others so as to avoid sending a signal of partiality. This is an especially important consideration, as not throwing away a resource can (potentially) be beneficial no matter where it ends up: either directly beneficial in terms of gaining a resource for yourself, or beneficial in terms of initiating or maintaining alliances if generously given to others. Though I don’t have a more definite response to that concern, I do have some tentative suggestions.

Most of which sadly request that I, “Go eat a…”, well, you know.

An alternative possibility is that people might wish to, at times, avoid giving other people information pertaining to the extent of their existing partial relationships. If you know, for instance, that I am already deeply invested in friendships with other people, that might make me look like a bad potential investment, as I have, proportionately, fewer available resources to invest in others than if I didn’t have those friendships; I would also have less of a need for additional friends (as I discussed previously). Further, social relationships can come with certain costs or obligations, and there are times when initiating a new relationship with someone is not in your best interests: even if that person might treat you well, associating with them might carry costs from others your new partner has mistreated in the past. Though these possibilities might not necessarily explain why the children are behaving the way they do with respect to erasers, they at least give us a better theoretical grounding from which to start considering the question. What I feel we can be confident about is that the strategy the children are deploying resembles poker players trying to avoid letting other people see which cards they’re holding, rather than trying to lie to other people about which cards those are. There is some information that it’s not always wise to send out into the world, and some information it’s not worth listening to. Since communication is a two-way street, it’s important not to think about either side in isolation in these matters.

References: Shaw, A., Montinari, N., Piovesan, M., Olson, K.R., Gino, F., & Norton, M.I. (2013). Children develop a veil of fairness. Journal of Experimental Psychology: General. PMID: 23317084

Why Do People Adopt Moral Rules?

First dates and large social events, like family reunions or holiday gatherings, can leave people wondering about which topics should be off-limits for conversations, or even dreading which topics will inevitably be discussed. There’s nothing quite like the discomfort brought on by a drunken uncle who feels the need to let you know precisely what he thinks about the proper way to craft immigration policy, or about gay marriage. Similarly, it might not be a good idea to open up a first date with an in-depth discussion of your deeply held views on abortion and racism in the US today. People realize, quite rightly, that such morally-charged topics have the potential to be rather divisive, and can quickly alienate new romantic partners or cause conflict within otherwise cohesive groups. Alternatively, however, in the event that you happen to be in good agreement with others on such topics, they can prove to be fertile grounds for beginning new relationships or strengthening old ones; “the enemy of my enemy is my friend” and similar sayings attest to that. All this means you need to be careful about where and how you spread your views about these topics. Moral stances are kind of like manure in that way.

Great on the fields; not so great for tracking around everywhere you walk.

Now these are pretty important things to consider if you’re a human, since a good portion of your success in life is going to be determined by who your allies are. One’s own physical prowess is no longer sufficient to win conflicts when you’re fighting against increasingly larger alliances, not to mention the fact that allies also do wonders for your available options regarding other cooperative ventures. Friends are useful, and this shouldn’t be news to anyone. This would, of course, drive selection pressures for adaptations that help people build and maintain healthy alliances. However, not everyone ends up with a strong network of alliances capable of helping them protect or achieve their interests. Friends and allies are a zero-sum resource, as the time they spend helping one person (or one group of people) is time not spent with another. The best allies are a very limited and desirable resource, and only a select few will have access to them: those who have something of value to offer in return. So what are the people towards the bottom of the alliance hierarchy to do? Well, one potential answer is the obvious, and somewhat depressing, outcome: not much. They tend to get exploited by others, often ruthlessly so. They either need to increase their desirability as a partner to others in order to make friends who can protect them, or face those severe and persistent social costs.

Any available avenue that helps those exploited parties avoid such costs and protect their interests, then, ought to be extremely appealing. A new paper by Petersen (2013) proposes that one of these avenues might be for those lacking in the alliance department to be more inclined to use moralization to protect their interests. Specifically, the proposition on offer is that if one lacks the private ability to enforce one’s own interests, in the form of friends, one might be increasingly inclined to turn towards public means of enforcement: recruiting third-party moralistic punishers. If you can create a moral rule that protects your self-interest, third parties – even those who otherwise have no established alliance with you – ought to become your de facto guardians whenever those interests are threatened. Accordingly, the argument goes that those lacking in friends ought to be more likely to support existing rules that protect them against exploitation, whereas those with many friends, who are capable of exploiting others, ought to feel less interest in supporting moral rules that prevent said exploitation. In support of this model, Petersen (2013) notes that there is a negative correlation – albeit a rather small one – between proxies for moralization and friend-based social support (as opposed to familial or religious support, which tended to correlate as well, but in the positive direction).

So let’s run through a hypothetical example to clarify this a bit: you find yourself back in high school and relatively alone in that world, socially. The school bully, with his pack of friends, has been hounding you and taking your lunch money; the classic bully move. You could try to stand up to the bullies to prevent the loss of money, but such attempts are likely to be met with physical aggression, and you’d only end up getting yourself hurt on top of then losing your money anyway. Since you don’t have enough friends who are willing and able to help tip the odds in your favor, you could attempt to convince others that it ought to be immoral to steal lunch money. If you’re successful in your efforts, the next time the bullies attempt to inflict costs on you, they would find themselves opposed by the other students who would otherwise just stay out of it (provided, of course, that they’re around at the time). While these other students might not be your allies at other times, they are your allies, temporarily, when you’re being stolen from. Of course, moralizing stealing prevents you from stealing from others – as well as having it done to you – but since you weren’t in a position to be stealing from anyone in the first place, it’s really not that big of a loss for you, relative to the gain.

Phase Two: Try to make wedgies immoral.

While such a model posits a potentially interesting solution for those without allies, it leaves many important questions unaddressed. Chief among them: what's in it for the third parties? Why should other people adopt your moral rules, as opposed to their own, let alone be sure to intervene even if they share the moral rule? While third-party support is certainly a net benefit for the moralizer who initially can't defend their own interests, it's a net cost to the people who actually have to enforce the moral rule. If those bullies are trying to steal from you, the costs of deterring them and, if necessary, fighting them off fall on the shoulders of others who would probably rather avoid such risks. These costs are magnified further because a moral rule against stealing lunch money ought to require people to punish any and all instances of the bullying; not just your specific one. As punishing people is generally not a great way to build or maintain relationships with them, supporting this moral rule could prevent the punishers from forming what might be otherwise-useful alliances with the bullying parties. Losing potential friendships to temporarily support someone you're not actually friends with and won't become friends with doesn't sound like a very good investment.

The costs don't even end there, though. Let's say, hypothetically, that most people do agree that the stealing of lunch money ought to be stopped and are willing to accept the moral rule in the first place. There are costs involved in enforcing the rule, and it's generally in everyone's best interest to not suffer those costs personally. So, while people might be perfectly content with there being a rule against stealing, they don't want to be the ones who have to enforce it; they would rather free-ride on other people's punishment efforts. Unfortunately, the moral rule requires a large number of potential punishers for it to be effective. This means that those willing to punish would need to incentivize non-punishers to start punishing as well. These incentives, of course, aren't free to deliver. This leads to punishers needing to, in essence, not only punish those who commit the immoral act, but also punish those who fail to punish people who commit the immoral act (which leads to punishing those who fail to punish those who fail to punish as well, and so on; the recursion can be hard to keep track of). As the costs of enforcement continue to mount, in the absence of compensating benefits it's not at all clear to me why third parties should become involved in the disputes of others, or try to convince other people to get involved. Punishing an act "because it's immoral" is only a semantic step away from punishing something "just because".
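To make that cost-compounding point a bit more concrete, here is a minimal sketch of the recursion (entirely my own toy model, with hypothetical numbers; nothing here comes from Petersen, 2013): each violation costs something to punish, some fraction of the group free-rides on that punishment, and every failure to punish becomes a new "violation" requiring punishment at the next order of enforcement.

```python
# A toy model (hypothetical numbers) of how enforcement costs compound when
# punishers must also punish non-punishers, and the non-punishers of
# non-punishers, and so on.

def total_enforcement_cost(violations=1, cost_per_punishment=1.0,
                           free_rider_rate=0.5, group_size=20, depth=5):
    """Sum punishment costs across successive 'orders' of enforcement.

    At each order, every outstanding violation gets punished (at a cost),
    but a fraction of the group fails to punish, and each failure-to-punish
    becomes a new violation to be punished at the next order.
    """
    total = 0.0
    for _ in range(depth):
        total += violations * cost_per_punishment
        violations = violations * free_rider_rate * group_size
    return total

for depth in range(1, 6):
    print(f"orders of punishment: {depth}, "
          f"cumulative cost: {total_enforcement_cost(depth=depth):.0f}")
# 1 -> 1, 2 -> 11, 3 -> 111, 4 -> 1111, 5 -> 11111
```

Even with modest assumptions, the number of "violations" requiring punishment grows geometrically with each order of enforcement, which is exactly why the question of who pays those costs, and what they get in return, matters.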

A more plausible model, I feel, would be an alliance-based model for moralization: people might be more likely to adopt moral rules in the interest of increasing their association value to specific others. Let's use one of the touchy, initial subjects – abortion – as a test case here: if I adopt a moral stance opposing the practice, I would make myself a less-appealing alliance partner for anyone who likes the idea of abortions being available, but I would also make myself a more-appealing partner to anyone who dislikes the idea (all else being equal). Now that might seem like a wash in terms of costs and benefits on the whole – you open yourself up to some friends and foreclose on others – but there are two main reasons I would still favor the alliance account. The first is the most obvious: it locates some potential benefits for the rule-adopters. While it is true that there are costs to taking a moral stance, there aren't only costs anymore. The second benefit of the alliance account is that the key issue here might not be whether you make or lose friends on the whole, but rather that adopting a rule can ingratiate you to specific people. If you're trying to impress a particular potential romantic partner or ally, rather than all romantic partners or allies more generally, it might make good sense to tailor your moral views to that specific audience. As was noted previously, friendship is a zero-sum game, and you don't get to be friends with everyone.

Basically, these two aren’t trying to impress each other.

It goes without saying that the alliance model is far from complete in terms of having all its specific details fleshed out, but it gives us some plausible places from which to start our analysis: considerations of what specific cues people might use to assess relative social value, or how those cues interact with current social conditions to determine the degree of current moral support. I feel the answers to such questions will help us shed light on many additional ones, such as why almost all people will agree with the seemingly-universal rule stating "killing is morally wrong" and then go on to expand upon the many, many non-universal exceptions to that moral rule over which they don't agree (such as when killing in self-defense, or when you find your partner having sex with another person, or when killing a member of certain non-human species, or killing unintentionally, or when killing a terminally ill patient rather than letting them suffer, and so on…). The focus, I feel, should not be on how powerful a force third-party punishment can be, but rather on why third parties might care (or fail to care) about the moral violations of others in the first place. Just because I think murder is morally wrong, it doesn't mean I'm going to react the same way to any and all cases of murder.

References: Petersen, M. (2013). Moralization as protection against exploitation: Do individuals without allies moralize more? Evolution and Human Behavior, 34, 78-85 DOI: 10.1016/j.evolhumbehav.2012.09.006

Equality-Seeking Can Lift (Or Sink) All Ships

There's a saying in economics that goes, "A rising tide lifts all ships". The basic idea behind the saying is that the marginal benefits that accrue from people exchanging goods and services are good for everyone involved – and even for some who are not directly involved – in much the same way that all the boats in a body of water will rise or fall in height together as the overall water level does. While there is an element of truth to the saying (trade can be good for everyone, and the resources available to the poor today can, in some cases, be better than those available to even the wealthy in generations past), economies, of course, are not like bodies of water that rise and fall uniformly; some people can end up radically better- or worse-off than others as economic conditions shift, and inequality is a persistent factor in human affairs. Inequality – or, more aptly, the perception of it – is also commonly used as a justification for furthering certain social or moral goals. There appears to be something (or somethings) about inequality that just doesn't sit well with people.

And I would suggest that those people go and eat some cake.

People’s ostensible discomfort with inequality has not escaped the eyes of many psychological researchers. There are some who suggest that humans have a preference for avoiding inequality; an inequality aversion, if you will. Phrased slightly differently, there are some who suggest that humans have an egalitarian motive (Dawes et al, 2007) that is distinct from other motives, such as enforcing cooperation or gaining benefits. Provided I’m parsing the meaning of the phrase correctly, then, the suggestion being made by some is that people should be expected to dislike inequality per se, rather than dislike inequality for other, strategic reasons. Demonstrating evidence of a distinct preference for inequality aversion, however, can be difficult. There are two reasons for this, I feel: the first is that inequality is often confounded with other factors (such as someone not cooperating or suffering losses). The second reason is that I think it’s the kind of preference that we shouldn’t expect to exist in the first place.

Taking these two issues in order, let's first consider the paper by Dawes et al (2007) that sought to disentangle some of these confounding issues. In their experiment, 120 subjects were brought into the lab in groups of 20. These groups were further divided into anonymous groups of 4, such that each participant played in five rounds of the experiment, but never with the same people twice. The subjects also did not know about anyone's past behavior in the experiment. At the beginning of each round, every subject in each group received a random number of payment units between some unmentioned specific values, and everyone was aware of the payments of everyone else in their group. Naturally, this tended to create some inequality in payments. Subjects were given means by which to reduce this inequality, however: they could spend some of their payment points to either add to or subtract from other people's payments at a ratio of 3 to 1 (in other words, I could spend one unit of my payment to either reduce your payment by three points or add three points to your payment). These additions and deductions were all decided on in private and enacted simultaneously, so as to avoid retribution and cooperation factors. It wasn't until the end of each round that subjects saw how many additions and reductions they had received. In total, each subject had 15 chances to either add to or deduct from someone else's payment (3 people per round over 5 rounds).
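For concreteness, here is a minimal sketch of the addition/deduction mechanics just described. The 3-to-1 conversion comes from the study; the specific endowments and choices below are hypothetical numbers of my own, picked only to show the arithmetic.

```python
# A sketch of one round of the Dawes et al (2007) modification mechanics.
# The 3:1 conversion is from the study; the endowments and moves are made up.

def apply_modifications(payments, moves):
    """payments: dict mapping player -> starting payment for the round.
    moves: list of (spender, target, units_spent, kind) decided in private,
    where kind is 'add' or 'deduct'. Each unit spent changes the target's
    payment by three units; all moves are enacted simultaneously."""
    final = dict(payments)
    for spender, target, units, kind in moves:
        final[spender] -= units                  # cost to the spender
        delta = 3 * units                        # the 3-to-1 conversion
        final[target] += delta if kind == 'add' else -delta
    return final

start = {'A': 12, 'B': 7, 'C': 4, 'D': 9}        # hypothetical endowments
moves = [('C', 'A', 2, 'deduct'),                # C spends 2 to cut A by 6
         ('A', 'C', 1, 'add')]                   # A spends 1 to boost C by 3
print(apply_modifications(start, moves))
# -> {'A': 5, 'B': 7, 'C': 5, 'D': 9}
```

Notice that in this made-up round the deduction destroys more value than it costs: the group's total payment falls from 32 to 26 units, a point that becomes relevant shortly.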

The results showed that most subjects paid to either add to or deduct from someone else's payment at least once: 68% of people reduced the payment of someone else at least once, whereas 74% increased someone's payment at least once. It wasn't what one might consider a persistent habit, though: only 28% reduced people's payments more than five times, while 33% added that often, and only 6% reduced more than 10 times, whereas 10% added that much. This, despite there being inequality to be reduced in all cases. Further, an appreciable number of the modifications didn't go in the inequality-reducing direction: 29% of reductions went to below-average earners, and 38% of the additions went to above-average earners. Of particular interest, however, is the precise way in which subjects ended up reducing inequality: the people who earned the least in each round tended to spend 96% more on deductions than top earners. In turn, top earners averaged spending 77% more on additions than the bottom earners. This point is of interest because positing a preference for avoiding inequality does not necessarily help one predict the shape that equality will ultimately take.

You could also cut the legs off the taller boys in the left picture so no one gets to see.

The first thing worth pointing out here, then, is that about half of all the inequality-reducing behaviors that people engaged in ended up destroying overall welfare. These are behaviors in which no one is materially better off. I'm reminded of part of a standup routine by Louis CK concerning that idea, in which he recounts the following story (starting at about 1:40):

“My five-year old, the other day, one of her toys broke, and she demanded I break her sister’s toy to make it fair. And I did.”

It's important to note this so as to point out that achieving equality itself doesn't necessarily do anything useful. It is not as if equality automatically makes everyone – or anyone – better off. So what kind of useful outcomes might such spiteful behavior result in? To answer that question, we need to examine the ways people reduced inequality. Any player in this game could reduce the overall amount of inequality by either deducting from high earners' payments or adding to low earners'. This holds for both the bottom and top earners. This means that there are several ways of reducing inequality available to all players. Low earners, for instance, could reduce inequality by engaging in spiteful reductions towards everyone above them until they're all down at the same low level; they could also reduce the overall inequality by benefiting everyone above them, until everyone (but them) is at the same high level. Alternatively, they could engage in a mixture of these strategies, benefiting some people and harming others. The same holds for high earners, just in the opposite directions. Which path people would take depends on what their set point for 'equal' is. Strictly speaking, then, a preference for equality doesn't tell us which method people should opt for, nor does it tell us at what level of inequality people will be relatively satisfied and cease their efforts to achieve equality.

There are, however, other possibilities for explaining these results beyond an aversion to inequality per se. One particularly strong alternative is that people use perceptions of inequality as inputs for social bargaining. Consider the following scenario: two people are working together to earn a joint prize, like a $10 reward. If they work together, they get the $10 to split; if they do not work together, neither will receive anything. Further, let's assume one member of this pair is greedy and, in round one, after they cooperate, takes $9 of the pot for themselves. Now, strictly speaking, the person who received $1 is better off than if they had received nothing at all, but that doesn't mean they ought to accept that distribution, and here's why: if the person with $1 refuses to cooperate during the next round, they only lose that single dollar; the selfish player would lose out on nine times as much. This asymmetry in losses puts the poorer player in a stronger bargaining position, as they have far less to lose from not cooperating. It is from bargaining situations like this one that our sense of fairness likely emerged.
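A quick toy calculation (my own numbers, not drawn from any of the papers discussed) makes that asymmetry explicit: whoever stands to lose less from ending cooperation holds the stronger bargaining position.

```python
# Illustrative only: what each player forfeits per round if the poorer
# player refuses to cooperate (in which case neither earns anything).

def losses_if_cooperation_breaks_down(split):
    poorer, richer = min(split), max(split)
    return {'poorer player loses': poorer, 'richer player loses': richer}

print(losses_if_cooperation_breaks_down((1, 9)))   # poorer loses 1, richer loses 9
print(losses_if_cooperation_breaks_down((5, 5)))   # both lose 5
```

After a 9/1 split, the threat of walking away costs the poorer player one dollar and the richer player nine; after an even split, that threat cuts both ways equally, and neither side has the leverage to demand more.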

So let's apply this analysis back to the results of the experiment: people all start off with different amounts of money and are in positions to benefit or harm each other. Everyone wants to leave with as much benefit as possible, which means contributing nothing and getting additions from everyone else. However, since everyone is seeking this same outcome and they can't all have it, certain compromises need to be reached. Those in high-earning positions face a different set of problems in that compromise than those in low-earning positions: while the high earners are doing something akin to trying to maintain cooperation by increasing the share of resources other people get (as in the previous example), low earners are faced with the problem of negotiating for a better payoff, threatening to cut off cooperation in the process. Both parties seem to anticipate this, with low earners disproportionately punishing high earners, and high earners disproportionately benefiting low earners. That there is no option for cooperation or bargaining present in this experiment is, I think, beside the point, as our minds were not designed to deal with the specific context presented in the experiment. Along those same lines, simply telling people "you're now anonymous" doesn't mean that their mind will automatically function as if it were certain no one could observe its actions, and telling people their computer can't understand their frustration won't stop them from occasionally yelling at it.

“Listen only to my voice: you are now anonymous. You are now anonymous”

As a final note, one should be careful about inferring a motive or preference for equality just because inequality was sometimes reduced. A relatively simple example should demonstrate why: consider an armed burglar who enters a store, points their gun at the owner, and demands all the money in the register. If the owner hands over the money, they have delivered a benefit to the burglar at a cost to themselves, but most of us would not understand this as an act of altruism on the part of the owner; the owner's main concern is not getting shot, and they are willing to pay a small cost (the loss of money) so as to avoid a larger one (possible death). Other research has found, for instance, that when people are given the option to pay a fixed cost (a dollar) to reduce another person's payment by any amount (up to a total of $12), those who engage in reduction are highly likely to generate inequality that favors themselves (Houser & Xiao, 2010). It would be inappropriate to suggest that people are equality-averse from such an experiment, however, and, more to the point, doing so wouldn't further our understanding of human behavior much, if at all. We want to understand why people do certain things; not simply that they do them.

References: Dawes CT, Fowler JH, Johnson T, McElreath R, & Smirnov O (2007). Egalitarian motives in humans. Nature, 446 (7137), 794-6 PMID: 17429399

Houser, D., & Xiao, E. (2010). Inequality-seeking punishment. Economics Letters DOI: 10.1016/j.econlet.2010.07.008

Why Would You Ever Save A Stranger Over A Pet?

The relationship between myself and my cat has been described by many as a rather close one. After I leave my house for almost any amount of time, I'm greeted by what appears to be a rather excited animal that will meow and purr excessively, all while rubbing on and rolling around my feet upon my return. In turn, I feel a great deal of affection towards my cat, and derive feelings of comfort and happiness from taking care of and petting her. Like the majority of Americans, I happen to be a pet owner, and these experiences and ones like them will sound perfectly normal and relatable to most people. I would argue, however, that they are, in fact, very strange feelings, biologically speaking. Despite the occasional story of cross-species fostering, other animals do not seem to behave in ways that indicate they seek out anything resembling pet ownership. It's often not until the point is raised that other species don't make a habit of keeping pets that one realizes how strange a phenomenon pet ownership can be. Finding that bears, for instance, reliably took care of non-bears, providing them with food and protection, would be a biological mystery of the first degree.

And that I get most of my work done like this seems normal to me.

So why do people seem to be so fond of pets? My guess is that the psychological mechanisms that underlie pet ownership in humans are not designed for that function per se. I would say that for a few reasons, notable among them the time and resource factors. First, psychological adaptations take a good deal of time to be shaped by selective forces, which means long periods of co-residence between animals and people would be required for any dedicated adaptations to have formed. Though it's no more than a guess on my part, I would assume that conditions that made extended periods of co-residence probable would likely not have arisen prior to the advent of agriculture and geographically-stable human populations. The second issue involves the cost/benefit ratios: pets require a good deal of investment, at least in terms of food. In order for there to have been any selective pressure to keep pets, the benefits provided by the pets would have needed to more than offset the costs of their care, and I don't know of any evidence in that regard. Dogs might have been able to pull their weight in terms of assisting in hunting and protection, but it's uncertain; other pets – such as cats, birds, lizards, or even the occasional insect – probably did not. While certain pets (like cats) might well have been largely self-sufficient, they don't seem to offer much in the way of direct benefits to their owners either. No benefits means no distinct selection, which means no dedicated adaptations.

Given that there are unlikely to be dedicated pet modules in our brain, what other systems are good candidates for explaining the tendency towards seeking out pets? The most promising candidates that come to mind are our already-existing systems designed for the care of our own, highly-dependent offspring. Positing that pet care is a byproduct of our infant care would manage to skirt both the issues of time and resources; our minds were designed to endure such costs to deliver benefits to our children. It would also allow us to better understand certain facets of the ways people behave towards their pets, such as the "aww" reaction people often have to pets (especially young ones, like kittens and puppies) and babies, as well as the frequent use of motherese (baby-talk) when talking to pets and children (to compare speech directed at pets and babies, see here and here; note as well that you don't often hear adults talking to each other in this manner). Of course, were you to ask people whether their pets are their biological offspring, many would give the correct response of "no". These verbal responses, however, do not indicate that other modules of the brain – ones that aren't doing the talking – "know" that pets aren't actually your offspring, in much the same way that parts of the brain dedicated to arousal don't "know" that generating arousal to pornography isn't going to end up being adaptive.

There is another interesting bit of information concerning pet ownership that I feel can be explained through the pets-as-infants model, but to get to it we need to first consider some research on moral dilemmas by Topolski et al (2013). This dilemma is a favorite of mine and of the psychological community more generally: a variant of the trolley dilemma. In this study, 573 participants were asked to respond to a series of 12 similar moral dilemmas, all of which had the same basic setup: there is a speeding bus that is about to hit either a person or an animal, both of whom wandered out into the street. Subjects only have time to save one of them and are asked which they would prefer to save. (Note: each subject responded to all 12 dilemmas, which might result in some carryover effects; a between-subjects design would have been stronger here. Anyway…) The identity of the animal and person in the dilemma were varied across the conditions: the animal was either the subject's pet (subjects were asked to imagine one if they didn't currently have one) or someone else's pet, and the person was either a foreign tourist, a hometown stranger, a distant cousin, a best friend, a sibling, or a grandparent.

The study also starred Keanu Reeves.

In terms of saving someone else's pet, people generally didn't seem terribly interested: willingness ranged from a high of about 12% of subjects choosing someone else's pet over a foreign tourist to a low of approximately 2% choosing it over their own sibling. The willingness to save the animal in question rose substantially when it was the subject's own pet being considered, however: while people were still roughly as unlikely to save their own pet in cases involving a grandparent or sibling, approximately 40% of subjects indicated they would save their pet over a foreign tourist or a hometown stranger (for the curious, about 23% would save their pet over a distant cousin and only about 5% would save their pet over a close friend. For the very curious, I could see myself saving my pet over the strangers or distant cousin). The strength of the relationship between pet owners and their animals appears to be strong enough to, quite literally, make almost half of them throw another human stranger under the bus to save their pet's life.

This is a strange response to give, but not for the obvious reasons: given that our pets are being treated as our children by certain parts of our brain, this raises the question as to why anyone, let alone a majority of people, would be willing to sacrifice the lives of their pets to save a stranger. I don't expect, for instance, that many people would be willing to let their baby get hit by the bus to save a tourist, so why that discrepancy? Three potential reasons come to mind: first, the pets are only "fooling" certain psychological systems. While some parts of our psychology might be treating pets as children, other parts may well not be (children do not typically look like cats or dogs, for instance). The second possible reason involves the clear threat of moral condemnation. As we saw, people are substantially more interested in saving their own pets, relative to a stranger's pet. By extension, it's probably safe to assume that other, uninvolved parties wouldn't be terribly sympathetic to your decision to save an animal over a person. So the costs of saving the pet might well be perceived as higher. Similarly, the potential benefits of saving an animal may typically be lower than those of saving another person, as saved individuals and their allies are more likely to do things like reciprocate help, relative to a non-human. Sure, the pet's owner might reciprocate, but the pet itself would not.

The final potential reason that comes to mind concerns that interesting bit of information I alluded to earlier: women were more likely to indicate they would save the animal in all conditions, and often substantially so. Why might this be the case? The most probable answer to that question again returns to the pets-as-children model: whereas women have not had to face the risk of genetic uncertainty in their children, men have. This risk makes males generally less interested in investing in children and could, by extension, make them less willing to invest in pets over people. The classic phrase, "Momma's babies; Daddy's maybes", could apply to this situation, albeit in an under-appreciated way (in other words, men might be harboring doubts about whether the pet is actually 'theirs', so to speak). Without reference to parental investment theory – which the study does not contain – explaining this sex difference in willingness to pick animals over people would be very tricky indeed. Perhaps it should come as no surprise, then, that the authors do not do a good job of explaining their findings, opting instead to redescribe them in a crude and altogether useless distinction between "hot" and "cold" types of cognitive processing.

“…and the third type of cognitive processing was just right”

In a very real sense, some parts of our brain treat our pets as children: they love them, care for them, invest in them, and wish to save them from harm. Understanding how such tendencies develop, and what cues our minds use to make distinctions between our own offspring, the offspring of others, our pets, and non-pet animals, are very interesting matters which are likely to be furthered by considering parental investment theory. Are people raised with pets from a young age more likely to view them as fictive offspring? How might hormonal changes during pregnancy affect women's interest in pets? Might cues of a female mate's infidelity make their male partner less interested in taking care of pets they jointly own? Under what conditions might pets be viewed as a deterrent or an asset to starting new romantic relationships, in the same way that children from a past relationship might? The answers to these questions require placing pet care in its proper context, and you're going to have quite a hard time doing that without the right theory.

References: R. Topolski, J.N. Weaver, Z. Martin, & J. McCoy (2013). Choosing between the Emotional Dog and the Rational Pal: A Moral Dilemma with a Tail. ANTHROZOÖS, 26, 253-263 DOI: 10.2752/175303713X13636846944321

Washing Hands In A Bright Room

Part of academic life in psychology – and a rather large part at that – centers around publishing research. Without a list of publications on your resume (or CV, if you want to feel different), your odds of being able to do all sorts of useful things, such as getting and holding onto a job, can be radically decreased. That said, people doing the hiring do not typically care to read through the published research of every candidate applying for the position. This means that career advancement involves not only publishing plenty of research, but publishing it in journals people care about. Though it doesn't affect the quality of the research in any way, publishing in the right places can be suitably impressive to some. In some respects, then, your publications are a bit like recommendations, and some journals' names carry more weight than others. On that subject, I'm somewhat disappointed to note that a manuscript of mine concerning moral judgments was recently rejected from one of these prestigious journals, adding to the ever-lengthening list of prestigious things I've been rejected from. Rejection, I might add, appears to be another rather large part of academic life in psychology.

After the first dozen or so times, you really stop even noticing.

The decision letter said, in essence, that while they were interesting, my results were not groundbreaking enough for publication in the journal. Fair enough; my results were a bit on the expected side of things, and journals do presumably have standards for such things. Being entirely not bitter about the whole experience of not having my paper placed in the esteemed outlet, I've decided to turn my attention to two recent articles published in a probably-unrelated journal within psychology, Psychological Science (proud home of the trailblazing paper entitled "Leaning to the Left Makes the Eiffel Tower Seem Smaller"). Both papers examine what could be considered to fall within the realm of moral psychology, and both present what one might consider to be novel – or at least cute – findings. Somewhat curiously, both papers also lean a bit heavily on the idea of metaphors being more than metaphors, perhaps owing to their propensity for using the phrase "embodied cognition". The first paper deals with the association between light and dark and good and evil, while the second concerns the association between physical cleanliness and moral cleanliness.

The first paper, by Banerjee, Chatterjee, & Sinha (2012), sought to examine whether recalling abstract concepts of good and evil could make participants perceive the room they're in to be brighter or darker, respectively. They predicted this, as far as I can tell, on the basis of embodied cognition suggesting that metaphorical representations are hooked up to perceptual systems and, though they aren't explicit about this, they also seem to suggest that this connection is instantiated in such a way as to make people perceive the world incorrectly. That is to say that thinking about a time they behaved ethically or unethically ought to make people's perceptions of the brightness of the world less accurate, which is a rather strange thing to predict if you ask me. In any case, 40 subjects were asked to think about a time they were ethical or unethical (so 20 per group), and to then estimate the brightness of the room they were in, from 1 to 7. The mean brightness rating of the ethical group was 5.3, and the rating in the unethical group was 4.7. Success; it seemed that metaphors really are embodied in people's perceptual systems.

Not content to rest on that empirical success, Banerjee et al (2012) pressed forward with a second study to examine whether subjects recalling ethical or unethical actions were more likely to prefer objects that produced light (like a candle or a flashlight), relative to objects which did not (such as an apple or a jug). Seventy-four students were again split into two groups, asked to recall an ethical or unethical action in their life, asked to indicate their preference for the objects, and asked to estimate the brightness of the room in watts. The subjects in the unethical condition again estimated the room as being dimmer (M = 74 watts) than the ethical group did (M = 87 watts). The unethical group also tended to show a greater preference for light-producing objects. The authors suggest that this might be the case either because (a) the subjects thought the room was too dim, or (b) the participants were trying to reduce their negative feelings of guilt about acting unethically by making the room brighter. This again sounds like a rather peculiar type of connection to posit (the connection between guilt and wanting things to be brighter), and it manages to miss anything resembling a viable functional account for what I think the authors are actually looking at (but more on that in a minute).

Maybe the room was too dark, so they couldn’t “see” a better explanation.

The second paper comes to us from Schnall, Benton, & Harvey (2008), and it examines an aspect of the disgust/morality connection. The authors noted that previous research had found a connection between increasing feelings of disgust and more severe moral judgments, and they wanted to see if they could get that connection to run in reverse: specifically, they wanted to test whether priming people with cleanliness would cause them to deliver less-severe moral judgments about the immoral behaviors of others. The first experiment involved 40 subjects (20 per cell seemed to be a popular number) who were asked to complete a scrambled sentence task, with half of the subjects given neutral sentences and the other half sentences related to cleanliness. Immediately afterwards, they were asked to rate the severity of six different actions typically judged to be immoral on a 10-point scale. On average, the participants primed with the cleanliness words rated the scenarios as being less wrong (M = 5) than those given neutral primes (M = 5.8). While the overall difference was significant, only one of the six actions was rated significantly differently between conditions, despite all six showing the same directional pattern. In any case, the authors suggested that this may be due to the disgust component of moral judgments being reduced by the primes.

To test this explanation, the second experiment involved 44 subjects watching a scene from Trainspotting to induce disgust, and then having half of them wash their hands immediately afterwards. Subjects were then asked to rate the same set of moral scenarios. The group that washed their hands again had a lower overall rating of immorality (M = 4.7), relative to the group that did not (M = 5.3), with the same pattern as experiment 1 emerging. To explain this finding, the authors say that moral cleanliness is more than a metaphor (restating their finding) and then reference the idea that humans are trying to avoid “animal reminder” disgust, which is a pretty silly idea for a number of reasons that I need not get into here (the short version is that it doesn’t sound like the type of thing that does anything useful in the first place).

Both studies, it seems, make some novel predictions and present a set of results that might not automatically occur to people. Novelty only takes us so far, though: neither study seems to move our understanding of moral judgments forward much, if at all, and neither one even manages to put forth a convincing explanation for its findings. Taking these results at face value (with such small sample sizes, it can be hard to say whether these are definitely 'real' effects, and some research on priming hasn't been replicating so well these days), there might be some interesting things worth noting here, but the authors don't manage to nail down what those things are. Without going into too much detail, the first study seems to be looking at what would be a byproduct of a system dedicated to assessing the risk of detection and condemnation for immoral actions. Simply put, the risks involved in immoral actions go down as the odds of being identified do, so when something lowers the odds of being detected – such as it being dark, or the anonymity that something like the internet or a mask can provide – one could expect people to behave in a more immoral fashion as well.
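To put that sample-size worry in rough numbers, here is a quick simulation-based power check (the effect sizes and standard deviations are hypothetical, not taken from either paper): with 20 subjects per cell, a two-sample t-test will only reliably detect rather large true differences.

```python
# Rough power estimates by simulation (hypothetical effect sizes, in SD units).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def power(true_diff, sd=1.0, n=20, alpha=0.05, sims=5000):
    """Fraction of simulated experiments (n per cell) reaching p < alpha."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, sd, n)          # e.g., neutral-prime cell
        treated = rng.normal(true_diff, sd, n)    # e.g., cleanliness-prime cell
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / sims

for d in (0.3, 0.5, 0.8):                         # small, medium, large effects
    print(f"effect size d = {d}: power ~ {power(d):.2f}")
```

With those hypothetical numbers, power sits around 15% for small effects and only approaches 70% for large ones, which is part of why a single significant result from cells of 20 is hard to interpret on its own.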

The internet can make monsters of us all.

In terms of the second study, the authors would likely be looking at another byproduct, this time of a system designed to avoid the perception of associations with morally-blameworthy others. As cleaning oneself can do things like remove evidence of moral wrongdoing, and thus lower the odds of detection and condemnation, one might feel a slightly reduced pressure to morally condemn others (as there would seem to be less concrete evidence of an association). With respect to the idea of detection and condemnation, then, both studies might be considered to be looking at the same basic kind of byproduct. Of course, phrased in this light ("here's a relatively small effect that is likely the byproduct of a system designed to do other things and probably has little to no lasting effect on real-world behavior"), neither study seems terribly "trailblazing". For a journal that can boast about receiving roughly 3000 submissions a year and accepting only 11% of them for publication, I would think they could pass on such submissions in favor of research to which the label "groundbreaking" or "innovative" could be more accurately applied (unless these actually were the most groundbreaking of the bunch, that is). It would be a shame for any journal if genuinely good work was passed over because it seemed "too obvious" in favor of research that is cute, but not terribly useful. It also seems silly that it matters which journal one's research is published in in the first place, career-wise, but so does washing your hands in a bright room so as to momentarily reduce the severity of moral judgments to some mild degree.

References: Banerjee P, Chatterjee P, & Sinha J (2012). Is it light or dark? Recalling moral behavior changes perception of brightness. Psychological science, 23 (4), 407-9 PMID: 22395128

Schnall S, Benton J, & Harvey S (2008). With a clean conscience: cleanliness reduces the severity of moral judgments. Psychological science, 19 (12), 1219-22 PMID: 19121126

This Is Water: Making The Familiar Strange

In the fairly-recent past, there was a viral video being shared across various social media sites called "This is Water" by David Foster Wallace. The beginning of the speech tells a story of two fish who are oblivious to the water in which they exist, in much the same way that humans come to take the existence of the air they breathe for granted. The water is so ubiquitous that the fish fail to notice it; it's just the way things are. The larger point of the video – for my present purposes – is that the inferences people make in their day-to-day lives are so automatic as to become taken for granted. David correctly notes that there are many, many different inferences one could make about the people we see in our everyday lives: is the person in the SUV driving it because they fear for their safety, or are they selfish for driving that gas-guzzler? Is the person yelling at their kids not usually like that, or are they an abusive parent? There are two key points in all of this. The first is the aforementioned habit people have of taking for granted the very ability to draw these kinds of inferences in the first place: what Cosmides & Tooby (1994) call instinct blindness. Seeing, for instance, is an incredibly complex and difficult-to-solve task, but the only effort we perceive when it comes to vision involves opening our eyes: the seeing part just happens. The second, related point is the more interesting part to me: it involves the underdetermination of the inferences we draw from the information we're provided. That is to say that no part of the observations we make (the woman yelling at her child) intrinsically provides us with good information to make inferences with (what she is like at other times).

Was Leonidas really trying to give them something to drink?

There are many ways of demonstrating underdetermination, but visual illusions – like this one – prove to be remarkably effective in quickly highlighting cases where the automatic assumptions your visual system makes about the world cease to work. Underdetermination isn't just a problem that needs to be solved with respect to vision, though: our minds make all sorts of assumptions about the world that we rarely find ourselves in a position to appreciate or even notice. In this instance, we'll be considering some of the information our mind automatically fills in concerning the actions of other people. Specifically, we perceive our world along a dimension of intentionality. Not only do we perceive that individuals acted "accidentally" or "on purpose", we also perceive that individuals acted to achieve certain goals; that is, we perceive "motives" in the behavior of others.

Knowing why others might act is incredibly useful for predicting and manipulating their future behavior. The problem that our minds need to solve, as you can no doubt guess by this point, is that intentions and motives are not readily observable from actions. This means that we need to do our best to approximate them from other cues, and that entails making certain assumptions about observable actions and the actors who bring them about. Without these assumptions, we would have no way to distinguish between someone killing in self-defense, killing accidentally, or killing just for the good old-fashioned fun of it. The questions for consideration, then, concern which kinds of assumptions tend to be triggered by which kinds of cues under what circumstances, as well as why they get triggered by that set of cues. Understanding what problems these inferences about intentions and motives were designed to solve can help us more accurately predict the form that these often-unnoticed assumptions will likely take.

While attempting to answer that question about what cues our minds use, one needs to be careful not to lapse into the automatically-generated inferences our minds typically make and remain instinct-blind. The reason one ought to avoid doing this – in regards to inferences about intentions and motives – is made very well by Gawronski (2009):

“…how [do] people know that a given behavior is intentional or unintentional[?]  The answer provided…is that a behavior will judged as intentional if the agent (a) desired the outcome, (b) believed that the action would bring about the outcome, (c) planned the action, (d) had the skill to accomplish the action, and (e) was aware of accomplishing the outcome…[T]his conceptualization implies the risk of circularity, as inferences of intentionality provide a precondition for inferences about aims and motives, but at the same time inferences of intentionality depend on a perceivers’ inferences about aims and motives.”

In other words, people often attempt to explain whether or not someone acted intentionally by referencing motives ("he intended to harm X because he stood to benefit"), and they also often attempt to explain someone's motives on the basis of whether or not they acted intentionally ("because he stood to benefit by harming X, he intended harm"). On top of that, you might also notice that inferences about motives and intentions are themselves derived, at least in part, from other, non-observable inferences about talents and planning. This circularity keeps us from reaching anything resembling a more complete explanation for what we perceive.

“It looks three-dimensional because it is, and it is 3-D because it looks like it”

Even if we ignore this circularity problem for the moment and just grant that inferences about motives and intentions can influence each other, there is also the issue of the multiple possible inferences which could be drawn about a behavior. For instance, if you observe a son push his father down the stairs and kill him, one could make several possible inferences about motives and intentions. Perhaps the son wanted money from an inheritance, resulting in his intending to push his father to cause death. However, pushing his father not only kills close kin, but also carries the risk of a punishment. Since the son might have wanted to avoid punishment (and might well have loved his father), this would result in his not intending to push his father and cause death (i.e. maybe he tripped, which is what caused him to push). Then again, unlikely as it may sound, perhaps the son actively sought punishment, which is why he intended to push. This could go on for some time. The point is that, in order to reach any one of these conclusions, the mind needs to add information that is not present in the initial observation itself.
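As a toy illustration of that last point, here is a small Bayesian sketch (the numbers are mine and purely illustrative): if the observed action is roughly equally likely under several different motives, then whatever conclusion the observer reaches is carried almost entirely by the assumptions, or priors, their mind supplies.

```python
# Illustrative only: the same observation, filtered through different priors.

def posterior(priors, likelihoods):
    """priors: P(motive); likelihoods: P(observed push | motive).
    Returns P(motive | observed push) via Bayes' rule."""
    unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
    z = sum(unnormalized.values())
    return {m: round(p / z, 2) for m, p in unnormalized.items()}

# The push itself is (say) equally likely under each candidate motive...
likelihoods = {'wanted inheritance': 0.9, 'accident': 0.9, 'sought punishment': 0.9}

# ...so observers with different priors reach very different verdicts.
suspicious = {'wanted inheritance': 0.6, 'accident': 0.3, 'sought punishment': 0.1}
charitable = {'wanted inheritance': 0.05, 'accident': 0.9, 'sought punishment': 0.05}

print(posterior(suspicious, likelihoods))   # mirrors the suspicious priors
print(posterior(charitable, likelihoods))   # mirrors the charitable priors
```

Because the observation does not discriminate between the candidate motives, each observer's conclusion simply mirrors the assumptions they walked in with, which is the sense in which the observation underdetermines the inference.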

This leads us to ask what information is added, and on what basis? The answer to this question, I imagine, would depend on the specific inferential goals of the perceiver. One goal could be accuracy: people wish to try and infer the "actual" motivations and intentions of others, to the extent it makes sense to talk about such things. If it's true, for instance, that people are more likely to act in ways that avoid something like their own bodily harm, our cognitive systems could be expected to pick up on that regularity and avoid drawing the inference that someone was intentionally seeking it. Accuracy only gets us so far, however, due to the aforementioned issue of multiple potential motives for acting: there are many different goals one might be intending to achieve and many different costs one might be intending to avoid, and these are not always readily distinguishable from one another. The other complication is that accuracy can sometimes get in the way of other useful goals. Our visual system, for instance, while not always accurate, might well be classified as honest. That is to say, though our visual system might occasionally get things wrong, it doesn't tend to do so strategically; there would be no benefit to sometimes perceiving a shirt as blue and other times as red under the same lighting conditions.

That logic doesn't always hold for perceptions of intentions and motives, though: intentionally committed moral infractions tend to receive greater degrees of moral condemnation than unintentional ones, and can make one seem like a better or worse social investment. Given that there are some people we might wish to see receive less punishment (ourselves, our kin, and our allies) and some we might wish to see receive more (those who inflict costs on us or our allies), we ought to expect our intentional systems to perceive identical sets of actions very differently, contingent on the nature of the actor in question. In other words, if we can persuade others about our intentions and motives, or about the intentions and motives of others, and alter their behavior accordingly, we ought to expect perceptual biases that assist in those goals to start cropping up. This, of course, rests on the idea that other parties can be persuaded to share your sense of these things, which poses related questions, like: under what circumstances does it benefit other parties to develop one set of perceptions or another?

How fun this party is can be directly correlated to the odds of picking someone up.

I don't pretend to have all the answers to questions like these, but they should serve as a reminder that our minds need to add a lot of structure to the information they perceive in order to do many of the things of which they are capable. Explanations for how and why we do things like perceive intentionality and motive need to be divorced from the feeling that such perceptions are just "natural" or "intuitive"; what we might consider the experience of the word "duh". This is an especially large concern when you're dealing with systems that are not guaranteed to be accurate or honest in their perceptions. The cues that our minds use to determine what motives people had when they acted and what they intended to do are by no means always straightforward, so saying that inferences are generated by "the situation" is unlikely to be of much help, on top of just being wrong.

References: Cosmides, L. & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41-77.

Gawronski, B. (2009). The Multiple Inference Model of Social Perception: Two Conceptual Problems and Some Thoughts on How to Resolve Them. Psychological Inquiry, 20, 24-29 DOI: 10.1080/10478400902744261