Having Their Cake And Eating It Too

Humans are a remarkably cooperative bunch of organisms. That fact is notable because cooperation can open the door wide to all manner of costly exploitation. While it can be a profitable strategy for all involved parties, cooperation requires a certain degree of vigilance and, at times, the credible threat of punishment in order to be maintained. Figuring out how people manage to solve these cooperative problems has generated no shortage of research and theorizing, some of which is altogether more plausible than the rest. Though I haven’t quite figured out the appeal yet, many thoughtful people favor group selection accounts for explaining why people cooperate. These accounts suggest that people will often cooperate in spite of the personal fitness costs because doing so betters the overall condition of the group to which they belong. While no useful predictions appear to have fallen out of such models, there are those who are fairly certain they can at least account for some known, but ostensibly strange, findings.

That is a rather strange finding you got there. Thanks, Goodwill.

One human trait purported to require a group selection explanation is altruistic punishment and cooperation, especially in one-shot anonymous economic games. The basic logic goes as follows: in a prisoner’s dilemma game, so long as that game is a non-repeated event, there is really only one strategy, and that’s defection. This is because if you defect when your partner defects, you’re better off than if you had cooperated; if your partner cooperates, on the other hand, you’re still better off if you defect. Economists might thus call the strategy of “always defect” a “rational” one. Further, punishing a defector in such conditions is similarly considered irrational behavior, as it only results in a lower payment for the punisher than they would have otherwise had. As we know from decades of research using these games, however, people don’t always behave “rationally”: sometimes they’ll cooperate with other people they’re playing with, and sometimes they’ll give up some of their own payment in order to punish someone who has either wronged them or, more importantly, wronged a stranger. This pattern of behavior – paying to be nice to people who are nice, and paying to punish those who are not – has been dubbed “strong reciprocity” (Fehr, Fischbacher, & Gachter, 2002).
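For the programmatically inclined, here’s a minimal sketch of that logic. The payoff numbers are my own illustrative assumptions – any values with the usual prisoner’s dilemma ordering tell the same story – and aren’t drawn from any particular study; the point is just that defection is the best response no matter what the partner plays in a one-shot game.

```python
# Illustrative one-shot prisoner's dilemma payoffs (row player's payoff).
# The specific numbers are assumptions; any payoffs with the T > R > P > S
# ordering produce the same conclusion.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # R: reward for mutual cooperation
    ("cooperate", "defect"):    0,  # S: sucker's payoff
    ("defect",    "cooperate"): 5,  # T: temptation to defect
    ("defect",    "defect"):    1,  # P: punishment for mutual defection
}

def best_response(partner_move):
    """Return the move that maximizes my payoff against a given partner move."""
    return max(("cooperate", "defect"), key=lambda my: PAYOFFS[(my, partner_move)])

for partner in ("cooperate", "defect"):
    print(f"If my partner plays {partner}, my best response is {best_response(partner)}")
# Defection wins either way, which is why economists call "always defect"
# the "rational" strategy in a non-repeated game.
```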

The general raison d’etre of strong reciprocity seems to be that groups of people with lots of individuals playing that strategy managed to out-compete groups of people without them. Even though strong reciprocity is costly on the individual level, the society at large reaps larger overall benefits, as cooperation has the highest overall payoff, relative to any kind of defection. Strong reciprocity, then, helps to force cooperation by altering the costs and benefits of cooperation and defection on the individual level. There is a certain kind of unfairness inherent in this argument, though; a conceptual hypocrisy that can be summed up by the ever-popular phrase, “having one’s cake and eating it too”. To consider why, we need to understand the reason people engage in punishment in the first place. The likely, possibly-obvious candidate explanation just advanced is that punishment serves a deterrence function: by inflicting costs on those who engage in the punished behavior, would-be offenders fail to benefit from that behavior and thus stop behaving in that manner. This function, however, rests on a seemingly innocuous assumption: actors estimate the costs and benefits of acting, and only act when the expected benefits are sufficiently large, relative to the costs.

The conceptual hypocrisy is that this kind of cost-benefit estimation is something strong reciprocators are thought not to engage in. Specifically, they are punishing and cooperating regardless of the personal costs involved. We might say that a strong reciprocator’s behavior is inflexible with respect to their own payoffs. This is a bit like playing the game of “chicken”, where two cars face each other from a distance and start driving at one another in a straight line. The first driver to turn away loses the match. However, if both cars continue on their path, the end result is a much greater cost to both drivers than is suffered if either one turns. If a player in this game were to adopt an inflexible strategy, then, by doing something like disabling their car’s ability to steer, they can force the other player to make a certain choice. Faced with a driver who cannot turn, you really only have one choice to make: continue going straight and suffer a huge cost, or turn and suffer a smaller one. If you’re a “rational” being, then, you can be beaten by an “irrational” strategy.
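The same kind of sketch works for chicken. Again, the payoff values below are purely illustrative assumptions of my own, but they show how a credible commitment to going straight flips the other driver’s best response.

```python
# Game of chicken: each driver either "swerves" or goes "straight".
# Payoffs are (mine, theirs); the exact numbers are illustrative assumptions.
CHICKEN = {
    ("swerve",   "swerve"):   ( 0,   0),
    ("swerve",   "straight"): (-1,   1),   # I lose face, they win
    ("straight", "swerve"):   ( 1,  -1),
    ("straight", "straight"): (-10, -10),  # crash: a large cost to both
}

def my_best_response(their_move):
    """Pick the move that maximizes my own payoff against a given opponent move."""
    return max(("swerve", "straight"), key=lambda mine: CHICKEN[(mine, their_move)][0])

# Against a flexible opponent, going straight is tempting...
print(my_best_response("swerve"))    # -> "straight"
# ...but against an opponent who has ripped out their steering wheel
# (an inflexible "straight" strategy), my only sensible option is to swerve.
print(my_best_response("straight"))  # -> "swerve"
```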

Flawless victory. Fatality.

So what would be the outcome if other individuals started playing the ever-present “always defect” strategy in a similarly inflexible fashion? We’ll call those people “strong defectors” for the sake of contrast. No matter what their partner does in these interactions, the strong defectors will always play defect, regardless of the personal costs and benefits. By doing so, these strong defectors might manage to place themselves beyond the reach of punishment from strong reciprocators. Why? Well, any amount of costly punishment directed towards a strong defector would be a net fitness loss from the group’s perspective, as costly punishment is a fitness-reducing behavior: it reduces the fitness of the person engaging in it (in the form of whatever cost they suffer to deliver the punishment) and it reduces the fitness of the target of the punishment. Further, the costs of punishing the defectors could have been directed towards benefiting other people instead – which would be a net fitness gain for the group – so there are opportunity costs to engaging in punishment as well. These fitness costs would need to be made up for elsewhere, from the group selection perspective.
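To put some toy numbers on that group-level accounting – and these are invented purely for illustration, not drawn from any actual group selection model – consider what happens to summed group fitness when punishment can, versus cannot, change the target’s behavior.

```python
# Toy group-level fitness accounting; every number here is invented for
# illustration and doesn't come from any actual group selection model.
PUNISH_COST_TO_PUNISHER = 1   # what the punisher pays to deliver punishment
PUNISH_COST_TO_TARGET   = 3   # what the punished party loses
HELP_BENEFIT            = 2   # benefit the punisher could have delivered to a group member instead

def group_fitness_change(punish, target_changes_behavior, deterrence_gain=6):
    """Net change in summed group fitness from a single punish-or-help decision."""
    if not punish:
        return HELP_BENEFIT  # spend the budget on benefiting someone in the group
    change = -(PUNISH_COST_TO_PUNISHER + PUNISH_COST_TO_TARGET)
    if target_changes_behavior:
        change += deterrence_gain  # future cooperation can repay the group's losses
    return change

# Punishing a flexible defector can pay for itself at the group level...
print(group_fitness_change(punish=True, target_changes_behavior=True))    # +2
# ...but punishing a "strong defector" whose behavior never changes is a pure group loss,
print(group_fitness_change(punish=True, target_changes_behavior=False))   # -4
# and it also forgoes the benefit the punisher could have provided instead.
print(group_fitness_change(punish=False, target_changes_behavior=False))  # +2
```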

The problem is that, because the strong defectors are playing an inflexible strategy, the costs cannot be made up for elsewhere; no behavioral change can be effected. Extending the game of chicken analogy to the group level, let’s say that turning away is the “cooperative” option, and that dilemmas like these were at least fairly regular. They might not have involved cars, but they did involve a similar kind of payoff matrix: there’s only one benefit available, but there are potential costs in attempting to achieve it. Keeping in line with the metaphor, it would be in the interests of the larger population if no one crashed. It follows that between-group selective pressures favor turning every time, since the costs are guaranteed to be smaller for the wider population, but the sum of the benefits doesn’t change; only who achieves them does. In order to force the cooperative option, a strong reciprocator might disable their ability to turn so as to alter the costs and benefits to others.

The strong reciprocators shouldn’t be expected to be unaffected by costs and benefits, however; they ought to be affected by such considerations, just on the group level rather than the individual one. Their strategy should be just as “rational” as any other, just with regard to a different variable. Accordingly, it can be beaten by other seemingly irrational strategies – like strong defection – that can’t be affected by the threat of costs. Strong defectors who refuse to turn will either force a behavioral change in the strong reciprocators or result in many serious crashes. In either case, the strong reciprocator strategy doesn’t seem to lead to benefits in that regard.

Now perhaps this example sounds a bit flawed. Specifically, one might wonder how appreciable portions of the population could come to develop an inflexible “always defect” strategy in the first place. This is because the strategy appears to be costly to maintain at times: there are benefits to cooperating and to being able to alter one’s behavior in response to costs imposed through punishment, and people would be expected to be selected to achieve and avoid them, respectively. On top of that, there is also the distinct concern that repeated attempts at defection or exploitation can result in punishment severe enough to kill the defector. In other words, it seems that there are certain contexts in which strong defectors would be at a selective disadvantage, becoming less prevalent in the population over time. Indeed, such a criticism would be very reasonable, and that’s precisely because the always-defect population behaves without regard to their personal payoff. Of course, such a criticism applies with just as much force to the strong reciprocators, and that’s the entire point: using a limited budget to affect the lives of others regardless of its effects on you isn’t the best way to make the most money.

The interest on “making it rain” doesn’t compete with an IRA.

The idea of strong defectors seems perverse precisely because they act without regard to what we might consider their own rational interests. Were we to replace “rational” with “fitness”, the evolutionary disadvantage of a strategy that functions as if behaving in such a manner seems remarkably clear. The point is that the idea of a strong reciprocator type of strategy should be just as perverse. Those who attempt to put forth a strong reciprocator type of strategy as a plausible account for cooperation and punishment attempt to create a context that allows them to have their irrational-agent cake and eat it as well: strong reciprocators need not behave within their fitness interests, but all the other agents are expected to. This assumption needs to be at least implicit within the models, or else they make no sense. They don’t seem to make very much sense in general, though, so perhaps that assumption is the least of their problems.

References: Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1-25. DOI: 10.1007/s12110-002-1012-7

The “Side-Effect Effect” And Curious Language

You keep using that word. I do not think it means what you think it means.

That now-famous quote was uttered by the character Inigo Montoya in the movie The Princess Bride. In recent years, the phrase has been co-opted for its apparent usefulness in mocking people during online debates. While I enjoy a good internet argument as much as the next person, I do try to stay out of them these days due to time constraints, though I used to be something of a chronic debater. (As an aside, I started this blog, at least in part, to balance my enjoyment of debates with those time constraints. It’s worked pretty well so far). As any seasoned internet (or non-internet) debater can tell you, one of the underlying reasons debates tend to go on so long is that people often argue past one another. While there are many factors that explain why people do so, the one I would like to highlight today is semantic in nature: definitional obscurity. There are instances where people will use different words to allude to the same concept or use the same word to allude to different concepts. Needless to say, this makes agreement hard to reach.

But what’s the point of arguing if it means we’ll eventually agree on something?

This brings us to the question of intentions. Defined by various dictionaries, intentions are aims, plans, or goals. By contrast, the definition of a side effect is just the opposite: an unintended outcome. Were these terms used consistently, then, one could never say a side effect was intended; foreseen, maybe, but not intended. Consistency, however, is rarely humanity’s strongest suit – as we ought to expect it not to be – since consistency does not necessarily translate into “useful”: there are many cases in which I would be better off if I could both do X and stop other people from doing X (fill in ‘X’ however you see fit: stealing, having affairs, murder, etc.). So what about intentions? There are two facts about intentions which make them prime candidates for expected inconsistency: (1) intentionally-committed acts tend to receive a greater degree of moral condemnation than unintentional ones, and (2) intentions are not readily observable, but rather need to be inferred.

This means that if you want to stop someone else from doing X, it is in your best interests to convince others that, when someone did X, they did so intentionally, so as to make punishment less costly and more effective (as more people might be interested in punishing, sharing the costs). Conversely, if you committed X, it is in your best interests to convince others that you did not intend X. It is on the former aspect – condemnation of others – that we’ll focus here. In the now-classic study by Knobe (2003), 39 people were given the following story:

The vice-president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

When asked whether the chairman intentionally harmed the environment, 82% of the participants agreed that he had. However, when the word “harm” was replaced with “help”, 77% of the subjects now said that the benefits to the environment were unintentional (this effect was also replicated using a military context instead). Now, strictly speaking, the only stated intention the chairman had was to make money; whether that harmed or helped the environment should be irrelevant, as both effects would be side effects of that primary intention. Yet that’s not how people rated them.

Related to the point about moral condemnation, it was also found that participants said the chairman who brought about the negative side effect deserved substantially more punishment (4.8 on a 0 to 6 scale) than the chairman who brought about the positive impact deserved praise (1.4), and those ratings correlated pretty well with the extent to which the participants thought the chairman had brought about the effect intentionally. This tendency to asymmetrically see intentions behind negative, but not positive, side effects was dubbed “the side-effect effect”. There exists the possibility, however, that this label is actually not entirely accurate. Specifically, it might not be exclusive to side effects of actions; it might also hold for the means by which an effect is achieved as well. You know: the things that were actually intended.

Just like how this was probably planned by some evil corporation.

The paper that raised this possibility (Cova & Naar, 2012) began by replicating Knobe’s basic effect with different contexts (unintended targets being killed by a terrorist bombing as the negative side effect, and an orphanage expanding due to the terrorist bombing as the positive side effect). Again, negative side effects were seen as more intentional and more blameworthy than positive side effects were rated as intentional and praiseworthy. The interesting twist came when participants were asked about the following scenario:

A man named André tells his wife: “My father decided to leave his immense fortune to only one of his children. To be his heir, I must find a way to become his favorite child. But I can’t figure out how.” His wife answers: “Your father always hated his neighbors and has declared war on them. You could do something that would really annoy them, even if you don’t care.” André decides to set fire to the neighbors’ car.

Unsurprisingly, many people here (about 80% of them) said that Andre had intentionally harmed his neighbors. He planned to harm them, because doing so would further another one of his goals (getting money). A similar situation was also presented, however, where instead of burning down the neighbors’ car, Andre donates to a humanitarian-aid society because his father would have liked that. In that case, only 20% of subjects reported that Andre had intended to give money to the charity.

Now that answer is a bit peculiar. Surely, Andre intended to donate the money, even if his reason for doing so involved getting money from his father. While that might not be the most high-minded reason to donate, it ought not make the donating itself any less intentional (though perhaps it seems a bit grudging). Cova & Naar (2012) raise the following alternative explanation: the way philosophers tend to use the word “intention” is not the only game in town. There are other possible conceptions that people might have of the word based on the context in which it’s found, such as “something done knowingly for which an agent deserves praise or blame“. Indeed, taking these results at face value, we would need something else beyond the dictionary definitions of intention and side effect, since they don’t seem to be applying here.

This returns us to my initial point about intentions themselves. While this is an empirical matter (albeit a potentially difficult one), there are at least two distinct possibilities: (a) people mean something different by “intention” in moral and nonmoral contexts (we’ll call this the semantic account), or (b) people mean the same thing in both cases, but they do actually perceive it differently (the perceptual account). As I mentioned before, intentions are not the kinds of things which are readily observable, but rather need to be inferred, or perceived. What was not previously mentioned, however, is that it is not as if people only have a single intention at any given time; given the modularity of the mind, and the various goals one might be attempting to achieve, it is perfectly possible, at least conceptually, for people to have a variety of different intentions at once – even ones that pull in opposite directions. We’re all intimately familiar with the sensation of having conflicting intentions when we find ourselves stuck between two appealing, but mutually-exclusive options: a doctor may intend to do no harm, intend to save people’s lives, and find himself in a position where he can’t do both.

Simple solution: do neither.

For whatever it’s worth, of the two options, I favor the perceptual account over the semantic account for the following reason: there doesn’t seem to be a readily-apparent reason for definitions to change strategically, though there are reasons for perceptions to change. Let’s return to the Andre case to see why. One could say that Andre had at least two intentions: get the inheritance, and complete act X required to achieve the inheritance. Depending on whether one wants to praise or condemn Andre for doing X, one might choose to highlight different intentions, though in both cases keeping the definition of intention the same. In the event you want to condemn Andre for setting the car on fire, you can highlight the fact that he intended to do so; if you don’t feel like praising him for his ostensibly charitable donation, you can choose instead to highlight the fact that (you perceive) his primary intention was to get money – not give it. However, the point of that perceptual change would be to convince others that Andre ought to be punished; simply changing the definition of “intention” when talking with others about the matter wouldn’t seem to accomplish that goal quite as well, as it would require the other speaker to share your definition.

References: Cova, F., & Naar, H. (2012). Side-Effect Effect Without Side Effect: Revisiting Knobe’s Asymmetry. Philosophical Psychology, 25, 837-854

Knobe, J. (2003). Intentional Action and Side Effects in Ordinary Language. Analysis, 63, 190-193 DOI: 10.1093/analys/63.3.190

Can Rube Goldberg Help Us Understand Moral Judgments?

Though many people might be unfamiliar with Rube Goldberg, they are often not unfamiliar with Rube Goldberg machines: anyone who has ever seen the commercial for the game “Mouse Trap” is at least passingly familiar with them. Admittedly, that commercial is about two decades old at this point, so maybe a more timely reference is in order: OK Go’s music video for “This Too Shall Pass” is a fine demonstration (or Mythbusters, if that’s more your cup of tea). The general principle behind a Rube Goldberg machine is that it completes an incredibly simple task in an overly-complicated manner. For instance, one might design one of these machines to turn on a light switch, but that end state will only be achieved after 200 intervening steps and hours of tedious setup. While these machines provide a great deal of novelty when they work (and that is a rather large “when”, since there is the possibility of error in each step), there might be a non-obvious lesson they can also teach us concerning our cognitive systems designed for moral condemnation.

  Or maybe they can’t; either way, it’ll be fun to watch and should kill some time.

In the literature on morality, there is this concept known as the doctrine of double effect. The principle states that actions with harmful consequences can be morally acceptable provided a number of conditions are met: (1) the act itself needs to be morally neutral or better, (2) the actor intends to achieve some positive end through acting; not the harmful consequence, (3) the bad effect is not a means to the good effect, and (4) the positive effects outweigh the negative ones sufficiently. While that might all seem rather abstract, two concrete and popular examples can demonstrate the principle easily: the trolley dilemma and the footbridge dilemma. Taking these in order, the trolley problem involves the following scenario:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person.

In this dilemma, most people who have been surveyed (about 90% of them) suggest that it is morally acceptable to pull the lever, diverting the train onto the side track. It also fits the principle of double effect nicely: (1) the act (redirecting the train) is not itself immoral, (2) the actor intends a positive consequence (saving the 5) and not the negative one (1 dies), (3) the bad consequence (the death) is not a means of achieving the outcome, but rather a byproduct of the action (redirecting the train), and (4) the lives saved substantially outweigh the lives lost.

The footbridge dilemma is very similar in setup, but different in a key detail: in the footbridge dilemma, rather than redirecting the train to a side track, a person is pushed in front of it. While the person dies, that causes the train to stop before hitting the 5 hikers, saving their lives. In this case, only about 10% of people say it’s morally acceptable to push the man. We can see how double effect fails in this case: (1) the act (pushing the man) is relatively on the immoral side of things, (2) the death of the person being pushed is intended, and (3) the bad consequence (the man dying) is the means by which the good consequence is achieved; the fact that the positive consequences outweigh the negative ones in terms of lives saved is not enough. But why should this be the case? Why do consequences alone not dictate our actions, and why can factors as simple as redirecting a train versus pushing a person make such tremendous differences in our moral judgments?
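(As a brief aside: since the doctrine of double effect amounts to a checklist, it can be written out as one. The sketch below is just my own schematic encoding of the four conditions, with the judgments for each dilemma filled in as they were described above; it isn’t drawn from the moral philosophy literature itself.)

```python
# A schematic encoding of the doctrine of double effect as a checklist.
# The condition judgments for each case simply mirror the ones given in the text.
def permissible_by_double_effect(act_morally_neutral, harm_intended,
                                 harm_is_means, benefits_sufficiently_outweigh):
    return (act_morally_neutral                   # (1) the act itself is neutral or better
            and not harm_intended                 # (2) the harm is not what the actor intends
            and not harm_is_means                 # (3) the harm is not the means to the good
            and benefits_sufficiently_outweigh)   # (4) the good sufficiently outweighs the bad

trolley = dict(act_morally_neutral=True, harm_intended=False,
               harm_is_means=False, benefits_sufficiently_outweigh=True)
footbridge = dict(act_morally_neutral=False, harm_intended=True,
                  harm_is_means=True, benefits_sufficiently_outweigh=True)

print("Trolley (pull the lever):", permissible_by_double_effect(**trolley))      # True
print("Footbridge (push the man):", permissible_by_double_effect(**footbridge))  # False
```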

As I suggested recently, the answer to both of those questions can be understood by beginning our analysis of morality with an analysis of condemnation. These questions can be rephrased in that light to the following forms: “Why might people wish to morally condemn someone for achieving an outcome that is, on the whole, good?” and, “Why might people be less inclined to condemn certain outcomes, contingent on how they’re brought about?” The answer to the first question is fairly straightforward: I might wish to morally condemn someone because their actions (or my failing to morally condemn them) might have some direct costs on me, even if they benefit others. For instance, I might wish to condemn someone for their behavior in the trolley or footbridge problem if it’s my friend dying, rather than a stranger. That some generally morally positive outcome was achieved is irrelevant to me if it was costly from my perspective. Natural selection doesn’t design adaptations for the good of the group, so the fact that the group’s welfare is increased seems beside the point. Of course, a cost is a cost is a cost, so why should it matter to me at all if my friend was killed by being pushed or by having the train sent towards him?

“DR. TEDDY! NOOOO!”

Part of that answer depends on what other people are willing to condemn. Trying to punish someone for their actions is not always cheap or easy: there’s always a chance of retaliation by the punished party or their allies. After all, a cost is a cost is a cost to both me and them. This social variable means that attempting to punish others without additional support might be completely ineffective (or at least substantially less effective) at times. Provided that other parties are less likely to punish negative byproducts, relative to negative intended outcomes, this puts pressure on you to try to persuade others that the person you want to punish acted with intent, whereas it puts the reverse pressure on the actor: to convince others they did not intend that bad outcome. This brings us back to Rube Goldberg, the footbridge dilemma, and a slight addition to the doctrine of double effect.

There are some who argue that the doctrine of double effect isn’t quite complete. Specifically, there is an unappreciated third type of action: one in which a person acts because a negative outcome will obtain, but they do not intend that outcome (what is known as “triple effect”). This distinction is a bit trickier to grasp, so another example will help. Say that we’re again talking about the footbridge dilemma: there is a man standing on the bridge over the tracks with the oncoming train scheduled to hit the 5 hikers. However, we can pull a lever which will drop the man onto the track where he will be hit, thus stopping the train and saving the five. This is basically identical to the standard footbridge problem, and most people would deem it unacceptable to pull the lever. But now let’s consider another case: again, the man is standing on the bridge, but the mechanism that will drop him off the bridge is a light sensor. If light reflects off the train onto the sensor, the bridge will drop the man, he will die, and the 5 will be saved. Seeing the oncoming train, someone, Rube-Goldberg style, shines a spotlight on the train, illuminating it; the illumination hits the sensor, dropping the man onto the track, killing him and saving the five hikers.

There are some (Otsuka, 2008) who argue there is no meaningful difference between these two cases, but in order to make that claim, they need to infer something about the actor’s intentions in both cases, and precisely what one infers affects the subsequent shape of the analysis. Were one to infer that there is really only one problem to be solved – the train that is going to kill 5 people – then the intentions of the person pulling the lever to illuminate the train and pulling the lever to drop the man are equivalent and equally condemnable. However, there is another inference one could make in the light case, as there are multiple facets to the problem: the train is going to kill 5 people and the train isn’t illuminated. If one intends to solve the latter problem (so now there will be an illuminated train about to kill 5 people), one also, as a byproduct of solving that problem, causes both the problem of 5 people getting killed to be solved and the death of the man who got dropped onto the track. Now one could argue, as Otsuka (2008) does, that such an example fails because people could not be plausibly motivated to solve the non-illuminated part of the problem, but that seems largely a matter of perspective. The addition of the light variable introduces, even if only to some small degree, plausible deniability capable of shifting the perception of an outcome from intended to byproduct. Someone pulling the lever could have been doing so in order to illuminate the train or to drop the man onto the track, but it’s not entirely unambiguous which is the case.

“Well how was I supposed to know I was doing something dangerous?”

The light case is also a relatively simple one: there are only 3 steps (shine light on train, light opens door, door opening causes man to fall and stop train), and perfect knowledge is assumed (the person shining the light knew this would happen). Changing either of these variables would likely have the effect of altering the blame of the actor: if the actor didn’t know about the light sensor or the man on the footbridge, condemnation would likely decrease; if the action involved 10 steps, rather than 3, this could potentially introduce further plausible deniability, especially if any of those steps involved the actions of other people. It would be in the actor’s best interests to thus deny their knowledge of the outcome, or separate the outcome from their initial action as broadly as possible. Conversely, someone looking to condemn the actor would need to do the reverse.

Now maybe this all sounds terribly abstract, but there are real-life cases to which similar kinds of analysis can apply. Consider cases where a child is bullied at school and later commits suicide. Depending on one’s perspective in these kinds of cases, one might condemn or fail to condemn the bullies for the suicide (though one might still blame them for the bullying); one might also, however, condemn the parents for not being there for the child as they should have been, or one might blame no one but the suicide victim themselves. As one thinks about ways in which the suicide could have been prevented, there are countless potential Rube-Goldberg kinds of variables in the causal chain to point to (violent media, the parents, the bullies, the friends, their diet, the suicide victim, the school, etc.), the modification of any of which might have prevented the negative outcome. This gives condemners (who may wish to condemn people for initially-unrelated reasons) a wide array of plausible potential targets. However, each of these potential sources also gives the other sources some way of mitigating and avoiding blame. While such strategic considerations tend to make a mess of normative moral theories, they do provide us the required tools to actually begin to understand morality itself.

References: Otsuka, M. (2008). Double Effect, Triple Effect and the Trolley Problem: Squaring the Circle in Looping Cases. Utilitas, 20, 92-110 DOI: 10.1017/S0953820807002932

Better Fathers Have Smaller Testicles, But…

There is currently an article making the rounds in the popular media (or at least the range of media that I’m exposed to) suggesting that testicular volume is a predictor of paternal investment in children: the larger the testicles, the less nurturing, fatherly behavior we see. I get the nagging sense that stories about genitals tend to get a larger-than-average share of attention (I did end up tracking the article down, after all), and that might have motivated both the crafting and sharing of this study (at least in the media; I can’t speak directly to the authors’ intentions, though I can note the two domains often fail to overlap). In any case, more attention does not necessarily mean that people end up with an accurate picture of the research. Indeed, the people who will – or even can – read the source paper itself are vastly outnumbered by those who will not. So, for whatever it’s worth, here’s a more in-depth look at the flavor-of-the-week research finding.

Our next new flavor will come out at the end of the month…

The paper (Mascaro, Hackett, & Rilling, 2013) begins with a discussion of life history theory. With respect to sexual behavior, life history theory posits that there is a tradeoff between mating effort and parental effort: the energy an organism spends investing in any single offspring is energy not spent in making new ones. Since the name of the game in evolution is maximizing fitness, this tradeoff needs to be resolved, and it can be in various ways. Humans, compared to many other species, tend to fall rather heavily on the “investing” side of the scale, pouring immense amounts of time and energy into each highly-dependent offspring. Other species, like salmon, for instance, invest all their energy into a single bout of mating, producing many offspring, but investing relatively less in each (as dead parents often make poor candidates for sources of potential investment). Life history theory is not just useful for understanding between-species differences, though; it is also useful for understanding individual differences within species (as it must be, since the variation in the respective traits between species needed to have come from some initial population without said variance).

Perhaps the most well-known examples are the between-sex differences in life history tradeoffs among mammals, but let’s just stick to humans to make it relatable. When a woman gets pregnant, provided the baby will be carried to term, her minimum required investment is approximately 9 months of pregnancy and often several years of breastfeeding, much of which precludes additional reproduction. The metabolic and temporal costs of this endeavor are hard to overstate. By contrast, a male’s minimum obligate investment in the process is a single ejaculate and however long intercourse took. One can immediately see that men tend to have more to gain from investing in mating effort, relative to women, at least from the minimum-investment standpoint. However, not all men have as much potential to achieve those mating-effort gains; some men are more attractive sexual partners, and others will be relatively shut out of the mating market. If one cannot compete in the mating domain, it might pay to make oneself more appealing in the investment domain, where one can compete more effectively. Accordingly, if one tends to attempt the investment strategy (though this need not mean a consciously-chosen plan), it’s plausible their body might follow a similar investment strategy, placing fewer resources into the more mating-oriented aspects of our physiology: specifically, the testicles.

Unsurprisingly, testicular volume appears to be correlated with a number of factors, most notably sperm production (this is especially the case between species, as I’ve written about before). Those men who tend to preferentially pursue a mating strategy (relative to an investment one) have slightly-different adaptive hurdles to overcome, most notably in the insemination and sperm competition arenas. Accordingly, Mascaro, Hackett, & Rilling (2013) predicted that we ought to see a relationship between testes size (representing a form of mating effort) and nurturing offspring (representing a form of parental effort). Enter the current study, in which 70 biological fathers who were living with the mother of their children had their testicular volume (n = 55) and testosterone levels (n = 66) assessed. Additionally, reports of their parental behavior were also collected, along with a few other measures. As the title of the paper suggests, there was indeed a negative correlation (-0.29) between reported care-giving and testicle volume. This is the point where the highlighted finding begins to need qualifications, however, due to another pesky little factor: testosterone. Testosterone levels were also found to negatively correlate with reports of care-giving (-0.27), as well as with the fathers’ reported desire to provide care (-0.26). Given that these are correlations, it’s not readily apparent that testicular volume per se would be the metaphorical horse pulling the cart.

Pulling the cart, metaphorically, “all the way“, that is.

Perhaps also unsurprisingly, testicular volume showed what the authors called a “moderate positive correlation” with testosterone levels (0.26, p = 0.06). As an aside, I find it interesting that the authors had, only a few sentences prior, reported an almost identically-sized correlation (r = -0.25, p = 0.06) between testicular volume and desire to invest in children, but there they labeled the correlation as a “strong trend”, rather than a “moderate correlation”. The choice of wording seems peculiar.

In any case, if bigger balls tend to go together with more testosterone, it becomes more difficult to make the case that testicular volume itself is driving the relationship with parenting behaviors. In order to try to solve this problem, Mascaro, Hackett, & Rilling (2013) created a regression model, using testicular volume, testosterone levels, fathers’ earnings, and hours worked as predictors of childcare. In that model, the only significant predictor of childcare was testosterone level.

Removing the “fathers’ earnings” and “number of hours worked” variables from the regression model resulted in a gain in predictive value for testicular volume (though it was still not significant) but, again, it was testosterone that appeared to be having the greater effect. Whether or not it would be defensible to modify the regression model in that particular way in the first place is debatable, as the modification seems to have been done in the interest of making testicular volume appear relatively more predictive than it was previously (also, removing those two factors resulted in the model accounting for quite a bit less of the variance in fathers’ overall childcare behaviors). That the authors had some a priori prediction about testicular volume and not about hours worked or money earned seems like only a mediocre reason for excluding the latter two variables while retaining the former.
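For anyone who wants to see the underlying statistical worry in miniature, here’s a small simulation with entirely made-up data – none of it comes from Mascaro et al. (2013), and the variable names are just labels on simulated numbers. The point is general: when predictors are correlated and only one of them actually drives the outcome, which variables you keep in the regression changes how important the others look.

```python
import numpy as np

# Entirely simulated data, invented to illustrate a general statistical point;
# no values here are estimates from the actual study.
rng = np.random.default_rng(0)
n = 70
testosterone = rng.normal(size=n)
# Make "testes_volume" positively correlated with "testosterone"...
testes_volume = 0.3 * testosterone + rng.normal(scale=0.95, size=n)
# ...and let "childcare" be driven (negatively) by testosterone alone in this toy world.
childcare = -0.5 * testosterone + rng.normal(scale=1.0, size=n)

def ols_coefs(y, *predictors):
    """Ordinary least squares coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

# Full model: testes volume contributes little once testosterone is included.
print(ols_coefs(childcare, testes_volume, testosterone))
# Reduced model: drop the correlated covariate and testes volume "absorbs" some
# of its predictive value through their shared correlation, looking more important.
print(ols_coefs(childcare, testes_volume))
```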

There was also some neuroscience included in the study, concerning men looking at pictures of children’s faces and correlating the neural responses with childcare, testicular volume, and testosterone. I’ll preface what I’m about to say with the standard warning: I’m not the world’s foremost expert on neuroscience, so there is a distinct possibility I’m misunderstanding something here. That said, the authors did find a relationship there between testicular volume and neural response to children – a relationship that was apparently not diminished when controlling for testosterone. It should be noted that, again, unless I’m misunderstanding something, this connection didn’t appear to translate into significant increases in the childcare actually displayed by the males in the study once the effects of testosterone were considered (if it did, it should have shown up in the initial regression models). Then again, I have historically been overly-cautious about inferring much from brain scans, so take from that what you will.

I’ve got my eye on you, imaging technology…

To return to the title of this post, yes, testicular volume appears to have some predictive value in determining parental care, but this value tends to be reduced, often substantially so, once a few other variables are considered. Now I happen to think that the hypotheses derived from life history theory are well thought out in this paper. I imagine I might be inclined to have made such predictions myself. Testicular measures have already given us plenty of useful information about the mating habits of various species, and I would expect there is still value to be gained from considering them. That said, I would also advise some degree of caution in attempting to fit the data to these interesting hypotheses. Using selective phrasing to highlight some trends (the connection between testicular volume and desire to provide childcare) relative to others (the connection between testicular volume and testosterone) because they fit the hypothesis better makes me uneasy. Similarly, dropping variables from a regression model to improve the predictive power of the variable of interest is also troublesome. Perhaps the basic idea might prove more fruitful were it to be expanded to other kinds of men (single men, non-fathers, divorced, etc) but, in any case, I find the research idea quite an interesting step, and I look forward to hearing a lot more about our balls in the future.

References: Mascaro, J., Hackett, P., & Rilling, J. (2013). Testicular volume is inversely correlated with nurturing-related brain activity in human fathers. Proceedings of the National Academy of Sciences of the United States of America.

Conscience Does Not Explain Morality

“We may now state the minimum conception: Morality is, at the very least, the effort to guide one’s conduct by reason…while giving equal weight to the interests of each individual affected by one’s decision” (emphasis mine).

The above quote comes to us from Rachels & Rachels’ (2010) introductory chapter entitled “What is morality?” It is readily apparent that their account of what morality is happens to be a conscience-centric one, focusing on self-regulatory behaviors (i.e. what you, personally, ought to do). These conscience-based accounts are exceedingly popular among many people, academics and non-academics alike, perhaps owing to their intuitive appeal: it certainly feels like we don’t do certain things because they feel morally wrong, so understanding morality through conscience seems like the natural starting point. With all due respect to the philosopher pair and the intuitions of people everywhere, they seem to have begun their analysis of morality on entirely the wrong foot.

So close to the record too…

Now, without a doubt, understanding conscience can help us more fully understand morality, and no account of morality would be complete without explaining conscience; it’s just not an ideal starting point for beginning our analysis (DeScioli & Kurzban, 2009; 2013). This is because moral conscience does not, in and of itself, explain our moral intuitions well. Specifically, it fails to highlight the difference between what we might consider ‘preferences’ and ‘moral rules’. To better understand this distinction, consider the following two statements: (1) “I have no interest in having homosexual intercourse”, and (2) “Homosexual intercourse is immoral”. These two statements are distinct utterances, aimed at expressing different thoughts. The first expresses a preference, and that preference would appear sufficient for guiding one’s behavior, all else being equal; the latter statement, however, appears to express a different sentiment altogether. That second sentiment appears to imply that others ought not to have homosexual intercourse, regardless of whether you (or they) want to engage in the act.

This is the key distinction, then: moral conscience (regulating one’s own behavior) does not appear to straightforwardly explain moral condemnation (regulating the behavior of others). Despite this, almost every expressed moral rule or law involves punishing others for how they behave – at least implicitly. While the specifics of what gets punished and how much punishment is warranted vary to some degree from individual to individual, the general form of moral rules does not. Were I to say I do not wish to have homosexual intercourse, I’m only expressing a preference, a bit like stating whether or not I would like my sandwich on white or wheat bread. Were I to say homosexuality is immoral, I’m expressing the idea that those who engage in the act ought to be condemned for doing so. By contrast, I would not be interested in punishing people for making the ‘wrong’ choice about bread, even if I think they could have made a better choice.

While we cannot necessarily learn much about moral condemnation via moral conscience, the reverse is not true: we can understand moral conscience quite well through moral condemnation. Provided that there are groups of people who will tend to punish you for doing something, this provides ample motivation to avoid engaging in that act, even if you otherwise highly desire to do so. Murder is a simple example here: there tend to be some benefits to removing specific conspecifics from one’s world. Whether because those others inflict costs on you or prevent the acquisition of benefits, there is little question that murder might occasionally be adaptive. If, however, the would-be target of your homicidal intentions happens to have friends and family members who would rather not see them dead, thank you very much, the potential costs those allies might inflict need to be taken into account. Provided those costs are appreciably great, and certain actions are punished with sufficient frequency over time, a system for representing those condemned behaviors and their potential costs – so as to avoid engaging in them – could easily evolve.

“Upon further consideration, maybe I was wrong about trying to kill your mom…”

That is likely what our moral conscience represents. To the extent that behaviors like stealing from or physically harming others tended to be condemned and punished, we should expect to have a cognitive system that represents that fact. Now perhaps that all seems a bit perverse. After all, many of us simply experience the sensation that an act is morally wrong or not; we don’t necessarily think about our actions in terms of the likelihood and severity of punishment (we do think such things some of the time, but that’s typically not what appears to be responsible for our feeling of “that’s morally wrong”; people think things are morally wrong regardless of whether one is caught doing them). That all may be true enough, but remember, the point is to explain why we experience those feelings of moral wrongness; not just to note that we do experience them and that they seem to have some effect on our behavior. While our behavior might be proximately motivated by those feelings of moral wrongness, those feelings came to exist because they were useful in guiding our behavior in the face of punishment. That does raise a rather important question, though: why do we still feel certain acts are immoral even when the probability of detection or punishment is rather close to zero?

There are two ways of answering that question, neither of which is mutually exclusive with the other. The first is that the cognitive systems which compute things like the probability of being detected and estimate the likely punishment that will ensue are always working under conditions of uncertainty. Because of this uncertainty, it is inevitable that the system will, on occasion, make mistakes: sometimes one could get away without repercussions when behaving immorally, and one would be better off if they took those chances than if they did not. One also needs to consider the reverse error as well, though: if you assess that you will not be caught or punished when you actually will, you would have been better off not behaving immorally. Provided the costs of punishment are sufficiently high (the loss of social allies, abandonment by sexual partners, the potential loss of your life, etc), it might pay in some situations to still avoid behaving in morally unacceptable ways even when you’re almost positive you could get away with it (Delton et al, 2012). The point here is that it doesn’t just matter if you’re right or wrong about whether you’re likely to be punished: the costs to making each mistake need to be factored into the cognitive equation as well, and those costs are often asymmetric.
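A back-of-the-envelope version of that asymmetry, with numbers invented purely for illustration (this is the spirit of the argument, not Delton et al.’s actual model): when the cost of being caught dwarfs the gain from cheating, even fairly low detection probabilities favor refraining.

```python
# Toy expected-value comparison; all numbers are invented for illustration.
GAIN_FROM_CHEATING = 5      # benefit if the immoral act goes undetected
COST_IF_CAUGHT     = 100    # lost allies, abandoned partners, possibly your life

def expected_value_of_cheating(p_caught):
    return (1 - p_caught) * GAIN_FROM_CHEATING - p_caught * COST_IF_CAUGHT

for p in (0.01, 0.05, 0.10):
    ev = expected_value_of_cheating(p)
    decision = "cheat" if ev > 0 else "refrain"
    print(f"P(caught) = {p:.2f}: expected value {ev:+.2f} -> {decision}")
# With these stakes the break-even detection probability is only
# GAIN / (GAIN + COST) = 5 / 105, or about 0.048, so even fairly confident
# "I won't get caught" estimates still favor refraining once uncertainty
# (and the asymmetric cost of being wrong) is factored in.
```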

The second way of approaching that question is to suggest that the conscience system is just one cognitive system among many, and these systems don’t always need to agree with one another. That is, a conscience system might still represent an act as morally unacceptable while other systems (those designed to get certain benefits and assess costs) might output an incompatible behavioral choice (i.e. cheating on your committed partner despite knowing that it is morally condemned to do so, as the potential benefits are perceived as being greater than the costs). To the extent that these systems are independent, then, it is possible for each to hold opposing representations about what to do at the same time. Examples of this happening in other domains are not hard to find: the checkerboard illusion, for instance, allows us to hold both the representation that A and B are different colors and that A and B are the same color in our mind at once. We need not be of one mind about all such matters because our mind is not one thing.

“Well, shoot; I’ll get the glue gun…”

Now, to be sure, there are plenty of instances where people will behave in ways deemed to be immoral by others (or even by themselves, at different times) without feeling the slightest sensation of their conscience telling them “what you’re doing is wrong”. Understanding how the conscience develops, and the various input conditions likely to trigger it – or fail to do so – are interesting matters. In order to make better progress on researching them, however, it would benefit researchers to begin with an understanding of why moral conscience exists. Once the function of conscience – avoiding condemnation – has been determined, figuring out what questions to ask about conscience becomes an altogether easier task. We might expect, for instance, that moral conscience is less likely to be triggered when others (the target and their allies) are perceived to be incapable of effective retaliation. While such a prediction might appear eminently sensible when beginning with condemnation, it is not entirely clear how one could deliver such a prediction if they began their analysis with conscience instead.

References: Delton, A., Krasnow, M., Cosmides, L., & Tooby, J. (2012). Evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters.  Proceedings of the National Academy of Sciences, 108, 13335-13340.

DeScioli P, & Kurzban R (2009). Mysteries of morality. Cognition, 112 (2), 281-99 PMID: 19505683

DeScioli P, & Kurzban R (2013). A solution to the mysteries of morality. Psychological bulletin, 139 (2), 477-96 PMID: 22747563

Rachels, J. & Rachels, S. (2010). The Elements of Moral Philosophy. New York, NY: McGraw Hill.

The Popularity Of Popularity

Whatever your personal views on the book, one would have a hard time denying the popularity of 50 Shades of Grey, which has sold more than 70 million copies worldwide. The Harry Potter series manages to dwarf that popularity, having sold over 450 million copies worldwide. While books like these have garnered incredible amounts of attention and fandom, hundreds of thousands of other books linger on in obscurity, while others never even see print. It’s nearly impossible to overstate the magnitude of that difference in popularity. Similar patterns hold across other domains of cultural products, like music and academia; some albums or papers are just vastly more notable than the rest. For anyone looking to create, then, whether that creation is a book, music, a scholarly paper, or a clothing brand, being aware of the factors that separate the bestsellers from the bargain bin can potentially make or break the endeavor.

“We’d like to thank our fan for coming out to see us tonight”

One of the problems that continuously vexes those who decide which cultural products to invest in is the matter of figuring out which investments are their best bets. For every Harry Potter, Snuggie, or Justin Bieber, there are millions of cultural products and creators who will likely end up resigned to the dustbin of history (if the seemingly-endless lines of people auditioning for shows like American Idol and (ahem) writing blogs are any indication). Despite this tremendous monetary incentive in finding the next big thing, predicting who or what it will turn out to be is incredibly difficult. Some of these cultural products seem to take on a life all their own, despite initially being passed over: for instance, the Beatles were at one point told they had “no future in show business”  before rocketing to international and intergenerational fame. Whoops. Others draw tremendous amounts of investment, only to come up radically short (the issues surrounding the video game Kingdoms of Amalur serve as a good example).

What could have inspired such colossal mistakes? The answer is probably not that the people doing the investing are stupid, wicked, or hate money (in general, at least; some of them may well be any of these things individually); a more likely reason can be summed up by what’s called the Matthew effect. Named after a passage from the Gospel of Matthew, the basic principle is that the rich get richer, while the poor get poorer (also summarized in the Alice Cooper song “Lost in America”). Framed in terms of cultural products, the principle is that as something – be it a book, song, or anything else – becomes popular, it will become more popular in turn because of that popularity. Obviously this becomes a problem for anyone trying to predict the success of a cultural product, because when products start out that key variable can’t be readily assessed with much precision; it’s simply not one of the inherent characteristics of the product itself. Even if the product itself is solid in every measurable respect, that’s not a guarantee of success.
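For a rough feel of how that dynamic plays out, here’s a generic cumulative-advantage (Pólya-urn-style) simulation. It is not a model of the actual experiment described below, and every parameter in it is an invented assumption; the point is just that identical products can diverge wildly once popularity feeds on itself, with a different “winner” emerging in different runs.

```python
import random

def simulate_market(n_products=48, n_consumers=10_000, social_weight=10.0, seed=None):
    """Toy cumulative-advantage market: the more downloads a product already has,
    the more likely the next consumer is to pick it. All parameters are invented
    for illustration only."""
    rng = random.Random(seed)
    downloads = [0] * n_products
    quality = [1.0] * n_products  # identical "intrinsic" quality on purpose
    for _ in range(n_consumers):
        weights = [q + social_weight * d for q, d in zip(quality, downloads)]
        choice = rng.choices(range(n_products), weights=weights)[0]
        downloads[choice] += 1
    return downloads

# Run several independent "worlds": each tends to produce a runaway leader,
# but which product ends up on top typically differs between worlds.
for world in range(3):
    d = simulate_market(seed=world)
    winner = max(range(len(d)), key=d.__getitem__)
    print(f"World {world}: winner = product {winner}, "
          f"share of downloads = {d[winner] / sum(d):.0%}")
```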

While such a proximate explanation sounds appealing, solid experimental evidence of such an effect can be difficult to come by; reality is a large and messy place, so parceling out the effect of popularity can be tricky. Thankfully, some particularly clever researchers managed to leverage the power of the internet to study the effect. Salganik, Dodds, & Watts (2006) began their project by asking people on the internet to do one of the things they do best: download music (taking the coveted third-place spot among popular online activities, behind viewing pornography and complaining). The participants (all 14,341 of them) were invited to listen to any of 48 songs from new artists and, afterwards, rate and download them for free if they so desired (they couldn’t download until they listened). In the first condition, people were just presented with the songs in a random grid or list format and left to explore. In the second condition (the social world), everything was the same as the first, except the listeners were given one other key piece of information: how often each song had been downloaded by others.

And you want to like the same music as all those strangers, don’t you?

Further – and this is the rather clever part – there were eight of these different social worlds. This resulted in each world having initially-different numbers of downloads; the songs that were the most downloaded initially in one world were not necessarily the most popular in another, depending on who listened and downloaded first. Salganik et al (2006) were able to report a number of interesting conclusions from these different worlds: first, the “intrinsic” quality of the songs themselves – as assessed by their download count in the asocial world – did tend to predict a song’s popularity in the social worlds. Those songs which tended to do well without the social information also tended to do well with it. That said, the social information mattered quite a bit. As the authors put it:

the “best” songs never do very badly, and the “worst” songs never do extremely well, but almost any other result is possible.

When the social information was present, the ultimate popularity of a song became harder to predict; this was especially the case when the information was presented as an ordered list, from most popular to least (as bestseller lists often are), relative to the grid format. On top of the greater unpredictability generated by the social information, there was also an increase in the inequality of popularity. That is, the most popular songs were substantially more popular than the least popular in the social conditions, relative to the asocial ones. In short, the information about other people’s preferences generated more of a winner-takes-all type of environment, and who won became increasingly difficult to predict.

This study was followed up by a 2008 paper by Salganik & Watts. In this next study, the design was largely the same, but the authors attempted to add a new twist: the possibility of a self-fulfilling prophecy. Might a “bad” song be able to be made popular by manipulating other people’s beliefs about how many others had downloaded it? Another 12,207 subjects were recruited to again listen to, rate, and download songs. Once the social worlds were initially seeded with download information from the first 2,000 people, the researchers then went in and inverted the ordering: the most and least popular songs switched their download counts, then second most and second least popular, and so on until the whole list was reversed. The remaining 10,000 subjects saw the new download counts, but everything else was left the same.

In the initial, unmodified worlds, each person listened to 7.5 of the 48 songs and downloaded 1.4 of them, on average. Unsurprisingly, people tended to download the songs they gave the highest ratings to. Somewhat more surprisingly, the fake download counts also pushed the previously-unpopular songs towards a much higher degree of popularity: when the ranks weren’t inverted, there was a good correlation between the pre- and post-manipulation points (r = 0.84); when the ranks were inverted, that correlation dropped off to near non-existence (r = 0.16). The mere illusion of popularity was not absolute, though: over time, the initially “better” songs tended to regain some of their footing after inversion. Further, the songs in the inverted world tended to be listened to (average 6.3) and downloaded (1.1) less overall, suggesting that people weren’t as thrilled about the “lower-quality” songs they were now being exposed to in greater numbers.

“If none of us applaud, maybe we won’t encourage them to keep playing…”

Towards the end of their paper, the authors also dip into some strategic thinking, likening this effect to a tragedy-of-the-commons-style signaling problem: each creator has an interest in sending inflated signals about the popularity of their product, so as to garner more popularity, but as the number of these signals increases, their value decreases, and the art as a whole suffers (a topic I touched on lately). I think such speculation is a well-founded beginning, and I would like to see that line of reasoning taken further. For instance, there are some bands who might settle for a more niche-level popularity at the expense of broader appeal (i.e. the big fish in the small pond, talking about how the bigger fish in the bigger pond are all selling out with mass-produced products for crowds of mindless sheep; or just imagine someone going on endlessly about how bad Twilight is at every opportunity they get). Figuring out why people might embrace or reject certain kinds of popularity could open the door for new avenues of profitable research.

References: Salganik, M., Dodds, P., & Watts, D. (2006). Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market. Science, 311, 854-856 DOI: 10.1126/science.1121066

Salganik, M., & Watts, D. (2008). Leading the Herd Astray: An Experimental Study of Self-fulfilling Prophecies in an Artificial Cultural Market. Social Psychology Quarterly , 71, 338-355 DOI: 10.1177/019027250807100404