You’ve Got Some (Base)Balls

Since Easter has rolled around, let’s get in the season and consider very briefly part of the story of Jesus. The SparkNotes version of the story involves God symbolically sacrificing his son in order to in some way redeem mankind. There’s something very peculiar about that line of reasoning, though: the idea that punishing someone for a different person’s misdeed is acceptable. If Bill is driving his car and strikes a pedestrian in a crosswalk, I imagine many of us would find it very odd, if not morally repugnant, to then go and punish Kyle for what happened. Not only did Kyle not directly cause the act to take place, but Kyle didn’t even intend for the action to take place – two of the criteria typically used to assess blame – so it makes little sense to punish him. As it turns out, though, people who might very well disagree with punishing Kyle in the previous example can still be quite willing to accept that kind of outcome in other contexts.

Turns out that the bombing of Pearl Harbor during World War II was one of those contexts.

If Wikipedia is to be believed, following the bombing of Pearl Harbor, a large number of people of Japanese ancestry – most of whom were American citizens – were moved into internment camps. This move was prompted by fears of further Japanese attacks on the United States, amidst concerns that Japanese immigrants might side with their native country and act against the US in some way. The Japanese, due to their perceived group membership, were punished because of acts perpetrated by others viewed as sharing that same group membership – not because they had done anything themselves, but because they might do something. Some years down the road, the US government issued an apology on behalf of those who committed the act, likely due to some collective sense of guilt about the whole thing. Not only did guilt spread to the Japanese immigrants because of the actions of other Japanese people, but the blame for the measures taken against those immigrants was also shared, through association, by those who did not enact them.

Another somewhat similar example concerns the US government’s response following the attacks of September 11th, 2001. All the men directly responsible for the hijackings were dead and, as such, beyond further punishment. However, their supporters – the larger group to which they belonged – were very much still alive, and it was on that group that the military descended (among others). Punishment of group members in this case is known as accomplice punishment: members of a group are seen as contributing to the initial transgression in some way – what is typically known as conspiracy. In this case, people view those being punished as morally responsible for the act in question, so this type of punishment isn’t quite analogous to the initial example of Bill and Kyle. Might there be an example that strips the moral responsibility of the person being punished out of the equation? Why yes, it turns out there is at least one: baseball.

In baseball, a batter will occasionally be hit by a ball thrown by a pitcher (known as getting beaned). Sometimes these hits are accidental, sometimes they’re intentional. Regardless, these hits can sometimes cause serious injury, which isn’t shocking considering the speed at which the pitches are thrown, so they’re nothing to take lightly. Cushman, Durwin, and Lively (2012) noted that sometimes a pitcher from one team will intentionally bean a player on the opposing team in response to a previous beaning. For instance, if the Yankees are playing the Red Sox, and a Red Sox pitcher hits a Yankee batter, the Yankee pitcher would subsequently hit a Red Sox batter. The researchers sought to examine the moral intuitions of baseball fans concerning these kinds of revenge beanings.

Serves someone else right!

The first question Cushman et al. asked was whether fans found this practice morally acceptable. One hundred forty-five fans outside of Fenway Park and Yankee Stadium were presented with a story in which the pitcher for the Cardinals intentionally hit a player for the Cubs, causing serious injury. In response, the pitcher from the Cubs hits a batter from the Cardinals. Fans were asked to rate the moral acceptability of the second pitcher’s actions on a scale from 1 to 7. Those who rated the revenge beaning of an innocent player as at least somewhat morally acceptable accounted for 44% of the sample; 51% found it unacceptable, with 5% being unsure. In other words, nearly half of the sample saw punishing an innocent player by proxy as acceptable, simply because he was on the same team.

But was the batter hit by the revenge beaning actually viewed as innocent? To address this question, Cushman et al. asked a separate sample of 131 fans from online baseball forums whether or not they viewed the batter who was hit second as morally responsible for the actions of the pitcher from his team. The answers here were quite interesting. First off, this sample was more in favor of revenge beanings, with 61% indicating the practice was at least somewhat acceptable. Next, roughly 80% of the people surveyed agreed that, yes, the batter being hit was not morally responsible. Yet a majority still agreed it was, in fact, OK to hit that innocent victim because he happened to belong to the same team.

The final finding from this sample was also enlightening. The order in which people were asked about moral responsibility and endorsement of revenge beaning was randomized, so in some cases people were asked whether punishment was OK first, followed by whether the batter was responsible, and in other cases that order was reversed. When people endorsed vicarious punishment first, they subsequently rated the batter as having more moral responsibility; when they rated moral responsibility first, there was no correlation between moral responsibility and punishment endorsement. What makes this finding so interesting is that it suggests people were making rationalizations for why someone should be punished after they had already decided to punish, not before. They had already decided to punish; now they were looking to justify that decision, and that justification, in turn, actually made the batter appear more morally responsible.

“See? Now that he has those handcuffs on his rock-solid alibi is looking weaker already.”

This finding ties in nicely with a previous point I’ve made about how notions of who’s a victim and who’s a perpetrator are fuzzy concepts. Indeed, Cushman et al. present another result along those same lines: when it’s actually their team doing the revenge beaning, people view the act as more morally acceptable. When the home team was being targeted for revenge beaning, 43% of participants said the beaning was acceptable; when it was the home team actually enacting the revenge, 67% of the subjects now said it was acceptable behavior. Having someone on your side of things get hurt appears to make people feel more justified in punishing someone, whether that someone is guilty or not. Simply being associated with the guilty party in name is enough.

Granted, when people have the option to enact punishment on the actual guilty party, they tend to prefer that. In the National League, pitchers also come up to bat, so the option of direct punishment exists in those cases. When the initial offending pitcher was beaned in the story, 70% of participants found the direct form of revenge morally acceptable. However, if direct punishment is not an option, vicarious punishment of a group member still seemed to be a fairly appealing one. Further, this vicarious punishment should be directed towards the offending team, and not an unrelated team: if a Cubs pitcher hits a Yankee batter, only about 20% of participants would say it’s then OK for a Yankee pitcher to hit a Red Sox batter the following night. I suppose you could say the silver lining here is that people tend to favor saner punishment when it’s an option.

Whether or not people are adapted to punish others vicariously – and, if so, in what contexts such behavior is adaptive and why – is a question left untouched by this paper. I could imagine certain contexts where aggressing against the family or allies of one who aggressed against you could be beneficial, but it would depend on a good many contingent factors. For instance, by punishing family members of someone who wronged you, you are still inflicting reproductive costs on the offending party, and by punishing the initial offender’s allies, you make siding with and investing in said offender costlier. While the punishment might reach its intended target indirectly, it still reaches them. That said, there would be definite risks of strengthening alliances against you – as you are hurting others, which tends to piss people off – as well as possibly calling retaliation down on your own family and allies. Unfortunately, the results of this study are not broken down by gender, so there’s no way to tell whether men and women differ in their endorsement of vicarious punishment. It seems these speculations will need to remain, well, speculative for now.

References: Cushman, F., Durwin, A.J., & Lively, C. (2012). Revenge without responsibility? Judgments about collective punishment in baseball. Journal of Experimental Social Psychology (in press).

Tucker Max, Hitler, And Moral Contagion.

Disgust is triggered off not primarily by the sensory properties of an object, but by ideational concerns about what it is, or where it has been…The first law, contagion, states that “things which have once been in contact with each other continue ever afterwards to act on each other”…When an offensive (or revered) person or animal touches a previously neutral object, some essence or residue is transmitted, even when no material particles are visible. – Haidt et al. (1997, emphasis theirs).

Play time is over; it’s time to return to the science and think about what we can learn of human psychology from the Tucker Max and Planned Parenthood incident. I’d like to start with a relevant personal story. A few years ago I was living in England for several months. During my stay, I managed to catch my favorite band play a few times. After one of their shows, I got a taxi back to my hotel, picked up my guitar from my room, and got back to the venue. I waited out back with a few other fans by the tour bus. Eventually, the band made their way out back, and I politely asked if they would mind signing my guitar. They agreed, on the condition that I not put it on eBay (which I didn’t, of course), and I was soon the proud owner of several autographs. I haven’t played the guitar since for fear of damaging it.

This is my guitar; there are many like it, but this one is mine…and some kind of famous people wrote on it once.

My behavior, and other similar behavior, is immediately and intuitively understandable by almost all people – especially anyone who enjoys the show Pawn Stars – yet very few people take the time to reflect on just how strange it is. By getting the signatures on the guitar, I did little more than show it had been touched very briefly by people I hold in high esteem. Nothing I did fundamentally altered the guitar in any way, and yet somehow it was different; it was distinguished in some invisible way from the thousands of others just like it, and no doubt more valuable in the eyes of other fans. This example is fairly benign; what happened with Planned Parenthood and Tucker Max was not. In that case, the result of such intuitive thinking was that a helpful organization was out $500,000, and many men and women lost access to its services locally. Understanding what’s going on in both cases will hopefully help people not make mistakes like that again. It probably won’t, but wouldn’t it be nice if it did?

The first order of business in understanding what happened is to take a step back and consider the universal phenomenon of disgust. One function of our disgust psychology is to deal with the constant threat of microbial and parasitic organisms. By avoiding ingesting or contacting potentially contaminated materials, the chances of contracting costly infections or harmful parasites are lowered. Further, if by sheer force of will or accident a disgusting object is actually ingested, it’s not uncommon for a vomiting reaction to be triggered, serving to expel as much of the contaminant as possible. While a good portion of our most visceral disgust reactions focus on food, animals, or bodily products, not all of them do; the reaction extends into the realm of behavior, such as deviant sexual behavior, and perceived physical abnormalities, like birth defects or open wounds. Many of the behaviors that trigger some form of disgust put us in no danger of infection or toxic exposure, so there must be more to the story than just avoiding parasites and toxins.

One way Haidt et al. (1997) attempt to explain the latter part of this disgust reaction is by referencing concerns about humans being reminded of their animal nature, or thinking of their body as a temple, which are, frankly, not explanations at all. All such an “explanation” does is push the question back a step to, “why would being reminded of our animal nature or profaning a temple cause disgust?” I feel there are two facts that stand out concerning our disgust reaction that help to shed a lot of light on the matter: (1) disgust reactions seem to require social interaction to develop, meaning what causes disgust varies to some degree from culture to culture, as well as within cultures, and (2) disgust reactions concerning behavior or physical traits tend to focus heavily on behaviors or traits that are locally abnormal in some way. So, the better question to ask is: “If the function of disgust is primarily related to avoidance behaviors, what are the costs and benefits to people being disgusted by whatever they are, and how can we explain the variance?” This brings us nicely to the topic of Hitler.

Now I hate V-neck shirts even more.

As Haidt et al. (1997) note, people tend to be somewhat reluctant to wear used clothing, even if that clothing has since been washed; it’s why used clothing, even if undamaged, is always substantially cheaper than a new, identical article. If the used clothing in question belonged to a particularly awful person – in this case, Hitler – people are even less interested in wearing it. However, this tendency is reversed for items owned by well-liked figures, as my initial example concerning my guitar demonstrated. I certainly wouldn’t let a stranger draw on my guitar, and I’d be even less willing to let someone I personally disliked give it a signature. I could imagine myself even being averse to privately playing an instrument that had been signed by someone I disliked. So why this reluctance? What purpose could it possibly serve?

One very plausible answer is that the core issue here is signaling, as it was in the Tucker Max example. People are morally disgusted by, and subsequently try and avoid, objects or behaviors that could be construed as sending the wrong kind of signal. Inappropriate or offensive behavior can lead to social ostracism, the fitness consequences of which can be every bit as extreme as those from parasites. Likewise, behavior that signals inappropriate group membership can be socially devastating, so you need to be cautious about what signal you’re sending. One big issue that people need to contend with is that signals themselves can be interpreted many different ways. Let’s say you go over to a friend’s house, and find a Nazi flag hanging in the corner of a room; how should you interpret what you’re seeing? Perhaps he’s a history buff, specifically interested in World War II; maybe a relative fought in that war and brought the flag home as a trophy; he might be a Nazi sympathizer; it might even be the case that he doesn’t know what the flag represents and just liked the design. It’s up to you to fill in the blanks, and such a signal comes with a large risk factor: not only could an interpretation of the signal hurt your friend, it could hurt you as well for being seen as complicit in his misdeed.

Accordingly, if that signaling model is correct, then I would predict that signal strength and sign should tend to outweigh the contagion concerns, especially if that signal can be interpreted negatively by whoever you’re hoping to impress. Let’s return to the Hitler example: the signaling model would predict that people should prefer to publicly wear Hitler’s actual black V-neck shirt (as it doesn’t send any obvious signals) over wearing a brand new shirt that read “I Heart Hitler”. This parallels the Tucker Max example: people were OK with the idea of him donating money so long as he did so in a manner that kept his name off the clinic. Tucker’s money wasn’t tainted because of the source as much as it was tainted because his conditions made sure the source was unambiguous. Since people didn’t like the source and wanted to reject the perceived association, their only option was to reject the money.

This signaling explanation also sheds light on why the things that cause disgust are generally seen as, in some way, abnormal or deviant. Those who look physically abnormal may carry genes less suited to the current environment, or be physically compromised in such a way that it’s better to avoid them than invest in them. Those who behave in a deviant, inappropriate, or unacceptable manner could be signaling something important about their usefulness, friendliness, or their status as a cooperative individual, depending on the behavior. Disgust towards deviants, in this case, helps people pick which conspecifics they’d be most profitably served by and, more generally, helps people fit into their group. You want to avoid those who won’t bring you much reward for your investment, and avoid doing things that get on other people’s bad side. Moral disgust would seem to serve both functions well.

Which is why I now try and make new friends over mutual hatreds instead of mutual interests.

Now returning one final time to the Planned Parenthood issue, you might not like the idea of Tucker Max having his name on a clinic because you don’t like him. I understand that concern, as I wouldn’t like to play a guitar that was signed by members of the Westboro Baptist Church. On that level, by criticizing those who don’t like the idea of a Tucker Max Planned Parenthood clinic, I might seem like a hypocrite; I would be just as uncomfortable in a similar situation. There is a major difference between the two positions though, as a quick example will demonstrate.

Let’s say there’s a group of starving people in a city somewhere that you happen to be in charge of. You make all the calls concerning who gets to bring anything into your city, so anyone who wants to help needs to go through you. In response to the hunger problem, the Westboro Baptist Church offers to donate a truckload of food to those in need, but they have one condition: the truck that delivers the food will bear a sign reading “This food supplied courtesy of the Westboro Baptist Church”. If you dislike the Church, as many people do, you have something of a dilemma: allow an association with them in order to help people out, or turn the food away on principle.

For what it’s worth, I would rather see people eat than starve, even if it means that the food comes from a source I don’t like. If your desire to help the starving people eat is trumped by your desire to avoid associating with the Church, don’t tell the starving people you’re really doing it for their own good, because you wouldn’t be; you’d be doing it for your own reasons at their expense, and that’s why you’d be an asshole.

References: Haidt, J., Rozin, P., McCauley, C., & Imada, S. (1997). Body, psyche, and culture: The relationship between disgust and morality. Psychology and Developing Societies, 9, 107-131.

Communication As Persuasion

Can you even win debates? I’ve never heard someone go, “My opponent makes a ton of sense; I’m out.” -Daniel Tosh

In my younger days, I lost a few years of my life to online gaming. Everquest was the culprit. Now, don’t get me wrong, those years were perhaps some of the happiest in my life. Having something fun to do at all hours of the day, with thousands of people to do it with, has that effect. Those years just weren’t exactly productive. While I was thoroughly entertained, when the gaming was over I didn’t have anything to show for it. A few years after my gaming phase, I went through another one: chronic internet debating. Much like online gaming, it was oddly addictive and left me with nothing to show for it when it all ended. While I liked to justify it to myself – that I was learning a lot from the process, refining my thought process and arguments, and being a good intellectual – I can say with 72% certainty that I had wasted my time again, and this time I wasn’t even having as much fun doing it. Barring a few instances of cleaning up grammar, I’m fairly certain no one changed my opinion about a thing, and I changed about as many in return. You’d think that, with all the collective hours my fellow debaters and I had logged, we might have been able to come to an agreement about something. We were all reasonable people seeking the truth, after all.

Just like this reasonable fellow.

Yet, despite that positive and affirming assumption, debate after debate devolved into someone – or everyone – throwing their hands up in frustration, accusing the other side of being intentionally ignorant, too biased, intellectually dishonest, unreasonable, liars, stupid, and otherwise horrible monsters (or, as I like to call it, suggesting your opponent is a human). Those characteristics must have been the reason the other side of the debate didn’t accept that our side was the right side, because our side was, of course, objectively right. Debates are full of logical fallacies like those personal attacks – appeals to authority, straw men, red herrings, and question begging, to name a few – yet somehow it only ever seems like the other side is committing them. People relentlessly dragged in issues that had no bearing on the outcome, and they always seemed to apply their criticisms selectively.

Take a previously-highlighted example from Amanda Marcotte: when discussing the hand-grip literature on resisting sexual assault, she complained that, “most of the studies were conducted on small, homogeneous groups of women, using subjective measurements.” Pretty harsh words for a study of 232 college women between the ages of 18 and 35. When discussing another study that found results Amanda liked – a negligible difference in average humor ratings between men and women – she raised no concerns about “…small, homogeneous groups of women, using subjective measurements”. That she didn’t is hypocritical, considering the humor study had only 32 subjects (16 men and 16 women, presumably undergraduates from some college) and used caption writing as its only measure of humor. So what gives: does Amanda care about the number of subjects when assessing results or not?

The answer, I feel, is, “Yes, but only insomuch as it’s useful to whatever point she’s trying to make”. The goal in debates – and communication more generally – is not logical consistency; it’s persuasion. If consistency (or being accurate) gets in the way of persuasion, the former can easily be jettisoned for the latter. While being right, in some objective sense, is one way of persuading others, being right will not always make your argument the more persuasive one; the resistance to evolutionary theory has demonstrated as much. Make no mistake, this behavior is not limited to Amanda or the people you happen to disagree with; research has shown that this is a behavior pretty much everyone takes part in at some point, and that includes you*. A second mistake I’d urge you not to make is to see this inconsistency as some kind of flaw in our reasoning abilities. There are some persuasive reasons to see inconsistency as reasoning working precisely how it was designed to work, annoying as it might be to deal with.

Much like my design for the airbag that deploys when you start the car.

As Mercier and Sperber (2011) point out, the question, “Why do humans reason?” is often left unexamined. The answer these authors provide is that our reasoning ability evolved primarily for an argumentative context: producing arguments to persuade others and evaluating the arguments others present. It’s uncontroversial that communication between individuals can be massively beneficial. Information which can be difficult or time-consuming to acquire at first can be imparted quickly and almost without effort to others. If you discovered how to complete some task successfully – perhaps how to build a tool or catch fish more effectively, through a trial-and-error process – communicating that information to others allows them to avoid undergoing that same process themselves. Accordingly, trading information can be wildly profitable for all parties involved; everyone gets to save time and energy. However, while communication can offer large benefits, we also need to contend with the constant risk of misinformation. If I tell you that your friend is plotting to kill you, I’d have done you a great service if I was telling the truth; if the information I provided was either mistaken or fabricated, you’d have been better off ignoring me. In order to achieve these two major goals – knowing how to persuade others and when to be persuaded yourself – there’s a certain trust barrier in communication that needs to be overcome.

This is where Mercier and Sperber say our reasoning ability comes in: by giving others convincing justifications to accept our communications, as well as being able to better detect and avoid the misinformation of others, our reasoning abilities allow for more effective and useful communication. Absent any leviathan to enforce honesty, our reasoning abilities evolved to fill the niche. It is worth comparing this perspective to another: the idea that reasoning evolved as some general ability to improve or refine our knowledge across the board. In this scenario, our reasoning abilities more closely resemble some domain-general truth finders. If this latter perspective is true, we should expect no improvements in performance on reasoning tasks contingent on whether or not they are placed in an argumentative context. That is not what we observe, though. Poor performance on a number of abstracted reasoning problems, such as the Wason Selection Task, is markedly improved when those same problems are placed in an argumentative context.
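To make the Wason contrast concrete, here is a minimal sketch of the abstract task’s logic (the card set and rule are the standard textbook version, not taken from Mercier and Sperber’s materials). Given the rule “if a card shows a vowel on one side, it shows an even number on the other,” the only cards worth flipping are those that could falsify the rule: the vowel (P) and the odd number (not-Q). Solo reasoners typically flip the even number instead of the odd one, and it is exactly this error that shrinks when the problem is argued over in a group.

```python
def could_falsify(visible: str) -> bool:
    """A card can falsify "vowel -> even" only if its visible side is
    a vowel (the hidden side might be odd) or an odd number (the
    hidden side might be a vowel)."""
    if visible.isalpha():
        return visible.upper() in "AEIOU"   # the P card
    return int(visible) % 2 == 1            # the not-Q card

# The classic four-card layout: E, K, 4, 7.
cards = ["E", "K", "4", "7"]
print([c for c in cards if could_falsify(c)])  # ['E', '7']
```

Note that flipping the “4” can never falsify the rule (the rule says nothing about what must appear behind an even number), which is why selecting it is the canonical confirmation-seeking mistake.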

While truth tends to win in cases like the Wason Selection Task being argued over, let’s not get a big head about it and insist that our reasoning abilities will always push towards truth. It’s important to note how divorced from reality situations like that one are: it’s not often you find people with a mutual interest in truth, arguing over a matter they have no personal stake in, that also has a clearly defined and objective solution. While there’s no doubt that reasoning can sometimes lead people to make better choices, it would be a mistake to assume that’s the primary function of the ability, as reasoning frequently doesn’t seem to lead people towards that destination. To the extent that reasoning tends to push us towards correct, or improved, answers, this is probably because correct answers are easier to justify than incorrect ones.

As the Amanda Marcotte example demonstrated, when assessing an argument, often “[people] are not trying to form an opinion: They already have one. Their goal is argumentative rather than epistemic, and it ends up being pursued at the expense of epistemic soundness…People who have an opinion to defend don’t really evaluate the arguments of their interlocutors in search for genuine information but rather consider them from the start as counterarguments to be rebutted.” This behavior – assessing information by looking for arguments that support one’s own views and rebut the views of others – is known as motivated reasoning. If reasoning served some general knowledge-refining function, this would be a strange behavior indeed. It seems people often end up strengthening not their knowledge about the world, but rather their existing opinions – a conclusion that fits nicely with the argumentative theory. While opinions that cannot be sustained eventually tend to get tossed aside, as reality does impose some constraints (Kunda, 1990), on fuzzier matters for which there aren’t clear, objective answers – like morality – arguments have been bogged down for millennia.

I’m not hearing anymore objections to the proposal that “might makes right”. Looks like that debate has been resolved.

Further still, the argumentative theory can explain a number of findings that economists tend to find odd. If you have a choice between two equally desirable products, adding a third, universally less-desirable option should have no effect on your choice. For instance, let’s say you have a choice between $5 today and $6 tomorrow; adding an option of $5 tomorrow to the mix shouldn’t have any effect, according to standard economic rationality, because it’s worse than both existing options. Like many assumptions of economics, this turns out not to hold up. If you add that third option, you’ll find people start picking the $5 today option more than they previously did. Why? Because it gives them a clear justification for their decision, as if they were anticipating having to defend it. While $5 today and $6 tomorrow might be equally attractive, $5 today is certainly more attractive than $5 tomorrow, making the $5 decision more justifiable. Our reasoning abilities will frequently point us towards decisions that are more justifiable, even if those decisions don’t end up making us more satisfied.
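The rationality claim above can be sketched in a few lines of toy code. Everything here is my own construction (the Option encoding and the 5/6-per-day discount rate are assumptions chosen so the first two options tie), not material from any cited study: a value-maximizer’s ranking over the original pair is untouched by adding a strictly dominated decoy, yet the decoy supplies “$5 today” with a one-step justification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    label: str
    amount: float  # dollars received
    delay: int     # days until payment

def value(opt: Option) -> float:
    # Assumed discount rate of 5/6 per day, picked so that
    # $5 today and $6 tomorrow come out equally attractive.
    return opt.amount * (5 / 6) ** opt.delay

def dominated_by(decoy: Option, other: Option) -> bool:
    """True if `other` is at least as good on both dimensions and
    strictly better on at least one -- an easy-to-articulate reason."""
    return (other.amount >= decoy.amount and other.delay <= decoy.delay
            and (other.amount > decoy.amount or other.delay < decoy.delay))

today5    = Option("$5 today", 5, 0)
tomorrow6 = Option("$6 tomorrow", 6, 1)
decoy     = Option("$5 tomorrow", 5, 1)

# A value-maximizer is indifferent between the original pair...
assert abs(value(today5) - value(tomorrow6)) < 1e-9
# ...and the decoy is strictly worse than both, so adding it cannot
# change which original option maximizes value.
assert value(decoy) < min(value(today5), value(tomorrow6))
# But "$5 today beats $5 tomorrow" is a justification a chooser can
# offer if challenged, which, on the argumentative account, is why
# the decoy shifts real choices toward $5 today.
assert dominated_by(decoy, today5)
```

The point of the sketch is the asymmetry: nothing in the value calculation changes when the decoy arrives, so any shift in choices has to come from somewhere outside value maximization, such as the anticipated need to defend the decision.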

Previous conceptualizations of the function of reasoning have missed the mark and, as a result, have been trying to jam a series of square pegs into the same round hole. They have been left unable to explain vast swaths of human behavior, so researchers simply labeled the behaviors that didn’t fit as biases, neglects, blind spots, errors, or fallacies, without ever succeeding in figuring out why they existed – why our reasoning abilities often seemed so poorly designed for reasoning. Placed under a proper theoretical lens, all these previously anomalous findings suddenly start to make a lot more sense. While the people you find yourself arguing with may still seem like total morons, this theory may at least help you gain some insight into why they’re being so intolerable.

*As a rule, it doesn’t apply to me, so if you find yourself disagreeing with me, you’re going to want to rethink your position. Sometimes life’s just unfair that way.

References: Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.

Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.

I Do Not Bite My Thumb At You Sir, But I Do Bite My Thumb

I’m not a parent, but I can only imagine being one is a largely horrible affair for all parties involved. While I am currently a fantastically successful and good-looking man, I also represent a multiple-decade-old ball of need and demands that has more than likely ruined many years of my parents’ lives. So, my bad there, I suppose; that one’s on me. Out of the many reasons I’ve been able to gather as to why being a parent is generally a pain in the ass, one is that children are notoriously finicky eaters. Frustrated with their children’s lack of desire to eat this or that, one line many parents will resort to is, in one form or another, “Finish what’s on your plate. You’re lucky to even have food; there are people starving in the world who wish they were you right now.” I’m sure those hungry kids of the world can take some solace in knowing that the food wasn’t wasted; it was forced on an unwilling recipient. Much better.

Any child worth their salt, when faced with such an argument from their parents, would respond along the following lines: “It doesn’t matter whether I eat the food or throw it away; either way, it won’t have any impact on the starving children”. They’d be right. 

“Well, so long as you made them eat all that extra food they didn’t want….”

I’ve been speculating lately about what examples like this one can tell us about the functioning of our moral psychology. Here’s another: Tom Cruise and Katie Holmes were reported to have spent about $130,000 on their daughter’s Christmas presents. The comments section of the article reveals that many people find such behavior downright morally disgusting, if not outright evil, complete with lots of pictures of thoroughly malnourished children. Not only do Cruise and Holmes get painted as awful people for not using that money for other purposes, there is also rampant speculation as to how their child will turn out in the future because of it. Few of the predictions are optimistic; most speculate that she’s going to turn into an awful person. Why does the story take on that tone, rather than one of, say, parental affection, or of the value of personal freedom to spend money however one sees fit? You know, individual rights, and all that.

These examples make a very important point: if you’re trying to convince someone to do something – anything, it doesn’t matter what – it helps to have a victim on your side of the debate; someone who is being harmed by the action in question. Having one or more victims allows you to appeal to the moral psychology of others; it allows you to gain the support of latent coalitions in your social environment that can help achieve your ends. However, the term “victim” – much like the terms “fairness” or “race” – carries a great deal of ambiguity. Victimhood is not an “out-there” type of variable capable of being easily measured or observed, like height or eye color; victimhood is something that needs to be inferred. The question of which cues people pick up on or make use of when assessing and generating such claims has gone largely unexamined.

Consider the victims people outlined in the Tom and Katie example: first, the pair are hurting their daughter directly by spoiling her, as she will be unhappy in some way later in life because of it. Perhaps she won’t value what she has very much because it comes easily. By extension, they’re also indirectly hurting the people their daughter will come into contact with later in life, as she will turn out to be a nasty, entitled ass because of the treatment she received from her parents. Finally, Tom and Katie are hurting the starving children of the world by choosing to spend some of their money in ways that aren’t immediately alleviating their plight; they are hurting these children by not helping them. The issue comes to be viewed as their responsibility in some way because of their status and wealth, as if they are expected to do something about it. By acting as they are, they are shirking some perceived social debts to others, like not repaying a loan.

Maybe he saved some lives by stopping that horrible new virus in MI-2, but he could have saved more lives by fighting hunger in Africa.

As the initial example of the finicky child demonstrates, victimhood connections are also open to being questioned and dismissed. By spending money on their daughter, Tom and Katie do not intend to do any harm; quite the opposite. After all, they’re trying to make their daughter happy, which most people wouldn’t class as a particularly heinous act, and perhaps she’ll even grow up to love her own children just as deeply. Any argument that applies to their lavish spending would apply with equal force to any non-vital spending: almost all people in first-world countries are capable of lowering their own standard of living and comfort to save at least one starving child from death. The Christmas spending itself is also benefiting others: the businesses being patronized, the employees of those businesses, the families of those employees, the government collecting taxes on it all, and so on. Further, Tom and Katie are not, to the best of my knowledge, the cause of world hunger or the force maintaining it. I imagine you would be offended were you approached by a homeless man who insisted his being homeless was your responsibility and that you owe him help to make up for your lavish lifestyle.

What all this demonstrates is that, in the service of promoting their views, people appear highly motivated to find victims. These victims might come from across the globe or live next door; they might live in the present, the future, or even the past; they might be victims in ways that in no way relate to the current situation; they might be victims without a face, like “society”. In fact, the victims may be the very people someone is trying to help. In the service of denying accusations of immorality, people are also highly motivated to deny victims. The shooting of Trayvon Martin was recently blamed, in part, on how he was dressed – because he was wearing a hoodie. A similar phenomenon is seen when people suggest that women who get raped were partly responsible for the act due to a provocative style of dress. Those who are seen as causing their own misfortunes are rarely given much sympathy.

This back and forth, between naming victims, assessing victimhood, and denying it, opens the way to what I feel are some sophisticated strategies on the parts of agents, patients, and third-parties. There’s a very valuable resource under contention – the latent coalitions of the social world, and their capacity for punishment – and successfully harnessing that resource depends on manipulating the fuzzy perceptions of harm and responsibility. It should go without saying that a victim also needs a victimizer, and you can bet people can be just as motivated to perceive victimizers in similar fashions. Was that man biting his thumb at you, sir, or was he just biting his thumb? Will he bite his thumb at you in the future if you don’t act to stop him now?

The 16th century equivalent of “suck it”.

As the social world we live in is a dynamic place, people must be prepared to assess these claims when made by others, defend against the claims when leveled against themselves, and generate and level these claims against others. As contexts change, we should be able to observe certain biases in information processing becoming more active or dormant within subjects. The same person who claims that Tom Cruise is a horrible person for spoiling his daughter will happily justify buying their own child a new iPad for Christmas. The child who asks for a new iPad but doesn’t get it will complain vocally about how they’re being mistreated, while third parties judge those victimhood claims as lacking before going off to complain about how they’re unappreciated at work by their asshole boss.

What makes a victim a better victim? What about a perpetrator? What are the costs and benefits to being seen as one or the other, and how does that interact with other factors, such as gender? How do these expectations of who ought to do what get formed? How do relationships and group affiliations play into the generation and assessment of these perceptions? There are many such questions currently lacking an answer.

Like water to fish or air to humans, our abilities in this realm often go unnoticed or unappreciated, despite our being constantly surrounded by them. While we may notice the inconsistencies and hypocrisies of others as their place in the social world changes, we rarely, if ever, notice them in ourselves. Noticing these habits in yourself would do you few favors if your goal is to persuade others. Besides, biases are those things that other people have. They lack your awesome powers of insight and understanding. Who are they to question your perceptions of the social world?

Why Domain General Does Not Equal Plasticity

“It is because of, and not despite, this specificity of inherent structure that the output of computational systems is so sensitively contingent on environmental inputs. It is just this sensitive contingency to subtleties of environmental variation that makes a narrow intractability of outcomes unlikely” - Tooby and Cosmides

In my last post, I mentioned that Stanton Peele directed at evolutionary psychology the criticism of genetic determinism. For those of you who didn’t read the last entry, the reason he did this is because he’s stupid and seems to have issues engaging with source material. This mistake – of confusing genetic determinism with evolutionary psychology – is unnervingly common among the critics who also seem to have issues engaging with source material. The mistake itself tends to take the form of pointing out that some behavior is variable, either across time, context, or people, and then saying, “therefore, genes (or biology) can’t be a causal factor in determining it”. For example, if people are nice sometimes and mean at others, it can’t be the genes; genes can only make people nice or mean at all times, not contingently. This means there must be something in the environment – like the culture – that makes people differ in their behavior the way they do, and the cognitive mechanisms that generate this behavior must be general-purpose. In other words, rather than resembling a Swiss Army knife – a series of tools with specified functions – the mind more closely resembles an unformed lump of clay, ready to adapt to whatever it encounters.

Unformed clay is known for being excellent at “solving problems” by “doing useful things”.

There are two claims found in this misguided criticism of evolutionary psychology. The first is that environments matter when it comes to development, behavior, or anything really, which they clearly do. This is something that’s been noted clearly and repeatedly by every professional evolutionary psychologist I’ve come across. The second claim is that to call a trait “genetic”, or to note that our genes play a role in determining behavior, implies inflexibility across environmental contexts. This second claim is, of course, nonsense. The opposite of “genetic” is not “environmental” or “flexible” for a simple reason: organisms need to be adapted to do anything, flexibly or otherwise. (Note: that does not mean everything an organism does is something it was adapted to do; the two propositions are quite different.)

A quick example should make this point clear: consider my experiments with cats. Not many people know this about me, but I’m a big fan of the field of aviation. While up in the air, I’ve been known to throw cats out of the airplane. You know, for things like science and grant money. My tests have shown the following pattern of results: cats suck at flying. No matter how many times I’ve run the experiment – and believe me, I’ve run it many, many times, just to be sure – the results are always the same. How should I interpret the fact that I’m quickly running out of cats?

Discussion: The previous results were replicated purrrrfectly.

One way would be to suggest that cats would be able to fly were they not constrained against flight by their genes; in other words, the cats’ behavior would be more “domain general” – even capable of flight – if genetics played less of a role in determining how they acted and developed. Another, more sane, route would be to suggest that cats were never adapted for flight in the first place. They can’t fly because their genes contain no programs that allow for it. Maybe that example sounds silly, but it does well to demonstrate a valuable point: adaptations do not make an organism’s behavior less flexible; they make it more flexible. In fact, adaptations are what allow an organism to behave at all in the first place; organisms that are not adapted to behave in certain ways won’t behave at all.

So what about domain-general abilities, like learning? For the same reasons, simply chalking some behavior up to “learning” or “culture” is often an inadequate explanation by itself. Learning is not something that just happens, in the same way that flight doesn’t just happen; the ability to learn is itself an adaptation. It should come as no surprise, then, that some organisms are relatively prone to learning some things and relatively resistant to learning others. As Dawkins once noted, there are many more ways of being dead than of being alive. On a similar note, there are many more ways for learning to be useless or harmful than for it to be helpful. If an organism learns about the wrong subjects, it wastes time and energy; if an organism learns the wrong thing about the right subject, or fails to learn the right thing quickly enough, the results will often be deadly.

Cook and Mineka (1989) ran a series of experiments looking at how rhesus monkeys acquire their fear responses. Lab-raised monkeys with no prior exposure to snakes or crocodiles did not show a fear response to toy models of the two potential threats. The researchers then attempted to condition fear into these animals vicariously by showing them a video of another monkey reacting fearfully to either a snake or a crocodile model. As expected, after watching the fearful reaction of another monkey, the lab-raised monkeys themselves developed a fear response to the toys; they quickly learned to be afraid after observing that fear reaction in another individual. What was particularly interesting about these studies is that the researchers tried the same thing, but substituted either a brightly-colored flower or a rabbit in place of the snake or crocodile. In these trials, the monkeys did not acquire a fear response to flowers or rabbits. In other words, the monkeys were biologically prepared to quickly learn to fear some objects (historically deadly ones), but not others.

Just remember, they’re more afraid of you than you are of them. Also, remember fear can make one irritable and defensive.

The results of this study make two very important points. The first is that, as I just mentioned, learning is not a completely open-ended process. We’re prepared to learn some things (like certain fears, taste aversions, or language) relatively automatically, given the proper environmental stimulation. I can’t stress the word “proper” there enough. For instance, there are also some learning associations that organisms seem unable to make: rats will learn taste aversions when paired with nausea, but not with light or sound, though they will readily associate light and sound with shocks.

The second point is that these results (should) put to bed the mistaken notion that biology and environment are two competing sources of explanation; they are not. Genes do not make an organism less flexible, and environments do not make it more flexible. Learning is not something to be contrasted with biology; rather, learning is biology. This is a point that is repeatedly stressed in introductory classes on evolutionary psychology, along with every major work within the field. Anyone still making this error in their criticisms is demonstrating a profound lack of expertise, and should be avoided.

References: Cook, M. & Mineka, S. (1989). Observational conditioning of fear to fear-relevant versus fear-irrelevant stimuli in rhesus monkeys. Journal of Abnormal Psychology, 98, 448-459.

Is It A Paradox, Or Are You Stupid?

Let’s say you’re an intellectual type, like I am. As an intellectual type, you’d probably enjoy spending a good deal of time researching questions of limited scope and even more limited importance. There’s a high probability that your work will be largely ignored, flawed in some major way, or have its results interpreted incorrectly by yourself or others. While you may not be the under-appreciated genius that you think you are, you may still be lucky enough to have been paid to do your poor work. Speaking of poor work that someone is getting paid for, here’s a recent piece by Stanton Peele, over at Psychology Today.

I’m fairly certain he wrote his dissertation about his own smug sense of self-satisfaction.

The article itself is your fairly standard piece of moral outrage about how rich and/or powerful people are cheaters who should not be trusted. According to Stanton, research suggests that people who perceive themselves to be high in power are likely to be “deficient in empathy”. This claim strikes me as a little fishy, especially coming from someone who feels so self-important that he links to other pieces he’s written on five separate occasions in an article no longer than a few sentences. Such a display of ego suggests that Stanton thinks he’s particularly high in power, and thus calls into question his empathy and honesty. It also suggests to me he’s a sock-sniffer.

The title of his piece, “Cheaters Always Win – The Paradox of Getting Ahead in America”, along with Stanton’s idea that powerful people are “deficient in empathy” both work well to display the bias in his thinking. In the case of his title, there’s only a paradox if one assumes that people who cheat would not win. We might not like when someone wins because they aren’t playing by the rules, but I don’t see any reason to think a (successful) cheater wouldn’t win; they cheat because it tends to put them at an advantage. In the case of his empathy suggestion, Stanton seems to assume there is some level of empathy people high in power lack that they should otherwise have. However, one could just as easily phrase the suggestion in an opposite fashion: people low in power have too much empathy. How that correlation gets framed says more, I feel, about the preconceptions of the person making it, than the correlation itself.

That brings us to the matter of whether the claim is true. Are powerful people universally deficient in empathy in some major way? (“Across-the-board”, as Peele puts it.) According to one of the papers Peele mentions, people who rated themselves high in their sense of power were less compassionate and experienced less distress when listening to a highly distressed speaker, relative to those who ranked themselves low in power (van Kleef et al., 2008). See? The powerful people really are “turning a blind eye to the suffering of others” (which is, in fact, the subtitle of the paper).

The next model will also cover the ears in order to block out the sounds from all the people begging for their lives.

It would seem that van Kleef et al. (2008) share Peele’s affection for hyperbole. The difference they found in self-reports of compassion and empathic distress between those highest and lowest in power was about 0.65 on a scale of 1 to 7, or about 9%. We’re not talking about some radical difference in kind, just one of mild degree. Moreover, that difference only existed in the condition where the speaker was highly distressed; when the speaker was low in distress, the effect was reversed, with the higher-power subjects reporting more compassion and distress in response to the speaker’s story, to the tune of about 0.5 on the same scale. What conclusion one draws from that study about the compassion and distress of high- and low-power individuals depends on which part of the data one is looking at: if you’re looking at a highly distressed speaker, those who feel higher in power are less compassionate and empathic; if you’re looking at a speaker lower in distress, those who feel they have more power are more compassionate and empathic. That would imply Peele is either giving the data a selective reading, or he never bothered to read it at all.

A second paper Peele mentions, by Piff et al. (2012), found that self-reported social class was correlated with cheating behavior; the higher one’s social class, the more likely one was to cheat or otherwise behave like an asshole across a few different scenarios. However, this effect of class disappeared when the researchers controlled for attitudes towards greed. As it turns out, people who think greed is just dandy tend to cheat a bit more, whether they’re low or high status. Further, asking those low in social status to write about three benefits of greed also eliminated the effect, with those from the lower social classes then behaving identically to those from the upper social class. It’s almost as if these low-status individuals experienced sudden-onset empathy-deficiency syndrome.

I’m skimming over most of the details of these papers because there’s another, more pressing, matter I’d like to deal with. The papers that Peele uses are notably devoid of anything that could be considered a theory. They present a series of findings, but no framework in which to understand them: Why might people who have some degree of social power be more or less prone to doing something? What costs and benefits accompany these actions for each party, and how might they change? Are the actions of those in the upper and lower classes deployed strategically? How might these strategies change as context does? This sounds like just the kind of research that could really be guided and assisted by embracing an evolutionary perspective.

Sadly, some people don’t take too kindly to our theoretical framework.

Unfortunately, because Peele is stupid, he has some harsh criticisms of genetic determinism that he directs at evolutionary psychologists:

“They also seem inconsistent with evolutionary psychologists who have been arguing lately (following “The Selfish Gene“) that altruism is a species-inherited genetic destiny [emphasis, mine].…So, which is it? Do humans progress by being kinder to others and understanding the plights of the downtrodden, or do they do better to ignore these depressing stories?  Do societies advance by displaying empathy towards others outside of their borders and with different customs from their own?

Such questions are about on the level of asking whether people are better off eating every waking moment or never eating again, followed by a self-congratulatory high-five. There are trade-offs to be made, and people aren’t always going to be better served by doing one, and only one, thing at all times. This should not be a difficult point to understand, but, on the other hand, understanding things is clearly not Peele’s strong suit; sock-sniffing is. I don’t mind if, as he finishes writing his ramblings, Peele leans in to get a good whiff of his own odor after a long day battling positions held by legions of imaginary evolutionary psychologists. I just don’t understand why Psychology Today feels the need to give his nonsense a platform.

References: Piff, P.K., Stancato, D.M., Cote, S., Mendoza-Denton, R., & Keltner, D. (2012). Higher social class predicts increased unethical behavior. Proceedings of the National Academy of Sciences.

van Kleef, G.A., Oveis, C., van der Lowe, H., LuoKogan, A., Goetz, J., & Keltner, D. (2008). Power, distress, and compassion: Turning a blind eye to the suffering of others. Psychological Science, 19, 1315-1322.

What Causes (Male) Homosexuality?

My initial inspiration for starting this blog was a brief piece I had written about why Lady Gaga’s song, “Born This Way”, really got under my skin. The general premise of the song is, unless I’m badly mistaken, that homosexuality is genetic in nature, and, accordingly, should be socially accepted. The song is full of very selective logic and a poor grasp of the state of scientific knowledge, all of which is accepted in the service of furthering a political goal. For what it’s worth, I agree with that goal, but the means being used to achieve it in this case were misguided because:

“…I’m not so sure Lady Gaga – or any gay-rights supporter – wants to base their claims to equal rights on the supposition that homosexuality is a trait people are “born” with…If further research uncovers that people can come to develop a homosexual orientation for a number of reasons that have nothing to do with being “born like that”, I wouldn’t want to see the argument for equal rights slip away.”

Today, I’m going to be stepping back into that same political minefield that I did on the topic of race, and discuss a hypothesis regarding the cause of male homosexuality that some people may not like. People will not like this hypothesis for reasons extrinsic to the hypothesis itself, but do your best to contain any moral outrage you may be feeling. My first task in presenting this hypothesis will be to convince you that male homosexuality is not genetically determined – despite what an eccentric young pop-star might tell you – and is also not an adaptation.

Convincing critics is always such a pleasure.

For some, it might seem insulting that homosexuality requires an explanation whereas heterosexuality does not. Aren’t both just different sides of a very bisexual coin? There’s a simple answer to that concern: heterosexual intercourse is the only means of achieving reproduction. An exclusive homosexual orientation is the evolutionary equivalent of sterility, and if three to five percent of the male population were consistently sterile – despite none of their parents being sterile, by definition – that would raise some questions as to how sterility persists. There would be an intense selective pressure away from sterility, and any genes that actively promoted it would fail to reproduce themselves. That homosexuality seems to persist in the population, despite being a reproductive dead-end, requires an explanation. Heterosexuality poses no such puzzle.

The first candidate explanation for the persistence of homosexuality is that it’s part of an adaptation for assisting the reproduction of one’s kin. While homosexuals themselves may suffer a dramatic reduction in their lifetime reproduction, they actively assist other genetic relatives, delivering enough benefits to offset their lack of personal reproduction – similar to how ants or bees assist their queen while forgoing reproduction themselves. This suggestion is implausible on three levels: first, it would require that homosexuals deliver enormous benefits to their relatives. For each child a gay man doesn’t have, he would need to ensure that a brother or sister had an additional two that they wouldn’t otherwise have had without his help. That would require an intense amount of investment. Second, no theoretical reason has ever been provided as to why such helpers would develop a homosexual orientation, as opposed to, say, an asexual one; seeking out intercourse with same-sex individuals doesn’t seem to add anything to the whole investment thing. Finally, this explanation doesn’t work because, as it turns out, homosexuals don’t invest any more in their relatives than heterosexuals do (Rahman & Hull, 2005). So much for kin selection.
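That two-for-one figure falls out of Hamilton’s rule, which holds that a trait reducing personal reproduction can only spread when $rb > c$. (This is my back-of-the-envelope gloss, not a calculation taken from the kin selection papers themselves.) A man is related to his own offspring by $r = 0.5$, but to a niece or nephew by only $r = 0.25$, so for each forgone child of his own:

$$0.25 \cdot b > 0.5 \cdot 1 \quad\Longrightarrow\quad b > 2$$

That is, his help must generate more than two nieces or nephews who would not otherwise have existed – per forgone child – before the strategy even breaks even.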

A second potential explanation for homosexuality is that it’s the byproduct of sexually antagonistic selection: a gene that damages the reproductive potential of males persists in the population because the same trait is beneficial when expressed in female offspring (Ciani, Cermelli, & Zanzotto, 2008; Iemmola & Ciani, 2009). Another potential explanation is that a homosexual orientation is like sickle cell anemia: while it hurts the reproductive prospects of those who express it, it provides some unspecified benefit that outweighs that cost in some carriers, just as sickle cell protects against malaria. Both explanations have large issues to contend with, but one of the most prominent shared issues is this: despite both hypotheses resting on rather strong genetic assumptions, half or more of the variance in male homosexual orientation can’t be attributed to genetic factors (Kirk et al., 2000; Kendler et al., 2000). Identical twins don’t seem to be concordant for their sexual orientation any more than 30 to 50% of the time when one of the twins identifies as non-heterosexual. If homosexuality were determined solely by genes, there should be near-complete agreement.

In fact, most of the variance appears to be due to our decadent Western lifestyle. Who knew, right?

Accordingly, any satisfying explanation for homosexuality needs to reference environmental factors, as explanations of all traits do; the picture is far from as crude as there being some genes “for” homosexuality. While there clearly are some genetically inherited components in the ontogeny of a homosexual orientation, it’s entirely unclear what those genetic factors are. It’s also far from clear how those genetic factors interact with their environment – or when, for that matter. They would seem to act sometime before puberty, but beyond that the door is open. What seems to have been established so far is that an exclusive homosexual orientation is detrimental to reproduction in a big way, and these costs are not known to be reliably offset.

There is one last hypothesis that may hold some potential, though, as I mentioned, I suspect many people won’t like it: the “gay germ” theory. The general idea is that some outside pathogen – be it a bacterium or a virus – manipulates development in some way, the end result being a homosexual orientation. This hypothesis seems to have potential for a number of reasons: first, it neatly deals with why homosexuality persists in the population despite the massive reproductive costs. It could also account for why monozygotic twins are often discordant for sexual orientation, despite sharing genes and a prenatal environment. As of now, it remains untested, but other lines of research report some preliminary success using the same basic idea to understand the persistence of disorders like schizophrenia and obsessive-compulsive disorder, among many others. Of course, such a theory does come with some political baggage and questions.

Like: will two gay men ever be able to hold hands, post love-making, on top of an American flag, just like straight couples do?

The first set of questions concerns the data speaking to the hypothesis: what pathogen(s) are responsible? When do they act in development? How do they alter development? Are those alterations an adaptation on the part of the pathogen, or merely a byproduct? These are no simple questions to answer, especially because it won’t be clear which children will end up gay until they have matured, which makes narrowing the developmental window in which to look something of a task. If concordance rates for monozygotic twins are similar between twins adopted apart and twins reared together, that might point to something prenatal, depending on the age at which the twins were separated, but it would not definitively rule out other possibilities. Further, this pathogen need not be specific to gay men; it could be one that much of the population carries but that, for whatever reason, only affects a sub-group of males in such a way that they end up developing a homosexual orientation.

The second set of questions concerns the potential implications of this theory, were it to be confirmed. I’ll start by noting that these concerns have zero, absolutely nothing, to do with whether or not the gay germ theory is true. That said, these concerns are probably where most of the resistance to the hypothesis would come from, as concerns for data (or the lack thereof) are often secondary in debates. Yes, the hypothesis cries out for supporting data, so it shouldn’t be accepted just yet, but I’m talking to those people who would reject it as a possibility out of hand because it sounds icky. In terms of gay rights and social acceptance, it shouldn’t matter whether homosexuality is 100% genetically determined, caused by a pathogen, or just a choice someone makes one day because they’re bored with all that vanilla heterosexual sex they’ve been having. That something may be, or is, caused by a pathogen should have no bearing on its moral status. If we discovered tomorrow that a virus caused men to have larger-than-average penises, I doubt many people would cheer for the potential to cure the “disease” of large-penis.


Do “Daddy Issues” Jumpstart Menstruation?

Like me, most of you probably come from the streets. On the streets, it’s common knowledge that “daddy issues” are the root cause of women developing interests in several activities. Daddy issues are believed to play a major role in becoming a stripper, developing a taste for bad boys, and getting a series of tattoos containing butterflies, skulls, and/or quotes with at least one glaring spelling mistake. As pointed out by almost any group in the minority at one point or another, however, the fact that knowledge is common does not imply it is also correct. For instance, I’ve recently learned that drive-bys are not a legitimate form of settling academic disagreements (or at least that’s what I’ve been told; I still think it made me the winner of that little debate). So, enterprising psychologist that I am, I’ve decided to question the following piece of folk wisdom: is father absence really a causal variable in determining a young girl’s life history strategy, specifically with regard to the onset of menstruation?

Watch carefully now; that young boy may start to menstruate at any moment… wait; which study is this?

First, a little background is in order. Life history theory deals with the way an organism allocates its limited resources in an attempt to maximize its reproductive potential. Using resources to develop one trait precludes the use of those same resources for developing other traits, so there are always inherent trade-offs that organisms need to make during development. Different species have stumbled upon different answers as to how these trade-offs should be made: is it better to be small or large? Is it better to start reproducing as soon as possible or start reproducing later? Is it better to produce many offspring and invest little in each, or produce fewer offspring and invest more? These developmental and behavioral trade-offs all need to be made under a series of ecological constraints, such as the availability of resources or the likelihood of survival. For instance, it makes no sense for a convict to refuse a final cigarette before a firing squad executes him out of concerns for his health. There’s no point worrying about tomorrow if there won’t be one. On the other hand, if you have a secure future, maybe Russian roulette isn’t the best choice for a pastime.

So where do family-related issues enter into the equation? Within each species, different individuals have slightly different answers to those developmental questions, and those answers are not fixed from conception. Like all traits, their expression is contingent on the interaction between genes and the environment those genes find themselves in. A human child that finds itself with severely limited access to relevant resources is thus expected to alter its developmental trajectory according to those constraints. This has been demonstrated to be the case for obvious variables like obtaining adequate nutrition: if a young girl does not have access to enough calories, her sexual maturation will be delayed, as her body would be unlikely to successfully support the investment a child requires.

Another of these hypothesized resources is paternal investment. The suggestion put forth by some researchers (Ellis, 2004) is that a father’s presence or absence signals some useful information to daughters regarding the availability of future mating prospects. The theory that purports to explain this association states that when young girls experience a lack of paternal investment, their developmental path shifts towards one that expects future investment by male partners to be lacking and not vital to reproduction. This, in turn, results in more sexual precociousness. Basically, if dad wasn’t there for you growing up, then, according to this theory, other men probably won’t be either, so it’s better to not develop in a way that expects future investment. That father absence has been associated with a slightly earlier onset of menarche (first menstruation) in women has been taken as evidence supporting this theory.

The basic concept also spun off into a show on MTV.

The major problem with this suggestion is that no causal link has been demonstrated. The only thing that has been demonstrated is that father absence tends to correlate with an earlier age of menstruation, and the degree to which the two are correlated is rather small. According to some correlations reported by Ellis (2004), it looks as if one could predict between 1 and 4% of the variance in the timing of pubertal development on the basis of father absence, depending on which part of the sample is under discussion. Further, that already small correlation does not control for a wide swath of additional variables, such as almost any variable found outside the home environment. The entire social world that exists outside of a child’s family is known to be of some (major) importance in children’s development, while the research on the home environment seems to suggest that family environments and parenting styles don’t leave lasting marks on personality (Harris, 1998).
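Since “percent of variance explained” is just the squared correlation coefficient, the arithmetic behind that 1 to 4% figure is easy to sketch. The correlation values below are illustrative round numbers, not figures taken from Ellis (2004):

```python
# Variance explained is the square of the correlation coefficient (r^2).
# Correlations of roughly .10 to .20 therefore explain only 1% to 4% of the
# variance. These r values are illustrative, not drawn from Ellis (2004).
for r in [0.10, 0.15, 0.20]:
    print(f"r = {r:.2f} -> variance explained = {r ** 2:.1%}")
```

Small correlations shrink dramatically when squared, which is why even a “real” association can carry very little predictive power.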

As the idea that outside-the-home environments matter a lot has been around for over a decade, it would seem the only sane thing for researchers to do is run more nearly identical studies, looking at basically the same parenting/home variables, finding the same very small to nonexistent effect, and then making some lukewarm claim about how it might be causation, but then again might not be. This pattern of research is about as tedious as that last sentence is long, and it plagues psychological research in my opinion. In any case, towards achieving that worthwhile goal of breaking through some metaphorical brick wall by just running into it enough times, Tither and Ellis (2008) set out to examine whether the already small correlation between daughters’ development and father presence was due to a genetic confound.

To do this, Tither and Ellis examined sister-pairs that contained both an older and a younger sister. The thinking here is that this design is (relatively) controlled on a genetic level, but younger sisters would experience more years of father absence following the break-up of a marriage relative to the older sisters, which would in turn accelerate the sexual maturation of the younger one. Skipping to the conclusions, this effect was indeed found, with younger sisters reporting earlier menarche than older sisters in father-absent homes (accounting for roughly 2% of the variance). Among those father-absent homes, this effect was carried predominantly by fathers with a high reported degree of anti-social, dysfunctional behavior, like drug use and suicide attempts (accounting for roughly 10% of the variance within this subset). The moral seems to be that “good-enough” fathers had no real effect, but seriously awful parenting on the father’s part, if experienced at a certain time in a daughter’s life, has some predictive value.

So you may want to hold off on your drug-fueled rampages until your daughter’s about eight or nine years old.

First, let me point out the rather major problem here on a theoretical level. If the theory here is that father presence or absence sends a reliable signal to daughters about the likelihood of future male investment, then one would expect that signal to at least be relatively uniform within a family. If the same father is capable of signaling to an older daughter that future male investment is likely, and also signaling to a younger daughter that future male investment isn’t likely, then that signal would hardly be reliable enough for selection to have seized on.

Second, while I agree with Tither and Ellis that these results are consistent with a causal account, they do not demonstrate that the father’s behavior was the causal variable in any way whatsoever. For one thing, Ellis (2004) notes that this effect of father presence vs. absence doesn’t seem to exist in African American samples. Are we to assume that father presence or absence has only been used as a signal by girls in certain parts of the world? Further, as the authors note, there tend to be other changes that go along with divorce and paternal dysfunction that will have an effect on a child’s life outside of the home. To continue beating what should be a long-dead horse, researchers may want to actually start looking at variables outside of the family to account for more variation in a girl’s development. After all, it’s not the family life that a daughter is maturing sexually for; it’s her life with non-family members that’s of importance.

References: Ellis, B.J. (2004). Timing of pubertal maturation in girls: An integrated life history approach. Psychological Bulletin, 130, 920-958

Harris, J.R. (1998). The Nurture Assumption: Why Children Turn Out The Way They Do. New York: Free Press

Tither, J.M., & Ellis, B.J. (2008). Impact of fathers on daughters’ age at menarche: A genetically and environmentally controlled sibling study. Developmental Psychology, 44, 1409-1420.

Female Orgasm: This Time, With Feeling

I’ve written about female orgasm on two prior occasions, but in those cases I used the subject more as a vehicle for understanding the opposition to evolutionary explanations than for discussing orgasm itself. The comments section on a recent Cracked article that concludes female orgasm is a byproduct – not an adaptation – attests to the issues I had discussed. There, we see dozens of comments made by people whose expertise consists of maybe having watched some documentary they sort of remember. Believe it or not, as this part is shocking, these uninformed people also have very strong opinions about whether female orgasm has an evolved function. The most commonly hypothesized function for female orgasm found in the comments is that it motivates women to have sex, typically followed with a “duh”. The two assumptions embedded in that idea are (1) that women who orgasm during intercourse engage in more sex than women who do not, and (2) that having more sex means having more children. If either of those points turns out to be false, that hypothesis wouldn’t work.

The first point may be true. According to Lloyd (2005), there is some evidence suggesting that women want more sex the more frequently they orgasm. Sure, it’s correlational in nature, but we’ll not worry about that here. It’s the second point that raises more serious issues. As women can only become pregnant during the specific point of their cycle when an egg is available, having more sex during a non-fertile period will do approximately nothing when it comes to a shot at successful conception. Further, in principle – and many times, in practice – you only need to have sex once to become pregnant; having sex beyond or before that point will not make a woman any more pregnant. The heart of the issue, then, seems to concern proper timing. Having sperm present and ready to do some fertilizing at all points may increase the odds of conception, as neither the man nor the woman knows the precise moment ovulation will occur. However, at some point there will be diminishing returns on the probability of conception from each additional act of intercourse. It’s not a simple formula of “more sex = more babies”.
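The diminishing-returns point can be made concrete with a toy calculation. If each well-timed act of intercourse independently carried some probability p of conception, the chance of at least one conception after n acts is 1 − (1 − p)^n, and the marginal gain from each additional act keeps shrinking. The value of p below is purely illustrative, not an empirical estimate:

```python
# Toy model: each act independently has probability p of conception.
# P(at least one conception in n acts) = 1 - (1 - p)**n.
# Note how the marginal gain from each extra act gets smaller and smaller.
p = 0.25  # illustrative value only
previous = 0.0
for n in range(1, 7):
    total = 1 - (1 - p) ** n
    print(f"n = {n}: P = {total:.3f} (marginal gain = {total - previous:.3f})")
    previous = total
```

The first act adds 0.25 to the odds in this toy model; the sixth adds under 0.06. More sex is not linearly more babies.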

I’m going to get soooo pregnant; you have no idea!

If female orgasm evolved to motivate women to have sex with men, it does so rather inefficiently. When women masturbate, the vast majority do not do so in a manner that simulates penetrative intercourse alone, as penetration rarely provides the proper stimulation. When women do achieve orgasm during intercourse, which is often quite variable, most require additional manual stimulation of the clitoris; orgasm is not generally reached through the sex itself. In terms of providing some crucial motivation, then, this account seems to take an odd do-it-yourself approach to reinforcement. It also raises the question of why so many women are unable to reach orgasm either frequently or at all from intercourse alone if it’s supposed to provide some crucial motivation. Under this functional account, women who did not experience orgasms with intercourse would have been selected against, yet they persist in substantial numbers. In terms of taking home the coveted label of adaptation, this account doesn’t fare so well.

There are many additional adaptive accounts of female orgasm, but I’d like to discuss only one other hypothesis here: the upsuck hypothesis. Though the account had been proposed prior to Baker and Bellis (1993), they were the first to attempt to empirically test the suggestion that female orgasm may serve a function in manipulating the amount of sperm retained from or ejected after copulation. To test this suggestion, Baker and Bellis found some very willing volunteers who first collected semen samples from sex using condoms in order to generate an estimate of the sperm count in the ejaculate. After this period, the couples engaged in unprotected sex and collected the flowback – the secretions from the vagina following sex, including fluids from both the male and female. Sperm counts were then obtained from the flowback samples to estimate how much sperm had been retained. The samples were finally assessed on scales of taste and presentation*.

Don’t worry; taste testing was carried out using a double-blind procedure to avoid bias.

The results showed that female orgasm was unrelated to sperm retention in general. However, female orgasms that occurred from one minute prior to male ejaculation to forty-five minutes following ejaculation were associated with greater estimated sperm retention. Lloyd (2005) critiqued this study on statistical grounds, but I’m not currently in a position to evaluate her claims, so I’ll ignore those for now (though I will say I’m always uncomfortable relying on median values without accompanying means). Lloyd also mentions that a later reexamination of the data found that female orgasms occurring one to ten minutes following male ejaculation actually did not show the effect of increased sperm retention. This would require the odd pattern of female orgasm having no effect prior to one minute before male ejaculation, then increasing sperm retention, then decreasing it, then increasing it again. It seems more plausible that there’s an issue with the data than that such a peculiar pattern exists.

There is another concern of mine regarding Baker and Bellis’s flowback data, though it has not been raised by other authors to my knowledge. Perhaps that is for a good statistical reason that escapes me, so bear in mind this may be a moot concern. Naturally, it’s hard to recruit for this kind of research. As a result, Baker and Bellis had a small sample size, but did manage to collect 127 flowback samples across 11 couples. Now Lloyd mentions that, of these 127 samples, 93 came from just one couple. What she does not mention is that this couple also happened to have the second lowest median percentage of sperm retention of all the couples, and they were lower by a substantial margin. In fact, the couple providing most of the data retained only about half of the overall median number of sperm. For reference’s sake, the only couple to have a lower median retention rate was estimated to have retained a negative number of sperm. If most of the data comes from an outlier, that would be a big problem.

For example, the average income of these men is roughly ten-million dollars a year.

While these results are suggestive, they call for replication before full acceptance. Nevertheless, let’s take the results of Baker and Bellis at face value. While all women in the sample were estimated to be able to either nearly completely retain or expel the sperm of an ejaculate, regardless of whether they could orgasm during sex or not, Baker and Bellis suggest that female orgasm may play some role in affecting the sperm retention process. To attempt to complete an adaptive account, it’s time to consider two other points. First, it’s unclear whether the additional sperm retention has any effect on conception rates, either between or within men. It might seem as though additional sperm retention would be useful, but that assumption needs to be demonstrated. Second, female orgasm does not reliably accompany intercourse at all, let alone with a specific timing (up to a minute beforehand, but not from one to ten minutes afterwards, but then again after ten minutes, and only during ovulation). As most female orgasms require additional clitoral stimulation on the part of either the man or the woman, this would require ancestral humans to have reliably provided such stimulation, and whether they did so is an open question. Even if female orgasm had this potential function of sperm retention, it does not follow that female orgasm was selected for; that potential function could be a byproduct.

There are certain adaptive hypotheses still to be tested, but what we need is more evidence that’s less ambiguous. The case of whether female orgasm is an adaptation or not is still open to debate. At present, I find the evidence favoring the adaptation side of the debate lacking, much to the dismay of many people who determine the social and personal value of a trait on the basis of whether it’s an adaptation or a byproduct. They seem to think tentatively labeling female orgasm as a byproduct somehow makes it less valuable and reflects a mean-spirited sexism towards women. On a redundant note, they’re still wrong.

*That sentence may or may not be true. I wasn’t there.

References: Baker, R.R. & Bellis, M.A. (1993). Human sperm competition: Ejaculate manipulation by females and a function for the female orgasm. Animal Behaviour, 46, 887-909.

Lloyd, E.A. (2005). The case of the female orgasm: Bias in the science of evolution. Massachusetts: Harvard University Press.

Somebody Else’s Problem

Let’s say you’re a recent high-school graduate at the bank, trying to take out a loan to attend a fancy college so you can enjoy the privileges of complaining about how reading is, like, work, and living the rest of life in debt from student loans. A lone gunman rushes into the bank, intent on robbing the place. You notice the gunman is holding a revolver, meaning he only has six bullets. This is good news, as there happen to be 20 people in the bank; if you all rush him at the same time, he wouldn’t be able to kill more than six people, max; realistically, he’d only be able to get off three or four shots before he was taken down, and there’s no guarantee those shots will even kill the people they hit. The only practical solution here should be to work together to stop the robbery, right?

Look on the bright side: if you pull through, this will look great on your admissions essay.

The idea that evolutionary pressures would have selected for such self-sacrificing tendencies is known as “group selection”, and is rightly considered nonsense by most people who understand evolutionary theory. Why doesn’t it work? Here’s one reason: let’s go back to the bank. The benefits of stopping the robbery will be shared by everyone at the abstract level of the society, but the costs of stopping the robbery will be disproportionately shouldered by those who intervene. While everyone else is charging the robber, if you decide that you’re quite comfortable hiding in the back, thank you very much, your chances of getting shot decline dramatically and you still get the benefit; just let it be somebody else’s problem. Of course, most other people should realize this as well, leaving everyone pretty disinclined to try and stop the robbery. Indeed, there are good reasons to suspect that free-riding is the best strategy (Dreber et al., 2008).

There are, unfortunately, some people who think group selection works and actually selects for tendencies to incur costs at no benefit. Fehr, Fischbacher, and Gachter (2002) called their bad idea “strong reciprocity”:

“A person is a strong reciprocator if she is willing to sacrifice resources (a) to be kind to those who are being kind… and (b) to punish those who are being unkind…even if this is costly and provides neither present nor future material rewards for the reciprocator” (p.3, emphasis theirs)

So the gist of the idea would seem to be (to use an economic example) that if you give away your money to people you think are nice – and burn your money to ensure that mean people’s money also gets burned – with complete disregard for your own interests, you’re going to somehow end up with more money. Got it? Me neither…

“I’m telling you, this giving away cash thing is going to catch on big time”

So what would drive Fehr, Fischbacher, and Gachter (2002) to put forth such a silly idea? They don’t seem to think existing theories – like reciprocal altruism, kin selection, costly signaling theory, etc. – can account for the way people behave in laboratory settings. That, and existing theories are based around selfishness, which isn’t nice, and the world should be a nicer place. The authors seem to believe that those previous theories lead to predictions like: people should “…always defect in a sequential, one-shot [prisoner's dilemma]” when playing anonymously. That one sentence contains two major mistakes: the first mistake is that those theories most definitely do not say that. The second mistake is part of the first: they assume that people’s proximate psychological functioning will automatically fall in line with the conditions they attempt to create in the lab, which it does not (as I’ve mentioned recently). While it might be adaptive, in those conditions, to always defect at the ultimate level, it does not mean that the proximate level will behave that way. For instance, it’s a popular theory that sex evolved for the purposes of reproduction. That people have sex while using birth control does not mean the reproduction theory is unable to account for that behavior.

As it turns out, people’s psychology did not evolve for life in a laboratory setting, nor is the functioning of our psychology going to be adaptive in each and every context we’re in. Were this the case, returning to our birth control example, simply telling someone that having sex on the pill removes the possibility of pregnancy would lead people to immediately lose all interest in the act (either having sex or using the pill). Likewise, oral sex, anal sex, hand-jobs, gay sex, condom use, and masturbation should all disappear too, as none are particularly helpful in terms of reproduction.

Little known fact: this parade is actually a celebration of a firm understanding of the proximate/ultimate distinction. A very firm understanding.

Nevertheless, people do cooperate in experimental settings, even when cooperating is costly, the game is one-shot, there’s no possibility of being punished, and everyone’s ostensibly anonymous. This poses another problem for Fehr and his colleagues: their own theory predicts this shouldn’t happen either. Let’s consider an anonymous one-shot prisoner’s dilemma with a strong reciprocator as one of the players. If they’re playing against another strong reciprocator, they’ll want to cooperate; if they’re playing against a selfish individual, they’ll want to defect. However, they don’t know ahead of time who they’re playing against, and once they make their decision it can’t be adjusted. In this case, they run the risk of defecting on a strong reciprocator or benefiting a selfish individual while hurting themselves. The same goes for a dictator game; if they don’t know the character of the person they’re giving money to, how much should they give?

The implications of this extend even further: in a dictator game where the dictator decides to keep the entire pot, third-party strong reciprocators should not really be inclined to punish. Why? Because they don’t know a thing about who the receiver is. Both the receiver and dictator could be selfish, so punishing wouldn’t make much sense. The dictator could be a strong reciprocator and the receiver could be selfish, in which case punishment would make even less sense. Both could be strong reciprocators, unsure of the other’s intentions. Punishment would only make sense if the dictator was selfish and the receiver was a strong reciprocator, but a third party has no way of knowing whether or not that’s the case. (It also means that if strong reciprocators and selfish individuals are about equal in the population, punishment in these cases would be a waste three-fourths of the time – maybe half at best, if they want to punish selfish people no matter who they’re playing against – meaning strong reciprocator third parties should never punish.)
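That three-fourths figure just comes from enumerating the four equally likely pairings of player types. A minimal sketch, assuming (as above) that the two types are equally common in the population:

```python
from itertools import product

# Enumerate the four possible (dictator, receiver) type pairings. By the
# logic above, third-party punishment only "makes sense" when a selfish
# dictator keeps the pot from a strong reciprocator.
types = ["selfish", "reciprocator"]
pairings = list(product(types, repeat=2))
sensible = [p for p in pairings if p == ("selfish", "reciprocator")]
print(f"punishment sensible in {len(sensible)} of {len(pairings)} pairings")
# -> punishment sensible in 1 of 4 pairings
```

One sensible case out of four equally likely ones: punishment is wasted three-fourths of the time.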

There was some chance he might have been being a dick… I think.

The main question for Fehr and his colleagues would then not be “why do people reciprocate cooperation in the lab” – as reciprocal altruism and the proximate/ultimate distinction can already explain that without resorting to group selection – but rather “why is there any cooperation in the first place?” The simplest answer to the question might seem to be that some people are prone to give the opposing player the benefit of the doubt and cooperate on the first move, and then adjust their behavior accordingly (even if they are not going to be playing sequential rounds). The problem here is that this is exactly what a tit-for-tat player already does, and it doesn’t require group selection.
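For the curious, tit-for-tat is simple enough to sketch in a few lines: cooperate on the first move, then copy whatever the opponent did last. Paired against an unconditional defector, it gets burned exactly once and defects thereafter (a minimal illustration, not a full tournament simulation):

```python
def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    # Defects unconditionally, regardless of the opponent's behavior.
    return "D"

def play(p1, p2, rounds=5):
    h1, h2 = [], []  # each player's own moves so far
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each strategy sees the opponent's history
        h1.append(m1)
        h2.append(m2)
    return h1, h2

tft_moves, _ = play(tit_for_tat, always_defect)
print(tft_moves)  # -> ['C', 'D', 'D', 'D', 'D']
```

No group-level selection is needed for this strategy to do well: it extends initial good faith, then simply stops cooperating with defectors.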

It also doesn’t look good for the theory of social preferences invoked by Fehr et al. (2002) when the vast majority of people don’t seem to have preferences for fairness and honesty when they don’t have to, as evidenced by 31 of 33 people strategically using an unequal distribution of information to their advantage in ultimatum games (Pillutla and Murnighan, 1995). In every case Fehr et al. (2002) look at, outcomes have concrete values that everyone knows about and can observe. What happens when intentions can be obscured, or values misrepresented, as they often can be in real life? Behavior changes, and being a strong reciprocator becomes even harder. What might happen when the cost/punishment ratio changes from a universal static value, as it often does in real life (not everyone can punish others at the same rate)? Behavior will probably change again.

Simply assuming these behaviors are the result of group selection isn’t enough. The odds are better that the results are only confusing when their interpreter has an incorrect sense of how things should have turned out.

References: Dreber, A., Rand, D.G., Fudenberg, D., & Nowak, M.A. (2008). Winners don’t punish. Nature, 452, 348-351.

Fehr, E., Fischbacher, U., & Gachter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1-25.

Pillutla, M.M. & Murnighan, J.K. (1995). Being fair or appearing fair: Strategic behavior in ultimatum bargaining. Academy of Management Journal, 38, 1408-1426.