Divorced Dads And Their Daughters

Despite common assumptions, parents have less of an impact on their children’s future development than they’re often credited with. Twins reared apart usually aren’t much different than twins reared together, and adopted children don’t end up resembling their adoptive parents substantially more than strangers. While parents can indeed affect their children’s happiness profoundly, a healthy (and convincing) literature exists supporting the hypothesis that differences in parenting behaviors don’t do a whole lot of shaping in terms of children’s later personalities (at least when the child isn’t around the parent; Harris, 2009). This makes a good deal of theoretical sense, as children aren’t developing to be better children; they’re developing to become adults in their own right. What children learn works when interacting with their parents might not readily translate to the outside world. If you assume your boss will treat you the same way your parents would, you’re likely in for some unpleasant clashes with reality.

“Who’s a good branch manager? That’s right! You are!”

Not that this has stopped researchers from seeking ways that parent-child interactions might shape children’s future personalities, mind you. Indeed, just this last week I came upon a very new paper purporting to do just that. It suggested that the quality of a father’s investment in his daughters causes shifts in his daughters’ willingness to engage in risky sexual behavior (DelPriore, Schlomer, & Ellis, 2017). The analysis in the paper is admittedly a bit tough to follow, as the authors examine three- and even four-way interactions (which are difficult to keep straight in one’s mind: the importance of variable A changes contingent on the interaction between B, C, & D), so I don’t want to delve too deeply into the specific details. Instead, I want to discuss the broader themes and design of the paper.
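For readers who want a more concrete sense of what those higher-order interactions look like, here’s a minimal sketch in Python (simulated data and hypothetical variable names; this is emphatically not the model from the paper). The highest-order term – A:B:C in the output – is what captures the idea that A’s effect depends jointly on B and C:

```python
# A minimal sketch (not the authors' analysis) of a three-way interaction:
# the "effect" of A on the outcome depends jointly on the levels of B and C.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "A": rng.normal(size=n),  # hypothetical predictors
    "B": rng.normal(size=n),
    "C": rng.normal(size=n),
})
# Build an outcome where A's slope itself depends on the product of B and C.
df["y"] = 0.5 * df.A * df.B * df.C + rng.normal(size=n)

# 'A * B * C' expands to all main effects plus every two- and three-way term.
model = smf.ols("y ~ A * B * C", data=df).fit()
print(model.params)  # the A:B:C coefficient is the three-way interaction
```

A four-way interaction just adds another layer: the size of that A:B:C term itself shifts with a fourth variable, which is why such results get hard to hold in your head.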

Previous research looking at parenting effects on children’s development often suffers from the problem of relatedness, as genetic similarities between parents and children make it hard to tease apart the unique effects of parenting behaviors (how the parents treat their children) from natural resemblances (nice parents have nice children). In a simple example, parents who love and nurture their children tend to have children who grow up kinder and nicer, while parents who neglect their children tend to have children who grow up to be mean. However, it seems likely that parents who care for their children are different in some important regards than those who neglect them, and those tendencies are perfectly capable of being passed on through shared genes. So are the nice kids nice because of how their parents treated them or because of inheritance? The adoption studies I mentioned previously tend to support the latter interpretation. When you control for genetic factors, parenting effects tend to drop out.

What’s good about the present research is its innovative design to try and circumvent this issue of genetic similarities between children and parents. To accomplish this goal, the authors examined (among other things) how divorce might affect the development of different daughters within the same family. The reasoning for doing so seems to go roughly as follows: daughters should base their sexual developmental trajectory, in part, on the extent of paternal investment they’re exposed to during their early years. When daughters are regularly exposed to fathers that invest in them and monitor their behavior, they should come to expect that subsequent male parental investment will be forthcoming in future relationships and avoid peers who engage in risky sexual behavior. The net result is that such daughters will engage in less risky sexual behavior themselves. By contrast, when daughters lack proper exposure to an investing father, or have one who does not monitor their peer behavior as tightly (due to divorce), they should come to view future male investment as unlikely, associate with those who engage in riskier sexual behavior, and engage in such behavior themselves.

Accordingly, if a family with two daughters experiences a divorce, the younger daughter’s development might be affected differently than the older daughter’s, as they have different levels of exposure to their father’s investment. The larger this age gap between the daughters, the larger this effect should be. After recruiting 42 sister pairs from intact families and 59 sister pairs from divorced families and asking them some retrospective questions about what their life was like growing up, this is basically the result the authors found. Younger daughters tended to receive less monitoring than older daughters in families of divorce and, accordingly, tended to associate with more sexually-risky peers and engage in such behaviors themselves. This effect was not present in biologically intact families. Do we finally have some convincing evidence of parenting behaviors shaping children’s personalities outside the home?

Look at this data and tell me the first thing that comes to your mind

I don’t think so. The first concern I would raise regarding this research is the monitoring measure utilized. Monitoring, in this instance, represented a composite score of how much information the daughters reported their parents had about their lives (rated as (1) didn’t know anything, (2) knew a little, or (3) knew a lot) in five domains: who their friends were, how they spent their money, where they spent their time after school, where they were at night, and how they spent their free time. While one might conceptualize that as monitoring (i.e., parents taking an active interest in their children’s lives and seeking to learn about/control what they do), it seems that one could just as easily think of that measure as how often children independently shared information with their parents. After all, the measure didn’t ask, “how often did your parents try to learn about your life and keep track of your behavior?” It just asked how much they knew.
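Just to make that scoring concrete, here’s a toy version of the kind of composite being described (with the caveat that the simple sum/average here is my assumption for illustration; the paper may score the items differently):

```python
# Toy illustration of a parental-knowledge composite across the five domains
# named above. Each item: 1 = knew nothing, 2 = knew a little, 3 = knew a lot.
# (Assumption: the composite is a simple sum or mean of the items.)
monitoring_items = {
    "who_friends_were": 3,
    "how_money_was_spent": 2,
    "where_after_school": 3,
    "where_at_night": 2,
    "how_free_time_was_spent": 3,
}

composite_sum = sum(monitoring_items.values())            # ranges from 5 to 15
composite_mean = composite_sum / len(monitoring_items)    # ranges from 1 to 3
print(composite_sum, round(composite_mean, 2))
```

Notice that nothing in those items distinguishes parents who actively sought the information from daughters who simply volunteered it, which is the crux of the concern.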

To put that point concretely, my close friends might know quite a bit about what I do, where I go, and so on, but it’s not because they’re actively monitoring me; it’s because I tell them about my day voluntarily. So, rather than talking about how a father’s monitoring of his daughter might have a causal effect on her sexual behavior, we could just as easily talk about how daughters who engage in risky behavior prefer not to tell their parents about what they’re doing, especially if their personal relationship is already strained by divorce.

My second concern involves divorce itself. Divorce can indeed affect the personal relationships of children with their parents. However, that’s not the only thing that happens after a divorce. There are other effects that extend beyond emotional closeness, an important example being the financial ones. If a father has been working while the mother took care of the children – or if both parents were working – divorce can result in massive financial hits for the children (as most end up living with their mother or in a joint custody arrangement). Adding economic problems to an already emotionally-upsetting divorce can entail not only additional resentment between children and parents (and, accordingly, less sharing of information between them; the reduced monitoring), but also major alterations to the living conditions of the children. These lifestyle shifts could include moving to a new home, upsetting existing peer relations, entering new social groups, and presenting children with new logistical problems to solve.

Any observed changes in a daughter’s sexual behavior in the years following a divorce, then, can be thought of as a composite of all the changes that take place post-divorce. While the quality and amount of the father-daughter relationship might indeed change during that time, there are additional and important factors that aren’t controlled for in the present paper.

Too bad the house didn’t split down the middle as nicely

The final concern I wanted to discuss was more of a theoretical one, and it’s slightly larger than the methodological points above. According to the theory proposed at the beginning of the paper:

“…the quality of fathering that daughters receive provides information about the availability and reliability of male investment in the local ecology, which girls use to calibrate their mating behavior and expectations for long-term investment from future mates.”

This strikes me as a questionable foundation for a few reasons. First, it would require that the relationship of a daughter’s parents is substantially predictive of the relationships she is likely to encounter in the world with regard to male investment. In other words, if your father didn’t invest in your mother (or you) that heavily (or at least during your childhood), that needs to mean that many other potential fathers are likely to do the same to you (if you’re a girl). This would further require, then, that male investment be appreciably uniform across men and stable over time. If male investment wasn’t stable between males and across time within a given male, then trying to predict the general availability of future male investment from your father’s behavior seems like a losing formula for accuracy.

It seems unlikely the world is that stable. For similar reasons, I suggested that children probably can’t accurately gauge future food availability from their access to food at a young age. Making matters even worse in this regard is that, unlike food shortages, the presence or absence of male parental investment doesn’t seem like the kind of thing that will be relatively universal. Some men in a local environment might be perfectly willing to invest heavily in women while others are not. But that’s only considering the broad level: men who are willing to invest in general might be unwilling to invest in a particular woman, or might be willing or unwilling to invest in that woman at different stages in her life, contingent on her mate value shifting with age. Any kind of general predictive power that could be derived about men in a local ecology seems weak indeed, especially if you are basing that decision off a single relationship: the one between your parents. In short, if you want to know what men in your environment are generally like, one relationship should be as informative as another. There doesn’t seem to be a good reason to assume your parents will be particularly informative.

Matters get even worse for the predictive power of father-daughter relationships when one realizes the contradiction between that theory and the predictions of the authors. The point can be made crystal clear simply by considering the families examined in this very study. The sample of interest was composed of daughters from the same family who had different levels of exposure to paternal investment. That ought to mean, if I’m following the predictions properly, that the daughters – the older and younger one – should develop different expectations about future paternal investment in their local ecology. Strangely, however, these expectations would have been derived from the same father’s behavior. This would be a problem because both daughters cannot be right about the general willingness of males to invest if they hold different expectations. If the older daughter with more years of exposure to her father comes to believe male investment will be available and the younger daughter with fewer years of exposure comes to believe it will be unavailable, these are opposing expectations of the world.

However, if those different expectations are derived from the same father, that alone should cast doubt on the ability of a single parental relationship to predict broad trends about the world. It doesn’t even seem to be right within families, let alone between them (and it’s probably worth mentioning at this point that, if children are going to be right about the quality of male investment in their local ecology more generally, all the children in the same area should develop similar expectations, regardless of their parents’ behavior. It would be strange for literal neighbors to develop different expectations of general male behavior in their local environment just because the parents of one home got divorced while the other stayed together. Then again, it should be strange for daughters of the same home to develop different expectations, too).

Unless different ecologies have rather sharp borders

On both a methodological and theoretical level, then, there are some major concerns with this paper that render its interpretation suspect. Indeed, at the heart of the paper is a large contradiction: if you’re going to predict that two girls from the same family develop substantially different expectations about the wider world from the same father, then it seems impossible that the data from that father is very predictive of the world. In any case, the world doesn’t seem as stable as it would need to be for that single data point to be terribly useful. There ought not be anything special about the relationship of your parents (relative to other parents) if you’re looking to learn something about the world in general.

While I fully expect that children’s lives following their parents’ divorce will be different – and those differences can affect development, depending on when they occur – I’m not so sure that the personal relationship between fathers and daughters is the causal variable of primary interest.

References: DelPriore, D., Schlomer, G., & Ellis, B. (2017). Impact of Fathers on Parental Monitoring of Daughters and Their Affiliation With Sexually Promiscuous Peers: A Genetically and Environmentally Controlled Sibling Study. Developmental Psychology. Advance online publication. http://dx.doi.org/10.1037/dev0000327

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.

Why Do So Many Humans Need Glasses?

When I was very young, I was given an assignment in school to write a report on the Peregrine Falcon. One interesting fact about this bird happens to be that it’s quite fast: when the bird spots prey (sometimes from over a mile away) it can enter into a high-altitude dive, reaching speeds in excess of 200 mph, and snatch its prey out of midair (if you’re interested in watching a video of such a hunt, you can check one out here). The Peregrine would be much less capable of achieving these tasks – both the location and capture of prey – if its vision were not particularly acute: failures of eyesight can result in not spotting the prey in the first place, or failing to capture it if distances and movements aren’t properly tracked. For this reason I suspect (though am not positive) that you’ll find very few Peregrines that have bad vision: their survival depends very heavily on seeing well. These birds would probably not be in need of corrective lenses, like the glasses and contacts that humans regularly rely upon in modern environments. This raises a rather interesting question: why do so many humans wear glasses?

And why does this human wear so many glasses?

What I’m referring to in this case is not the general degradation of vision with age. As organisms age, all their biological systems should be expected to break down and fail with increasing regularity, and eyes are no exception. Crucially, all these systems should be expected to break down at more-or-less the same time. This is because there’s little point in a body investing loads of metabolic resources into maintaining a completely healthy heart that will last for 100 years if the liver is going to shut down at 60. The whole body will die if the liver does, healthy heart (or eyes) included, so it would be adaptive to allocate those developmental resources differently. The mystery posed by frequently-poor human eyesight is appreciably different, as poor vision can develop early in life, often before puberty. When you observe apparent maladaptive development early in life like that, it requires another type of explanation.

So what might explain why human visual acuity appears so lackluster early in life (to the tune of over 20% of teenagers using corrective lenses)? There are a number of possible explanations we might entertain. The first of these is that visual acuity hasn’t been terribly important to human populations for some time, meaning that having poor eyesight did not have an appreciable impact on people’s ability to survive and reproduce. This strikes me as a rather implausible hypothesis on the face of it not only because vision seems rather important for navigating the world, but also because it ought to predict that having poor vision should be something of a species universal. While 20% of young people using corrective lenses is a lot, eyes (and the associated brain regions dedicated to vision) are costly organs to grow and maintain. If they truly weren’t that important to have around, then we might expect that everyone needs glasses to see better; not just pockets of the population. Humans don’t seem to resemble the troglobites that have lost their vision after living in caves away from sunlight for many generations.

Another possibility is that visual acuity has been important – it’s adaptive to have good vision – but people’s eyes sometimes fail to develop properly because of developmental insults, like infectious organisms. While this isn’t implausible in principle – infectious agents have been known to disrupt development and result in blindness, deafness, and even death on the extreme end – the sheer number of people who need corrective lenses seems a bit high to be caused by some kind of infection. Further, the number of younger children and adults who need glasses appears to have been rising over time, which might seem strange as medical knowledge and technologies have been steadily improving. If the need for glasses is caused by some kind of infectious agent, we would need to have been unaware of its existence and not accidentally treated it with antibiotics or other such medications. Further, we might expect glasses to be associated with other signs of developmental stress, like bodily asymmetries, low IQ, or other such outcomes. If your immune system didn’t fight off the bugs that harmed your eyes, it might not be good enough to fight off other development-disrupting infections. However, there seems to be a positive correlation between myopia and intelligence, which would be strange under a disease hypothesis.

The negative correlation with fashion sense begs for explanation, too

A third possible explanation is that visual acuity is indeed important for humans, but our technologies have been relaxing the selection pressures that were keeping it sharp. In other words, since humans invented glasses and granted those who cannot see as well a crutch to overcome this issue, any reproductive disadvantage associated with poor vision was effectively removed. It’s an interesting hypothesis, and it predicts that people’s eyesight in a population should begin to get worse following the invention and/or proliferation of corrective lenses, but not beforehand. So, if glasses were invented in Italy around 1300, that should have led to the Italian population’s eyesight growing worse, followed by the eyesight of other cultures to which glasses spread. I don’t know much about the history of vision across time in different cultures, but something tells me that pattern wouldn’t show up if it could be assessed. In no small part, that intuition is driven by the relatively-brief window of historical time between when glasses were invented – then refined, produced in sufficient numbers, and distributed globally – and today. A window of only about 700 years for all of that to happen and reduce selection pressures for vision isn’t a lot of time. Further, there seems to be evidence that myopia can develop rather rapidly in a population, sometimes in as little as a generation:

One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition.

That’s much too fast for a relaxation of selection pressures to be responsible for the change.

This brings us to the final hypothesis I wanted to cover today: an evolutionary mismatch hypothesis. In the event that modern environments differ in some key ways from the typical environments humans have faced ancestrally, it is possible that people will develop along an atypical path. In this case, the body is (metaphorically) expecting certain inputs during its development, and if they aren’t received things can go poorly. As a for instance, it has been suggested that people develop allergies, in part, as a result of improved hygiene: our immune systems are expecting a certain level of pathogen threat which, when not present, can result in our immune system attacking inappropriate targets, like pollen.

There does seem to be some promising evidence on this front for understanding human vision issues. A paper by Rose et al (2008) reports on myopia in two samples of similarly-aged Chinese children: 628 children living in Singapore and 124 living in Sydney. Of those living in Singapore, 29% appeared to display myopia, relative to only 3% of those living in Sydney. These dramatic differences in rates of myopia are all the stranger when you consider the rates of myopia in their parents were quite comparable. For the Sydney/Singapore samples, respectively, 32/29% of the children had no parent with myopia, 43/43% had one parent with myopia, and 25/28% had two parents with myopia. If myopia was simply the result of inherited genetic mutations, its frequencies between countries shouldn’t be as different as they are, disqualifying hypotheses one and three from above.

When examining what behavioral correlates of myopia existed between countries, several were statistically – but not practically – significant, including number of books read and hours spent on computers or watching TV. The only appreciable behavioral difference between the two samples was the number of hours the children tended to spend outdoors. In Sydney, the children spent an average of about 14 hours a week outside, compared to a mere 3 hours in Singapore. It might be the case, then, that the human eye requires exposure to certain kinds of stimulation provided by outdoor activities to develop properly, and some novel aspects of modern culture (like spending lots of time indoors in a school when children are young) reduce such exposure (which might also explain the aforementioned IQ correlation: smarter children may be sent to school earlier). If that were true, we should expect that providing children with more time outdoors when they are young is preventative against myopia, which it actually seems to be.

Natural light and no Wifi? Maybe I’ll just go blind instead…

It should always strike people as strange when key adaptive mechanisms appear to develop along an atypical path early in life that ultimately makes them worse at performing their function. An understanding of what types of biological explanations can account for these early maladaptive outcomes goes a long way in helping you understand where to begin your searches and what patterns of data to look out for.

References: Rose, K., Morgan, I., Smith, W., Burlutsky, G., Mitchell, P., & Saw, S. (2008). Myopia, lifestyle, and schooling in students of Chinese ethnicity in Singapore and Sydney. Archives of Ophthalmology, 126, 527-530.

Intergenerational Epigenetics And You

Today I wanted to cover a theoretical matter I’ve discussed before but apparently not on this site: the idea of epigenetic intergenerational transmission. In brief, epigenetics refers to chemical markers attached to your DNA that regulate how it’s expressed without changing the DNA sequence itself. You could imagine your DNA as a book full of information, and each cell in your body contains the same book. However, not every cell expresses the full genome; each cell only expresses part of it (which is why skin cells are different from muscle cells, for instance). The epigenetic portion, then, could be thought of as black tape placed over certain passages in the book so they are not read. As this tape is added or removed by environmental influences, different portions of the DNA will become active. From what I understand about how this works (which is admittedly very little at this juncture), usually these markers are not passed on to offspring from parents. The life experiences of your parents, in other words, will not be passed on to you via epigenetics. However, there has been some talk lately of people hypothesizing that these changes are occasionally (perhaps regularly?) passed on from parents to offspring and, moreover, that they might be passed on in an adaptive fashion. In short, organisms might adapt to their environment not just through genetic factors, but also through epigenetic ones.

Who would have guessed Lamarckian evolution was still alive?

One of the examples given in the target article on the subject concerns periods of feast and famine. While rare in most first-world nations these days, these events probably used to be more recurrent features of our evolutionary history. The example there involves the following context: during some years in early 1900s Sweden food was abundant, while during other years it was scarce. Boys who were hitting puberty just at the time of a feast season tended to have grandchildren who died six years earlier than the grandchildren of boys who had experienced a famine season during the same developmental window. The causes of death, we are told, often involved diabetes. Another case involves the children of smokers: men who smoked right before puberty tended to have children who were fatter, on average, than the children of fathers who smoked habitually but didn’t start until after puberty. The speculation, in this case, is that development was in some way affected in a permanent fashion by food availability (or smoking) during a critical window of development, and those developmental changes were passed on to their sons and the sons of their sons.

As I read about these examples, there were a few things that stuck out to me as rather strange. First, it seems odd that no mention was made of daughters or granddaughters in that case, whereas in the food example there wasn’t any mention of the in-between male generation (they only mentioned grandfathers and grandsons there; not fathers). Perhaps there’s more to the data than is let on there but – in the event that no effects were found for fathers or daughters of any kind – it is also possible that a single data set might have been sliced up into a number of different pieces until the researchers found something worth talking about (e.g., didn’t find an effect in general? Try breaking the data down by gender and testing again). Now that might or might not be the case here, but as we’ve learned from the replication troubles in psychology, one way of increasing your false-positive rate is to divide your sample into a number of different subgroups. For the sake of this post, I’m going to assume that is not the case and treat the data as representing something real, rather than a statistical fluke.
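If you want to see why that slicing inflates the false-positive rate, here’s a quick simulation (purely illustrative; it has nothing to do with the epigenetics data itself). Two groups are drawn from the same distribution, so any “significant” difference is a false positive by construction; testing arbitrary subgroups on top of the overall comparison noticeably raises the odds of finding at least one:

```python
# Simulation: how testing subgroups inflates the false-positive rate.
# The two groups are identical by construction, so every "hit" is spurious.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_per_group = 5000, 100
overall_hits, any_hits = 0, 0

for _ in range(n_sims):
    a = rng.normal(size=n_per_group)
    b = rng.normal(size=n_per_group)
    p_overall = stats.ttest_ind(a, b).pvalue
    # Also test within two arbitrary "subgroups" (think: splitting by gender).
    p_sub1 = stats.ttest_ind(a[:50], b[:50]).pvalue
    p_sub2 = stats.ttest_ind(a[50:], b[50:]).pvalue
    overall_hits += p_overall < 0.05
    any_hits += min(p_overall, p_sub1, p_sub2) < 0.05

print(f"False-positive rate, single test:      {overall_hits / n_sims:.3f}")  # ~0.05
print(f"False-positive rate, test plus slices: {any_hits / n_sims:.3f}")      # noticeably higher
```

The exact numbers don’t matter; the point is simply that the more ways you cut the data, the more chances you give chance to look like a finding.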

Assuming this isn’t just a false-positive, there are two issues with the examples as I see them. I’m going to focus predominantly on the food example to highlight these issues: first, passing on such epigenetic changes seems maladaptive and, second, the story behind it seems implausible. Let’s take the issues in turn.

To understand why this kind of inter-generational epigenetic transmission seems maladaptive, consider two hypothetical children born one year apart (in, say, the years 1900 and 1901). At the time the first child’s father was hitting puberty, there was a temporary famine taking place and food was scarce; at the time of the second child, the famine had passed and food was abundant. According to the logic laid out, we should expect that (a) both children will have their genetic expression altered due to the epigenetic markers passed down by their parents, affecting their long-term development, and (b) the children will, in turn, pass those markers on to their own children, and their children’s children (and so on).

The big Thanksgiving dinner that gave your grandson diabetes

The problems here should become apparent quickly enough. First, let’s begin by assuming these epigenetic changes are adaptive: they are passed on because they are reproductively useful at helping a child develop appropriately. Specifically, a famine or feast at or around the time of puberty would need to be a reliable cue as to the type of environments their children could expect to encounter. If a child is going to face shortages of food, they might want to develop in a different manner than if they’re expecting food to be abundant.

Now that sounds well and good, but in our example these two children were born just a year apart and, as such, should be expected to face (broadly) the same environment, at least with respect to food availability (since feasts and famines tend to be more global). Clearly, if the children were adopting different developmental plans in response to that feast or famine, both of those plans (plan A affected by the famine and plan B not so affected) cannot be adaptive. Specifically, if this epigenetic inheritance is trying to anticipate children’s future conditions by those present around the time of their father’s puberty, at least one of the children’s developmental plans will be anticipating the wrong set of conditions. That said, both developmental plans could be wrong, and conditions could look different than either anticipated. Trying to anticipate the future conditions one will encounter over their lifespan (and over their children’s and grandchildren’s lifespans) using only information from the brief window of time around puberty seems like a plan doomed to failure, or at least suboptimal results.

A second problem arises because these changes are hypothesized to be intergenerational: capable of transmission across multiple generations. If that is the case, why on Earth would the researchers in this study pay any mind to the conditions the grandparents were facing around the time of puberty per se? Shouldn’t we be more concerned with the conditions being faced a number of generations back, rather than the more immediate ones? To phrase this in terms of a chicken/egg problem, shouldn’t the grandparents in question have inherited epigenetic markers of their own from their grandparents, and so on down the line? If that were the case, the conditions they were facing around their puberty would either be irrelevant (because they already inherited such markers from their own parents) or would have altered the epigenetic markers as well.

If we opt for the former possibility, then studying grandparents’ puberty conditions shouldn’t be too impactful. However, if we opt for the latter possibility, we are again left in a bit of a theoretical bind: if the conditions faced by the grandparents altered their epigenetic markers, shouldn’t those same markers also have been altered by the parents’ experiences, and their grandsons’ experiences as well? If they are being altered by the environment each generation, then they are poor candidates for intergenerational transmission (just as DNA that was constantly mutating would be). There is our dilemma, then: if epigenetics change across one’s lifespan, they are unlikely candidates for transmission between generations; if epigenetic changes can be passed down across generations stably, why look at the specific period pre-puberty for grandparents? Shouldn’t we be concerned with their grandparents, and so on down the lines?

“Oh no you don’t; you’re not pinning this one all on me”

Now, to be clear, a famine around the time of conception could affect development in other, more mundane ways. If a child isn’t receiving adequate nutrition at the time they are growing, then it is likely certain parts of their developing body will not grow as they otherwise would. When you don’t have enough calories to support your full development, trade-offs need to be made, just like if you don’t have enough money to buy everything you want at the store you have to pass up on some items to afford others. Those kinds of developmental outcomes can certainly have downstream effects on future generations through behavior, but they don’t seem like the kind of changes that could be passed on the way genetic material can. The same can be said about the smoking example provided as well: people who smoked during critical developmental windows could do damage to their own development, which in turn impacts the quality of the offspring they produce, but that’s not like genetic transmission at all. It would be no more surprising than finding out that parents exposed to radioactive waste tend to have children of a different quality than those not so exposed.

To the extent that these intergenerational changes are real and not just statistical oddities, it doesn’t seem likely that they could be adaptive; they would instead likely reflect developmental errors. Basically, the matter comes down to the following question: are the environmental conditions surrounding a particular developmental window good indicators of future conditions, to the point you’d want to not only focus your own development around them, but also the development of your children and their children in turn? To me, the answer seems like a resounding “No, and that seems like a prime example of developmental rigidity, rather than plasticity.” Such a plan would not allow offspring to meet the demands of their unique environments particularly well. I’m not hopeful that this kind of thinking will lead to any revolutions in evolutionary theory, but I’m always willing to be proven wrong if the right data comes up.

Mistreated Children Misbehaving

None of us are conceived or born as full adults; we all need to grow and develop from single cells to fully-formed adults. Unfortunately – for the sake of development, anyway – the future world you will find yourself in is not always predictable, which makes development a tricky matter at times. While there are often regularities in the broader environment (such as the presence or absence of sunlight, for instance), not every individual will inhabit the same environment or, more precisely, the same place in their environment. Consider two adult males, one of whom is six feet tall and 230 pounds of muscle, and the other five feet tall and 110 pounds. While the dichotomy here is stark, it serves to make a simple point: if both of these males developed in a psychological manner that led them to pursue precisely the same strategies in life – in this case, say, one involving aggressive contests for access to females – it is quite likely that the weaker male will lose out to the stronger one most (if not all) of the time. As such, in order to be more-consistently adaptive, development must be something of a fluid process that helps tailor an individual’s psychology to the unique positions they find themselves in within a particular environment. Thus, if an organism is able to use some cues within their environment to predict their likely place in it in the future (in this case, whether they would grow large or small), their development could be altered to encourage their pursuit of alternate routes to eventual reproductive success.

Because pretending you’re cut out for that kind of life will only make it worse

Let’s take that initial example and adapt it to a new context: rather than trying to predict whether one will grow up weak or strong, a child is trying to predict the probability of receiving parental investment in the future. If parental investment is unlikely to be forthcoming, children may need to take a different approach to their development to help secure the needed resources on their own, sometimes requiring their undertaking risky behaviors; by contrast, those children who are likely to receive consistent investment might be relatively less-inclined to take such risky and costly matters into their own hands, as the risk vs. reward calculations don’t favor such behavior. Placed in an understandable analogy, a child who estimates they won’t be receiving much investment from their parents might forgo a college education (and, indeed, even much of a high-school one) because they need to work to make ends meet. When you’re concerned about where your next meal is coming from there’s less time in your schedule for studying and taking out loans to not be working for four years. By contrast, the child from a richer family has the luxury of pursuing an education likely to produce greater future rewards because certain obstacles have been removed from their path.

Now obviously going to college is not something that humans have psychological adaptations for – it wasn’t a recurrent feature of our evolutionary history as a species – but there are cognitive systems we might expect to follow different developmental trajectories contingent on such estimations of one’s likely place in the environment; these could include systems judging the relative attractiveness of short- vs long-term rewards, willingness to take risks, pursuit of aggressive resolutions to conflicts, and so on. If the future is uncertain, saving for it makes less sense than taking a smaller reward in the present; if you lack social or financial support, being willing to fight to defend what little you do have might sound more appealing (as losing that little bit is more impactful when you won’t have anything left). The question of interest thus becomes, “what cues in the environment might a developing child use to determine what their future will look like?” This brings us to the current paper by Abajobir et al (2016).

One potential cue might be your experiences with maltreatment while growing up, specifically at the hands of your caregivers. Though Abajobir et al (2016) don’t make the argument I’ve been sketching out explicitly, that seems to be the direction their research takes. They seem to reason (implicitly) that parental mistreatment should be a reliable cue to the future conditions you’re liable to encounter and, accordingly, one that children could use to alter their development. For instance, abusive or neglectful parents might lead to children adopting faster life history strategies involving risk-taking, delinquency, and violence themselves (or, if they’re going the maladaptive explanatory route, the failure of parents to provide supportive environments could in some way hinder development from proceeding as it usually would, in a similar fashion to how not having enough food growing up might lead to one being shorter as an adult. I don’t know which line the authors would favor from their paper). That said, there is a healthy (and convincing) literature consistent with the hypothesis that parental behavior per se is not the cause of these developmental outcomes (Harris, 2009), but rather that it simply co-occurs with them. Specifically, abusive parents might be genetically different from non-abusive ones and those tendencies could get passed on to the children, accounting for the correlation. Alternatively, parents that maltreat their children might just happen to have children whose peer groups growing up are more prone to violence and delinquency. Both are caused by other third variables.

Your personality usually can’t be blamed on them; you’re you all on your own

Whatever the nature of that correlation, Abajobir et al (2016) sought to use parental maltreatment from ages 0 to 14 as a predictor of later delinquent behaviors in the children by age 21. To do so, they used a prospective cohort of children and their mothers visiting a hospital between 1981-83. The cohort was then tracked for substantiated cases of child maltreatment reported to government agencies up to age 14, and at age 21 the children themselves were surveyed (the mothers being surveyed at several points throughout that time). Out of the 7200 initial participants, 3800 completed the 21-year follow up. At that follow up point, the children were asked questions concerning how often they did things like get excessively drunk, use recreational drugs, break the law, lie, cheat, steal, destroy the property of others, or fail to pay their debts. The mothers were also surveyed on matters concerning their age when they got pregnant, their arrest records, marital stability, and the amount of supervision they gave their children (all of these factors, unsurprisingly, predicting whether or not people continued on in the study for its full duration).

In total, of the 512 eventual cases of reported child maltreatment, only 172 remained in the sample at the 21-year follow up. As one might expect, maternal factors like her education status, arrest record, economic status, and unstable marriages all predicted increased likelihood of eventual child maltreatment. Further, of the 3800 participants, only 161 of them met the criteria for delinquency at 21 years. All of the previous maternal factors predicted delinquency as well: mothers who were arrested, got pregnant earlier, had unstable marriages, less education, and less money tended to produce more delinquent offspring. Adjusting for the maternal factors, however, it was reported that childhood maltreatment still predicted delinquency, but only for the male children. Specifically, maltreatment in males was associated with approximately 2-to-3.5 times as much delinquency as the non-maltreated males. For female offspring, there didn’t seem to be any notable correlation.

Now, as I mentioned, there are some genetic confounds here. It seems probable that parents who maltreat their children are, in some very real sense, different than parents who do not, and those tendencies can be inherited. This also doesn’t necessarily point a causal finger directly at parents, as it is also likely that maltreatment correlates with other social factors, like the peer group a child is liable to have or the neighborhoods they grow up in. The authors also mention that it is possible their measures of delinquency might not capture whatever effects childhood maltreatment (or its correlates) have on females, and that’s the point I wanted to wrap up discussing. To really put these findings in context, we would need to understand what adaptive role these delinquent behaviors – or rather the psychological mechanisms underlying them – have. For instance, frequent recreational drug use and problems fulfilling financial obligations might both signal that the person in question favors short-term rewards over long-term ones; frequent trouble with the law or destroying other people’s property could signal something about how the individual in question competes for social status. Maltreatment does seem to predict (even if it might not cause) different developmental courses, perhaps reflecting an active adjustment of development to deal with local environmental demands.

 The kids at school will all think you’re such a badass for this one

As we reviewed in the initial example, however, the same strategies will not always work equally well for every person. Those who are physically weaker are less likely to successfully enact aggressive strategies, all else being equal, for reasons which should be clear. Accordingly, we might expect that men and women show different patterns of delinquency to the extent they face unique adaptive problems. For instance, we might expect that females who find themselves in particularly hostile environments preferentially seek out male partners capable of enacting and defending against such aggression, as males tend to be more physically formidable (which is not to say that the women themselves might not be more physically aggressive as well). Any hypothetical shifts in mating preferences like these would not be captured by the present research particularly well, but it is nice to see the authors are at least thinking about what sex differences in patterns of delinquency might exist. It would be preferable if they were asking about those differences using this kind of a functional framework from the beginning, as that’s likely to yield more profitable insights and refine what questions get asked, but it’s good to see this kind of work all the same.

References: Abajobir, A., Kisely, S., Williams, G., Strathearn, L., Clavarino, A., & Najman, J. (2016). Gender differences in delinquency at 21 years following childhood maltreatment: A birth cohort study. Personality & Individual Differences, 106, 95-103.

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.

Are Video Games Making People Sexist?

If the warnings of certain pop-culture critics are correct, there’s a harm being perpetuated against women in the form of video games, where women are portrayed as lacking agency, sexualized, or prizes to be won by male characters. The harm comes from the downstream effects of playing these games, as it would lead to players – male and female – developing beliefs about the roles and capabilities of men and women from their depictions, entrenching sexist attitudes against women and, presumably, killing women’s aspirations to be more than mere ornaments for men as readily as one kills the waves of enemies that run directly into their crosshairs in any modern shooter. It’s a very blank slate type of view of human personality; one which suggests that there’s really not a whole lot inside our heads but a mound of person-clay, waiting to be shaped by the first set of media representations we come across. This blank slate view also happens to be a wildly-implausible one lacking much in the way of empirical support.

Which would explain why my Stepford wife collection was so hard to build

The blank slate view of the human mind, or at least one of its many varieties, has apparently found itself a new name lately: cultivation theory. In the proud tradition of coming up with psychological theories that are not actually theories, cultivation theory restates an intuition: that the more one is exposed to or uses a certain type of media, the more one’s views will come to resemble what gets depicted in that medium. So, if one plays too many violent video games, say, they should be expected to turn into more violent people over time. This hasn’t happened yet, and violent content per se doesn’t seem to be the culprit of anger or aggression anyway, but it hasn’t stopped people from trying to push the idea that it could, will, or is currently happening. A similar idea mentioned in the introduction would suggest that if people are playing games in which women are depicted in certain ways – or not depicted at all – people will develop negative attitudes to them over time as they play more of these games.

What’s remarkable about these intuitions is how widely they appear to be held, or at least entertained seriously, in the absence of any real evidence that this cultivation of attitudes actually happens. Recently, the first longitudinal test of this cultivation idea was reported by Breuer et al (2015). Drawing on some data from German gamers, the researchers were able to examine how video game use and sexist attitudes changed from 2011 to 2013 among men and women. If there’s any cultivation going on, a few years ought to be long enough to detect at least some of it. The study ended up reporting on data from 824 participants (360 female), ages 14-85 (M = 38) concerning their sex, education level, frequency of game use, preference of genre of game, and sexist attitudes. The latter measure was derived from agreement on a scale from 1 to 5 concerning three questions: whether men should be responsible for major decisions in the family, whether men should take on leadership roles in mixed-sex groups, and whether women should take care of the home, even if both partners are wage earners.

Before getting into the relationships between video game use and sexist attitudes, I would like to note at the outset a bit of news which should be good for almost everyone: sexist attitudes were quite low, with each question garnering an average agreement of about 1.8. As the scale is anchored from “strongly disagree” to “agree completely”, these scores would indicate that the sexist statements were met with rather palpable disagreement on the whole. There was a modest negative correlation between education and acceptance of those views, as well as a small, and male-specific, negative correlation with age. In other words, those who disagreed with those statements the least tended to be modestly less educated and, if they were male, younger. The questions of the day, though, are whether those people who play more video games are more accepting of such attitudes and whether that relationship grows larger over time.

Damn you, Call of Duty! This is all your fault!

As it turns out, no; they are not. In 2011, the regression coefficients for video game use and sexist attitudes were .04 and .06 for women and men, respectively (in 2013, these numbers were -.08 and -.07). Over time, not much changed: the female association between video game use in 2011 and sexist attitudes in 2013 was .12, while the male association was -.08. If video games were making people more accepting of sexism, it wasn’t showing up here. The analysis was attempted again, this time taking into account specific genres of gaming, including role-playing, action, and first-person shooters; genres in which women are thought to be particularly underrepresented or represented in sexist fashions (full disclosure: I don’t know what a sexist depiction of a woman in a game is supposed to look like, though it seems to be an umbrella term for a lot of different things from presence vs absence, to sexualization, to having women get kidnapped, none of which strike me as sexist, in the strict sense of the word. Instead, it seems to be a term that stands in for some personal distaste on the part of the person doing the assessment). However, considerations of specific genres yielded no notable associations between gaming and endorsement of the sexist statements either, which would seem to leave the cultivation theory dead in the water.
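For those who want to see roughly what that kind of longitudinal test looks like in practice, here’s a bare-bones sketch with fake data (my own guess at a generic specification; the authors’ actual models may well differ): regress attitudes at the second wave on game use at the first wave while controlling for attitudes at the first wave, so any remaining association would reflect change over time rather than a pre-existing difference.

```python
# Generic two-wave "cultivation" test on simulated data (not the authors' model).
# Here the data are built so gaming has no effect on later attitudes at all.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "game_use_2011": rng.normal(size=n),
    "sexism_2011": rng.normal(size=n),
})
# Later attitudes depend only on earlier attitudes (no cultivation built in).
df["sexism_2013"] = 0.5 * df.sexism_2011 + rng.normal(size=n)

model = smf.ols("sexism_2013 ~ game_use_2011 + sexism_2011", data=df).fit()
print(model.params)  # the game_use_2011 coefficient should hover near zero
```

Small coefficients bouncing around zero – like those reported above – are exactly what such a model spits out when there’s nothing there.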

Breuer et al (2015) note that their results appear inconsistent with previous work by Stermer & Burkley (2012) that suggested a correlation exists between sexist video game exposure and endorsement of “benevolent sexism”. In that study, 61 men and 114 women were asked about the three games they played the most, rated each on a 1-7 scale concerning how much sexism was present in them (again, this term doesn’t seem to be defined in any clear fashion), and then completed the ambivalent sexism scale; a dubious measure I have touched upon before. The results reported by Stermer & Burkley (2012) found participants reporting a very small amount of perceived sexism in their favorite games (M = 1.87 for men and 1.54 for women) and, replicating past work, also found no difference in endorsement of benevolent sexism between men and women on average, nor among those who played games they perceived to be sexist and those who did not, though men who perceived more sexism in their games endorsed the benevolent items relatively more (β = 0.21). Finally, it’s worth noting there was no connection between the hostile sexism score and video game playing. One issue one might raise about this design concerns asking people explicitly about whether their leisure time activities are sexist and then immediately asking them about how much they value women and feel they should be protected. People might be right to begin thinking about how experimental demand characteristics could be affecting the results at that point.

Tell me about how much you hate women and why that’s due to video games

So is there much room to worry about when it comes to video games turning people into sexists? According to the present results, I would say probably not. Not only was the connection between sexism and video game playing small to the point of nonexistence in the larger, longitudinal sample, but the overall endorsement and perception of sexism in these samples is close to a floor effect. Rather than shaping our psychology in appreciable ways, a more likely hypothesis is that various types of media – from video games to movies and beyond – reflect aspects of it. To use a simple example, men aren’t drawn to being soldiers because of video games, but video games reflect the fact that most soldiers are men. For whatever reason, this hypothesis appears to receive considerably less attention (perhaps because it makes for a less exciting moral panic?). When it comes to video games, certain features of our psychology might be easier to translate into compelling game play, leading to certain aspects more typical of men’s psychology being more heavily represented. In that sense, it would be rather strange to say that women are underrepresented in gaming, as one needs a reference point for what appropriate representation would mean and, as far as I can tell, that part is largely absent; kind of like how most research on stereotypes begins by assuming that they’re entirely false.

References: Breuer, J., Kowert, R., Festl, R., & Quandt, T. (2015). Sexist games = sexist gamers? A longitudinal study on the relationship between video game use and sexist attitudes. Cyberpsychology, Behavior, & Social Networking, 18, 1-6.

Stermer, P. & Burkley, M. (2012). SeX-Box: Exposure to sexist video games predicts benevolent sexism. Psychology of Popular Media Culture, 4, 47-56.

I Reject Your Fantasy And Substitute My Own

I don’t think it’s a stretch to make the following generalization: people want to feel good about themselves. Unfortunately for all of us, our value to other people tends to be based on what we offer them and, since our happiness as a social species tends to be tethered to how valuable we are perceived to be by others, being happy can be more of a chore than we would prefer. These valuable things need not be material; we could offer things like friendship or physical attractiveness, pretty much anything that helps fill a preference or need others have. Adding to the list of misfortunes we must suffer in the pursuit of happiness, other people in the world also offer valuable things to the people we hope to impress. This means that, in order to be valuable to others, we need to be particularly good at offering things to other people: either by being better at providing something than most people are, or by providing something relatively unique that others typically don’t. If we cannot match the contributions of others, then people will not like to spend time with us and we will become sad; a terrible fate indeed. One way to avoid that undesirable outcome, then, is to increase your level of competition to become more valuable to other people: make yourself into the type of person others find valuable. Another popular route, which is compatible with the first, is to condemn people who are successful, or the promotion of images of successful people. If there’s less competition around, then our relative ability becomes more valuable. On that note, Barbie is back in the news again.

“Finally; a new doll for my old one to tease for not meeting her standards!”

The Lammily doll has been making the rounds on various social media sites, marketed as the average Barbie, with the tag line: “average is beautiful”. Lammily is supposed to be proportioned so as to represent the average body of a 19-year-old woman. She also comes complete with stickers for young girls to attach to her body in order to give her acne, scars, cellulite, and stretch marks. The idea here seems to be that if young girls see a more average-looking doll, they will compare themselves less negatively to it and, hopefully, end up feeling better about their body. Future incarnations of the doll are hoped to include diverse body types, races, and I presume other features upon which people vary (just in case the average doll ends up being too alienating or high-achieving, I think). If this doll is preferred by girls to Barbie, then by all means I’m not going to tell them they shouldn’t enjoy it. I certainly don’t discourage the making of this doll or others like it. I just get the sense that the doll will end up primarily making parents feel better by giving them the sense they’re accomplishing something they aren’t, rather than affecting their children’s perceptions.

As an initial note, I will say that I find it rather strange that the creator of the doll stated: “By making a doll real I feel attention is taken away from the body and to what the doll actually does.” The reason I find that strange is because the doll does not, as far as I can see, come with a number of different accessories that make it do different things. In fact, if Lammily does anything, I’m not sure what that anything is, as it’s never mentioned. The only accessories I see are the aforementioned stickers to make her look different. Indeed, the whole marketing of the doll focuses on how it looks, not what it does. For a doll ostensibly attempting to take attention away from the body, its body seems to be its only selling point.

The main idea, rather, as far as I can tell, is to try and remove the possible intrasexual competition over appearance that women might feel when confronted with a skinny, attractive, makeup-clad figure. So, by making the doll less attractive with scar stickers, girls will feel less competition to look better. There are a number of facets of the doll’s marketing that support this interpretation, one of which is the tag line. Saying that “average is beautiful” is, from a statistical standpoint, kind of strange; it’s a bit like saying “average is tall” or “average is smart”. These descriptors are all relative terms – typically ones that apply to the upper end of some distribution – so applying them to more people would imply that people don’t differ as much on the trait in question. The second point to make about the tag line is that I’m fairly certain, if you asked him, the creator of the Lammily doll – Nickolay Lamm – would not tell you he meant to imply that women who are above or below some average are not beautiful; instead, you’d probably get some sentiment to the effect that everyone is attractive and unique in their own special way, further obscuring the usefulness of the label. Finally, if the idea is to “take attention away from the body”, then selling the doll on the basis of its natural beauty is kind of strange.

So does Barbie have a lot to answer for culturally, and is Lammily that answer? Let’s consider some evidence examining whether Barbie dolls are actually doing harm to young girls in the first place and, if they are, whether that harm might be mitigated via the introduction of more-proportionate figures.

“If only she wasn’t as thin, this never would have happened”

One 2006 paper (Dittmar, Halliwell, & Ive, 2006) concludes that the answer is “yes” to both of those questions, though I have my doubts. In their paper, the researchers exposed 162 girls between the ages of 5 and 8 to one of three picture books. These books contained a few images of either Barbie (who would wear a US dress size 2) or Emme (a size 16) dolls engaged in some clothing shopping; there was also a control book that did not draw attention to bodies. The girls were then asked questions about how they looked, how they wanted to look, and how they hoped to look when they grew up. After 15 minutes of exposure to these books, there were some changes in the girls’ apparent satisfaction with their bodies. In general, the girls exposed to Barbie tended to want to be thinner than those exposed to the Emme dolls. By contrast, those exposed to Emme didn’t want to be thinner than those exposed to no body images at all. In order to get a sense for what was going on, however, those effects require some qualifications.

For starters, when measuring the difference between a girl’s perception of her current body and her current ideal body, exposure to Barbie only made the younger children want to be thinner. This includes the girls in the 5 – 7.5 age range, but not the girls in the 7.5 – 8.5 range. Further, when examining what the girls’ ideal adult bodies would be, Barbie had no effect on the youngest girls (5 – 6.5) or the oldest ones (7.5 – 8.5). In fact, for the older girls, exposure to the Emme doll seemed to make them want to be thinner as adults (the authors suggest this might be because Emme represents a real, potential outcome the girls are seeking to avoid). So these effects are kind of all over the place, and it is worth noting that they, like many effects in psychology, are modest in size. Barbie exposure, for instance, reduced the girls’ “body esteem” (a summed measure of six questions about how the girls felt about their bodies, each answered on a 1-to-3 scale, with 1 being bad, 2 neutral, and 3 good) from a mean of 14.96 in the control condition to 14.45. To put that in perspective, exposure to Barbie shifted the average girl’s answer to one of those six questions by about half a point, compared to the control group.
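To make that scale arithmetic concrete, here is a minimal sketch of the calculation, assuming only what the scale description above implies (six items scored 1–3, so sums can range from 6 to 18):

```python
# A minimal sketch of the "body esteem" arithmetic from Dittmar et al. (2006).
# Six items, each scored 1 (bad), 2 (neutral), or 3 (good), are summed,
# so the total can range from 6 to 18.
control_mean, barbie_mean = 14.96, 14.45   # group means reported in the paper
scale_min, scale_max = 6, 18

difference = control_mean - barbie_mean                  # 0.51 points
share_of_range = difference / (scale_max - scale_min)    # ~4% of the possible range
print(f"Mean difference: {difference:.2f} points, "
      f"or about {share_of_range:.1%} of the scale's range")
```

In other words, the group difference amounts to roughly 4% of the total distance the scale can move, which is what I mean by calling the effect modest.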

Taking these effects at face value, though, my larger concerns with the paper involve a number of things it does not do. First, it doesn’t show that these effects are Barbie-specific. By that I don’t mean that they didn’t compare Barbie against another doll – they did – but rather that they didn’t compare Barbie against, say, attractive (or thin) adult human women. The authors credit Barbie with some kind of iconic status that is likely playing an important role in determining girls’ later ideals of beauty (as opposed to Barbie temporarily, but not lastingly, modifying their satisfaction), but they don’t demonstrate it. On that point, it’s important to note what the authors are suggesting about Barbie’s effects: that Barbies lead to lasting changes in perceptions and ideals, and that the older girls weren’t being affected by exposure to Barbies because they have already ”…internalized [a thin body ideal] as part of their developing self-concept” by that point.

At least you got all that self-deprecation out of the way early

An interesting idea, to be sure. However, it should make the following prediction: adult women exposed to thin or attractive members of the same sex shouldn’t have their body satisfaction affected, as they have already “internalized a thin ideal”. Yet this is not what one of the meta-analysis papers cited by the authors themselves finds (Groesz, Levine, & Murnen, 2002). Instead, adult women faced with thin models feel less satisfied with their bodies relative to when they view average or above-average weight models. This is inconsistent with the idea that some thin beauty standard has been internalized by age 8. Both sets of data, however, are consistent with the idea that exposure to an attractive competitor might reduce body satisfaction temporarily, as the competitor will be perceived to be more attractive by other people. In much the same way, I might feel bad about my skill at playing music when I see someone much better at the task than I am. I would be dissatisfied because, as I mentioned initially, my value to others depends on who else happens to offer what I do: if they’re better at it, my relative value decreases. A little dissatisfaction, then, either pushes me to improve my skill or to find a new domain in which I can compete more effectively. The disappointment might be painful to experience, but it is useful for guiding behavior. If the older girls had simply stopped viewing Barbie as competition, perhaps because they had moved on to new stages in their development, this would also explain why Barbie had no effect on them. The older girls might simply have grown out of competing with Barbie.

Another issue with the paper is that the experiment used line drawings of body shapes, rather than pictures of actual human bodies, to determine which body girls think they have and which body they want, both now and in the future. This could be an issue, as previous research (Tovee & Cornelissen, 2001) failed to replicate the “girls want to be skinnier than men would prefer” effects – which were found using line drawings – when using actual pictures of human bodies. One potential reason for that difference in findings is that a number of features besides thinness might unintentionally co-vary in these line drawings. So some of the desire to be skinny that the girls were expressing in the 2006 experiment might have just been an artifact of the stimulus materials being used.

Additionally, Dittmar, Halliwell, & Ive (2006), somewhat confusingly, didn’t ask the girls whether or not they owned Barbies or how much exposure they had to them (though they do note that it probably would have been a useful bit of information to have). There are a number of predictions we might make about such a variable. For instance, girls exposed to Barbie more often should be expected to have a greater desire for thinness, if the authors’ account is true. Further still, we might also predict that, among girls who have lots of experience with Barbies, a temporary exposure to pictures of Barbie shouldn’t be expected to affect their perception of their ideal body much, if at all. After all, if they’re constantly around the doll, they should have, as the authors put it, already “…internalized [a thin body ideal] as part of their developing self-concept”, meaning that additional exposure might be redundant (as it was with the older girls). Since there’s no data on the matter, I can’t say much more about it.

A match made in unrealistic heaven.

So would a parent have a lasting impact on their daughter’s perception of beauty by buying her a Barbie? Probably not. The current research doesn’t demonstrate any particularly unique, important, or lasting role for Barbie in the development of children’s feelings about their bodies (though it does assume such a role). You probably won’t do any damage to your child by buying them an Emme or a Lammily either. It is unlikely that these dolls are the ones socializing children and building their expectations of the world; that’s a job larger than one doll could ever hope to accomplish. It’s more probable that features of these dolls reflect (in some cases exaggerated) aspects of our psychology concerning what is attractive, rather than create them.

A point of greater interest I wanted to end with, though, is why people felt that the problem which needed to be addressed when it came to Barbie was that she was disproportionate. What I have in mind is that Barbie has a long history of prestigious careers – over 150 of them, most of which are decidedly above average. If you want a doll that focuses on what the character does, Barbie seems to be doing fine in that regard. If we want Barbie to be an average girl, then sure, she won’t be as thin, but chances are that she doesn’t even have her Bachelor’s degree either, which would preclude her from a number of the professions she has held. She’s also unlikely to be a world-class athlete or performer. Now, yes, it is possible for people to hold those professions, while it is impossible for anyone to be proportioned as Barbie is, but neither is the average. Why is the concern over what Barbie looks like, rather than over what unrealistic career expectations she generates? My speculation is that this focus arises because, in the real world, women compete with each other more over their looks than their careers in the mating market, but I don’t have time to expand on that much more here.

It just seems peculiar to focus on one particular non-average facet of reality obsessively only to state that it doesn’t matter. If the debate over Barbie can teach us anything, it’s that physical appearance does matter; quite a bit, in fact. To try and teach people – girls or boys – otherwise might help them avoid some temporary discomfort (“Looks don’t matter; hooray!”), but it won’t give them an accurate impression of how the wider world will react to them (“Yeah, about that whole looks thing…”); a rather dangerous consequence, if you ask me.

References: Dittmar, H., Halliwell, E., & Ive, S. (2006). Does Barbie make girls want to be thin? The effect of experimental exposure to images of dolls on the body image of 5- to 8-year-old girls. Developmental Psychology, 42, 283-292.

Groesz, L., Levine, M., & Murnen, S. (2002). The effect of experimental presentation of thin media images on body satisfaction: A meta-analytic review. International Journal of Eating Disorders, 31, 1–16.

Tovee, M. & Cornelissen, P. (2001). Female and male perceptions of physical attractiveness in front-view and profile. British Journal of Psychology, 92, 391-402.

Practice Makes Better, But Not Necessarily Much Better

“But I’m not good at anything!” Well, I have good news — throw enough hours of repetition at it and you can get sort of good at anything…It took 13 years for me to get good enough to make the New York Times best-seller list. It took me probably 20,000 hours of practice to sand the edges off my sucking.” -David Wong

That quote is from one of my favorite short pieces of writing, entitled “6 Harsh Truths That Will Make You a Better Person”, which you can find linked above. The gist of the article is simple: the world (or, more precisely, the people in the world) only cares about what valuable things you provide; what is on the inside, so to speak, only matters to the extent that it makes you do useful things for others. This captures nicely some of the logic of evolutionary theory – a piece that many people seem not to appreciate – namely that evolution cannot “see” what you feel; it can only “see” what organisms do (seeing, in this sense, referring to selecting for variants that do reproductively-useful things). No matter how happy you are in life, if you aren’t reproducing, whatever genes contributed to that happiness will not see the next generation. Given that your competence at performing a task is often directly related to the value it could potentially provide for others, the natural question many people begin to consider is, “how can I get better at doing things?”

Step 1: Print fake diploma for the illusion of achievement

The typical answer to that question, as David mentions, is practice: by throwing enough hours of practice at something, people tend to get sort of good at it. The “sort of” in that last sentence is rather important, according to a recent meta-analysis. The paper – by Macnamara et al (2014) – examines the extent of that “sort of” across a variety of different studies tracking different domains of skill one might practice, as well as across a variety of different reporting styles concerning that practice. The result from the paper that will probably come as little surprise to anyone is that – as intuition might suggest – the amount of time one spends practicing does, on the whole, show a positive correlation with performance; the one that probably will come as a surprise is that the extent of that benefit explains a relatively small percentage of the variance in eventual performance between people.

Before getting into the specific results of the paper, it’s worth noting that, as a theoretical matter, there are reasons we might expect practicing a task to correlate with eventual performance even if the practicing itself has little effect: people might stop practicing things they don’t think they’re very good at doing. Let’s say I wanted to get myself as close as possible to whatever the chess equivalent of a rockstar happens to be. After playing the game for around a month, I find that, despite my best efforts, I seem to be losing; a lot. While it is true that more practice playing chess might indeed improve my performance to some degree, I might rightly conclude that investing the required time really won’t end up being worth the payoff. Spending 10,000 hours of practice to go from a 15% win rate to a 25% win rate won’t net me any chess groupies. If my time spent practicing chess is, all things considered, a bad investment, not investing any more time in it than I already had would be a rather useful thing to do, even if it might lead to some gains. The idea that one ought to persist at the task despite the improbable nature of a positive outcome (“if at first you don’t succeed, try, try again”) is as optimistic as it is wasteful. That’s not a call to give up on doing things altogether, of course; just a recognition that my time might be profitably invested in other domains with better payoffs. Then again, I majored in psychology instead of computer science or finance, so maybe I’m not the best person to be telling anyone else about profitable payoffs…

In any case, turning to the meat of the paper, the authors began by locating around 9,300 articles that might have been relevant to their analysis. As it turns out, only 88 of them met the relevant inclusion criteria: (1) an estimate of the number of hours spent practicing, (2) a measure of performance level, (3) an effect size for the relationship between those first two things, (4) written in English, and (5) conducted on humans. These 88 studies contained 111 samples, 157 effect sizes, and approximately 11,000 participants. Of those 157 correlations, 147 were positive: in the overwhelming majority of papers, more hours of practice went along with better performance. The average correlation between hours of practice and performance was 0.35. This means that, overall, deliberate practice explained around 12% of the variance in performance. Throw enough hours of repetition at something and you can get sort of better at it. Well, somewhat slightly better at it, anyway…sometimes…
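If you want to see where that 12% figure comes from, squaring a correlation gives the proportion of variance one variable statistically accounts for in the other; a minimal sketch of the arithmetic:

```python
# Converting a correlation coefficient (r) into variance explained (r squared),
# using the average correlation reported in the meta-analysis.
r = 0.35
variance_explained = r ** 2   # 0.1225
print(f"r = {r} -> roughly {variance_explained:.0%} of the variance in performance")
```

The same conversion applies to the domain-specific breakdowns discussed below, which the authors report directly as percentages of variance explained.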

At least it only took a few decades of practice to realize that mediocrity

The average correlation doesn’t give a full sense of the picture, as averages often don’t. Macnamara et al (2014) first broke the analysis down by domain, as practicing certain tasks might yield greater improvement than practicing others. The largest gains were seen in the realm of games, where hours of practice could explain around a fourth of the variance in performance. From there, the percentages decreased to 21% in music, 18% in sports, 4% in education, and less than 1% in professions. Further, as one might also expect, practice showed the greatest effect when the tasks were classified as highly predictable (24% of the variance), followed by moderately (12%) and poorly predictable (4%). If you don’t know what to expect, it’s awfully difficult to know what or how to practice to achieve a good outcome. Then again, even if you do know what to expect, it still seems hard to achieve those outcomes.

Somewhat troubling, however, was that the type of reporting about practicing seemed to have a sizable effect as well: reports that relied on retrospective interviews (i.e. “how often would you say you have practiced over the last X weeks/months/years”) tended to show larger effects; around 20% of the variance explained. When the method was a retrospective questionnaire, rather than an interview, this dropped to 12%. For the studies that actually involved keeping daily logs of practice, this percentage dropped precipitously to a mere 5%. So it seems at least plausible that people might over-report how much time they spend practicing, especially in a face-to-face context. Further still, the extent of the relationship between practice and product depended heavily on the way performance was measured. For the studies simply using “group membership” as the measure of skill, around 26% of the variance was explained. This fell to 14% when laboratory studies alone were considered, and fell further when expert ratings (9%) or standardized measures of performance (8%) were used.

Not only might people be overestimating how much time they spend practicing a skill, then, but the increase in ability attributable to that practice appears to shrink the more fine-grained or specific the analysis gets. Now it’s worth mentioning that this analysis is not able to answer the question of how much improvement in performance is attributable to practice in some kind of absolute sense; it just deals with how much of the existing differences between people’s abilities might be attributable to differences in practice. To make that point clear, imagine a population of people who were never allowed to practice basketball at all, but were asked to play the game anyway. Some people would likely be better than others owing to a variety of factors (like height, speed, fine motor control, etc.), but none of that variance would be attributable to practice. That doesn’t mean people wouldn’t get better if they were allowed to practice, of course; just that none of the current variation could be chalked up to it.

And life has a habit of not being equitable

As per the initial quote, this paper suggests that deliberate practice, at least past a certain point, might have more to do with sanding the harsh edges off one’s ability than with actually carving it out. The extent of that sanding likely depends on a lot of things: interests, existing ability, working memory, general cognitive functioning, what kind of skill is in question, and so on. In short, it’s probably not simply a degree of practice that separates a non-musician from Mozart. What extensive practice seems to help with is pushing the good towards the great. As nice as it sounds to tell people that they can achieve anything they put their mind to, nice does not equal true. That said, if you have a passion for something and just wish to get better at it (and the task at hand lends itself to improvement via practice), the ability to improve performance by a few percentage points is perfectly respectable. Being slightly better at something can, on the margins, mean the difference between winning and losing (in whatever form that takes); it’s just that all the optimism and training montages in the world probably won’t take you from the middle to the top.

References: Macnamara, B., Hambrick, D., & Oswald, F. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, DOI: 10.1177/0956797614535810

What Makes Incest Morally Wrong?

There are many things that people generally view as disgusting or otherwise unpleasant. Certain shows, like Fear Factor, capitalize on those aversions, offering people rewards if they can manage to suppress those feelings to a greater degree than their competitors. Of the people who watched the show, many would probably tell you that they would be personally unwilling to engage in such behaviors; what many do not seem to say, however, is that others should not be allowed to engage in those behaviors because they are morally wrong. Fear- or disgust-inducing, yes, but not behavior explicitly punishable by others. Well, most of the time, anyway; a stunt involving drinking donkey semen apparently made the network hesitant about airing it, likely owing to the idea that some moral condemnation would follow in its wake. So what might help us understand why some disgusting behaviors – like eating live cockroaches or submerging one’s arm in spiders – are not morally condemned, while others – like incest – tend to be?

Emphasis on the “tend to be” in that last sentence.

To begin our exploration of the issue, we could examine some research on the cognitive mechanisms for incest aversion. Now, in theory, incest should be an appealing strategy from a gene’s eye perspective. This is due to the manner in which sexual reproduction works: by mating with a full sibling, your offspring would carry 75% of your genes in common by descent, rather than the 50% you’d expect if you mated with a stranger. If those hyper-related siblings in turn mated with one another, after a few generations you’d have people giving birth to infants that were essentially genetic clones. However, such inbreeding appears to carry a number of potentially harmful consequences. Without going into too much detail, here are two candidate explanations one might consider for why inbreeding isn’t a more popular strategy: first, it increases the chances that two harmful, but otherwise rare, recessive alleles will match up with one another. The result of this frequently involves all sorts of nasty developmental problems that don’t bode well for one’s fitness.
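For the curious, here is a minimal sketch of where that 75% figure comes from, assuming only the standard result that full siblings share about half their genes by descent:

```python
# A rough sketch of the relatedness arithmetic behind sibling mating.
# Half of the offspring's genome comes from you directly; the other half
# comes from your full sibling, who shares ~50% of your genes by descent.
share_from_self = 0.5                            # genes you pass on directly
sibling_relatedness = 0.5                        # full siblings share ~half their genes
share_via_sibling = 0.5 * sibling_relatedness    # the sibling's half, discounted by overlap

total_shared = share_from_self + share_via_sibling
print(f"Genes shared with the offspring by descent: {total_shared:.0%}")  # -> 75%
print("Versus 50% when the other parent is unrelated.")
```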

A second potential issue involves what is called the Red Queen hypothesis. The basic idea here is that the asexual parasites that seek to exploit their host’s body reproduce far quicker than their hosts do. A bacterium can go through thousands of generations in the time humans go through one. If we were giving birth to genetically-identical clones, then, the parasites would find themselves well-adapted to life inside their host’s offspring, and might quickly end up exploiting said offspring. The genetic variability introduced by sexual reproduction might help larger, longer-lived hosts keep up in the evolutionary race against their parasites. Though there may well be other viable hypotheses concerning why inbreeding is avoided in many species, the take-home point for our current purposes is that organisms often appear as if they are designed to avoid breeding with close relatives. This poses many species with a problem they need to solve, however: how do you know who your close kin are? Barring some effective spatial dispersion, organisms will need some proximate cues that help them differentiate between their kin and non-kin so as to determine which others are their best bets for reproductive success.

We’ll start with perhaps the most well-known of the research on incest avoidance in humans. The Westermarck effect refers to the idea that humans appear to become sexually disinterested in those with whom they spent most of their early life. The logic of this effect goes (roughly) as follows: your mother is likely to be investing heavily in you when you’re an infant, in no small part owing to the fact that she needs to breastfeed you (prior to the advent of alternative technologies). Since those who spend a lot of time around you and your mother are more likely to be kin than those who spend less time in your proximity, that degree of proximity ought, in turn, to generate a kinship index that produces disinterest in sexual experiences with such individuals. While such an effect doesn’t lend itself nicely to controlled experiments, there are some natural contexts that can be examined as pseudo-experiments. One of these was the Israeli Kibbutz, where children were predominantly raised in similarly-aged, mixed-sex peer groups. Of the approximately 3000 children that were examined from these Kibbutzim, there were only 14 cases of marriage between individuals from the same group, and almost all of them were between people introduced to the group after the age of 6 (Shepher, 1971).

Which is probably why this seemed like a good idea.

The effect of being raised in such a context didn’t appear to provide all the cues required to trigger the full suite of incest aversion mechanisms, however, as evidenced by some follow-up research by Shor & Simchai (2009). The pair carried out interviews with 60 of the members of the Kibbutzim to examine the feelings that these members had towards each other. A little more than half of the sample reported having had either moderate or strong attractions towards other members of their cohort at some point; almost all the rest reported sexual indifference, as opposed to the typical kind of aversion or disgust people report in response to questions about sexual attraction towards their blood siblings. This finding, while interesting, needs to be considered in light of the fact that almost no sexual interactions occurred between members of the same peer group, and that there did not appear to be any strong moral prohibition against such behavior.

Something like a Westermarck effect might explain why people weren’t terribly inclined to have intercourse with their own kin, but it would not explain why people think that others having sex with close kin is morally wrong. Moral condemnation is not required for guiding one’s own behavior; it appears more suited to attempting to guide the behavior of others. When it comes to incest, a likely other whose behavior one might wish to guide would be one’s close kin. This is what led Lieberman et al (2003) to derive some predictions about what factors might drive people’s moral attitudes about incest: the presence of others who are liable to be your close kin, especially if those kin are of the opposite sex. If duration of co-residence during infancy is used as a proximate input cue for determining kinship, then that duration might also be used as an input condition for determining one’s moral views about the acceptability of incest. Accordingly, Lieberman et al (2003) surveyed 186 individuals about their history of co-residence with other family members and their attitudes towards how morally unacceptable incest is, along with a few other variables.

What the research uncovered was that duration of co-residence with an opposite-sex sibling predicted the subjects’ moral judgments concerning incest. For women, the total years of co-residence with a brother correlated with judgments of the wrongness of incest at about r = 0.23, and that held whether the period from 0 to 10 or 0 to 18 years was under investigation; for men with a sister, a slightly higher correlation emerged from 0 to 10 years (r = 0.29), and an even larger correlation was observed when the period was expanded to age 18 (r = 0.40). Further, such effects remained largely static even after the number of siblings, parental attitudes, sexual orientation, and the actual degree of relatedness between those individuals were controlled for. None of those factors managed to uniquely predict moral attitudes towards incest once duration of co-residence was controlled for, suggesting that it was the duration of co-residence itself driving these effects on moral judgments. So why did this effect not appear to show up in the case of the Kibbutz?

Perhaps the driving cues were too distracted?

If the cues to kinship are somewhat incomplete – as they likely were in the Kibbutz – then we ought to expect moral condemnation of such relationships to be incomplete as well.  Unfortunately, there doesn’t exist much good data on that point that I am aware of, but, on the basis of Shor & Simchai’s (2009) account, there was no condemnation of such relationships in the Kibbutz that rivaled the kind seen in the case of actual families. What their account does suggest is that more cohesive groups experienced less sexual interest in their peers; a finding that dovetails with the results from Lieberman et al (2003): cohesive groups might well have spent more time together, resulting in less sexual attraction due to greater degrees of co-residence. Despite Shor & Simchai’s suggestion to the contrary, their results appear to be consistent with a Westermarck kind of effect, albeit an incomplete one. Though the duration of co-residence clearly seems to matter, the precise way in which it matters likely involves more than a single cue to kinship. What connection might exist between moral condemnation and active aversion to the idea of intercourse with those one grew up around is a matter I leave to you.

References: Lieberman, D., Tooby, J., & Cosmides, L. (2003). Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest. Proceedings of the Royal Society of London B, 270, 819-826.

Shepher, J. (1971). Mate Selection Among Second Generation Kibbutz Adolescents and Adults: Incest Avoidance and Negative Imprinting. Archives of Sexual Behavior, 1, 293-307.

Shor, E. & Simchai, D. (2009). Incest Avoidance, the Incest Taboo, and Social Cohesion: Revisiting Westermarck and the Case of the Israeli Kibbutzim. American Journal of Sociology, 114, 1803-1846.

The Enemy Of My Dissimilar-Other Isn’t My Enemy

Some time ago, I alluded to a very real moral problem: observed behavior, on its own, does not necessarily give you much insight into the moral value of the action. While people can generally agree in the abstract that killing is morally wrong, there appear to be some unspoken assumptions that go into such a thought. Without those additional assumptions, there would be no understanding why killing in self-defense is frequently morally excused or occasionally even praised, despite the general prohibition. In short: when “bad” things happen to “bad” people, that is often assessed as a “good” state of affairs. The reference point for statements like “killing is wrong”, then, seems to be that killing is bad, given that it has happened to someone who was undeserving. Similarly, while most of us would balk at the idea of forcibly removing someone from their home and confining them against their will in small rooms within dangerous areas, we also would not advocate for people to stop being arrested and jailed, despite the latter being a fairly accurate description of the former.

It’s a travesty and all, but it makes for really good TV.

Figuring out the various contextual factors affecting our judgments concerning who does or does not deserve blame and punishment helps keep researchers like me busy (preferably in a paying context, fun as recreational arguing can be. A big wink to the NSF). Some new research on that front comes to us from Hamlin et al (2013), who were examining preverbal children’s responses to harm-doing and help-giving. Given that these young children aren’t very keen on filling out surveys, researchers need alternative methods of determining what’s going on inside their minds. Towards that end, Hamlin et al (2013) settled on an infant-choice style of task: when infants are presented with a choice between items, the one they select is thought to correlate with the child’s liking of, or preference for, that item. Accordingly, if these items are puppets that infants perceive as acting, then their selections ought to be a decent – if less-than-precise – index of whether the infants approve or disapprove of the actions the puppets took.

In the first stage of the experiment, 9- and 14-month-old children were given a choice between green beans and graham crackers (somewhat surprisingly, appreciable percentages of the children chose the green beans). Once a child had made their choice, they then observed two puppets trying each of the foods: one puppet was shown to like the food the child picked and dislike the unselected item, while the second puppet liked and disliked the opposite foods. In the next stage, the child observed one of the two puppets playing with a ball. This ball was being bounced off a wall and eventually ended up, by accident, next to one of two puppet dogs. The dog with the ball either took it and ran away (harming) or picked up the ball and brought it back (helping). Finally, children were given a choice between the two dog puppets.

Which dog puppet the infant preferred depended on the expressed food preferences of the first puppet: if the puppet had expressed the same food preferences as the child, then the child preferred the helping dog (75% of the 9-month-olds and 100% of the 14-month-olds); if the puppet had expressed the opposite food preference, then the child preferred the harming dog (81% of 9-month-olds and 100% of 14-month-olds). The children seemed to overwhelmingly prefer dogs that helped those similar to themselves or did not help those who were dissimilar. This finding potentially echoes the problem I raised at the beginning of this post: whether an act is deemed morally wrong depends, in part, on the person towards whom the act is directed. It’s not that children universally preferred puppets who were harmful or helpful; the target of that harm or help matters. It would seem that, in the case of children at least, something as trivial as food preferences is apparently capable of generating a dramatic shift in perceptions concerning what behavior is acceptable.

In her defense, she did say she didn’t want broccoli…

The effect was then mostly replicated in a second experiment. The setup remained largely the same, with the addition of a neutral dog puppet that did not act in any way. Again, 14-month-old children preferred the puppet that harmed the dissimilar other over the puppet that did nothing (94%), and preferred the puppet that did nothing over the puppet that helped (81%). These effects were reversed in the similar-other condition, with 75% preferring the dog that helped the similar other over the neutral dog, and 69% preferring the neutral puppet over the harmful one. The 9-month-olds did not quite show the same pattern in the second experiment, however. While none of the results went in the opposite direction of the predicted pattern, the differences that did exist generally failed to reach significance. This is in some accordance with the first experiment, where 9-month-olds exhibited the tendency to a lesser degree than the 14-month-olds.

So this is a pretty neat research paradigm. Admittedly, one needs to make certain assumptions about what was going on in the infants’ heads to make any sense of the results, but assumptions will always be required when dealing with individuals who can’t tell you much about what they’re thinking or feeling (and even with the ones who can). Assuming that the infants’ selections indicate something about their willingness to condemn or condone helpful or harmful behavior, we again return to the initial point: the same action can potentially be condemned or not, depending on the target of that action. While this might sound trivially true (as opposed to other psychological research, which is often perceived to be trivially false), it is important to bear in mind that our psychology need not be that way: we could have been designed to punish anyone who committed a particular act, regardless of target. For instance, the infants could have displayed a preference towards helping dogs, regardless of whether they were helping someone similar or dissimilar to them, or we could view murder as always wrong, even in cases of self-defense.

While such a preference might sound appealing to many people (it would be pretty nice of us to always prefer to help helpful individuals), it is important to note that such a preference might also not end up doing anything evolutionarily useful. That state of affairs would owe itself to the fact that help directed towards one individual is, essentially, help not directed at any other individual. Provided that help directed towards some people (such as individuals who do not share your preferences) is less likely to pay off in the long run than help directed towards others (such as individuals who do share your preferences), we ought to expect people to direct their investments and condemnations strategically. Unfortunately, this is where empirical matters can become complicated, as strategic interests often differ on an individual-to-individual, or even day-to-day, basis, regardless of there being some degree of overlap between broad groups within a population over time.

At least we can all come together to destroy a mutual enemy.

Finally, I see plenty of room for expanding this kind of research. In the current experiments, the infants knew nothing about the preferences of the helper or harmer dogs. Accordingly, it would be interesting to see a simple variant of the present research: it would involve children observing the preferences of the helper and harmer puppets, but not the preferences of the target of that help or harm. Would children still “approve” of the actions of the puppet with similar tastes and “disapprove” of the puppet with dissimilar tastes, regardless of what action they took, relative to a neutral puppet? While it would be ideal to have conditions in which children knew about the preferences of all the puppets involved as well, the risks of getting messy data from more complicated designs might be exacerbated in young children. Thankfully, this research need not (and should not) stick to young children.

References: Hamlin, J., Mahajan, N., Liberman, Z., & Wynn, K. (2013). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science.

Mothers And Others (With Benefits)

Understanding the existence and persistence of homosexuality in the face of its apparent reproductive fitness costs has left many evolutionary researchers scratching their heads. Though research into homosexuality has not been left wanting for hypotheses, every known hypothesis to date but one has had several major problems when it comes to accounting for the available data (and making conceptual sense). Some of them lack a developmental story; some fail to account for the twin studies; others posit benefits that just don’t seem to be there. What most of the aforementioned research shares in common, however, is its focus: male homosexuality. Female homosexuality has inspired considerably less hypothesizing, perhaps owing to the assumption, valid or not, that female sexual preferences played less of a role in determining fitness outcomes, relative to men’s. More precisely, physical arousal is required for men in order for them to engage in intercourse, whereas it is not necessarily required for women.

Not that lack of female arousal has ever been an issue for this fine specimen.

A new paper out in Evolutionary Psychology by Kuhle & Radtke (2013) takes a functional stab at explaining some female homosexual behavior. Not homosexual orientations, mind you; just some of the same-sex behavior. On this point, I would like to note that homosexual behavior isn’t what poses an evolutionary mystery, any more than other likely nonadaptive behaviors, such as masturbation, do. The mystery is why an individual would be actively averse to intercourse with members of the opposite sex; their only path to reproduction. Nevertheless, the suggestion that Kuhle & Radtke (2013) put forth is that some female same-sex sexual behavior evolved in order to recruit female alloparent support. An alloparent is an individual who provides support for an infant but is not one of that infant’s parents. A grandmother helping to raise a grandchild, then, would represent a case of alloparenting. On the subject of grandmothers, some have suggested that the reason human females reach menopause so early in their lifespan – relative to other species, which retain the potential to reproduce until right around the point they die – is that grandmother alloparenting, specifically by the maternal grandmother, was a more valuable resource at that point than direct reproduction. On the whole, alloparenting seems pretty important, so getting a hold of good resources for the task would be adaptive.

The suggestion that women might use same-sex sexual behavior to recruit female alloparental support is good, conceptually, on at least three fronts: first, it pays some mind to what is at least a potential function for a behavior. Most psychological research fails to think about function at all, much less plausible functions, and is all the worse because of it. The second positive part of this hypothesis is that it has some developmental story to go with it, making predictions about what specific events are likely to trigger the proposed adaptation and, to some extent, anyway, why they might. Finally, it is consistent with – or at least not outright falsified by – the existing data, which is more than you can say for almost all the current theories purporting to explain male homosexuality. On these conceptual grounds, I would praise the lesbian-sex-for-alloparenting model. On other grounds, both conceptual and empirical, however, I have very serious reservations.

The first of these reservations comes in the form of the source of alloparental investment. While, admittedly, I have no hard data to bear on this point (as my search for information didn’t turn up any results), I would wager it’s a good guess that a substantial share of the world’s alloparental resources come from the mother’s kin: grandparents, cousins, aunts, uncles, siblings, or even other older children. As mentioned previously, some have hypothesized that grandmothers stop reproducing, at least in part, for that end. When alloparenting is coming from the female’s relatives, it’s unlikely that much, if any, sexual behavior, same-sex or otherwise, is involved or required. Genetic relatedness is likely providing a good deal of the motivation for the altruism in these cases, so sex would be fairly unnecessary. That thought brings me neatly to my next point, and it’s one raised briefly by the authors themselves: why would the lesbian sex even be necessary in the first place?

“I’ll help mother your child so hard…”

It’s unclear to me what the same-sex behavior adds to the alloparenting equation here. This concern comes in a number of forms. The first is that it seems adaptations designed for reciprocal altruism would work here just fine: you watch my kids and I’ll watch yours. There are plenty of such relationships between same-sex individuals, regardless of whether they involve childcare or not, and those relationships seem to get on just fine without sex being involved. Sure, sexual encounters might deepen that commitment in some cases, but that’s a fact that needs explaining; not the explanation itself. How we explain it will likely have a bearing on further theoretical analysis. Sex between men and women might deepen that commitment on account of it possibly resulting in conception and all the shared responsibilities that brings. Homosexual intercourse, however, does not carry that conception risk. This means that any deepening of the social connections homosexual intercourse might bring would most likely be a byproduct of the heterosexual counterpart. In much the same way, masturbation probably feels good because the stimulation sexual intercourse provides can be successfully mimicked by one’s hand (or whatever other device the more creative among us make use of). Alternatively, it could be possible that the deepening of an emotional bond between two women as the result of a sexual encounter was directly selected for because of its role in recruiting alloparent support, but I don’t find the notion particularly likely.

A quick example should make it clear why: for a woman who currently does not have dependent children, the same-sex encounters don’t seem to offer her any real benefit. Despite this, there are many women who continue to engage in frequent to semi-frequent same-sex sexual behaviors and form deep relationships with other women (who are themselves frequently childless as well). If the deepening of the bond between two women was directly selected for in the case of homosexual sexual behavior due to the benefits that alloparents can bring, such facts would seem to be indicative of very poor design. That is to say, we should predict that women without children would be relatively uninterested in homosexual intercourse, and that the experience would not deepen their social commitment to their partner. So sure, homosexual intercourse might deepen emotional bonds between the people engaging in it, which might in turn affect how the pair behave towards one another in a number of ways. That effect, however, is likely a byproduct of mechanisms designed for heterosexual intercourse; not something that was directly selected for itself. Kuhle & Radtke (2013) do say that they’re only attempting to explain some homosexual behavior, so perhaps they might grant that some increases in emotional closeness are the byproduct of mechanisms designed for heterosexual intercourse while other increases in closeness are due to selection for alloparental concerns. While possible, such a line of reasoning can set up a scenario where the hits for the theory are counted as supportive and the misses (such as childless women engaging in same-sex sexual behaviors) are dismissed as being the product of some other factor.

On top of that concern, the entire analysis rests on the assumption that women who have engaged in sexual behavior with the mother in question ought to provide substantially better alloparental care than women who have not. This seems to be an absolutely vital prediction of the model. Curiously, that prediction is not represented in any of the 14 predictions listed in the paper. The paper also offers no empirical data bearing on this point, so whether homosexual behavior actually causes an increase in alloparental investment is in doubt. Even if we assume this point were confirmed, however, it raises another pressing question: if same-sex intercourse raises the probability or quality of alloparental investment, why would we expect, as the authors predict, that women should only adopt this homosexual behavior as a secondary strategy? More precisely, I don’t see any particularly large fitness costs to women when it comes to engaging in same-sex sexual behavior but, under this model, there would be substantial benefits. If the costs of same-sex behavior are low and the benefits high, we should see it all the time, not just when a woman is having trouble finding male investment.

“It’s been real, but men are here now so…we can still be friends?”

On the topic of male investment, the model would also seem to predict that women should be relatively inclined to abandon their female partners for male ones (as, in this theory, women’s sexual interest in other women is triggered by a lack of male interest). This is anecdotal, of course, but a fairly frequent complaint I’ve heard from lesbians or bisexual women currently involved in a relationship with a woman is that men won’t leave them alone. They don’t seem to be wanting for male romantic attention. Now maybe these women are, more or less, universally assessing these men as being unlikely or unable to invest on some level, but I have my doubts as to whether this is the case.

Finally, given these sizable hypothesized benefits and negligible costs, we ought to expect to see women competing with other women frequently in the realm of attracting same-sex sexual interest. Same-sex sexual behavior should be expected to be not only a cross-cultural universal, but fairly common as well, in much the same way that same-sex friendship is (as they’re hypothesized to serve much the same function, really). Why same-sex sexual interest would be relatively confined to a minority of the population is entirely unclear to me in terms of what is outlined in the paper. This model also doesn’t deal with why any women, let alone the vast majority of them, would appear to feel averse to homosexual intercourse. Such aversions would only cause a woman to lose out on the hypothesized alloparental benefits which, if the model is true, ought to have been substantial. Women who were not averse would have had more consistent alloparental support historically, allowing whatever genes made such attractions more likely to spread at the expense of women who eschewed them. Again, such aversions would appear to be evidence of remarkably poor design; if the lesbian-alloparents-with-benefits idea is true, that is…

References: Kuhle, B., & Radtke, S. (2013). Born both ways: The alloparenting hypothesis for sexual fluidity in women. Evolutionary Psychology, 11, 304-323. PMID: 23563096