Intergenerational Epigenetics And You

Today I wanted to cover a theoretical matter I've discussed before but apparently not on this site: the idea of epigenetic intergenerational transmission. In brief, epigenetics refers to chemical markers attached to your DNA that regulate how it's expressed without changing the underlying sequence itself. You could imagine your DNA as a book full of information; each cell in your body contains the same book. However, not every cell expresses the full genome; each cell only expresses part of it (which is why skin cells are different from muscle cells, for instance). The epigenetic portion, then, could be thought of as black tape placed over certain passages in the book so they are not read. As this tape is added or removed by environmental influences, different portions of the DNA will become active. From what I understand about how this works (which is admittedly very little at this juncture), usually these markers are not passed on to offspring from parents. The life experiences of your parents, in other words, will not be passed on to you via epigenetics. However, some people have lately been hypothesizing that these changes are not only occasionally (perhaps regularly?) passed on from parents to offspring, but might also be passed on in an adaptive fashion. In short, organisms might adapt to their environment not just through genetic factors, but also through epigenetic ones.

Who would have guessed Lamarckian evolution was still alive?

One of the examples given in the target article on the subject concerns periods of feast and famine. While rare in most first-world nations these days, these events were probably recurrent features of our evolutionary history. The example there involves the following context: during some years in early-1900s Sweden food was abundant, while during other years it was scarce. Boys who were hitting puberty just at the time of a feast season tended to have grandchildren who died six years earlier than the grandchildren of boys who had experienced a famine season during the same developmental window. The causes of death, we are told, often involved diabetes. Another case involves the children of smokers: men who smoked right before puberty tended to have children who were fatter, on average, than the children of fathers who smoked habitually but didn't start until after puberty. The speculation, in this case, is that development was in some way permanently affected by food availability (or smoking) during a critical window of development, and those developmental changes were passed on to their sons and the sons of their sons.

As I read about these examples, there were a few things that stuck out to me as rather strange. First, it seems odd that no mention was made of daughters or granddaughters in that case, whereas in the food example there wasn't any mention of the in-between male generation (they only mentioned grandfathers and grandsons there; not fathers). Perhaps there's more to the data than is let on there but – in the event that no effects were found for fathers or daughters of any kind – it is also possible that a single data set was sliced up into a number of different pieces until the researchers found something worth talking about (e.g., didn't find an effect in general? Try breaking the data down by gender and testing again). That might or might not be the case here, but as we've learned from the replication troubles in psychology, one way of increasing your false-positive rate is to divide your sample into a number of different subgroups. For the sake of this post, I'm going to assume that is not the case and treat the data as representing something real, rather than a statistical fluke.
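To see why that slicing practice is dangerous, here is a minimal simulation sketch of the multiple-testing problem (my own illustration, not anything from the studies under discussion; the sample sizes, the two-subgroup structure, and the t-tests are all assumptions made for the example):

```python
# A minimal sketch of subgroup slicing inflating false positives.
# Everything is drawn from the same null distribution, so any
# "significant" result here is spurious by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_cell, alpha = 10_000, 50, 0.05  # assumed, for illustration

hits = 0
for _ in range(n_sims):
    # Exposed vs. control groups, each containing males and females.
    exp_m, exp_f = rng.normal(size=n_per_cell), rng.normal(size=n_per_cell)
    con_m, con_f = rng.normal(size=n_per_cell), rng.normal(size=n_per_cell)

    # Test the full sample, then each sex separately; count a "finding"
    # if any one of the three tests comes up significant.
    p_all = stats.ttest_ind(np.r_[exp_m, exp_f], np.r_[con_m, con_f]).pvalue
    p_m = stats.ttest_ind(exp_m, con_m).pvalue
    p_f = stats.ttest_ind(exp_f, con_f).pvalue
    hits += min(p_all, p_m, p_f) < alpha

print(f"Realized false-positive rate: {hits / n_sims:.3f}")  # well above 0.05
```

Because the overall test is correlated with the two subgroup tests, the inflation comes out smaller than a naive 1 - 0.95^3 ≈ 14%, but it reliably exceeds the advertised 5%; every extra way of slicing the data adds another lottery ticket.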

Assuming this isn't just a false positive, there are two issues with the examples as I see them. I'm going to focus predominantly on the food example to highlight these issues: first, passing on such epigenetic changes seems maladaptive and, second, the story behind it seems implausible. Let's take the issues in turn.

To understand why this kind of intergenerational epigenetic transmission seems maladaptive, consider two hypothetical children born one year apart (in, say, the years 1900 and 1901). At the time the first child's father was hitting puberty, there was a temporary famine taking place and food was scarce; at the corresponding point for the second child's father, the famine had passed and food was abundant. According to the logic laid out, we should expect that (a) both children will have their genetic expression altered due to the epigenetic markers passed down by their parents, affecting their long-term development, and (b) the children will, in turn, pass those markers on to their own children, and their children's children (and so on).

The big Thanksgiving dinner that gave your grandson diabetes

The problems here should become apparent quickly enough. First, let's begin by assuming these epigenetic changes are adaptive: they are passed on because they are reproductively useful in helping a child develop appropriately. Specifically, a famine or feast at or around the time of puberty would need to be a reliable cue as to the type of environment their children could expect to encounter. If a child is going to face shortages of food, they might want to develop in a different manner than if they're expecting food to be abundant.

Now that sounds well and good, but in our example these two children were born just a year apart and, as such, should be expected to face (broadly) the same environment, at least with respect to food availability (since feasts and famines tend to be more global). Clearly, if the children were adopting different developmental plans in response to that feast or famine, both plans (plan A, affected by the famine, and plan B, not so affected) cannot both be adaptive. Specifically, if this epigenetic inheritance is trying to anticipate children's future conditions from those present around the time of their father's puberty, at least one of the children's developmental plans will be anticipating the wrong set of conditions. That said, both developmental plans could be wrong, and conditions could look different than either anticipated. Trying to anticipate the future conditions one will encounter over their lifespan (and over their children's and grandchildren's lifespans) using only information from the brief window of time around puberty seems like a plan doomed to failure, or at least suboptimal results.

A second problem arises because these changes are hypothesized to be intergenerational: capable of transmission across multiple generations. If that is the case, why on Earth would the researchers in this study pay any mind to the conditions the grandparents were facing around the time of puberty per se? Shouldn't we be more concerned with the conditions being faced a number of generations back, rather than the more immediate ones? To phrase this in terms of a chicken/egg problem, shouldn't the grandparents in question have inherited epigenetic markers of their own from their grandparents, and so on down the line? If that were the case, the conditions they were facing around their puberty would either be irrelevant (because they had already inherited such markers from their own parents) or would have altered the epigenetic markers as well.

If we opt for the former possibility, then studying the grandparents' puberty conditions shouldn't be too informative. However, if we opt for the latter possibility, we are again left in a bit of a theoretical bind: if the conditions faced by the grandparents altered their epigenetic markers, shouldn't those same markers also have been altered by the parents' experiences, and the grandsons' experiences as well? If they are being altered by the environment each generation, then they are poor candidates for intergenerational transmission (just as DNA that was constantly mutating would be). There is our dilemma, then: if epigenetic markers change across one's lifespan, they are unlikely candidates for transmission between generations; if epigenetic changes can be passed down across generations stably, why look at the specific period pre-puberty for the grandparents? Shouldn't we be concerned with their grandparents, and so on down the line?

“Oh no you don’t; you’re not pinning this one all on me”

Now, to be clear, a famine around the time of conception could affect development in other, more mundane ways. If a child isn't receiving adequate nutrition at the time they are growing, then it is likely certain parts of their developing body will not grow as they otherwise would. When you don't have enough calories to support your full development, trade-offs need to be made, just as, when you don't have enough money to buy everything you want at the store, you have to pass up some items to afford others. Those kinds of developmental outcomes can certainly have downstream effects on future generations through behavior, but they don't seem like the kind of changes that could be passed on the way genetic material can. The same can be said about the smoking example as well: people who smoked during critical developmental windows could do damage to their own development, which in turn impacts the quality of the offspring they produce, but that's not like genetic transmission at all. It would be no more surprising than finding out that parents exposed to radioactive waste tend to have children of a different quality than those not so exposed.

To the extent that these intergenerational changes are real and not just statistical oddities, it doesn't seem likely that they could be adaptive; they would instead likely reflect developmental errors. Basically, the matter comes down to the following question: are the environmental conditions surrounding a particular developmental window good indicators of future conditions, to the point that you'd want to not only focus your own development around them, but also the development of your children and their children in turn? To me, the answer seems like a resounding "No, and that seems like a prime example of developmental rigidity, rather than plasticity." Such a plan would not allow offspring to meet the demands of their unique environments particularly well. I'm not hopeful that this kind of thinking will lead to any revolutions in evolutionary theory, but I'm always willing to be proven wrong if the right data comes up.

Mistreated Children Misbehaving

None of us are conceived or born as full adults; we all need to grow and develop from single cells to fully-formed adults. Unfortunately – for the sake of development, anyway – the future world you will find yourself in is not always predictable, which makes development a tricky matter at times. While there are often regularities in the broader environment (such as the presence or absence of sunlight, for instance), not every individual will inhabit the same environment or, more precisely, the same place in their environment. Consider two adult males, one of whom is six feet tall and 230 pounds of muscle, the other five feet tall and 110 pounds. While the dichotomy here is stark, it serves to make a simple point: if both of these males developed in a psychological manner that led them to pursue precisely the same strategies in life – in this case, say, one involving aggressive contests for access to females – it is quite likely that the weaker male will lose out to the stronger one most (if not all) of the time. As such, in order to be more consistently adaptive, development must be something of a fluid process that helps tailor an individual's psychology to the unique position they find themselves in within a particular environment. Thus, if an organism is able to use some cues within their environment to predict their likely place in it in the future (in this case, whether they would grow large or small), their development could be altered to encourage the pursuit of alternate routes to eventual reproductive success.

Because pretending you’re cut out for that kind of life will only make it worse

Let's take that initial example and adapt it to a new context: rather than trying to predict whether one will grow up weak or strong, a child is trying to predict the probability of receiving parental investment in the future. If parental investment is unlikely to be forthcoming, children may need to take a different approach to their development to help secure the needed resources on their own, sometimes requiring that they undertake risky behaviors; by contrast, those children who are likely to receive consistent investment might be relatively less inclined to take such risky and costly matters into their own hands, as the risk-vs-reward calculations don't favor such behavior. To put this in a more familiar analogy, a child who estimates they won't be receiving much investment from their parents might forgo a college education (and, indeed, even much of a high-school one) because they need to work to make ends meet. When you're concerned about where your next meal is coming from, there's less room in your schedule for studying, and less appeal in taking out loans so as to spend four years not working. By contrast, the child from a richer family has the luxury of pursuing an education likely to produce greater future rewards because certain obstacles have been removed from their path.

Now obviously going to college is not something that humans have psychological adaptations for – it wasn't a recurrent feature of our evolutionary history as a species – but there are cognitive systems we might expect to follow different developmental trajectories contingent on such estimations of one's likely place in the environment; these could include systems judging the relative attractiveness of short- vs. long-term rewards, willingness to take risks, pursuit of aggressive resolutions to conflicts, and so on. If the future is uncertain, saving for it makes less sense than taking a smaller reward in the present; if you lack social or financial support, being willing to fight to defend what little you do have might sound more appealing (as losing that little bit is more impactful when you won't have anything left). The question of interest thus becomes, "what cues in the environment might a developing child use to determine what their future will look like?" This brings us to the current paper by Abajobir et al (2016).

One potential cue might be your experiences with maltreatment while growing up, specifically at the hands of your caregivers. Though Abajobir et al (2016) don't make the argument I've been sketching out explicitly, that seems to be the direction their research takes. They seem to reason (implicitly) that parental mistreatment should be a reliable cue to the future conditions you're liable to encounter and, accordingly, one that children could use to alter their development. For instance, abusive or neglectful parents might lead to children adopting faster life history strategies involving risk-taking, delinquency, and violence themselves (or, if they're going the maladaptive explanatory route, the failure of parents to provide supportive environments could in some way hinder development from proceeding as it usually would, in a similar fashion to how not having enough food growing up might lead to one being shorter as an adult. I don't know which line the authors would favor from their paper). That said, there is a healthy (and convincing) literature consistent with the hypothesis that parental behavior per se is not the cause of these developmental outcomes (Harris, 2009), but rather that it simply co-occurs with them. Specifically, abusive parents might be genetically different from non-abusive ones, and those tendencies could get passed on to the children, accounting for the correlation. Alternatively, parents who maltreat their children might just happen to go together with those children growing up around peer groups more prone to violence and delinquency themselves, with both being caused by some third variable.

Your personality usually can’t be blamed on them; you’re you all on your own

Whatever the nature of that correlation, Abajobir et al (2016) sought to use parental maltreatment from ages 0 to 14 as a predictor of later delinquent behaviors in the children by age 21. To do so, they used a prospective cohort of children and their mothers visiting a hospital between 1981 and 1983. The cohort was then tracked for substantiated cases of child maltreatment reported to government agencies up to age 14, and at age 21 the children themselves were surveyed (the mothers being surveyed at several points throughout that time). Of the 7,200 initial participants, 3,800 completed the 21-year follow-up. At that follow-up point, the children were asked questions concerning how often they did things like get excessively drunk, use recreational drugs, break the law, lie, cheat, steal, destroy the property of others, or fail to pay their debts. The mothers were also surveyed on matters concerning their age when they got pregnant, their arrest records, marital stability, and the amount of supervision they gave their children (all of these factors, unsurprisingly, predicting whether or not people continued on in the study for its full duration).

In total, of the 512 eventual cases of reported child maltreatment, only 172 remained in the sample at the 21-year follow-up. As one might expect, maternal factors like education status, arrest record, economic status, and unstable marriages all predicted an increased likelihood of eventual child maltreatment. Further, of the 3,800 participants, only 161 met the criteria for delinquency at 21 years. All of the previous maternal factors predicted delinquency as well: mothers who were arrested, got pregnant earlier, had unstable marriages, less education, and less money tended to produce more delinquent offspring. Adjusting for the maternal factors, however, childhood maltreatment was reported to still predict delinquency, but only for the male children. Specifically, maltreated males showed approximately 2-to-3.5 times as much delinquency as non-maltreated males. For female offspring, there didn't seem to be any notable correlation.

Now, as I mentioned, there are some genetic confounds here. It seems probable that parents who maltreat their children are, in some very real sense, different from parents who do not, and those tendencies can be inherited. This also doesn't necessarily point a causal finger directly at parents, as it is also likely that maltreatment correlates with other social factors, like the peer group a child is liable to have or the neighborhood they grow up in. The authors also mention that it is possible their measures of delinquency might not capture whatever effects childhood maltreatment (or its correlates) have on females, and that's the point I wanted to wrap up discussing. To really put these findings in context, we would need to understand what adaptive role these delinquent behaviors – or rather the psychological mechanisms underlying them – have. For instance, frequent recreational drug use and problems fulfilling financial obligations might both signal that the person in question favors short-term rewards over long-term ones; frequent trouble with the law or destroying other people's property could signal something about how the individual in question competes for social status. Maltreatment does seem to predict (even if it might not cause) different developmental courses, perhaps reflecting an active adjustment of development to deal with local environmental demands.

The kids at school will all think you’re such a badass for this one

As we reviewed in the initial example, however, the same strategies will not always work equally well for every person. Those who are physically weaker are less likely to successfully enact aggressive strategies, all else being equal, for reasons which should be clear. Accordingly, we might expect that men and women show different patterns of delinquency to the extent they face unique adaptive problems. For instance, we might expect that females who find themselves in particularly hostile environments preferentially seek out male partners capable of enacting and defending against such aggression, as males tend to be more physically formidable (which is not to say that the women themselves might not be more physically aggressive as well). Any hypothetical shifts in mating preferences like these would not be captured particularly well by the present research, but it is nice to see the authors are at least thinking about what sex differences in patterns of delinquency might exist. It would be preferable if they were asking about those differences using this kind of functional framework from the beginning, as that's likely to yield more profitable insights and refine what questions get asked, but it's good to see this kind of work all the same.

References: Abajobir, A., Kisely, S., Williams, G., Strathearn, L., Clavarino, A., & Najman, J. (2016). Gender differences in delinquency at 21 years following childhood maltreatment: A birth cohort study. Personality & Individual Differences, 106, 95-103.

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.

Are Video Games Making People Sexist?

If the warnings of certain pop-culture critics are correct, there's a harm being perpetrated against women in the form of video games, where women are portrayed as lacking agency, sexualized, or treated as prizes to be won by male characters. The harm comes from the downstream effects of playing these games, as it would lead to players – male and female – developing beliefs about the roles and capabilities of men and women from those depictions, entrenching sexist attitudes against women and, presumably, killing women's aspirations to be more than mere ornaments for men as readily as one kills the waves of enemies that run directly into one's crosshairs in any modern shooter. It's a very blank-slate type of view of human personality; one which suggests that there's really not a whole lot inside our heads but a mound of person-clay, waiting to be shaped by the first set of media representations we come across. This blank-slate view also happens to be a wildly implausible one lacking much in the way of empirical support.

Which would explain why my Stepford wife collection was so hard to build

The blank-slate view of the human mind, or at least one of its many varieties, has apparently found itself a new name lately: cultivation theory. In the proud tradition of coming up with psychological theories that are not actually theories, cultivation theory restates an intuition: that the more one is exposed to or uses a certain type of media, the more one's views will come to resemble what gets depicted in that medium. So, if one plays too many violent video games, say, they should be expected to turn into a more violent person over time. This hasn't happened yet, and violent content per se doesn't seem to be the culprit behind anger or aggression anyway, but that hasn't stopped people from trying to push the idea that it could, will, or is currently happening. A similar idea mentioned in the introduction would suggest that if people are playing games in which women are depicted in certain ways – or not depicted at all – those people will develop negative attitudes toward women over time as they play more of these games.

What's remarkable about these intuitions is how widely they appear to be held, or at least entertained seriously, in the absence of any real evidence that this cultivation of attitudes actually happens. Recently, the first longitudinal test of this cultivation idea was reported by Breuer et al (2015). Drawing on some data from German gamers, the researchers were able to examine how video game use and sexist attitudes changed from 2011 to 2013 among men and women. If there's any cultivation going on, a few years ought to be long enough to detect at least some of it. The study ended up reporting on data from 824 participants (360 female), ages 14-85 (M = 38), concerning their sex, education level, frequency of game use, preferred game genres, and sexist attitudes. The latter measure was derived from agreement on a scale from 1 to 5 with three statements: whether men should be responsible for major decisions in the family, whether men should take on leadership roles in mixed-sex groups, and whether women should take care of the home, even if both partners are wage earners.

Before getting into the relationships between video game use and sexist attitudes, I would like to note at the outset a bit of news which should be good for almost everyone: sexist attitudes were quite low, with each question garnering an average agreement of about 1.8. As the scale is anchored from "strongly disagree" to "agree completely", these scores indicate that the sexist statements were met with rather palpable disagreement on the whole. There was a modest negative correlation between education and acceptance of those views, as well as a small, male-specific, negative correlation with age. In other words, those who disagreed with those statements the least tended to be modestly less educated and, if they were male, younger. The questions of the day, though, are whether those people who play more video games are more accepting of such attitudes and whether that relationship grows larger over time.

Damn you, Call of Duty! This is all your fault!

As it turns out, no; they are not. In 2011, the regression coefficients for video game use and sexist attitudes were .04 and .06 for women and men, respectively (in 2013, these numbers were -.08 and -.07). Over time, not much changed: for women, the association between video game use in 2011 and sexist attitudes in 2013 was .12, while for men it was -.08. If video games were making people more accepting of sexism, it wasn't showing up here. The analysis was attempted again, this time taking into account specific genres of gaming, including role-playing, action, and first-person shooters; genres in which women are thought to be particularly underrepresented or represented in sexist fashions (full disclosure: I don't know what a sexist depiction of a woman in a game is supposed to look like, though it seems to be an umbrella term for a lot of different things, from presence vs. absence, to sexualization, to having women get kidnapped, none of which strike me as sexist in the strict sense of the word. Instead, it seems to be a term that stands in for some personal distaste on the part of the person doing the assessment). However, consideration of specific genres yielded no notable associations between gaming and endorsement of the sexist statements either, which would seem to leave cultivation theory dead in the water.

Breuer et al (2015) note that their results appear inconsistent with previous work by Stermer & Burkley (2012), which suggested a correlation exists between sexist video game exposure and endorsement of "benevolent sexism". In that study, 61 men and 114 women were asked about the three games they played the most, rated each on a 1-7 scale concerning how much sexism was present in them (again, this term doesn't seem to be defined in any clear fashion), and then completed the ambivalent sexism scale; a dubious measure I have touched upon before. The results reported by Stermer & Burkley (2012) found participants reporting a very small amount of perceived sexism in their favorite games (M = 1.87 for men and 1.54 for women) and, replicating past work, also found no difference in endorsement of benevolent sexism between men and women on average, nor between those who played games they perceived to be sexist and those who did not, though men who perceived more sexism in their games endorsed the benevolent items relatively more (β = 0.21). Finally, it's worth noting there was no connection between the hostile sexism score and video game playing. One issue someone might raise about this design concerns asking people explicitly about whether their leisure-time activities are sexist and then immediately asking them about how much they value women and feel they should be protected. People might be right to begin thinking about how experimental demand characteristics could be affecting the results at that point.

Tell me about how much you hate women and why that’s due to video games

So is there much room to worry about when it comes to video games turning people into sexists? According to the present results, I would say probably not. Not only was the connection between sexism and video game playing small to the point of nonexistence in the larger, longitudinal sample, but the overall endorsement and perception of sexism in these samples is close to a floor effect. Rather than shaping our psychology in appreciable ways, a more likely hypothesis is that various types of media – from video games to movies and beyond – reflect aspects of it. To use a simple example, men aren't drawn to being soldiers because of video games, but video games reflect the fact that most soldiers are men. For whatever reason, this hypothesis appears to receive considerably less attention (perhaps because it makes for a less exciting moral panic?). When it comes to video games, certain features of our psychology might be easier to translate into compelling game play, leading to certain aspects more typical of men's psychology being more heavily represented. In that sense, it would be rather strange to say that women are underrepresented in gaming, as one needs a reference point for what appropriate representation would mean and, as far as I can tell, that part is largely absent; kind of like how most research on stereotypes begins by assuming that they're entirely false.

References: Breuer, J., Kowert, R., Festl, R., & Quandt, T. (2015). Sexist games = sexist gamers? A longitudinal study on the relationship between video game use and sexist attitudes. Cyberpsychology, Behavior, & Social Networking, 18, 1-6.

Stermer, P. & Burkley, M. (2012). SeX-Box: Exposure to sexist video games predicts benevolent sexism. Psychology of Popular Media Culture, 4, 47-56.

I Reject Your Fantasy And Substitute My Own

I don't think it's a stretch to make the following generalization: people want to feel good about themselves. Unfortunately for all of us, our value to other people tends to be based on what we offer them and, since our happiness as a social species tends to be tethered to how valuable we are perceived to be by others, being happy can be more of a chore than we would prefer. These valuable things need not be material; we could offer things like friendship or physical attractiveness, pretty much anything that helps fill a preference or need others have. Adding to the list of misfortunes we must suffer in the pursuit of happiness, other people in the world also offer valuable things to the people we hope to impress. This means that, in order to be valuable to others, we need to be particularly good at offering things to other people: either by being better at providing something than most people are, or by providing something relatively unique that others typically don't. If we cannot match the contributions of others, then people will not like to spend time with us and we will become sad; a terrible fate indeed. One way to avoid that undesirable outcome, then, is to improve your own competitive standing so as to become more valuable to other people: make yourself into the type of person others find valuable. Another popular route, which is compatible with the first, is to condemn other people who are successful, or those who promote the images of successful people. If there's less competition around, then our relative ability becomes more valuable. On that note, Barbie is back in the news again.

“Finally; a new doll for my old one to tease for not meeting her standards!”

The Lammily doll has been making the rounds on various social media sites, marketed as the average Barbie, with the tag line "average is beautiful". Lammily is supposed to be proportioned so as to represent the average body of a 19-year-old woman. She also comes complete with stickers for young girls to attach to her body in order to give her acne, scars, cellulite, and stretch marks. The idea here seems to be that if young girls see a more average-looking doll, they will compare themselves less negatively to it and, hopefully, end up feeling better about their bodies. Future incarnations of the doll are hoped to include diverse body types, races, and, I presume, other features upon which people vary (just in case the average doll ends up being too alienating or high-achieving, I think). If this doll is preferred by girls to Barbie, then by all means, I'm not going to tell them they shouldn't enjoy it, and I certainly don't discourage the making of this doll or others like it. I just get the sense that the doll will end up primarily making parents feel better by giving them the sense they're accomplishing something they aren't, rather than affecting their children's perceptions.

As an initial note, I will say that I find it rather strange that the creator of the doll stated: "By making a doll real I feel attention is taken away from the body and to what the doll actually does." The reason I find that strange is that the doll does not, as far as I can see, come with any accessories that make it do different things. In fact, if Lammily does anything, I'm not sure what that anything is, as it's never mentioned. The only accessories I see are the aforementioned stickers to make her look different. Indeed, the whole marketing of the doll focuses on how it looks; not what it does. For a doll ostensibly attempting to take attention away from the body, its body seems to be its only selling point.

The main idea, rather, as far as I can tell, is to try and remove the possible intrasexual competition over appearance that women might feel when confronted with a skinny, attractive, makeup-clad figure. So, by making the doll less attractive with scar stickers, girls will feel less competition to look better. There are a number of facets of the marketing of the doll that would support this interpretation, one such point being the tag line. Saying that "average is beautiful" is, from a statistical standpoint, kind of strange; it's a bit like saying "average is tall" or "average is smart". These descriptors are all relative terms – typically ones that apply to the upper end of some distribution – so applying them to more people would imply that people don't differ as much on the trait in question. The second point to make about the tagline is that I'm fairly certain, if you asked him, the creator of the Lammily doll – Nickolay Lamm – would not tell you he meant to imply that women who are above or below some average are not beautiful; instead, you'd probably get some sentiment to the effect that everyone is attractive and unique in their own special way, further obscuring the usefulness of the label. Finally, if the idea is to "take attention away from the body", then selling the doll on the basis of its natural beauty is kind of strange.

So does Barbie have a lot to answer for culturally, and is Lammily that answer? Let's consider some evidence examining whether Barbie dolls are actually doing harm to young girls in the first place and, if they are, whether that harm might be mitigated via the introduction of more-proportionate figures.

“If only she wasn’t as thin, this never would have happened”

One 2006 paper (Dittmar, Halliwell, & Ive, 2006) concludes that the answer is "yes" to both of those questions, though I have my doubts. In their paper, the researchers exposed 162 girls between the ages of 5 and 8 to one of three picture books. These books contained a few images of either Barbie (who would be a US dress size 2) or Emme (a size 16) dolls engaged in some clothing shopping; there was also a control book that did not draw attention to bodies. The girls were then asked questions about how they looked, how they wanted to look, and how they hoped to look when they grew up. After 15 minutes of exposure to these books, there were some changes in these girls' apparent satisfaction with their bodies. In general, the girls exposed to the Barbies tended to want to be thinner than those exposed to the Emme dolls. By contrast, those exposed to Emme didn't want to be thinner than those exposed to no body images at all. In order to get a sense for what was going on, however, those effects require some qualifications.

For starters, when measuring the difference between one's perception of her current body and her current ideal body, exposure to Barbie only made the younger children want to be thinner. This includes the girls in the 5 – 7.5 age range, but not the girls in the 7.5 – 8.5 range. Further, when examining what the girls' ideal adult bodies would be, Barbie had no effect on the youngest girls (5 – 6.5) or the oldest ones (7.5 – 8.5). In fact, for the older girls, exposure to the Emme doll seemed to make them want to be thinner as adults (the authors suggesting this to be the case as Emme might represent a real, potential outcome the girls are seeking to avoid). So these effects are kind of all over the place, and it is worth noting that they, like many effects in psychology, are modest in size. Barbie exposure, for instance, reduced the girls' "body esteem" (a summed measure of six questions about how the girls felt about their bodies, each answered on a 1-to-3 scale, with 1 being bad, 2 neutral, and 3 good) from a mean of 14.96 in the control condition to 14.45. To put that in perspective, exposure to Barbie led to girls, on average, moving one response out of six by half a point on a small scale, compared to the control group.
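To make the size of that shift concrete, here is a quick back-of-the-envelope calculation (my own illustration; only the scale structure and the two means come from the paper):

```python
# Six items, each scored 1-3, so the summed "body esteem" scale runs 6 to 18.
scale_min, scale_max = 6 * 1, 6 * 3
control_mean, barbie_mean = 14.96, 14.45  # means reported in the paper

shift = control_mean - barbie_mean
print(f"Shift of {shift:.2f} points on a {scale_max - scale_min}-point range")
print(f"That is about {shift / (scale_max - scale_min):.1%} of the full range")
```

That works out to roughly 4% of the scale's full range, which is the sense in which the effect is modest.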

Taking these effects at face value, though, my larger concerns with the paper involve a number of things it does not do. First, it doesn't show that these effects are Barbie-specific. By that I don't mean that they didn't compare Barbie against another doll – they did – but rather that they didn't compare Barbie against, say, attractive (or thin) adult human women. The authors credit Barbie with some kind of iconic status that is likely playing an important role in determining girls' later ideals of beauty (as opposed to Barbie temporarily, but not lastingly, modifying their satisfaction), but they don't demonstrate it. On that point, it's important to note what the authors are suggesting about Barbie's effects: that Barbies lead to lasting changes in perceptions and ideals, and that the older girls weren't being affected by exposure to Barbies because they have already "…internalized [a thin body ideal] as part of their developing self-concept" by that point.

At least you got all that self-deprecation out of the way early

An interesting idea, to be sure. However, it should make the following prediction: adult women exposed to thin or attractive members of the same sex shouldn't have their body satisfaction affected, as they have already "internalized a thin ideal". Yet this is not what one of the meta-analysis papers cited by the authors themselves finds (Groesz, Levine, & Murnen, 2002). Instead, adult women faced with thin models feel less satisfied with their bodies relative to when they view average- or above-average-weight models. This is inconsistent with the idea that some thin beauty standard has been internalized by age 8. Both sets of data, however, are consistent with the idea that exposure to an attractive competitor might reduce body satisfaction temporarily, as the competitor will be perceived to be more attractive by other people. In much the same way, I might feel bad about my skill at playing music when I see someone much better at the task than I am. I would be dissatisfied because, as I mentioned initially, my value to others depends on who else happens to offer what I do: if they're better at it, my relative value decreases. A little dissatisfaction, then, either pushes me to improve my skill or to find a new domain in which I can compete more effectively. The disappointment might be painful to experience, but it is useful for guiding behavior. If the older girls just stopped viewing Barbie as competition, perhaps because they have moved on to new stages in their development, this would explain why Barbie had no effect on them as well. The older girls might simply have grown out of competing with Barbie.

Another issue with the paper is that the experiment used line drawings of body shapes, rather than pictures of actual human bodies, to determine which body girls think they have and which body they want, both now and in the future. This could be an issue, as previous research (Tovee & Cornelissen, 2001) failed to replicate the "girls want to be skinnier than men would prefer" effects – which were found using line drawings – when using actual pictures of human bodies. One potential reason for that difference in findings is that a number of features besides thinness might unintentionally co-vary in these line drawings. So some of the desire to be skinny that the girls were expressing in the 2006 experiment might have just been an artifact of the stimulus materials being used.

Additionally, Dittmar, Halliwell, & Ive (2006), somewhat confusingly, didn't ask the girls about whether or not they owned Barbies or how much exposure they had had to them (though they do note that it probably would have been a useful bit of information to have). There are a number of predictions we might make about such a variable. For instance, girls exposed to Barbie more often should be expected to have a greater desire for thinness, if the authors' account is true. Further still, we might also predict that, among girls who have lots of experience with Barbies, a temporary exposure to pictures of Barbie shouldn't be expected to affect their perception of their ideal body much, if at all. After all, if they're constantly around the doll, they should have, as the authors put it, already "…internalized [a thin body ideal] as part of their developing self-concept", meaning that additional exposure might be redundant (as it was with the older girls). Since there's no data on the matter, I can't say much more about it.

A match made in unrealistic heaven.

So would a parent have a lasting impact on their daughter's perception of beauty by buying her a Barbie? Probably not. The current research doesn't demonstrate any particularly unique, important, or lasting role for Barbie in the development of children's feelings about their bodies (though it does assume one). You probably won't do any damage to your child by buying them an Emme or a Lammily either. It is unlikely that these dolls are the ones socializing children and building their expectations of the world; that's a job larger than one doll could ever hope to accomplish. It's more probable that features of these dolls reflect (in some cases exaggerated) aspects of our psychology concerning what is attractive, rather than creating them.

A point of greater interest I wanted to end with, though, is why people felt that the problem which needed to be addressed when it came to Barbie was that she was disproportionate. What I have in mind is that Barbie has a long history of prestigious careers; over 150 of them, most of which are decidedly above average. If you want a doll that focuses on what the character does, Barbie seems to be doing fine in that regard. If we want Barbie to be an average girl then, sure, she won't be as thin, but chances are she won't even have her bachelor's degree either, which would preclude her from a number of the professions she has held. She's also unlikely to be a world-class athlete or performer. Now, yes, it is possible for people to hold those professions, while it is impossible for anyone to be proportioned as Barbie is, but it's certainly not the average. Why is the concern over what Barbie looks like, rather than what unrealistic career expectations she generates? My speculation is that the focus arises because, in the real world, women compete with each other more over their looks than their careers in the mating market, but I don't have time to expand on that much more here.

It just seems peculiar to focus on one particular non-average facet of reality obsessively only to state that it doesn’t matter. If the debate over Barbie can teach us anything, it’s that physical appearance does matter; quite a bit, in fact. To try and teach people – girls or boys – otherwise might help them avoid some temporary discomfort (“Looks don’t matter; hooray!”), but it won’t give them an accurate impression of how the wider world will react to them (“Yeah, about that whole looks thing…”); a rather dangerous consequence, if you ask me.

References: Dittmar, H., Halliwell, E., & Ive, S. (2006). Does Barbie make girls want to be thin? The effect of experimental exposure to images of dolls on the body image of 5- to 8-year-old girls. Developmental Psychology, 42, 283-292.

Groesz, L., Levine, M., & Murnen, S. (2002). The effect of experimental presentation of thin media images on body satisfaction: A metaanalytic review. International Journal of Eating Disorders, 31, 1–16.

Tovee, M. & Cornelissen, P. (2001). Female and male perceptions of physical attractiveness in front-view and profile. British Journal of Psychology, 92, 391-402.

Practice Makes Better, But Not Necessarily Much Better

“‘But I’m not good at anything!’ Well, I have good news — throw enough hours of repetition at it and you can get sort of good at anything… It took 13 years for me to get good enough to make the New York Times best-seller list. It took me probably 20,000 hours of practice to sand the edges off my sucking.” – David Wong

That quote is from one of my favorite short pieces of writing, entitled "6 Harsh Truths That Will Make You a Better Person", which you can find linked above. The gist of the article is simple: the world (or, more precisely, the people in the world) only care about what valuable things you provide them, and what is on the inside, so to speak, only matters to the extent that it makes you do useful things for others. This captures nicely some of the logic of evolutionary theory – a piece that many people seem not to appreciate – namely that evolution cannot "see" what you feel; it can only "see" what organisms do (seeing, in this sense, referring to selecting for variants that do reproductively-useful things). No matter how happy you are in life, if you aren't reproducing, whatever genes contributed to that happiness will not see the next generation. Given that your competence at performing a task is often directly related to the value it could potentially provide for others, the natural question many people begin to consider is, "how can I get better at doing things?"

Step 1: Print fake diploma for the illusion of achievement

The typical answer to that question, as David mentions, is practice: throw enough hours of practice at something and people tend to get sort of good at it. The "sort of" in that last sentence is rather important, according to a recent meta-analysis. The paper – by Macnamara et al (2014) – examines the extent of that "sort of" across a variety of different studies tracking different domains of skill one might practice, as well as across a variety of different reporting styles concerning that practice. The result from the paper that will probably come as little surprise to anyone is that – as intuition might suggest – the amount of time one spends practicing does, on the whole, show a positive correlation with performance; the one that probably will come as a surprise is that the extent of that benefit explains a relatively small percentage of the variance in eventual performance between people.

Before getting into the specific results of the paper, it's worth noting that, as a theoretical matter, there are reasons we might expect practicing a task to correlate with eventual performance even if the practicing itself has little effect: people might stop practicing things they don't think they're very good at doing. Let's say I wanted to get myself as close as possible to whatever the chess equivalent of a rockstar happens to be. After playing the game for around a month, I find that, despite my best efforts, I seem to be losing; a lot. While it is true that more practice playing chess might indeed improve my performance to some degree, I might rightly conclude that investing the required time really won't end up being worth the payoff. Spending 10,000 hours of practice to go from a 15% win rate to a 25% win rate won't net me any chess groupies. If my time spent practicing chess is, all things considered, a bad investment, not investing any more time in it than I already had would be a rather useful thing to do, even if practice might lead to some gains. The idea that one ought to persist at the task despite the improbable nature of a positive outcome ("if at first you don't succeed, try, try again") is as optimistic as it is wasteful. That's not a call to give up on doing things altogether, of course; just a recognition that my time might be profitably invested in other domains with better payoffs. Then again, I majored in psychology instead of computer science or finance, so maybe I'm not the best person to be telling anyone else about profitable payoffs…

In any case, turning to the meat of the paper, the authors began by locating around 9,300 articles that might have been relevant to their analysis. As it turns out, only 88 of them met the relevant inclusion criteria: (1) an estimate of the number of hours spent practicing, (2) a measure of performance level, (3) an effect size for the relationship between those first two things, (4) publication in English, and (5) human participants. These 88 studies contained 111 samples, 157 effect sizes, and approximately 11,000 participants. Of those 157 correlations, 147 were positive: in an overwhelming majority of the papers, performance tended to increase as hours of practice increased. The average correlation between hours of practice and performance was 0.35. This means that, overall, deliberate practice explained around 12% of the variance in performance. Throw enough hours of repetition at something and you can get sort of better at it. Well, somewhat slightly better at it, anyway…sometimes…
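In case the jump from a correlation of 0.35 to "around 12% of the variance" seems opaque, that conversion is just the square of the correlation coefficient; a one-line sanity check (the 0.35 is the meta-analytic average reported above):

```python
# Proportion of variance explained is the squared correlation coefficient.
r = 0.35  # average practice-performance correlation from the meta-analysis
print(f"Variance explained: r^2 = {r**2:.3f} (~{r**2:.0%})")  # 0.123, i.e., ~12%
```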

At least it only took a few decades of practice to realize that mediocrity

The average correlation doesn't give a full sense of the picture, as many averages do not. Macnamara et al (2014) first began to break the analysis down by domain, as practicing certain tasks might yield greater improvement than others. The largest gains were seen in the realm of games, where hours of practice could explain around a fourth of the variance in performance. From there, the percentages decreased to 21% in music, 18% in sports, 4% in education, and less than 1% in professions. Further, as one might also expect, practice showed the greatest effect when the tasks were classified as highly predictable (24% of the variance), followed by moderately (12%) and poorly predictable (4%) ones. If you don't know what to expect, it's awfully difficult to know what or how to practice to achieve a good outcome. Then again, even if you do know what to expect, it still seems hard to achieve those outcomes.

Somewhat troubling, however, was that the type of reporting about practicing seemed to have a sizable effect as well: reports that relied on retrospective interviews (i.e. “how often would you say you have practiced over the last X weeks/months/years”) tended to show larger effects; around 20% of the variance explained. When the method was a retrospective questionnaire, rather than an interview, this dropped to 12%. For the studies that actually involved keeping daily logs of practice, this percentage dropped precipitously to a mere 5%. So it seems at least plausible that people might over-report how much time they spend practicing, especially in a face-to-face context. Further still, the extent of the relationship between practice and product depended heavily on the way performance was measured. For the studies simply using “group membership” as the measure of skill, around 26% of the variance was explained. This fell to 14% when laboratory studies alone were considered, and fell further when expert ratings (9%) or standardized measures of performance (8%) were used.

Not only might people be overestimating how much time they spend practicing a skill, then, but the increase in ability attributable to that practice appears to shrink the more fine-grained or specific an analysis gets. Now it's worth mentioning that this analysis is not able to answer the question of how much improvement in performance is attributable to practice in some kind of absolute sense; it just deals with how much of the existing differences between people's abilities might be attributable to differences in practice. To make that point clear, imagine a population of people who were never allowed to practice basketball at all, but were asked to play the game anyway. Some people will likely be better than others owing to a variety of factors (like height, speed, fine motor control, etc.), but none of that variance would be attributable to practice. It doesn't mean that people wouldn't get better if they were allowed to practice, of course; just that none of the current variation could be chalked up to it.

And life has a habit of not being equitable

As per the initial quote, this paper suggests that deliberate practice, at least past a certain point, might have more to do with sanding the harsh edges off one's ability than with actually carving it out. The extent of that sanding likely depends on a lot of things: interests, existing ability, working memory, general cognitive functioning, what kind of skill is in question, and so on. In short, it's probably not simply a degree of practice that separates a non-musician from Mozart. What extensive practice can help with seems to be more a matter of pushing the good towards the great. As nice as it sounds to tell people that they can achieve anything they put their mind to, nice does not equal true. That said, if you have a passion for something and just wish to get better at it (and the task at hand lends itself to improvement via practice), the ability to improve performance by a few percentage points is perfectly respectable. Being slightly better at something can, on the margins, mean the difference between winning and losing (in whatever form that takes); it's just that all the optimism and training montages in the world probably won't take you from the middle to the top.

References: Macnamara, B., Hambrick, D., & Oswald, F. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, DOI: 10.1177/0956797614535810

What Makes Incest Morally Wrong?

There are many things that people generally tend to view as disgusting or otherwise unpleasant. Certain shows, like Fear Factor, capitalize on those aversions, offering people rewards if they can manage to suppress those feelings to a greater degree than their competitors. Of the people who watched the show, many would probably tell you that they would be personally unwilling to engage in such behaviors; what many do not seem to say, however, is that others should not be allowed to engage in those behaviors because they are morally wrong. Fear- or disgust-inducing, yes, but not behavior explicitly punishable by others. Well, most of the time, anyway; a stunt involving drinking donkey semen apparently made the network hesitant about airing it, likely owing to the idea that some moral condemnation would follow in its wake. So what might help us understand why some disgusting behaviors – like eating live cockroaches or submerging one's arm in spiders – are not morally condemned, while others – like incest – tend to be?

Emphasis on the “tend to be” in that last sentence.

To begin our exploration of the issue, we could examine some research on the cognitive mechanisms for incest aversion. Now, in theory, incest should be an appealing strategy from a gene's-eye perspective. This is due to the manner in which sexual reproduction works: by mating with a full sibling, your offspring would carry 75% of your genes in common by descent, rather than the 50% you'd expect if you mated with a stranger. If those hyper-related siblings in turn mated with one another, after a few generations you'd have people giving birth to infants that were essentially genetic clones. However, such inbreeding appears to carry a number of potentially harmful consequences. Without going into too much detail, here are two candidate explanations one might consider for why inbreeding isn't a more popular strategy: first, it increases the chances that two harmful, but otherwise rare, recessive alleles will match up with one another. The result of this frequently involves all sorts of nasty developmental problems that don't bode well for one's fitness.
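As a quick aside on where that 75% figure comes from, here is the expected-relatedness arithmetic sketched out (my own illustration of the standard calculation, not anything from a cited paper):

```python
# Half of the offspring's genome comes from you directly; the other half
# comes from your full sibling, who shares (on average) half of your genes
# by common descent. A stranger contributes no extra overlap with you.
your_gametic_share = 0.5        # genes the offspring gets from you
sibling_relatedness = 0.5       # expected overlap between you and a full sibling

offspring_overlap = your_gametic_share + your_gametic_share * sibling_relatedness
print(offspring_overlap)  # 0.75, versus 0.50 when mating with an unrelated partner
```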

A second potential issue involves what is called the Red Queen hypothesis. The basic idea here is that the asexual parasites seeking to exploit their host’s body reproduce far quicker than their hosts do. Bacteria can go through thousands of generations in the time humans go through one. If we were giving birth to genetically-identical clones, then, the parasites would find themselves well-adapted to life inside their host’s offspring, and might quickly end up exploiting said offspring. The genetic variability introduced by sexual reproduction might help larger, longer-lived hosts keep up in the evolutionary race against their parasites. Though there may well be other viable hypotheses concerning why inbreeding is avoided in many species, the take-home point for our current purposes is that organisms often appear as if they are designed to avoid breeding with close relatives. This poses a problem that many species need to solve, however: how do you know who your close kin are? Barring some effective spatial dispersion, organisms will need some proximate cues that help them differentiate between their kin and non-kin so as to determine which others are their best bets for reproductive success.
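To make the Red Queen dynamic a bit more concrete, here’s a toy matching-alleles simulation (my own bare-bones sketch, not a model from any paper discussed here): parasites can only exploit hosts of a matching type, so parasite frequencies perpetually chase host frequencies and no host genotype stays safe for long:

```python
# Toy matching-alleles Red Queen model: two host types (A/B) and two
# parasite types (A/B); a parasite can only infect the matching host type.
s = 0.4          # fitness cost to a host of being infected
h, p = 0.6, 0.5  # starting frequencies of host type A and parasite type A

for gen in range(12):
    print(f"gen {gen:2d}: host A = {h:.2f}, parasite A = {p:.2f}")
    w_a = 1 - s * p        # host A meets its matching parasite with prob. p
    w_b = 1 - s * (1 - p)  # host B meets its matching parasite with prob. 1-p
    h_next = h * w_a / (h * w_a + (1 - h) * w_b)
    # A parasite only reproduces on its matching host type:
    p_next = p * h / (p * h + (1 - p) * (1 - h))
    h, p = h_next, p_next
# Whichever host type is common gets punished a few generations later, so
# the frequencies chase each other in slow cycles rather than settling down.
```

Sexual recombination, on this view, keeps shuffling host genotypes so that parasites are always aiming at a moving target; inbreeding throws much of that advantage away.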

We’ll start with perhaps the most well-known piece of research on incest avoidance in humans. The Westermarck effect refers to the idea that humans appear to become sexually disinterested in those with whom they spent most of their early life. The logic of this effect goes (roughly) as follows: your mother is likely to be investing heavily in you when you’re an infant, in no small part owing to the fact that she needs to breastfeed you (prior to the advent of alternative technologies). Those who spend a lot of time around you and your mother are, in turn, more likely to be kin than those who spend less time in your proximity. The degree of that proximity ought to generate some kinship index with those others, which would in turn produce disinterest in sexual experiences with them. While such an effect doesn’t lend itself nicely to controlled experiments, there are some natural contexts that can be examined as pseudo-experiments. One of these was the Israeli kibbutzim, where children were predominately raised in similarly-aged, mixed-sex peer groups. Of the approximately 3,000 children examined from these kibbutzim, there were only 14 cases of marriage between individuals from the same group, and almost all of those were between people introduced to the group after the age of 6 (Shepher, 1971).

Which is probably why this seemed like a good idea.

The effect of being raised in such a context didn’t appear to provide all the cues required to trigger the full suite of incest-aversion mechanisms, however, as evidenced by some follow-up research by Shor & Simchai (2009). The pair carried out interviews with 60 members of the kibbutzim to examine the feelings these members had towards one another. A little more than half of the sample reported having had either moderate or strong attraction towards other members of their cohort at some point; almost all the rest reported sexual indifference, as opposed to the typical aversion or disgust people report in response to questions about sexual attraction towards their blood siblings. This finding, while interesting, needs to be considered alongside two facts: almost no sexual interactions actually occurred between members of the same peer group, and there did not appear to exist any strong moral prohibition against such behavior.

Something like a Westermarck effect might explain why people aren’t terribly inclined to have intercourse with their own kin, but it would not explain why people think that others having sex with close kin is morally wrong. Moral condemnation is not required for guiding one’s own behavior; it appears more suited for attempting to guide the behavior of others. When it comes to incest, a likely other whose behavior one might wish to guide would be one’s close kin. This is what led Lieberman et al (2003) to derive some predictions about what factors might drive people’s moral attitudes about incest: the presence of others who are liable to be your close kin, especially if those kin are of the opposite sex. If duration of co-residence during infancy is used as a proximate cue for determining kinship, then that duration might also be used as an input for determining one’s moral views about the acceptability of incest. Accordingly, Lieberman et al (2003) surveyed 186 individuals about their history of co-residence with other family members and their attitudes towards how morally unacceptable incest is, along with a few other variables.

What the research uncovered was that duration of co-residence with an opposite-sex sibling predicted subjects’ moral judgments concerning incest. For women, total years of co-residence with a brother correlated with judgments of the wrongness of incest at about r = 0.23, and that held whether the period from age 0 to 10 or from 0 to 18 was under investigation; for men with a sister, a slightly higher correlation emerged from 0 to 10 years (r = 0.29), and an even larger one was observed when the period was expanded to age 18 (r = 0.40). Further, such effects remained largely static even after the number of siblings, parental attitudes, sexual orientation, and the actual degree of relatedness between those individuals were controlled for. None of those factors managed to uniquely predict moral attitudes towards incest once duration of co-residence was controlled for, suggesting that it was the duration of co-residence itself driving these effects on moral judgments. So why did this effect not appear to show up in the case of the kibbutzim?
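For readers unfamiliar with what it means for an effect to survive “controlling for” other variables, here’s a small illustration with simulated data (the numbers are invented; this shows only the logic of the analysis, not Lieberman et al’s actual data):

```python
# Toy multiple regression: does co-residence still predict wrongness
# judgments once a control variable is in the model? (Simulated data.)
import numpy as np

rng = np.random.default_rng(0)
n = 186  # same sample size as the study; everything else is invented
coresidence = rng.uniform(0, 18, n)                  # years living with a sibling
n_siblings = rng.integers(1, 5, n)                   # a control variable
wrongness = 0.3 * coresidence + rng.normal(0, 2, n)  # assumed true effect

# OLS with an intercept: wrongness ~ coresidence + n_siblings
X = np.column_stack([np.ones(n), coresidence, n_siblings])
beta, *_ = np.linalg.lstsq(X, wrongness, rcond=None)
print(f"co-residence slope (controlled): {beta[1]:.2f}")  # recovers ~0.3
print(f"number-of-siblings slope: {beta[2]:.2f}")         # ~0, as simulated
```

The pattern Lieberman et al (2003) report is the analogue of that first print line: the co-residence term keeps predicting moral judgments even with the other variables sitting in the model.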

Perhaps the driving cues were too distracted?

If the cues to kinship are somewhat incomplete – as they likely were in the kibbutzim – then we ought to expect moral condemnation of such relationships to be incomplete as well. Unfortunately, there doesn’t exist much good data on that point that I am aware of but, on the basis of Shor & Simchai’s (2009) account, there was no condemnation of such relationships in the kibbutzim that rivaled the kind seen in the case of actual families. What their account does suggest is that more cohesive groups experienced less sexual interest in their peers; a finding that dovetails with the results from Lieberman et al (2003): cohesive groups might well have spent more time together, resulting in less sexual attraction due to greater degrees of co-residence. Despite Shor & Simchai’s suggestion to the contrary, their results appear to be consistent with a Westermarck kind of effect, albeit an incomplete one. Though the duration of co-residence clearly seems to matter, the precise way in which it matters likely involves more than a single cue to kinship. What connection might exist between moral condemnation and active aversion to the idea of intercourse with those one grew up around is a matter I leave to you.

References: Lieberman, D., Tooby, J., & Cosmides, L. (2003). Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest. Proceedings of the Royal Society of London B, 270, 819-826.

Shepher, J. (1971). Mate selection among second generation kibbutz adolescents and adults: Incest avoidance and negative imprinting. Archives of Sexual Behavior, 1, 293-307.

Shor, E., & Simchai, D. (2009). Incest avoidance, the incest taboo, and social cohesion: Revisiting Westermarck and the case of the Israeli kibbutzim. American Journal of Sociology, 114, 1803-1846.

The Enemy Of My Dissimilar-Other Isn’t My Enemy

Some time ago, I alluded to a very real moral problem: observed behavior, on its own, does not necessarily give you much insight into the moral value of an action. While people can generally agree in the abstract that killing is morally wrong, there appear to be some unspoken assumptions that go into such a thought. Without those additional assumptions, there would be no way of understanding why killing in self-defense is frequently morally excused, or occasionally even praised, despite the general prohibition. In short: when “bad” things happen to “bad” people, that is often assessed as a “good” state of affairs. The reference point for statements like “killing is wrong”, then, seems to be that killing is bad, given that it has happened to someone who was undeserving. Similarly, while most of us would balk at the idea of forcibly removing someone from their home and confining them against their will in small rooms in a dangerous area, we also would not advocate for people to stop being arrested and jailed, despite the latter being a fairly accurate description of the former.

It’s a travesty and all, but it makes for really good TV.

Figuring out the various contextual factors affecting our judgments concerning who does or does not deserve blame and punishment helps keep researchers like me busy (preferably in a paying context, fun as recreational arguing can be. A big wink to the NSF). Some new research on that front comes to us from Hamlin et al (2013), who were examining preverbal children’s responses to harm-doing and help-giving. Given that these young children aren’t very keen on filling out surveys, researchers need alternative methods of determining what’s going on inside their minds. Towards that end, Hamlin et al (2013) settled on an infant-choice style of task: when infants are presented with a choice between items, the one they select is thought to correlate with the child’s liking of, or preference for, that item. Accordingly, if those items are puppets that infants perceive as agents, then their selections ought to be a decent – if less-than-precise – index of whether the infants approve or disapprove of the actions the puppets took.

In the first stage of the experiment, 9- and 14-month-old children were given a choice between green beans and graham crackers (somewhat surprisingly, appreciable percentages of the children chose the green beans). Once a child had made their choice, they then observed two puppets trying each of the foods: one puppet was shown to like the food the child picked and dislike the unselected item, while the second puppet liked and disliked the opposite foods. In the next stage, the child observed one of the two puppets playing with a ball. The ball was bounced off a wall and eventually ended up, by accident, next to one of two puppet dogs. The dog with the ball either took it and ran away (harming) or picked up the ball and brought it back (helping). Finally, children were offered a choice between the two dog puppets.

Which dog puppet the infant preferred depended on the expressed food preferences of the first puppet: if the puppet had expressed the same food preferences as the child, then the child preferred the helping dog (75% of the 9-month-olds and 100% of the 14-month-olds); if the puppet had expressed the opposite food preference, then the child preferred the harming dog (81% of 9-month-olds and 100% of 14-month-olds). The children seemed to overwhelmingly prefer dogs that helped those similar to themselves or harmed those who were dissimilar. This finding potentially echoes the problem I raised at the beginning of this post: whether an act is deemed morally wrong depends, in part, on the person towards whom the act is directed. It’s not that children universally preferred puppets that were harmful or helpful; the target of that harm or help mattered. It would seem that, in the case of children at least, something as trivial as food preferences is apparently capable of generating a dramatic shift in perceptions concerning what behavior is acceptable.

In her defense, she did say she didn’t want broccoli…

The effect was then mostly replicated in a second experiment. The setup remained largely the same, with the addition of a neutral dog puppet that did not act in any way. Again, 14-month-old children preferred the puppet that harmed the dissimilar other over the puppet that did nothing (94%), and preferred the puppet that did nothing over the puppet that helped (81%). These effects were reversed in the similar-other condition, with 75% preferring the dog that helped the similar other over the neutral dog, and 69% preferring the neutral puppet over the harmful one. The 9-month-olds did not quite show the same pattern in the second experiment, however. While none of the results went in the opposite direction to the predicted pattern, the trends that did emerge generally failed to reach significance. This accords somewhat with the first experiment, in which 9-month-olds exhibited the tendency to a lesser degree than the 14-month-olds.

So this is a pretty neat research paradigm. Admittedly, one needs to make certain assumptions about what was going on in the infants’ heads to make any sense of the results, but assumptions will always be required when dealing with individuals who can’t tell you much about what they’re thinking or feeling (and even with the ones who can). Assuming that the infants’ selections indicate something about their willingness to condemn or condone helpful or harmful behavior, we again return to the initial point: the same action can potentially be condemned or not, depending on the target of that action. While this might sound trivially true (as opposed to other psychological research, which is often perceived to be trivially false), it is important to bear in mind that our psychology need not be that way: we could have been designed to punish anyone who committed a particular act, regardless of target. For instance, the infants could have displayed a preference for helping dogs regardless of whether those dogs were helping someone similar or dissimilar to them, or we could view murder as always wrong, even in cases of self-defense.

While such a preference might sound appealing to many people (it would be pretty nice of us to always prefer to help helpful individuals), it is important to note that such a preference might also not end up doing anything evolutionarily useful. That state of affairs owes itself to the fact that help directed towards one individual is, essentially, help not directed at any other individual. Provided that help directed towards some people (such as individuals who do not share your preferences) is less likely to pay off in the long run than help directed towards others (such as individuals who do share your preferences), we ought to expect people to direct their investments and condemnations strategically. Unfortunately, this is where empirical matters can become complicated, as strategic interests often differ on an individual-to-individual, or even day-to-day, basis, regardless of there being some degree of overlap between some broad groups within a population over time.

At least we can all come together to destroy a mutual enemy.

Finally, I see plenty of room for expanding this kind of research. In the current experiments, the infants knew nothing about the preferences of the helper or harmer dogs. Accordingly, it would be interesting to see a simple variant of the present research: it would involve children observing the preferences of the helper and harmer puppets, but not the preferences of the target of that help or harm. Would children still “approve” of the actions of the puppet with similar tastes and “disapprove” of the puppet with dissimilar tastes, regardless of what action they took, relative to a neutral puppet? While it would be ideal to have conditions in which children knew about the preferences of all the puppets involved as well, the risks of getting messy data from more complicated designs might be exacerbated in young children. Thankfully, this research need not (and should not) stick to young children.

References: Hamlin, J., Mahajan, N., Liberman, Z., & Wynn, K. (2013). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science.

Mothers And Others (With Benefits)

Understanding the existence and persistence of homosexuality in the face of its apparent reproductive fitness costs has left many evolutionary researchers scratching their heads. Though research into homosexuality has not been left wanting for hypotheses, every known hypothesis to date but one has several major problems when it comes to accounting for the available data (and making conceptual sense). Some lack a developmental story; some fail to account for the twin studies; others posit benefits that just don’t seem to be there. What most of the aforementioned research shares in common, however, is its focus: male homosexuality. Female homosexuality has inspired considerably less hypothesizing, perhaps owing to the assumption, valid or not, that female sexual preferences played less of a role in determining fitness outcomes, relative to men’s. More precisely, physical arousal is required for men in order for them to engage in intercourse, whereas it is not necessarily required for women.

Not that lack of female arousal has ever been an issue for this fine specimen.

A new paper out in Evolutionary Psychology by Kuhle & Radtke (2013) takes a functional stab at explaining some female homosexual behavior. Not homosexual orientations, mind you; just some of the same-sex behavior. On this point, I would like to note that homosexual behavior isn’t what poses an evolutionary mystery, any more than other likely nonadaptive behaviors, such as masturbation, do. The mystery is why an individual would be actively averse to intercourse with members of the opposite sex; their only path to reproduction. Nevertheless, the suggestion that Kuhle & Radtke (2013) put forth is that some female same-sex sexual behavior evolved in order to recruit female alloparent support. An alloparent is an individual who provides support for an infant but is not one of that infant’s parents. A grandmother helping to raise a grandchild, then, would represent a case of alloparenting. On the subject of grandmothers, some have suggested that the reason human females reach menopause so early in their lifespan – relative to other species, which go on with the potential to reproduce until right around the point they die – is that grandmother alloparenting (specifically by the maternal grandmother) was a more valuable resource at that point, relative to direct reproduction. On the whole, alloparenting seems pretty important, so getting a hold of good resources for the task would be adaptive.

The suggestion that women might use same-sex sexual behavior to recruit female alloparental support is good, conceptually, on at least three fronts: first, it pays some mind to what is at least a potential function for a behavior. Most psychological research fails to think about function at all, much less plausible functions, and is all the worse because of it. The second positive part of this hypothesis is that it has some developmental story to go with it, making predictions about what specific events are likely to trigger the proposed adaptation and, to some extent, anyway, why they might. Finally, it is consistent with – or at least not outright falsified by – the existing data, which is more than you can say for almost all the current theories purporting to explain male homosexuality. On these conceptual grounds, I would praise the lesbian-sex-for-alloparenting model. On other grounds, both conceptual and empirical, however, I have very serious reservations.

The first of these reservations concerns the source of alloparental investment. While, admittedly, I have no hard data to bear on this point (as my search for information didn’t turn up any results), I would wager that a substantial share of the world’s alloparental resources comes from the mother’s kin: grandparents, cousins, aunts, uncles, siblings, or even other, older children. As mentioned previously, some have hypothesized that grandmothers stop reproducing, at least in part, towards that end. When alloparenting is coming from the female’s relatives, it’s unlikely that much, if any, sexual behavior, same-sex or otherwise, is involved or required. Genetic relatedness is likely providing a good deal of the motivation for the altruism in these cases, so sex would be fairly unnecessary. That thought brings me neatly to my next point, one raised briefly by the authors themselves: why would the lesbian sex even be necessary in the first place?

“I’ll help mother your child so hard…”

It’s unclear to me what the same-sex behavior adds to the alloparenting equation here. This concern comes in a number of forms. The first is that it seems adaptations designed for reciprocal altruism would work here just fine: you watch my kids and I’ll watch yours. There are plenty of such relationships between same-sex individuals, regardless of whether they involve childcare or not, and those relationships seem to get on just fine without sex being involved. Sure, sexual encounters might deepen that commitment in some cases, but that’s a fact that needs explaining; not the explanation itself. How we explain it will likely have a bearing on further theoretical analysis. Sex between men and women might deepen commitment on account of it possibly resulting in conception and all the shared responsibilities that brings. Homosexual intercourse, however, does not carry that conception risk. This means that any deepening of the social connection homosexual intercourse might bring would most likely be a byproduct of its heterosexual counterpart. In much the same way, masturbation probably feels good because the stimulation sexual intercourse provides can be successfully mimicked by one’s hand (or whatever other device the more creative among us make use of). Alternatively, it could be possible that the deepening of an emotional bond between two women as the result of a sexual encounter was directly selected for because of its role in recruiting alloparent support, but I don’t find the notion particularly likely.

A quick example should make it clear why: for a woman who currently does not have dependent children, same-sex encounters don’t seem to offer any real benefit. Despite this, there are many women who continue to engage in frequent to semi-frequent same-sex sexual behaviors and form deep relationships with other women (who are themselves frequently childless as well). If the deepening of the bond between two women through homosexual sexual behavior was directly selected for due to the benefits that alloparents can bring, such facts would seem to be indicative of very poor design. That is to say, we should predict that women without children would be relatively uninterested in homosexual intercourse, and that the experience would not deepen their social commitment to their partner. So sure, homosexual intercourse might deepen emotional bonds between the people engaging in it, which might in turn affect how the pair behave towards one another in a number of ways. That effect, however, is likely a byproduct of mechanisms designed for heterosexual intercourse; not something that was directly selected for itself. Kuhle & Radtke (2013) do say that they’re only attempting to explain some homosexual behavior, so perhaps they might grant that some increases in emotional closeness are the byproduct of mechanisms designed for heterosexual intercourse while other increases in closeness are due to selection for alloparental concerns. While possible, such a line of reasoning can set up a scenario where the hits for the theory are counted as supportive and the misses (such as childless women engaging in same-sex sexual behaviors) are dismissed as the product of some other factor.

On top of that concern, the entire analysis rests on the assumption that women who have engaged in sexual behavior with the mother in question ought to provide substantially better alloparental care than women who have not. This seems to be an absolutely vital prediction of the model. Curiously, that prediction is not represented among any of the 14 predictions listed in the paper. The paper also offers no empirical data bearing on this point, so whether homosexual behavior actually causes an increase in alloparental investment is in doubt. Even if we assume this point was confirmed, however, it raises another pressing question: if same-sex intercourse raises the probability or quality of alloparental investment, why would we expect, as the authors predict, that women should only adopt this homosexual behavior as a secondary strategy? More precisely, I don’t see any particularly large fitness costs to women when it comes to engaging in same-sex sexual behavior but, under this model, there would be substantial benefits. If the costs of same-sex behavior are low and the benefits high, we should see it all the time, not just when a woman is having trouble finding male investment.

“It’s been real, but men are here now so…we can still be friends?”

On the topic of male investment, the model would also seem to predict that women should be relatively inclined to abandon their female partners for male ones (as, in this theory, women’s sexual interest in other women is triggered by a lack of male investment). This is anecdotal, of course, but a fairly frequent complaint I’ve heard from lesbians or bisexual women currently involved in a relationship with a woman is that men won’t leave them alone. They don’t seem to be wanting for male romantic attention. Now maybe these women are, more or less, universally assessing these men as unlikely or unable to invest on some level, but I have my doubts as to whether that is the case.

Finally, given these sizable hypothesized benefits and negligible costs, we ought to expect to see women frequently competing with other women in the realm of attracting same-sex sexual interest. Same-sex sexual behavior should be expected to be not only a cross-cultural universal, but fairly common as well, in much the same way that same-sex friendship is (as the two are hypothesized to serve much the same function, really). Why same-sex sexual interest would be relatively confined to a minority of the population is entirely unclear to me in terms of what is outlined in the paper. This model also doesn’t explain why any women, let alone the vast majority of them, would appear to feel averse to homosexual intercourse. Such aversions would only cause a woman to lose out on the hypothesized alloparental benefits which, if the model is true, ought to have been substantial. Women who were not averse would have had more consistent alloparental support historically, leading whatever genes made such attractions more likely to spread at the expense of women who eschewed them. Again, such aversions would appear to be evidence of remarkably poor design; if the lesbian-alloparents-with-benefits idea is true, that is…

References: Kuhle, B.X., & Radtke, S. (2013). Born both ways: The alloparenting hypothesis for sexual fluidity in women. Evolutionary Psychology, 11, 304-323. PMID: 23563096

Mate Choices Can Be Complex, But Are They Oedipal Complex?

Theory is arguably the most important part of research. A good theory helps researchers formulate better research questions as well as understand the results that their research projects end up producing. I’ve said this so often that expressing the idea is closer to a reflex than a thought at this point. Unfortunately, “theories” in psychology – if we can even call them theories – are frequently of poor quality, if not altogether absent from research, leading to similarly poorly formulated projects and explanations. Evolutionary theory offers an escape from this theoretical shallowness, and it’s the major reason the field appeals to me. I find myself somewhat disappointed, then, to see a new paper published in Evolutionary Psychology that appears to be, well, atheoretical.

No, I’m not mad; I’m just disappointed…

The paper was ostensibly looking at whether or not human children sexually imprint on the facial traits of their opposite sex parent, or, more specifically (for those of you that don’t know about imprinting):

Positive sexual imprinting has been defined as a sexual preference for individuals possessing the characteristics of one’s parents… It is said to be a result of acquiring sexual preferences via exposure to the parental phenotype during a sensitive period in early childhood.

The first sentence of that definition seems to me to be unnecessary. One could have preferences for characteristics that one’s parents also happen to possess without those preferences being the result of any developmental mechanism that uses parental phenotype as its input. So I’d recommend using the second part of the definition, which seems fine as far as describing sexual imprinting on parents goes. As the definition suggests, such a mechanism would require (1) a specified developmental window during which the imprinting takes place (i.e., the preferences would not be acquired prior to or after that time, and would be relatively resistant to change afterwards) and (2) that the mechanism be specifically focused on parental features.

So how did Marcinkowska & Rantala (2012) go about testing this hypothesis? Seventy subjects, their sexual partner, and their opposite sex parent (totaling 210 people) were each photographed from straight ahead and in profile. These subjects were also asked to report about their upbringing as a child. Next, a new group of subjects were presented with an array of pictures: on one side of the array was a picture of one of the opposite sex parents; on the other side there were four pictures, one of which was the partner of that parent’s child and three of which were controls. The new subjects were asked to rate how similar the picture of the parent was to the pictures of the people on the other side of the display.

The results showed that the group of independent raters felt that a man’s mother slightly more closely resembled his later partner than the controls did. The same raters did not feel that a woman’s father more closely resembled her later partner than the controls did. Neither of these findings was in any way related to the self-reports that subjects had delivered about their upbringing, either. If you’ve been following along so far, you might be curious as to what these results have to do with a sexual imprinting hypothesis. As far as I can tell, the answer is a resounding, “nothing”.

Discussion: Never mind

Let’s consider what these results don’t tell us: they certainly don’t speak to the matter of preferences. As Marcinkowska & Rantala (2012) note, actual mating preferences can be constrained by other factors. Everyone in the population might wish to monopolize the matings of a series of beautiful others, but if those beautiful others have different plans, that desire will not be fulfilled. Since the initial definition of imprinting specifically referenced preferences – not actual choices – the findings would have very little relevance to the matter of imprinting no matter how the data fell out. It’s worse than that, however: this study didn’t even attempt to look for any developmental window either. The authors seemed to just assume it existed without any demonstration that it actually does.

What’s particularly peculiar about this oversight is that, in the discussion, the authors note they did not look at any adoptive families. This suggests that the authors at least realized there were ways of testing whether this developmental window even exists, but didn’t seem to bother running the required tests. A better test – one that might suggest such a developmental window exists – would be to examine the preferences of adoptive or step-children towards the features of their biological and adoptive/step-parents. If the imprinting hypothesis were true, you would expect adoptive/step-children to prefer the characteristics of their adoptive/step-parents, not their biological ones. Further, this research could be run with respect to the time at which the new parent came into the picture (and the old one left). If there is a critical developmental window, you should only expect to see this effect when the new parent entered the equation at a certain age; not before or beyond that point.

The problems don’t even end there, however. As I mentioned previously, this paper appears atheoretical in nature, in that the authors give absolutely no reason as to why one would expect to find a sexual imprinting mechanism in the first place, why it would operate in early childhood, let alone why that mechanism would be inclined to imprint on one’s close, biological kin. What the precise fitness benefits of such a mechanism would be is entirely unclear to me though, at the very least, I could see it carrying fitness costs, in that it might heighten the probability of incest taking place. Further, if this mechanism is presumably active in all members of our species, and each person is looking to mate with someone who resembles their opposite sex parent, it would seem that such a preference might actively disincline people from having what would otherwise be adaptive matings. Lacking any theoretical explanation for any of this, the purpose of the research seems very confusing.

On the plus side, you can still add it to your resume, and we all know how important publications are.

All that said, even if research did find that people tended to be attracted to the traits of their opposite sex parent, such a finding could, in principle, be explained by sexual selection. Offspring inherit genes from their parents that contributed both to their parents’ phenotypes and to their parents’ psychological preferences. If preferences were not similarly inherited, sexual selection would be impossible, and ornaments like the peacock’s tail could never have come into existence. So, presuming your parents found each other at least attractive enough to get together and mate, you could expect their offspring to resemble them both physically and psychologically to some extent. When those offspring are then making their own mate choices, you might then expect them to make a similar set of choices (all else being equal, of course).

What can be said for the study is that it’s a great example of how not to do research. Don’t just assume the effect you’re looking to study exists; demonstrate that it does. Don’t assume that it works in a particular way in the event that it actually exists either. Most importantly, don’t formulate your research project in absence of a clearly stated theory that explains why such an effect would exist and, further, why it would work the way you expect it might. You should also try and rule out alternative explanations for whatever findings you’re expecting. Without good theory, the quality of your research will likely suffer, and suffer badly.

References: Marcinkowska, U.M., & Rantala, M.J. (2012). Sexual imprinting on facial traits of opposite-sex parents in humans. Evolutionary Psychology, 10, 621-630.

Does Infidelity Pay Off (For Sparrows)?

For some species, mating can be a touch more complicated than for others. In species where males provide little more than their gametes, the goal of mating for females is simple: get the best gametes available. While the specifics as to how that’s accomplished vary substantially between species, the overall goal remains the same. Since genes are all the female is getting, she may as well get the best that she can. In contrast, in some other species males provide more than just their genes; they also provide some degree of investment, which can range from a one-time gift up through decades of sustained investment. In these species, females need to work this additional variable into their mating calculus, as the two goals do not always overlap. The male willing to provide the best investment might not also happen to have the best genes, and pursuing one might risk the other.

Accordingly, it’s long been assumed that extra-pair mating (cheating) is part of the female strategy to have her cake and eat it too. A female can initiate a pair-bond with a male willing to invest while simultaneously having affairs with genetically higher-quality males, leaving the unfortunate cuckold to invest in offspring he did not sire. Undertaking extra-pair matings, however, can be risky business, in that detection by the investing male might lead to a withdrawal of investment and, in certain cases, bodily harm.

Good luck to all you parents when it comes to weaving that tidbit into your birds and bees talk.

For extra-pair mating to be selected for, these risks would require that offspring sired through extra-pair matings tend to actually be fitter than offspring sired by the within-pair male. Abandonment can entail some serious risks, so females would need some serious compensating gains to offset that fact. A new paper by Sardell et al (2012) sought to determine whether extra-pair offspring are in fact ‘fitter’ than within-pair offspring in Melospiza melodia – the song sparrow – with fitness measured by lifetime reproductive success: the number of offspring hatched, the number that survived to enter the breeding population, and the number of grand-offspring eventually produced. The results? Data gathered across 17 years, representing 2,343 hatchlings and 854 broods, found that extra-pair offspring seemed to actually be less fit than their within-pair half-siblings. Well, kind of… but not really.

Over the 17 years of data collection, roughly 28% of the offspring were classed as extra-pair offspring, and only broods with mixed paternity were considered for the present study (i.e., there was at least one offspring from the resident male and at least one from an extra-pair male). This cut the sample size down to 471 hatchlings, representing 154 mixed-paternity broods across 117 pair bonds. The first point I’d like to make is that a 28% non-paternity rate seems large and, unless it’s the result of an epidemic of forced copulations (rape), that means these female sparrows are having a lot of affairs, presumably because some mating module in their brains is suggesting they do.

Within the sample of sparrows, female extra-pair offspring (the ones sired by the non-resident male) averaged 5.4 fewer hatched offspring over their lives, relative to their within-pair half-siblings; for extra-pair males, the corresponding average was 1.5 fewer offspring. However, not all of those hatchlings live to eventually breed. Of the 99 that did, the females that were the result of extra-pair mating had, on average, 6.4 fewer hatchlings of their own, relative to the within-pair females; the extra-pair males also had fewer hatchlings of their own, averaging 2.6 fewer. Thus, relative to their within-pair half-siblings, extra-pair offspring seemed to produce fewer offspring of their own and, in turn, fewer grand-offspring. (I should note at this point that any potential reasons why extra-pair young seemed to be having fewer hatchlings are left entirely unexamined. This strikes me as a rather important oversight.)

Are we to conclude from this pattern of results (as this article from the Huffington Post, as well as the authors of the current paper, did) that extra-pair mating is not currently adaptive?

And is it time for those who support the “good genes” theory to start panicking?

I don’t think so, and here’s why: when it came to the number of recruited offspring – the hatchlings who eventually reached breeding age – extra-pair females ended up having 0.2 more of them, on average, while extra-pair males had 0.2 fewer, relative to their within-pair half-siblings. While that might seem like something of a wash, consider the previous finding: within-pair offspring were having more offspring overall. If within-pair offspring tended to have more hatchlings, but a roughly equal number reached the breeding pool, that means, proportionally, more of the within-pair offspring were dying before they reached maturity. (In fact, extra-pair offspring had a 5% advantage in the proportion of total hatchlings that ended up reaching maturity.) Having more offspring doesn’t mean a whole lot if those offspring don’t survive and then go on to reproduce themselves, and many of the within-pair offspring were not surviving.
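Here’s a toy calculation with invented numbers (the paper’s exact denominators aren’t reproduced here) to show why counting hatchlings and counting recruits can point in opposite directions:

```python
# Hypothetical figures: within-pair (WP) parents hatch more young overall,
# but the same number from each group survives to breeding age.
def survival_to_breeding(hatchlings, recruits):
    return recruits / hatchlings

wp_hatchlings, ep_hatchlings = 120, 100  # invented
wp_recruits, ep_recruits = 20, 20        # invented

print(f"WP survival: {survival_to_breeding(wp_hatchlings, wp_recruits):.1%}")  # 16.7%
print(f"EP survival: {survival_to_breeding(ep_hatchlings, ep_recruits):.1%}")  # 20.0%
# Tallying hatchlings favors the within-pair young; tallying the proportion
# that survives to breed favors the extra-pair young - the sparrow pattern.
```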

One big area this paper doesn’t deal with is why that mortality gap exists; it merely establishes that it does. The gap might even be more surprising, given that the potential risk of abandonment might mean males were less likely to have been investing when they doubted their paternity, though the current paper doesn’t speak to that possibility one way or another. Two of the obvious potential suspects for this gap are predation and parasites. Extra-pair young may be better able to avoid predators and/or defend against pathogens because of their genetic advantages, leading to them being more likely to survive to breeding age. Then there’s also the possibility of increased parental investment: if extra-pair hatchlings are in better condition (perhaps due to said pathogen resistance or freedom from deleterious mutations), the parents may preferentially divert scarce resources to them, as they’re a safer wager against an uncertain future. Alternatively, extra-pair offspring might have commanded a higher mate value, and so were able to secure a partner more able and/or willing to invest long-term. There are many unexplored possibilities.

The heart of the matter here concerns whether the female sparrows who committed infidelity would have been better off had they not done so. From the current data, there is no way of determining that, as there’s no random assignment to groups and no comparison to non-mixed-paternity broods (though that latter comparison comes with many confounds). So not only can the data not definitively determine whether the extra-pair mating was adaptive or not, but the data actually suggest that extra-pair offspring are slightly more successful in reaching breeding age. That is precisely counter to the conclusions reached by Sardell et al (2012), who state:

Taken together, these results do not support the hypothesis that EPR [extra-pair reproduction] is under positive indirect selection in female song sparrows…and in fact suggest… [that] other components of selection now need to be invoked to explain the evolution and persistence of EPR.

Their data don’t seem to suggest anything of the sort. They haven’t even established current adaptive value, let alone anything about past selection pressures. In interpreting this mountain of data, Sardell et al (2012) seem to be biting off more than they can chew.

It was a good try at least…

One final, thoroughly confusing point is that Sardell et al (2012) suggest that the number of grand-hatchlings the extra-pair and within-pair young had matters. The authors concede that, sure, in the first generation within-pair sparrows had more hatchlings, proportionately more of which died, actually leaving the extra-pair offspring as the more successful ones when it came to reaching the breeding pool. They then go on to say that:

However, since EPY [extra-pair young] had 30% fewer hatched grandoffspring than WPY [within-pair young], higher recruitment of offspring of EPY does not necessarily mean that EPY had higher LRS [lifetime reproductive success] measured to the next generation. (p.790)

The obvious problem here is that they’re measuring grandoffspring before the point at which many of them would seem to die off, as they did in the previous generation. So, while the number of hatched offspring said nothing important the first time around, they seem to think it does this time. It has long been known that simply counting babies is of limited use in determining adaptive value (let alone past adaptive value), and I hope this paper will serve as a cautionary tale for why that’s the case.

References: Sardell, R., Arcese, P., Keller, L., & Reid, J. (2012). Are there indirect fitness benefits of female extra-pair reproduction? Lifetime reproductive success of within-pair and extra-pair offspring. The American Naturalist, 179, 779-793. DOI: 10.1086/665665