Intergenerational Epigenetics And You

Today I wanted to cover a theoretical matter I’ve discussed before but apparently not on this site: the idea of epigenetic intergenerational transmission. In brief, epigenetics refers to chemical markers attached to your DNA that regulate how it is expressed without changing the underlying sequence. You could imagine your DNA as a book full of information, with each cell in your body containing the same book. However, not every cell expresses the full genome; each cell only expresses part of it (which is why skin cells are different from muscle cells, for instance). The epigenetic portion, then, could be thought of as black tape placed over certain passages in the book so they are not read. As this tape is added or removed by environmental influences, different portions of the DNA will become active. From what I understand about how this works (which is admittedly very little at this juncture), these markers are usually not passed on from parents to offspring. The life experiences of your parents, in other words, will not be passed on to you via epigenetics. However, some people have lately been hypothesizing not only that these changes are occasionally (perhaps regularly?) passed on from parents to offspring, but also, it seems, that they might be passed on in an adaptive fashion. In short, organisms might adapt to their environment not just through genetic factors, but also through epigenetic ones.

Who would have guessed Lamarckian evolution was still alive?

One of the examples given in the target article on the subject concerns periods of feast and famine. While rare in most first-world nations these days, these events were probably more recurrent features of our evolutionary history. The example there involves the following context: during some years in early 1900s Sweden food was abundant, while during other years it was scarce. Boys who were hitting puberty just at the time of a feast season tended to have grandchildren who died six years earlier than the grandchildren of boys who had experienced a famine season during the same developmental window. The causes of death, we are told, often involved diabetes. Another case involves the children of smokers: men who started smoking right before puberty tended to have children who were fatter, on average, than the children of men who smoked habitually but didn’t start until after puberty. The speculation, in this case, is that development was in some way permanently affected by food availability (or smoking) during a critical window, and those developmental changes were passed on to their sons and the sons of their sons.

As I read about these examples, a few things stuck out to me as rather strange. First, it seems odd that no mention was made of daughters or granddaughters in the smoking case, whereas in the food example there wasn’t any mention of the in-between male generation (they only mentioned grandfathers and grandsons there; not fathers). Perhaps there’s more to the data than is let on but – in the event that no effects were found for fathers or daughters of any kind – it is also possible that a single data set was sliced up into a number of different pieces until the researchers found something worth talking about (e.g., didn’t find an effect in general? Try breaking the data down by gender and testing again). That might or might not be the case here, but as we’ve learned from the replication troubles in psychology, one way of increasing your false-positive rate is to divide your sample into a number of different subgroups. For the sake of this post, I’m going to assume that is not the case and treat the data as representing something real, rather than a statistical fluke.
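
For readers who want to see just how much subgroup slicing can inflate the false-positive rate, here’s a minimal simulation sketch. It’s mine, not the researchers’; the sample sizes and the three-test scheme are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_datasets, n_per_group = 10_000, 100
false_positives = 0

for _ in range(n_datasets):
    # Both groups are drawn from the SAME distribution: any "effect" is a fluke.
    exposed = rng.normal(0, 1, n_per_group)
    control = rng.normal(0, 1, n_per_group)
    sex_e = rng.integers(0, 2, n_per_group)  # 0 = male, 1 = female
    sex_c = rng.integers(0, 2, n_per_group)

    p_values = [
        stats.ttest_ind(exposed, control).pvalue,                          # everyone
        stats.ttest_ind(exposed[sex_e == 0], control[sex_c == 0]).pvalue,  # males only
        stats.ttest_ind(exposed[sex_e == 1], control[sex_c == 1]).pvalue,  # females only
    ]
    if min(p_values) < 0.05:  # report whichever slice "worked"
        false_positives += 1

print(f"Nominal alpha: 0.05; realized rate: {false_positives / n_datasets:.3f}")
```

Even this modest amount of flexibility pushes the realized false-positive rate well above the nominal 5%, and slicing by more variables only makes it worse.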

Assuming this isn’t just a false positive, there are two issues with the examples as I see them. I’m going to focus predominantly on the food example to highlight these issues: first, passing on such epigenetic changes seems maladaptive and, second, the story behind it seems implausible. Let’s take the issues in turn.

To understand why this kind of intergenerational epigenetic transmission seems maladaptive, consider two hypothetical children born one year apart (in, say, the years 1900 and 1901). At the time the first child’s father was hitting puberty, there was a temporary famine taking place and food was scarce; by the time the second child’s father hit the same developmental window, the famine had passed and food was abundant. According to the logic laid out, we should expect that (a) both children will have their genetic expression altered by the epigenetic markers passed down by their parents, affecting their long-term development, and (b) the children will, in turn, pass those markers on to their own children, and their children’s children (and so on).

The big Thanksgiving dinner that gave your grandson diabetes

The problems here should become apparent quickly enough. First, let’s begin by assuming these epigenetic changes are adaptive: they are passed on because they are reproductively useful at helping a child develop appropriately. Specifically, a famine or feast at or around the time of puberty would need to be a reliable cue as to the type of environments their children could expect to encounter. If a child is going to face shortages of food, they might want to develop in a different manner than if they’re expecting food to be abundant.

Now that sounds well and good, but in our example these two children were born just a year apart and, as such, should be expected to face (broadly) the same environment, at least with respect to food availability (since feasts and famines tend to be more global affairs). Clearly, if the children were adopting different developmental plans in response to that feast or famine, both plans (plan A affected by the famine and plan B not so affected) cannot be adaptive. Specifically, if this epigenetic inheritance is trying to anticipate children’s future conditions from those present around the time of their father’s puberty, at least one of the children’s developmental plans will be anticipating the wrong set of conditions. That said, both developmental plans could be wrong, and conditions could look different than either anticipated. Trying to anticipate the future conditions one will encounter over a lifespan (and over one’s children’s and grandchildren’s lifespans) using only information from the brief window of time around puberty seems like a plan doomed to failure, or at least to suboptimal results.

A second problem arises because these changes are hypothesized to be intergenerational: capable of transmission across multiple generations. If that is the case, why on Earth would the researchers in this study pay any mind to the conditions the grandparents were facing around the time of puberty per se? Shouldn’t we be more concerned with the conditions being faced a number of generations back, rather than the more immediate ones? To phrase this in terms of a chicken/egg problem, shouldn’t the grandparents in question have inherited epigenetic markers of their own from their grandparents, and so on down the line? If that were the case, the conditions they were facing around their puberty would either be irrelevant (because they already inherited such markers from their own parents) or would have altered those inherited epigenetic markers as well.

If we opt for the former possibility, then studying the grandparents’ puberty conditions shouldn’t turn up much of anything. However, if we opt for the latter possibility, we are again left in a bit of a theoretical bind: if the conditions faced by the grandparents altered their epigenetic markers, shouldn’t those same markers also have been altered by the parents’ experiences, and by their grandsons’ experiences as well? If they are being altered by the environment each generation, then they are poor candidates for intergenerational transmission (just as DNA that was constantly mutating would be). There is our dilemma, then: if epigenetic markers change across one’s lifespan, they are unlikely candidates for transmission between generations; if epigenetic changes can be passed down across generations stably, why look at the specific period around the grandparents’ puberty? Shouldn’t we be concerned with their grandparents, and so on down the line?

“Oh no you don’t; you’re not pinning this one all on me”

Now, to be clear, a famine around the time of conception could affect development in other, more mundane ways. If a child isn’t receiving adequate nutrition at the time they are growing, then it is likely certain parts of their developing body will not grow as they otherwise would. When you don’t have enough calories to support your full development, trade-offs need to be made, just as, if you don’t have enough money to buy everything you want at the store, you have to pass up some items to afford others. Those kinds of developmental outcomes can certainly have downstream effects on future generations through behavior, but they don’t seem like the kind of changes that could be passed on the way genetic material can. The same can be said about the smoking example as well: people who smoked during critical developmental windows could do damage to their own development, which in turn impacts the quality of the offspring they produce, but that’s not like genetic transmission at all. It would be no more surprising than finding out that parents exposed to radioactive waste tend to have children of a different quality than those not so exposed.

To the extent that these intergenerational changes are real and not just statistical oddities, it doesn’t seem likely that they could be adaptive; they would instead likely reflect developmental errors. Basically, the matter comes down to the following question: are the environmental conditions surrounding a particular developmental window good indicators of future conditions, to the point that you’d want to focus not only your own development around them, but also the development of your children and their children in turn? To me, the answer seems like a resounding, “No, and that seems like a prime example of developmental rigidity, rather than plasticity.” Such a plan would not allow offspring to meet the demands of their unique environments particularly well. I’m not hopeful that this kind of thinking will lead to any revolutions in evolutionary theory, but I’m always willing to be proven wrong if the right data come up.

Mistreated Children Misbehaving

None of us are conceived or born as full adults; we all need to grow and develop from single cells into fully-formed adults. Unfortunately – for the sake of development, anyway – the future world you will find yourself in is not always predictable, which makes development a tricky matter at times. While there are often regularities in the broader environment (such as the presence or absence of sunlight, for instance), not every individual will inhabit the same environment or, more precisely, the same place in their environment. Consider two adult males, one of whom is six feet tall and 230 pounds of muscle, the other five feet tall and 110 pounds. While the dichotomy here is stark, it serves to make a simple point: if both of these males developed in a psychological manner that led them to pursue precisely the same strategies in life – in this case, say, one involving aggressive contests for access to females – it is quite likely that the weaker male will lose out to the stronger one most (if not all) of the time. As such, in order to be more consistently adaptive, development must be something of a fluid process that helps tailor an individual’s psychology to the unique position they find themselves in within a particular environment. Thus, if an organism is able to use some cues within their environment to predict their likely place in it in the future (in this case, whether they would grow large or small), their development could be altered to encourage the pursuit of alternate routes to eventual reproductive success.

Because pretending you’re cut out for that kind of life will only make it worse

Let’s take that initial example and adapt it to a new context: rather than trying to predict whether one will grow up weak or strong, a child is trying to predict the probability of receiving parental investment in the future. If parental investment is unlikely to be forthcoming, children may need to take a different approach to their development to help secure the needed resources on their own, sometimes requiring that they undertake risky behaviors; by contrast, those children who are likely to receive consistent investment might be relatively less inclined to take such risky and costly matters into their own hands, as the risk vs. reward calculations don’t favor such behavior. To put this in an understandable analogy, a child who estimates they won’t be receiving much investment from their parents might forgo a college education (and, indeed, even much of a high-school one) because they need to work to make ends meet. When you’re concerned about where your next meal is coming from, there’s less room in your schedule for studying, and taking out loans to spend four years not working makes little sense. By contrast, the child from a richer family has the luxury of pursuing an education likely to produce greater future rewards because certain obstacles have been removed from their path.

Now obviously going to college is not something that humans have psychological adaptations for – it wasn’t a recurrent feature of our evolutionary history as a species – but there are cognitive systems we might expect to follow different developmental trajectories contingent on such estimations of one’s likely place in the environment; these could include systems judging the relative attractiveness of short- vs. long-term rewards, willingness to take risks, pursuit of aggressive resolutions to conflicts, and so on. If the future is uncertain, saving for it makes less sense than taking a smaller reward in the present; if you lack social or financial support, being willing to fight to defend what little you do have might sound more appealing (as losing that little bit is more impactful when you won’t have anything left). The question of interest thus becomes, “What cues in the environment might a developing child use to determine what their future will look like?” This brings us to the current paper by Abajobir et al (2016).

One potential cue might be your experiences with maltreatment while growing up, specifically at the hands of your caregivers. Though Abajobir et al (2016) don’t make the argument I’ve been sketching out explicitly, that seems to be the direction their research takes. They seem to reason (implicitly) that parental mistreatment should be a reliable cue to the future conditions you’re liable to encounter and, accordingly, one that children could use to alter their development. For instance, abusive or neglectful parents might lead to children adopting faster life history strategies involving risk-taking, delinquency, and violence themselves (or, if they’re going the maladaptive explanatory route, the failure of parents to provide supportive environments could in some way hinder development from proceeding as it usually would, in a similar fashion to how not having enough food growing up might lead to one being shorter as an adult. I don’t know which line the authors would favor from their paper). That said, there is a healthy (and convincing) literature consistent with the hypothesis that parental behavior per se is not the cause of these developmental outcomes (Harris, 2009), but rather that it simply co-occurs with them. Specifically, abusive parents might be genetically different from non-abusive ones, and those tendencies could get passed on to the children, accounting for the correlation. Alternatively, parents who maltreat their children might just happen to have children whose peer groups are themselves more prone to violence and delinquency, with both being caused by third variables.

Your personality usually can’t be blamed on them; you’re you all on your own

Whatever the nature of that correlation, Abajobir et al (2016) sought to use parental maltreatment from ages 0 to 14 as a predictor of later delinquent behaviors in the children by age 21. To do so, they used a prospective cohort of children and their mothers visiting a hospital between 1981 and 1983. The cohort was then tracked for substantiated cases of child maltreatment reported to government agencies up to age 14, and at age 21 the children themselves were surveyed (the mothers being surveyed at several points throughout that time). Out of the 7200 initial participants, 3800 completed the 21-year follow-up. At that follow-up point, the children were asked questions concerning how often they did things like get excessively drunk, use recreational drugs, break the law, lie, cheat, steal, destroy the property of others, or fail to pay their debts. The mothers were also surveyed on matters concerning their age when they got pregnant, their arrest records, marital stability, and the amount of supervision they gave their children (all of these factors, unsurprisingly, predicting whether or not people continued on in the study for its full duration).

In total, of the 512 eventual cases of reported child maltreatment, only 172 remained in the sample at the 21-year follow-up. As one might expect, maternal factors like education status, arrest record, economic status, and unstable marriages all predicted an increased likelihood of eventual child maltreatment. Further, of the 3800 participants, only 161 met the criteria for delinquency at 21 years. All of the previous maternal factors predicted delinquency as well: mothers who had been arrested, got pregnant earlier, had unstable marriages, less education, and less money tended to produce more delinquent offspring. Adjusting for the maternal factors, however, childhood maltreatment still predicted delinquency, but only for the male children. Specifically, maltreatment in males was associated with approximately 2-to-3.5 times as much delinquency as among non-maltreated males. For female offspring, there didn’t seem to be any notable correlation.
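
As a quick aside on what “2-to-3.5 times as much delinquency” means in practice, here’s the standard arithmetic behind relative risks and odds ratios. The counts below are entirely hypothetical – the paper reports adjusted odds ratios, not these raw numbers – but the calculation is the usual one:

```python
# Hypothetical 2x2 counts, made up purely to illustrate the arithmetic.
maltreated_delinquent, maltreated_total = 18, 100
control_delinquent, control_total = 60, 1000

risk_maltreated = maltreated_delinquent / maltreated_total  # 0.18
risk_control = control_delinquent / control_total           # 0.06
relative_risk = risk_maltreated / risk_control              # 3.0

odds_maltreated = risk_maltreated / (1 - risk_maltreated)
odds_control = risk_control / (1 - risk_control)
odds_ratio = odds_maltreated / odds_control                 # ~3.4

print(f"RR = {relative_risk:.2f}, OR = {odds_ratio:.2f}")
# When the outcome is rare, the odds ratio and relative risk sit close
# together; "2-to-3.5 times" claims typically come from ORs like these.
```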

Now, as I mentioned, there are some genetic confounds here. It seems probable that parents who maltreat their children are, in some very real sense, different from parents who do not, and those tendencies can be inherited. This also doesn’t necessarily point a causal finger directly at parents, as it is also likely that maltreatment correlates with other social factors, like the peer group a child is liable to have or the neighborhood they grow up in. The authors also mention that it is possible their measures of delinquency might not capture whatever effects childhood maltreatment (or its correlates) have on females, and that’s the point I wanted to wrap up discussing. To really put these findings in context, we would need to understand what adaptive role these delinquent behaviors – or rather the psychological mechanisms underlying them – have. For instance, frequent recreational drug use and problems fulfilling financial obligations might both signal that the person in question favors short-term rewards over long-term ones; frequent trouble with the law or destroying other people’s property could signal something about how the individual in question competes for social status. Maltreatment does seem to predict (even if it might not cause) different developmental courses, perhaps reflecting an active adjustment of development to deal with local environmental demands.

 The kids at school will all think you’re such a badass for this one

As we reviewed in the initial example, however, the same strategies will not always work equally well for every person. Those who are physically weaker are less likely to successfully enact aggressive strategies, all else being equal, for reasons which should be clear. Accordingly, we might expect that men and women show different patterns of delinquency to the extent they face unique adaptive problems. For instance, we might expect that females who find themselves in particularly hostile environments preferentially seek out male partners capable of enacting and defending against such aggression, as males tend to be more physically formidable (which is not to say that the women themselves might not be more physically aggressive as well). Any hypothetical shifts in mating preferences like these would not be captured particularly well by the present research, but it is nice to see the authors at least thinking about what sex differences in patterns of delinquency might exist. It would be preferable if they were asking about those differences using this kind of functional framework from the beginning, as that’s likely to yield more profitable insights and refine what questions get asked, but it’s good to see this kind of work all the same.

References: Abajobir, A., Kisely, S., Williams, G., Strathearn, L., Clavarino, A., & Najman, J. (2016). Gender differences in delinquency at 21 years following childhood maltreatment: A birth cohort study. Personality & Individual Differences, 106, 95-103.

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.

If No One Else Is Around, How Attractive Are You?

There’s an anecdote that I’ve heard a few times about a man who goes to a diner for a meal. After he finishes his dinner, the waitress asks him if he’d like some dessert. When he inquires as to what flavors of pie they have, the waitress tells him they have apple and cherry. The man says cherry and the waitress leaves to get it. She returns shortly afterwards and tells him she had forgotten they actually also had a blueberry pie. “In that case,” the man replies, “I’ll have apple.” Breaking this story down into a more abstract form, the man was presented with two options: A and B. Since he prefers A to B, he naturally selected A. However, when presented with A, B, and C, he now appears to reverse his initial preference, favoring B over A. Since he appears to prefer both A and B over C, it seems strange that C would affect his judgment at all, yet here it does. Now that’s just a funny little story, but there does appear to be some psychological literature suggesting that people’s preferences can be modified in similar ways.

“If only I had some more pointless options to help make my choice clear”

The general phenomenon might not be as strange as it initially sounds, for two reasons. First, when choosing between A and B, the two items might be rather difficult to directly compare. Both A and B could have some upsides and downsides but, since these don’t necessarily all fall in the same domains, weighing one option against the other isn’t always simple. As a for instance, if you’re looking to buy a new car, one option might have good gas mileage and great interior features (option A) while the other looks more visually appealing and comes with a lower price tag (option B). Pitting A against B here doesn’t always yield a straightforward choice, but if an option C rolls around that gets good gas mileage, looks visually appealing, and comes with a lower price tag, this car can look better than either of the previous options by comparison. This third option need not even be more appealing than both alternatives, however; simply being preferable to one of them is usually enough (Mercier & Sperber, 2011).
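
To make the structure of that example explicit, here’s a toy sketch of the pairwise-dominance logic at work. All the attribute scores are made up for illustration; higher is better on every dimension:

```python
# Toy dominance check for the car example. Scores are hypothetical.
cars = {
    "A": {"mileage": 8, "interior": 9, "looks": 4, "price": 3},
    "B": {"mileage": 4, "interior": 3, "looks": 8, "price": 8},
    "C": {"mileage": 7, "interior": 4, "looks": 8, "price": 9},
}

def dominates(x, y):
    """True if x is at least as good as y everywhere and strictly better somewhere."""
    return all(x[k] >= y[k] for k in x) and any(x[k] > y[k] for k in x)

for name_x, x in cars.items():
    for name_y, y in cars.items():
        if name_x != name_y and dominates(x, y):
            print(f"{name_x} dominates {name_y}")
# Prints only "C dominates B": C never dominates A, yet beating one
# rival across the board is often enough to make C the easy pick.
```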

Related to this point, people might want to maintain some degree of justifiability in their choices as well. After all, we don’t just make choices in a vacuum; the decisions we make often have wider social ramifications, so making a choice that can be easily justified to others can make them accept your decisions more readily (even if the choice you make is overall worse for you). Sticking with our car example, if you were to select option A, you might be praised by your environmentally-conscious friends while mocked by your friends more concerned with the look of the car; if you choose option B a similar outcome might obtain, but the friends doing the praising and mocking would switch. However, option C might be a crowd-pleaser for both groups, yielding a decision with greater approval (you miss out on the interior features you want, but that’s the price you pay for social acceptance). The general logic of this example should extend to a number of different domains, both in terms of the things you might select and the features you might use as the basis for selecting them. So long as your decisions need to be justified to others, the individual appeal of certain features can be trumped.

Whether these kinds of comparison effects exist across all domains is an open question, however. The adaptive problems species need to solve often require specific sets of cognitive mechanisms, so the mental algorithms that are leveraged to solve problems relating to selecting a car (a rather novel issue at that) might not be the same ones that help solve other problems. Given that different learning mechanisms appear to underlie seemingly similar problems – like learning the locations of food and water – there are some good theoretical reasons to suspect that these kinds of comparison effects might not exist in domains where decisions require less justification, such as selecting a mate. This brings us to the present research by Tovee et al (2016), who were examining the matter of how attractive people perceive the bodies of others (in this case, women) to be.

“Well, that’s not exactly how the other participants posed, but we can make an exception”

Tovee et al (2016) were interested in finding out whether judging bodies among a large array of other bodies might influence the judgment of any individual body’s attractiveness. The goal here was to find out whether people’s bodies have an attractiveness value independent of the range of bodies they happen to be around, or whether attractiveness judgments are made relative to immediate circumstances. To put that another way, if you’re a “5-out-of-10” on your own, might standing next to a three (or several threes) make you look more like a six? This is a matter of clear empirical importance as, when studies of this nature are conducted, it is fairly common for participants to rate a large number of targets for attractiveness one after the other. If attractiveness judgments are, in some sense, corrupted by previous images, there are implications for both past and future research that make use of such methods.

So, to get at the matter, the researchers employed a straightforward strategy: first, they asked one group of 20 participants (10 males and 10 females) to judge 20 images of female bodies for attractiveness (these bodies varied in their BMI and waist-to-hip ratio; all clothing was standardized and all faces blurred out). Following that, a group of 400 participants rated the same images, but this time each rating only a single image rather than all 20, again providing 10 male and 10 female ratings per picture. The logic of this method is simple: if ratings of attractiveness tend to change contingent on the array of bodies available, then the between-subjects group’s ratings should be expected to differ in some noticeable way from those of the within-subjects group.

Turning to the results, there was a very strong correspondence between male and female judgments of attractiveness (r = .95), as well as strong within-sex agreement (Cronbach’s alphas of 0.89 and 0.95). People tended to agree that as BMI and WHR increased, the women’s bodies became less attractive (at least within the range of values examined; the results might look different if women with very low BMIs were included). As it turns out, however, there were no appreciable differences when comparing the within- and between-groups attractiveness ratings. When people were making judgments of just a single picture, they delivered similar judgments to those presented with many bodies. The authors conclude that perceptions of attractiveness appear to be generated by (metaphorically) consulting an internal reference template, rather than such judgments being influenced by the range of available bodies.
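
For those curious about what those agreement statistics summarize, here’s a small sketch computing them on a hypothetical ratings matrix (simulated raters, not the study’s actual data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ratings: 10 raters (rows) x 20 images (columns); each rater
# sees the images' "true" attractiveness plus their own personal noise.
true_scores = rng.uniform(1, 9, 20)
male_ratings = true_scores + rng.normal(0, 0.8, (10, 20))
female_ratings = true_scores + rng.normal(0, 0.8, (10, 20))

# Between-sex agreement: correlate each sex's mean rating per image.
r = np.corrcoef(male_ratings.mean(axis=0), female_ratings.mean(axis=0))[0, 1]

def cronbach_alpha(ratings):
    """Within-sex consistency, treating raters as 'items' and images as cases."""
    k = ratings.shape[0]                              # number of raters
    item_variances = ratings.var(axis=1, ddof=1).sum()
    total_variance = ratings.sum(axis=0).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"r = {r:.2f}")
print(f"alpha (men) = {cronbach_alpha(male_ratings):.2f}")
print(f"alpha (women) = {cronbach_alpha(female_ratings):.2f}")
```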

Which is not to say that being the best looking member of a group will hurt

These findings make quite a bit of sense in light of the job that judgments of physical attractiveness are supposed to accomplish; namely, assessing traits like physical health, fertility, strength, and so on. If one is interested in assessing the probable fertility of a given female, that value should not be expected to change as a function of whom she happens to be standing next to. In a simple example, a male copulating with a post-menopausal female should not be expected to achieve anything useful (in the reproductive sense of the word), and the fact that she happened to be around women who are even older or less attractive shouldn’t be expected to change that fact. Indeed, on a theoretical level we shouldn’t expect the independent attractiveness value of a body to change based on the other bodies around; at least there don’t seem to be any obvious adaptive advantages to (incorrectly) perceiving a five as a six because she’s around a bunch of threes, rather than just (accurately) perceiving that five as a five and nevertheless concluding she’s the most attractive of the current options. Moreover, if you were to incorrectly perceive that five as a six, it might have some downstream consequences when future options present themselves (such as not pursuing a more attractive alternative because the risk vs. reward calculations are being made with inaccurate information). As usual, acting on accurate information tends to have more benefits than changing your perceptions of the world.

References: Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.

Tovee, M., Taylor, J., & Cornelissen, P. (2016). Can we believe judgments of human physical attractiveness? Evolution & Human Behavior, doi: 10.1016/j.evolhumbehav.2016.10.005

When It’s Not About Race Per Se

We can use facts about human evolutionary history to understand the shape of our minds, and using them to understand people’s reactions to race is no exception. As I have discussed before, it is unlikely that ancestral human populations ever traveled far enough, consistently enough throughout our history as a species to have encountered members of other races with any regularity. Different races, in other words, were unlikely to be a persistent feature of our evolutionary history. As such, it seems correspondingly unlikely that human minds contain any modules that function to attend to race per se. Yet we do seem to automatically attend to race on a cognitive level (just as we do with sex and age), so what’s going on here? The best hypothesis I’ve seen as of yet is that people aren’t paying attention to race itself as much as they are using it as a proxy for something else that likely was recurrently relevant during our history: group membership and social coalitions (Kurzban, Tooby, & Cosmides, 2001). Indeed, when people are provided with alternate visual cues to group membership – such as different color shirts – the automaticity of race being attended to appears to be diminished, even to the point of being erased entirely at times.

Bright colors; more relevant than race at times

If people attend to race as a byproduct of our interest in social coalitions, then there are implications here for understanding racial biases as well. Specifically, it would seem unlikely for widespread racial biases to exist simply because of superficial differences like skin color or facial features; instead, it seems more likely that racial biases are a product of other considerations, such as the possibility that different groups – racial or otherwise – simply hold different values as social associates to others. For instance, if the best interests of group X are opposed to those of group Y, then we might expect those groups to hold negative opinions of each other on the whole, since the success of one appears to handicap the success of the other (for an easy example of this, think about how more monogamous individuals tend to come into conflict with promiscuous ones). Importantly, to the extent that those best interests just so happen to correlate with race, people might mistake a negative bias due to differing social values or best interests for one due to race.

In case that sounds a bit too abstract, here’s an example to make it immediately understandable: imagine an insurance company that is trying to set its premiums only in accordance with risk. If someone lives in an area at high risk of some negative outcome (like flooding or robbery), it makes sense for the insurance company to set a higher premium for them, as there’s a greater chance it will need to pay out; conversely, those in low-risk areas can pay reduced premiums for the same reason. In general, people have no problem with this kind of discrimination: it is morally acceptable to charge different rates for insurance based on risk factors. However, if that high-risk area just so happens to be one in which a particular racial group lives, then people might mistake a risk-based policy for a race-based one. In fact, in previous research, certain groups (specifically liberal ones) generally say it is unacceptable for insurance companies to require that those living in high-risk areas pay higher premiums if those areas happen to be predominately black (Tetlock et al, 2000).

Returning to the main idea at hand, previous research in psychology has tended to associate conservatives – but not liberals – with prejudice. However, there has been something of a confounding factor in that literature (which might be expected, given that academics in psychology are overwhelmingly liberal): specifically, much of the literature on prejudice asks about attitudes towards groups whose values tend to lean towards the liberal side of the political spectrum, like homosexual, immigrant, and black populations (groups that might tend to support things like affirmative action, which conservative groups would tend to oppose). When that confound is present, it’s not terribly surprising that conservatives would look more prejudiced, but that prejudice might ultimately have little to do with the targets’ race or sexual orientation per se. More specifically, if animosity between different racial groups is due primarily to a factor like race itself, then you might expect those negative feelings to persist even in the face of compatible values. That is, if a white person happens to not like black people because they are black, then the views of a particular black person shouldn’t be liable to change those racist sentiments too much. However, if those negative attitudes are instead more a product of a perceived conflict of values, then altering those political or social values should dampen or remove the effects of race altogether.

Shaving the mustache is probably a good place to start

This idea was tested by Chambers et al (2012) over the course of three studies. The first of these involved 170 Mturk participants who indicated their own ideological position (strongly liberal to strongly conservative, on a 5-point scale), their impressions of 34 different groups (in terms of whether those groups are usually liberal or conservative on the same scale, as well as how much they liked the target group), and a few other measures related to the prejudice construct, like system justification and modern racism. As it turns out, liberals and conservatives tended to agree with one another about how liberal or conservative the target groups tended to be (r = .97), so their ratings were averaged. Importantly, when the target group in question tended to be liberal (such as feminists or atheists), liberals had higher favorability ratings of them (M = 3.48) than did conservatives (M = 2.57; d = 1.23); conversely, when the target group was perceived as conservative (such as business people or the elderly), liberals had lower favorability ratings of them (M = 2.99) than conservatives (M = 3.86; d = 1.22). In short, liberals tended to feel positive about liberals, and conservatives tended to feel positive about conservatives. The more extreme the perceived political differences of the target, the larger these biases were (r = .84). Further, when group membership needed to be chosen, the biases were larger than when membership was involuntary (e.g., as a group, “feminists” generated more bias from liberals and conservatives than “women”).
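
For reference, those d values are standardized mean differences (Cohen’s d). Here’s a quick sketch of the calculation; the means are the ones quoted above, but the standard deviations and group sizes are hypothetical, chosen only to show how a d near 1.2 arises from means like these:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical SDs and ns; only the means come from the study.
d = cohens_d(mean1=3.48, sd1=0.74, n1=85, mean2=2.57, sd2=0.74, n2=85)
print(f"d = {d:.2f}")  # (3.48 - 2.57) / 0.74 ≈ 1.23
```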

Since that was all correlational, studies 2 and 3 took a more experimental approach. Here, participants were exposed to a target whose race (white/black) and positions (conservative or liberal) on six different issues (welfare, affirmative action, wealth redistribution, abortion, gun control, and the Iraq war) were manipulated. In study 2 this was done on a within-subjects basis with 67 participants, and in study 3 it was done between-subjects with 152 participants. In both cases, however, the results were similar: in general, while the target’s attitudes mattered when it came to how much the participants liked them, the target’s race did not. Liberals didn’t like black targets who disagreed with them any more than conservatives did. The conservatives happened to like the targets who expressed conservative views more, whereas liberals tended to like targets who expressed liberal views more. The participants had also provided scores on measures of system justification, modern racism, and attitudes towards blacks. Even when these factors were controlled for, the pattern of results remained: people tended to react favorably towards those who shared their views and unfavorably towards those who did not. The race of the person holding those views seemed beside the point for both liberals and conservatives. Not to hammer the point home too much, but perceived ideological agreement – not race – was doing the metaphorical lifting here.

Now perhaps these results would have looked different if the samples in question were comprised of people who held more extreme and explicit racist views; the type of people who wouldn’t want to live next to someone of a different race. While that’s possible, there are a few points to make about that suggestion: first, it’s becoming increasingly difficult to find people who hold such racist or sexist views, despite certain rhetoric to the contrary; that’s the reason researchers ask about “symbolic” or “modern” or “implicit” racism, rather than just racism. Such openly-racist individuals are clearly the exceptions, rather than the rule. This brings me to the second point, which is that, even if biases did look different among hardcore racists (we don’t know if they do), for more average people, like the kind in these studies, there doesn’t appear to be a widespread problem with race per se; at least not if the current data have any bearing on the matter. Instead, it seems possible that people might be inferring a racial motivation where it doesn’t exist because of correlations with race (just like in our insurance example).

Pictured: unusual people; not everyone you disagree with

For some, the reaction to this finding might be to say that it doesn’t matter. After all, we want to reduce racism, so being incredibly vigilant for it should ensure that we catch it where it exists, rather than miss it or make it seem permissible. Now that’s likely true enough, but there are other considerations to add into that equation. One of them is that by reducing your type-two errors (failing to see racism where it exists) you increase your type-one errors (seeing racism where there is none). As long as accusations of racism are tied to social condemnation (not praise; a fact which alone ought to tell you something), you will be harming people by overperceiving the issue. Moreover, if you perceive racism where it doesn’t exist too often, you will end up with people who don’t take your claims of racism seriously anymore. Another point to make is that if you’re actually serious about addressing a social problem you see, accurately understanding its causes will go a long way. That is to say, time and energy invested in interventions to reduce racism is time not spent trying to address other problems. If you have misdiagnosed the issue you seek to tackle as being grounded in race, then your efforts to address it will be less successful than they otherwise could be, not unlike a doctor prescribing the wrong medication to treat an infection.
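
That type-one/type-two trade-off is easy to see in a toy signal-detection setup. The sketch below is purely illustrative – the distributions and numbers are assumptions of mine – but it shows why cranking up vigilance necessarily inflates false alarms whenever the two kinds of cases overlap:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical "evidence" scores: most cases are innocent, a minority are
# genuine, and the two distributions overlap.
innocent = rng.normal(0.0, 1.0, 9_000)
genuine = rng.normal(2.0, 1.0, 1_000)

for threshold in (2.5, 1.5, 0.5):
    miss_rate = (genuine < threshold).mean()          # type-two errors
    false_alarm_rate = (innocent >= threshold).mean() # type-one errors
    print(f"threshold {threshold}: misses {miss_rate:.2f}, "
          f"false alarms {false_alarm_rate:.2f}")
# Lowering the threshold shrinks the miss rate but balloons the false-alarm
# rate: with overlapping distributions, you cannot drive both to zero.
```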

References: Chambers, J., Schlenker, B., & Collisson, B. (2012). Ideology and prejudice: The role of value conflicts. Psychological Science, 24, 140-149.

Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. PNAS, 98, 15387-15392.

Tetlock, P., Kristel, O., Elson, S., Green, M., & Lerner, J. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality & Social Psychology, 78, 853-870.

Violence In Games Does Not Cause Real-Life Violence

Violence is a strategic act. What I mean by this is that a threat to employ physical aggression against someone else unless they do what you want needs to be credible to be useful. If a 5-year-old child threatened to beat up her parents if they don’t stop for ice cream, the parents understand that the child does not actually pose a real physical risk and, if push came to shove, the parents would win a physical contest; by contrast, if you happen to be hanging out with a heavyweight MMA fighter and he demands you pull over for ice cream, you should be more inclined to take his request seriously. If you cannot realistically threaten others with credible claims of violence – if you are not likely to be able to inflict harmful costs on others physically – then posturing aggressively shouldn’t be expected to do you any favors; if anything, adopting aggressive stances you cannot back up will result in your suffering costs inflicted by others, and that’s generally an outcome to be avoided. It’s for this reason that – on a theoretical level – we should have expected research on power poses to fail to replicate: simply acting more dominant will not make you more able to actually back up those boasts, and people shouldn’t be expected to take such posturing seriously. If you apply that same logic to nonhumans – say, rams – a male who behaves dominantly will occasionally encourage another male to challenge that dominance. If neither backs down, the result is a physical conflict, and the subsequent realization that writing metaphorical checks you cannot cash is a bad idea.

“You have his attention; sure hope you have a thick skull, too”

This cursory analysis already suggests there might be a theoretical problem with the idea that people who are exposed to violent content in media will subsequently become more aggressive in real life. Yes, watching Rambo or playing Dark Souls might inspire some penchant for spilling fantasy blood (at least in the short term), but seeing violence doesn’t suddenly increase the advisability of your pursuing such a strategy, as you are no more likely to be able to effectively employ it than you were before your exposure. Again, to place that in a nonhuman example (always a good idea when you’re dealing with psychology research, to see if an idea still makes sense; if it only makes sense for humans, odds are the interpretation is lacking), if you exposed a male ram to media depicting males aggressively slamming their horns into other males, that doesn’t suddenly mean your subject ram will be inspired to run out and challenge a rival. His chances of winning that contest haven’t changed, so why should his behavior?

Now the matter is more complex than this analysis lets on, admittedly, but it does give us something of a starting point for understanding why violent content in media – video games in particular – should not be expected to have uniform or lasting impacts on the player’s subsequent behavior. Before I get into the empirical side of this issue, however, I think it’s important I lay my potential bias on the table: I’m a gamer; have been my entire life, at least as far as I can remember. I’ve played games in all sorts of mediums – video, card, board, and sometimes combinations of those – and across a variety of genres, including violent ones. As such, when I see someone leveling accusations against one of my more cherished hobbies, my first response is probably defensive. That is, I don’t perceive people who research the link between violent games and aggression to be doing so for no particular reason; I assume they have some specific goals in mind (consciously understood or not) that center around telling other people what they shouldn’t do or enjoy, perhaps even ranging as far as trying to build a case for the censorship of such materials. As such, I’m by no means an unbiased observer in this matter, but I am also something of an expert in the subject, which can provide me with insights that others might not possess.

That disclaimer out of the way, I wanted to examine some research today which examines the possibility that the relationship people have sometimes spotted between violent video game content and aggression isn’t causal (Przybylski et al, 2014; I say sometimes because apparently this link between the two is inconsistently present, possibly only short-term in nature, and the subject of some debate). The thrust of this paper focuses on the idea that human aggression (proximately) is a response to having one’s psychological needs thwarted. I think there are better ways to think about what aggression is, but this general idea is probably close enough to that truth to do us just fine. In brief, the idea motivating this paper is that people play video games (again, proximately), in part, because they provide feelings of competency and skill growth. Something about the challenges games offer to be overcome proves sufficiently motivating for players to get pleasure out of the experience. Importantly, this should hold true across gaming content: people don’t find content appealing because it is violent generally, but rather because it provides opportunities to test, display, and strengthen certain competencies. As such, manipulating the content of the games (from violent to non-violent) should be much less effective at causing subsequent aggression than manipulating the difficulty of the game (from easy/intuitive to difficult/confusing).

“I’ll teach him a lesson about beating me in Halo”

This is a rather important factor to consider because the content of a game (whether it is violent or not, for instance) might be related to how difficult the game is to learn or master. As such, if researchers have been trying to vary the content without paying much mind to the other factors that correlate with it, that could handicap the usefulness of subsequent interpretations. Towards that end, Przybylski et al (2014) report the results of seven studies designed to examine just that issue. I won’t be able to go over all of them in depth, but I will try to provide an adequate general summary of their methods and findings. In their first study, they examined how 99 participants reacted to playing a simple but non-violent game (about flying a paper airplane through rings) or a complex but violent one (a shooter with extensive controls). The players were then asked about their change in aggressive feelings (pre- and post-test difference) and mastery of the controls. The quick summary of the results is that violent content did not predict changes in aggression scores above and beyond the effects of frustration with the controls, while the control-mastery scores did predict aggression.

Their second study actually manipulated the content and complexity factors (N = 101). Two versions of the same game (Half-Life 2) were created, such that one contained violent content and the other did not, while the overall environment and difficulty were held constant. Again, there was no effect of content on aggression, but there was an effect of perceived mastery. In other words, people felt angry when they were frustrated with the game; not because of its content. Their third study (N = 104) examined what happened when a non-violent puzzle game (Tetris) was modified to contain either a simple or a complex control interface. As before, those who had to deal with the frustrating controls were quicker to access aggressive thoughts and terms than those in the intuitive control condition. Study 4 basically repeated that design with some additional variables and found the same type of results: perceived competency in the game correlated negatively with aggression, and people became more aggressive the less they enjoyed the game, among a few other things. The fifth study had 112 participants all play a complex game that was either violent or non-violent, and gave them either 10 minutes of practice time with the game or no experience with it. As expected, there was an effect of being able to practice on subsequent aggression, but no effect of violent content.

Study 6 asked participants to first submerge their arm in ice water for 25 seconds (a time period ostensibly determined by the last participant), then play a game of Tetris for a few minutes that had been modified to be either easy or difficult (but not because of the controls this time). Those assigned to play the more difficult version of Tetris reported more aggressive feelings, and assigned the next subject to submerge their arm for about 30 seconds in the ice water (relative to the 22-second average assignment in the easy group). The final study surveyed regular players about their experiences gaming over the last month and their aggressive feelings, again finding that ratings of content did not predict self-reported aggressive reactions to gaming, but frustrations with playing the game did.

“I’m going to find the developer of this game and kill him for it!”

In summation, then, violent content per se does not appear to make players more aggressive; instead, frustration and losing seem to play a much larger role. It is at this point that my experience as a gamer comes in handy, because such an insight should be readily apparent to anyone who has watched many other people play games. As an ever-expanding library of YouTube rage-quit videos documents, a gamer can become immediately enraged by losing at almost any game, regardless of the content (for those of you not in the know, rage-quitting refers to aggressively quitting out of a game following a loss, often accompanied by yelling, frustration, and broken controllers). I’ve seen people lose their minds over shooters, sports games, card games, and board games, storming off while shouting. Usually such outbursts are short-term affairs – you don’t see that person the next day and notice they’re visibly more aggressive towards others indiscriminately – but the important part is that they almost always occur in response to losses (and usually losses deemed to be unfair, in some sense).

As a final thought, in addition to the introductory analysis and empirical evidence presented here, there are other reasons one might not predict that violent content per se would be related to subsequent aggression even if one wants to hold onto the idea that mere exposure to content is enough to alter future behavior. In this case, most of the themes found within games that have violent content are not violence and aggression as usually envisioned (like The Onion‘s piece on Close Range: the video game about shooting people point blank in the face). Instead, those themes usually focus on the context in which that violence is used: defeating monsters or enemies that threaten the safety of you or others, killing corrupt and dangerous individuals in positions of power, or getting revenge for past wrongs. Those themes are centered more around heroism and self-defense than aggression for the sake of violence. Despite that, I haven’t heard of many research projects examining whether playing such violent games could lead to increased psychological desires to be helpful, or encourage people to take risks to save others from suffering costs.

References: Przybylski, A., Rigby, C., Deci, E., & Ryan, R. (2014). Competence-impeding electronic games and players’ aggressive feelings, thoughts, and behaviors. Journal of Personality & Social Psychology, 106, 441-457.

Money For Nothing, But The Chicks Aren’t Free

When people see young, attractive women in relationships with older and/or unattractive men, the usual perception that comes to mind is that the relationship revolves around money. This perception is usual because it tends to be accurate: women do, in fact, tend to prefer men who both have access to financial resources and who are willing to share them. What is rather notable is that the reverse isn’t quite as common: a young, attractive man shacking up with an older, rich woman just doesn’t call too many examples to mind. Women seem to have a much more pronounced preference for wealth in a mate than men do. While examples of such preferences playing themselves out in real life exist anecdotally, it’s always good to try and showcase their existence empirically.

Early attempts were made by Dr. West, but replications are required

This brings me to a new paper by Arnocky et al (2016) that examined how altruism affects mating success in humans (as this is still psychology research, “humans” translates roughly as “undergraduate psychology majors”, but such is the nature of convenience samples). The researchers first sought (a) to document that more altruistic people really were preferred as mating partners (spoilers: they are), and then (b) to try and explain why we might expect them to be. Let’s begin with what they found, as that much is fairly straightforward. In their first study, Arnocky et al (2016) recruited 192 women and 105 men from a Canadian university and asked them to complete a few self-report measures: an altruism scale (used to measure general dispositions towards providing aid to others when reciprocation is unlikely), a mating success scale (measuring perceptions of how desirable one tends to be towards the opposite sex), their numbers of lifetime sexual partners, as well as the number of those that were short-term, the number of times over the last month they had sex with their current partner (if they had one, which about 40% did), and a measure of their personality more generally.

These measures were then entered into a regression (controlling for personality). When it came to predicting perceived mating success, reported altruism was a significant predictor (β = 0.25), but neither sex nor the altruism-by-sex interaction was. This suggests that both men and women tend to become more attractive to the opposite sex if they behave more altruistically (or, conversely, that people who are more selfish are less desirable, which sounds quite plausible). However, what it means for one to be successful in the mating domain varies by sex: for men, having more sexual partners usually implies a greater level of success, whereas the same does not hold true for women as often (as gametes are easy for women to obtain, but investment is difficult). In accordance with this point, it was also found that altruism predicted the number of lifetime sexual partners overall (β = .16), but this effect was specific to men: more altruistic men had more sexual partners (and more casual ones), whereas more altruistic women did not. Finally, within the context of existing relationships, altruism also (sort of) predicted the number of times someone had sex with their partner in the last month (β = .27); while there was not a significant interaction with sex, a visual inspection of the provided graphs suggests that, if this effect existed, it was being predominantly carried by altruistic women having more sex within a relationship; not the men.
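
For anyone rusty on standardized betas, here’s a small sketch of what a β of .25 means, using synthetic data rather than anything from the study:

```python
import numpy as np

# Synthetic stand-in for the study's regression: after standardizing the
# predictor and outcome, the slope is the standardized beta.
rng = np.random.default_rng(3)
n = 300
altruism = rng.normal(0, 1, n)
mating_success = 0.25 * altruism + rng.normal(0, 1, n)  # built-in beta of .25

def standardize(x):
    return (x - x.mean()) / x.std(ddof=1)

z_x, z_y = standardize(altruism), standardize(mating_success)
beta = np.polyfit(z_x, z_y, 1)[0]  # slope of the standardized regression
print(f"standardized beta ≈ {beta:.2f}")
# A beta of .25 means a 1-SD bump in self-reported altruism predicts about
# a quarter of a standard deviation more perceived mating success.
```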

Now that’s all well and good, but the authors wanted to go a little further. In their second study, rather than just asking participants how altruistic they were, they offered participants the opportunity to be altruistic: after completing the survey, participants could indicate how much (if any) of their earnings they wanted to donate to a charity of their choice. That way, you get what might be a less-biased measure of one’s actual altruism (rather than one’s own perception of it). Another 335 women and 189 men were recruited for this second phase and, broadly, the results followed the same general pattern, but there were some notable differences. In terms of mating success, actual altruistic donations (categorized as either making a donation or not, rather than the amount donated) were not a good predictor (β = -.07). In terms of the number of lifetime dating and sexual partners, however, the donation-by-sex interaction was significant, indicating that more charitable men – but not women – had a greater number of relationships and sexual partners (perhaps suggesting that charitable men tend to have more, but shorter, relationships, which isn’t necessarily a good thing for the women involved). Donations also failed to predict the amount of sex participants had been having in their relationships over the last month.

Guess the blood drive just isn’t a huge turn on after all

With these results in mind, there are two main points I wanted to draw attention to. The first of these concerns the measures of altruism in general: effectively, charitable behaviors directed towards strangers. While such behavior might be a more “pure” form of altruistic tendencies as compared with, say, helping a friend move or giving money to your child, it does pose some complications for the present topic. Specifically, when looking for a desirable mate, people might not want someone who is just generally altruistic. After all, it doesn’t always do me much good if my committed partner is spending time and investing resources in other people. I would probably prefer that resources be preferentially directed at me and those I care about, rather than strangers, and I might especially dislike it if altruism directed towards strangers came at my expense (as the same resources can’t be invested in me and someone else most of the time). While it is possible that such investments in strangers could return to me later in the form of them reciprocating aid to my partner, it seems unlikely that the deficit would be entirely and consistently made up, let alone surpassed.

To make the point concrete, if someone was equally altruistic towards all people, there would be little point in forming any kind of special relationship with that kind person (friendship or otherwise), because you’d get the same benefits from them regardless of how much you invested in them (even if that amount was nothing).

This brings me to the second point I wanted to discuss: the matter of why people like the company of altruists. There are two explanations that come to mind. The first explanation is simple: people like access to resources, and altruists tend to provide them. This explanation should hardly require much in the way of testing given its truth is plainly obvious. The second explanation is more complex, and it’s one the authors favor: altruism honestly signals some positive, yet difficult-to-observe quality about the altruist. For instance, if I were to donate blood, or my time to clean up a park, this would tell you something about my underlying genetic qualities, as an individual in worse condition couldn’t shoulder the costs of altruism effectively. In this sense, altruism functions in a comparable manner to a peacock’s tail feathers; it’s a biologically-honest signal because it’s costly.

While it does have some plausibility, this signaling explanation runs into some complications. First, as the authors note, women donated more than men did (70% vs. 57%), despite donations predicting sexual behavior better for men. If women were donating to signal some positive qualities in the mating domain, it’s not at all clear it was working. Further, patterns of charitable donations in the US show a U-shaped distribution, whereby those with access to the most and the fewest financial resources tend to donate more than those in the middle. This seems like a pattern the signaling explanation should not predict if altruism is meaningfully and consistently tied to important, but difficult-to-observe, biological characteristics. Finally, while the argument could be made that altruism directed towards friends, sexual partners, and kin is not necessarily indicative of someone’s willingness to donate to strangers (i.e., how altruistic someone is dispositionally might not predict how nepotistic they are), well, that’s kind of a problem for the altruism-as-signaling model. If donations towards strangers are fairly unpredictive of altruism towards closer relations, then they don’t really tell you what you want to know. Specifically, if you want to know how good of a friend or dating partner someone would be for you, a better cue is how much altruism they direct towards their friends and romantic partners; not how much they direct to strangers.

“My boyfriend is so altruistic, buying drinks for other women like that”

Last, we can consider the matter of why people behave altruistically, with respect to the mating domain. (Very) broadly speaking, there are two primary challenges people need to overcome: attracting a mate and retaining them. Matters get tricky here, as altruism can be used for both of these tasks. As such, a man who is generally altruistic towards lots of people might be using altruism as a means of attracting the attention of prospective mates without necessarily intending to keep them around. Indeed, the previous point about how altruistic men report having more relationships and sexual partners could be interpreted in just such a light. There are other explanations, of course, such as the prospect that generally selfish people simply don’t have many relationships at all, but these need to be separated out. In either case, in terms of how much altruism we provide to others, I suspect that the amount provided to strangers and charitable organizations only makes up a small fraction; we give much more towards friends, family, and lovers regularly. If that’s the case, measuring someone’s willingness to donate in those fairly uncommon contexts might not capture their desirability as a partner as well as we would like.

References: Arnocky, S., Piche, T., Albert, G., Ouellette, D., & Barclay, P. (2016). Altruism predicts mating success in humans. British Journal of Psychology. DOI: 10.1111/bjop.12208

 

The Fight Against Self-Improvement

In the abstract, most everyone wants to be the best version of themselves they can be: a more attractive body, useful skills, a good education, career success; who doesn’t want those things? In practice, lots of people, apparently. While people might like the idea of improving various parts of their life, self-improvement takes time, energy, dedication, and restraint; it involves doing things that might not be pleasant in the short term with the hope that long-term rewards will follow. Those rewards are by no means guaranteed, though, either in terms of whether they happen at all or the degree to which they do. While people can usually improve various parts of their life, not everyone can achieve the level of success they might prefer no matter how much time they devote to their craft. All of those are common reasons people will sometimes avoid improving themselves (it’s difficult and carries opportunity costs), but they do not straightforwardly explain why people sometimes fight against others improving.

“How dare they try to make a better life for themselves!”

I was recently reading an article about the appeal of Trump and came across this passage concerning this fight against the self-improvement of others:

“Nearly everyone in my family who has achieved some financial success for themselves, from Mamaw to me, has been told that they’ve become “too big for their britches.”  I don’t think this value is all bad.  It forces us to stay grounded, reminds us that money and education are no substitute for common sense and humility. But, it does create a lot of pressure not to make a better life for yourself…”

At first blush, this seems like a rather strange idea: if people in your community – your friends and family – are struggling (or have yet to build a future for themselves), why would anyone object to the prospect of their achieving success and bettering their lot in life? Part of the answer is found a little further down:

“A lot of these [poor, struggling] people know nothing but judgment and condescension from those with financial and political power, and the thought of their children acquiring that same hostility is noxious.”

I wanted to explore this idea in a bit more depth to help explain why these feelings might rear their head when faced with the social or financial success of others, be they close or distant relations.

Understanding these feelings requires drawing on a concept my theory of morality leaned heavily on: association value. Association value refers to the abstract value that others in the social world have for each other; essentially, it asks the question, “how desirable of a friend would this person make for me (and vice versa)?” This value comes in two parts: first, there is the matter of how much value someone could add to your life. As an easy example, someone with a lot of money is more capable of adding value to your life than someone with less money; someone who is physically stronger tends to be able to provide benefits a weaker individual could not; the same goes for individuals who are more physically attractive or intelligent. It is for this reason that most people wish they could improve on some or all of these dimensions if doing so were possible and easy: you end up as a more desirable social asset to others.

The second part of that association value is a bit trickier, however, reflecting the crux of the problem: how willing someone is to add value to your life. Those who are unwilling to help me have a lower value than those willing to make the investment. Reliable friends are better than flaky ones, and charitable friends are better than stingy ones. As such, even if someone has a great potential value they could add to my life, they still might be unattractive as associates if they are not going to turn that potential into reality. An unachieved potential is effectively the same thing as having no potential value at all. Conversely, those who are very willing to add to my life but cannot actually do so in meaningful ways don’t make attractive options either. Simply put, eager but incompetent individuals wouldn’t make good hires for a job, but neither would competent yet absent ones.

“I could help you pay down your crippling debt. Won’t do it, though”

With this understanding of association value, there is only one piece left to add to the equation: the zero-sum nature of friendship. Friendship is a relative term; it means that someone values me more than they value others. If someone is a better friend to me, it means they are a worse friend to others; they would value my welfare over the welfare of others and, if a choice had to be made, would aid me rather than someone else. Having friends is also useful in the adaptive sense of the word: they help provide access to desirable mates, protection, provisioning, and can even help you exploit others if you’re on the aggressive side of things. Putting all these pieces together, we end up with the following idea: people generally want access to the best friends possible. What makes a good friend is a combination of their ability and willingness to invest in you over others. However, their willingness to do so depends in turn on your association value to them: how willing and able you are to add things to their lives. If you aren’t able to help them out – now or in the future – why would they want to invest resources into benefiting you when they could instead put those resources into others who could?

Now we can finally return to the matter of self-improvement. By increasing your association value through various forms of self-improvement (e.g., making yourself more physically attractive and stronger through exercise, improving your income by advancing in your career, learning new things, etc.) you make yourself a more appealing friend to others. Crucially, this includes both existing friends and higher-status individuals who might not have been willing to invest in you prior to your ability to add value to their life materializing. In other words, as your value as an associate rises, unless the value of your existing associates rises in turn, it is quite possible that you can now do better than them socially, so to speak. If you have more appealing social prospects, then, you might begin to neglect or break off existing contacts in favor of newer, more-profitable friendships or mates. It is likely that your existing contacts understand this – implicitly or otherwise – and might seek to discourage you from improving your life, or preemptively break off contact with you if you do, under the assumption that you will do likewise to them in the future. After all, if you’re moving on eventually, they would be better off building new connections sooner rather than later. They don’t want to invest in failing relationships any more than you do.

In turn, those who are thinking about self-improvement might actually decide against pursuing their goals not necessarily because they wouldn’t be able to achieve them, but because they’re afraid that their existing friends might abandon them, or even that they themselves might be the ones who do the abandoning. Ironically, improving yourself can sometimes make you look like a worse social prospect.

To put that in a simple example, we could consider the world of fitness. The classic trope of the weak high-schooler being bullied by the strong jock type has been ingrained in many stories in our culture. For those doing the bullying, their targets don’t offer them much socially (their association value to others is low, while the bully’s is high) and they are unable to effectively defend themselves, making exploitation appear an attractive option. In turn, those who are the targets of this bullying are, in some sense, wary of adopting some of the self-improvement behaviors that the jocks engage in, such as working out, because they either don’t feel they can effectively compete against the jocks in that realm (e.g., they wouldn’t be able to get as strong, so why bother getting stronger) or because they worry that improving their association value by working out will lead to them adopting a similar pattern of behavior to those they already dislike, resulting in their losing value to their current friends (usually those of similar, but relatively low, association value). The movie Mean Girls is an example of this dynamic struggle in a different domain.

So many years later, and “Fetch” still never happened…

This line of thought has, as far as I can tell, also been leveraged (again, consciously or otherwise) by one brand within the fitness community: Planet Fitness. Last I heard an advertisement for their company on the radio, their slogan appeared to be, “we’re not a gym; we’re Planet Fitness.” An odd statement to be sure, because they are a gym, so what are we to make of it? Presumably that they are in some important respects different from their competition. How are they different from other gyms? The “About” section on their website lays their differences out in true, ironic form:

“Make yourself comfy. Because we’re Judgement Free…you deserve a little cred just for being here. We believe no one should ever feel Gymtimidated by Lunky behavior and that everyone should feel at ease in our gyms, no matter what his or her workout goals are…We’re fiercely protective of our Planet and the rights of our members to feel like they belong. So we create an environment where you can relax, go at your own pace and just do your own thing without ever having to worry about being judged.”

This marketing is fairly transparent pandering to those who currently do not feel they can compete with those who are very fit, or who are worried about becoming a “lunk” themselves (they even have an alarm in the gym designed to be set off if someone is making too much noise while lifting, or wearing the wrong outfit). However, in doing so, they devalue those who are successful or passionate in their pursuits of self-improvement. While I have never seen a gym more obsessed with judging its would-be members than Planet Fitness, so long as that judgment is pointed at the right targets, it lets them appeal (presumably effectively) to portions of the population untapped by other gyms. Planet Fitness wants to be your friend; not the friend of those jerks who make you feel bad.

There is value in not letting success go to one’s head; no one wants a fair-weather friend who will leave the moment it’s expedient. Such an attitude undermines loyalty. The converse, however, is that using that as an excuse to avoid (or condemn) self-improvement will make you and others worse off in the long term. A better solution to this dilemma is to improve yourself so you can improve those who matter the most to you, hoping they reciprocate in turn (or improve together for even better success).

Skepticism Surrounding Sex

It’s a basic truth of the human condition that everybody lies; the only variable is about what.

One of my favorite shows from years ago was House; a show centered around a brilliant but troubled doctor who frequently discovers the causes of his patients’ ailments by discerning what they – or others – are lying about. This outlook on people appears to be correct, at least in spirit. Because it is sometimes beneficial for us that other people are made to believe things that are false, communication is often less than honest. This dishonesty entails things like outright lies, lies by omission, or stretching the truth in various directions and placing it in different lights. Of course, people don’t lie indiscriminately, because deceiving others is not automatically beneficial. Deception – much like honesty – is only adaptive to the extent that people do reproductively-relevant things with it. Convincing your spouse that you had an affair when you didn’t is dishonest for sure, but probably not a very useful thing to do; deceiving someone about what you had for breakfast is probably fairly neutral (minus the costs you might incur from coming to be known as a liar). As such, we wouldn’t expect selection to have shaped our psychology to lie about all topics with equal frequency. Instead, we should expect that people tend to preferentially lie about particular topics in predictable ways.

Lies like, “This college degree will open so many doors for you in life”

The corollary idea to that point concerns skepticism. Distrusting the honesty of communications can protect against harmful deceptions, but it also runs the risk of failing to act on accurate and beneficial information. There are costs and benefits to skepticism, just as there are to deception. Just as we shouldn’t expect people to be dishonest about all topics equally often, then, we shouldn’t expect people to be equally skeptical of all the information they receive either. This is a point I’ve talked about before with regard to our reasoning abilities, whereby information agreeable to our particular interests tends to be accepted less critically, while disagreeable information is scrutinized much more intensely.

This line of thought was recently applied to the mating domain in a paper by Walsh, Millar, & Westfall (2016). Humans face a number of challenges when it comes to attracting sexual partners, typically centered around obtaining the highest quality of partner(s) one can (metaphorically) afford, relative to what one offers to others. What determines the quality of partners, however, is frequently context-specific: what makes a good short-term partner might differ from what makes a good long-term partner and – critically, as far as the current research is concerned – the traits that make good male partners for women are not the same as those that make good female partners for men. Because women and men face some different adaptive challenges when it comes to mating, we should expect that they would also preferentially lie (or exaggerate) to the opposite sex about those traits that the other sex values the most. In turn, we should also expect that each sex is skeptical of different claims, as this skepticism should reflect the costs associated with making poor reproductive decisions on the basis of bad information.

In case that sounds too abstract, consider a simple example: women face a greater obligate cost when it comes to pregnancy than men do. As far as men are concerned, their role in reproduction could end at ejaculation (which it does, for many species). By contrast, women would be burdened with months of gestation (during which they cannot get pregnant again), as well as years of breastfeeding prior to modern advancements (during which they also usually can’t get pregnant). Each child could take years of a woman’s already limited reproductive lifespan, whereas the man has lost a few minutes. In order to ease those burdens, women often seek male partners who will stick around and invest in them and their children. Men who are willing to invest in children should thus prove to be more attractive long-term partners for women than those who are unwilling. However, a man’s willingness to stick around needs to be assessed by a woman in advance of knowing what his behavior will actually be. This might lead men to exaggerate or lie about their willingness to invest, so as to encourage women to mate with them. Women, in turn, should be preferentially skeptical of such claims, as being wrong about a man’s willingness to invest is costly indeed. The situation should be reversed for traits that men value in their partners more than women do.

Figure 1: What men most often value in a woman

Three such traits for both men and women were examined by Walsh et al (2016). In their study, eight scenarios depicting a hypothetical email exchange between a man and woman who had never met were displayed to approximately 230 (mostly female; 165) heterosexual undergraduate students. For the women, these emails depicted a man messaging a woman; for men, it was a woman messaging a man. The purpose of these emails was described as the person sending them looking to begin a long-term intimate relationship with the recipient. Each of these emails described various facets of the sender, which could be broadly classified as either relevant primarily to female mating interests, relevant to male interests, or neutral. In terms of female interests, the sender described their luxurious lifestyle (cuing wealth), their desire to settle down (commitment), or how much they enjoy interacting with children (child investment). In terms of male interests, the sender talked about having a toned body (cuing physical attractiveness), their openness sexually (availability/receptivity), or their youth (fertility and mate value). In the two neutral scenarios, the sender either described their interest in stargazing or board games.

Finally, the participants were asked to rate (on a 1-5 scale) how deceitful they thought the sender was, whether they believed the sender or not, and how skeptical they were of the claims in the message. These three scores were summed for each participant to create a composite score of believability for each of the messages (the lower the score, the less believable it was rated as being). Those scores were then averaged across the female-relevant items (wealth, commitment, and childcare), the male-relevant items (attractiveness, youth, and availability), and the control conditions. (Participants also answered questions about whether the recipient should respond and how much they personally liked the sender. No statistical analyses are reported on those measures, however, so I’m going to assume nothing of note turned up.)
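To make the scoring procedure concrete, here is a rough sketch in Python of how such composites could be computed. The data file, the column names, and the reverse-scoring step are all my assumptions; the paper only specifies that the three ratings were combined per message and then averaged within item categories.

```python
# A sketch of the composite believability scoring described above.
# "scenario_ratings.csv" and all column names are hypothetical.
import pandas as pd

ratings = pd.read_csv("scenario_ratings.csv")  # one row per participant

female_items = ["wealth", "commitment", "childcare"]
male_items = ["attractiveness", "availability", "youth"]
control_items = ["stargazing", "board_games"]

for item in female_items + male_items + control_items:
    # Assumed reverse-scoring: flip the 1-5 deceit and skepticism ratings
    # so that higher composite scores always mean "more believable".
    ratings[f"{item}_composite"] = ((6 - ratings[f"{item}_deceit"])
                                    + ratings[f"{item}_belief"]
                                    + (6 - ratings[f"{item}_skepticism"]))

# Average the per-message composites within each category (range: 3-15).
for label, items in [("female_relevant", female_items),
                     ("male_relevant", male_items),
                     ("control", control_items)]:
    ratings[label] = ratings[[f"{i}_composite" for i in items]].mean(axis=1)
```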

The results showed that, as expected, the control items were believed more readily (M = 11.20) than the male-relevant (M = 9.85) or female-relevant (M = 9.6) items. This makes sense, inasmuch as believing lies about stargazing or interest in board games isn’t particularly costly for either sex in most cases, so there’s little reason to lie about those topics (and thus little reason to doubt them); by contrast, messages about one’s desirability as a partner have real payoffs, and so are treated more cautiously. However, an important interaction with the sex of the participant was uncovered as well: female participants were more skeptical of the female-relevant items (M = about 9.2) than males were (M = 10.6); similarly, males were more skeptical in the male-relevant conditions (M = 9.5) than females were (M = 10). Further, the scores for the individual items all showed evidence of the same kinds of sex differences in skepticism. No sex difference emerged for the control condition, also as expected.

In sum, then – while these differences were relatively small in magnitude – men tended to be more skeptical of claims that would be costlier for men than for women if falsely believed, and women tended to be more skeptical of claims that would be costlier for women than for men. This is a similar pattern to the one found in the reasoning domain, where evidence that agrees with one’s position is accepted more readily than evidence that disagrees with it.

“How could it possibly be true if it disagrees with my opinion?”

The authors make a very interesting point towards the end of their paper about how their results could be viewed as inconsistent with the hypothesis that men have a bias to over-perceive women’s sexual interest. After all, if men are over-perceiving such interest in the first place, why would they be skeptical about claims of sexual receptivity? It is possible, of course, that men tend to over-perceive such availability in general and are also skeptical of claims about its degree (e.g., they could still be manipulated by signals intentionally sent by women and so are skeptical, but still over-perceive ambiguous or less-overt cues), but another explanation jumps out at me that is consistent with the theme of this research: perhaps when asked to self-report about their own sexual interest, women aren’t being entirely accurate (consciously or otherwise). This explanation would fit well with the fact that men and women tend to perceive a similar level of sexual interest in other women. Then again, perhaps I only see that evidence as consistent because I don’t think men, as a group, should be expected to have such a bias, and that’s biasing my skepticism in turn.

References: Walsh, M., Millar, M., & Westfall, S. (2016). The effects of gender and cost on suspicion in initial courtship communications. Evolutionary Psychological Science. DOI: 10.1007/s40806-016-0062-8

Why Women Are More Depressed Than Men

Women are more likely to be depressed than men; about twice as likely here in the US, as I have been told. It’s an interesting finding, to be sure, and making sense of it poses a fun little mystery (as making sense of many things tends to). We don’t just want to know that women are more depressed than men; we also want to know why women are more depressed. So what are the causes of this difference? The Mayo Clinic floats a few explanations, noting that this sex difference appears to emerge around puberty. As such, many of the explanations they put forth center around the problems that women (but not men) might face when undergoing that transitional period in their life. These include things like increased pressure to achieve in school, conflict with parents, gender confusion, PMS, and pregnancy-related factors. They also include ever-popular suggestions such as societal biases that harm women. Now I suspect these are quite consistent with the answers you would get if you queried your average Joe or Jane on the street as to why they think women are more depressed. People recognize that depression often appears to follow negative life events and stressors, and so they look for proximate conditions that they believe (accurately or not) disproportionately affect women.

Boys don’t have to figure out how to use tampons; therefore less depression

While that seems to be a reasonable strategy, it produces results that aren’t entirely satisfying. First, it seems unlikely that women face that much more stress and negative life events than men do (twice as much?) and, second, it doesn’t do much to help us understand individual variation. Lots of people face negative life events, but lots of them also don’t end up spiraling into depression. As I noted above, our understanding of the facts related to depression can be bolstered by answering the why questions. In this case, the focus many people have is on answering the proximate whys rather than the ultimate ones. Specifically, we want to know why people respond to these negative life events with depression in the first place; what adaptive function depression might have. Though depression reactions appear completely normal to most, perhaps owing to their regularity, we need to make that normality strange. If, for example, you imagine a new mouse mother facing the stresses of caring for her young in a hostile world, a postpartum depression on her part might seem counterproductive: faced with the challenges of surviving and caring for her offspring, what adaptive value would depressive symptoms have? How would low energy, a lack of interest in important everyday activities, and perhaps even suicidal ideation help make her situation better? If anything, they would seem to disincline her from taking care of these important tasks, leaving her and her dependent offspring worse off. This strangeness, of course, wouldn’t just exist in mice; it should be just as strange when we see it in humans.

The most compelling adaptive account of depression I’ve read (Hagen, 2003) suggests that the ultimate why of depression focuses on social bargaining. I’ve written about it before, but the gist of the idea is as follows: if I’m facing adversity that I am unlikely to be able to solve alone, one strategy for overcoming that problem is to recruit others in the world to help me. However, those other people aren’t always forthcoming with the investment I desire. If others aren’t responding to my needs adequately, it would behoove me to try and alter their behavior so as to encourage them to increase their investment in me. Depression, in this view, is adapted to do just that. The psychological mechanisms governing depression work to, essentially, place the depressed individual on a social strike. When workers are unable to effectively encourage an increased investment from their employers (perhaps in the form of pay or benefits), they will occasionally refuse to work at all until their conditions improve. While this is indeed costly for the workers, it is also costly for the employer, and it might be beneficial for the employer to cave to the demands rather than continue to face the costs of not having people work. Depression shows a number of parallels to this kind of behavior, where people withdraw from the social world – taking with them the benefits they provided to others – until other people increase their investment in the depressed individual to help see them through a tough period.

Going on strike (or, more generally, withdrawing from cooperative relationships), of course, is only one means of getting other people to increase their investment in you; another potential strategy is violence. If someone is enacting behaviors that show they don’t value me enough, I might respond with aggressive behaviors to get them to alter that valuation. Two classic examples of this could be shooting someone in self-defense or a loan-shark breaking a delinquent client’s legs. Indeed, this is precisely the type of function that Sell et al (2009) proposed that anger has: if others aren’t giving me my due, anger motivates me to take actions that could recalibrate their concern for my welfare. This leaves us with two strategies – depression and anger – that can both solve the same type of problem. The question arises, then, as to which strategy will be the most effective for a given individual and their particular circumstances. This raises a rather interesting possibility: it is possible that the sex difference in depression exists because the anger strategy is more effective for men, whereas the depression strategy is more effective for women (rather than, say, because women face more adversity than men). This would be consistent with the sex difference in depression arising around puberty as well, since this is when sex differences in strength also begin to emerge. In other words, both men and women have to solve similar social problems; they just go about it in different ways. 

“An answer that doesn’t depend on wide-spread sexism? How boring…”

Crucially, this explanation should also be able to account for within-sex differences as well: while men are more able to successfully enact physical aggression than women, not all men will be successful in that regard since not all men possess the necessary formidability. The male who is 5’5″ and 130 pounds soaking wet likely won’t win against his taller, heavier, and stronger counterparts in a fight. As such, men who are relatively weak might preferentially make use of the depression strategy, since picking fights they probably won’t win is a bad idea, while those who are on the stronger side might instead make use of anger more readily. Thankfully, a new paper by Hagen & Rosenstrom (2016) examines this very issue; at least part of it. The researchers sought to test whether upper-body strength would negatively predict depression scores, controlling for a number of other, related variables.

To do so, they accessed data from the National Health and Nutrition Examination Survey (NHANES), netting a little over 4,000 subjects ranging in age from 18 to 60. As a proxy for upper-body strength, the authors made use of the measures subjects had provided of their hand-grip strength. The participants had also filled out questions concerning their depression, height and weight, socioeconomic status, white blood cell count (to proxy health), and physical disabilities. The researchers predicted that: (1) depression should negatively correlate with grip strength, controlling for age and sex, (2) that relationship should be stronger for men than women, and (3) that the relationship would persist after controlling for physical health. About 9% of the sample qualified as depressed and, as expected, women were more likely to report depression than men by about 1.7 times. Sex, on its own, was a good predictor of depression (in their regression, β = 0.74).

When grip strength was added into the statistical model, however, the effect of sex dropped into the non-significant range (β = 0.03), while strength possessed good predictive value (β = -1.04). In support of the first hypothesis, then, increased upper-body strength did indeed negatively correlate with depression scores, removing the effect of sex almost entirely. In fact, once grip strength was controlled for, men were actually slightly more likely to report depression than women (though this difference didn’t appear to be significant). Prediction 2 was not supported, however, with there being no significant interaction between sex and grip strength on measures of depression. The strength effect persisted even when controlling for socioeconomic, age, anthropometric, and hormonal variables. However, physical disability did attenuate the relationship between strength and depression quite a bit, which is understandable in light of the fact that physically-disabled individuals likely have their formidability compromised, even if they have stronger upper bodies (an example being a man in a wheelchair having good grip strength, but still not being much use in a fight). It is worth mentioning that the relationship between strength and depression appeared to grow larger over time; the authors suggest this might have something to do with older individuals having more opportunities to test their strength against others, which sounds plausible enough.
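The statistical story here – a sex effect that evaporates once a correlated covariate enters the model – is easy to reproduce in miniature. Below is a toy simulation in Python; every number in it is invented for illustration and has nothing to do with the actual NHANES data.

```python
# Toy demonstration: sex predicts depression on its own, but the effect
# shrinks toward zero once grip strength (which differs by sex and drives
# the simulated risk) is added. All values are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
sex = rng.integers(0, 2, n)              # 0 = male, 1 = female
grip = rng.normal(40 - 12 * sex, 6, n)   # women weaker on average (kg)

# Depression risk depends on strength only, not on sex directly.
log_odds = -2.0 - 0.08 * (grip - grip.mean())
depressed = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({"sex": sex, "grip": grip, "depressed": depressed})
m1 = smf.logit("depressed ~ sex", data=df).fit(disp=0)
m2 = smf.logit("depressed ~ sex + grip", data=df).fit(disp=0)
print(m1.params["sex"])  # clearly positive: women report more depression
print(m2.params["sex"])  # near zero once strength is controlled for
```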

Also worth noting is that when depression scores were replaced with suicidal ideation, the predicted sex-by-strength interaction did emerge, such that men with greater strength reported being less suicidal, while women with greater strength reported being more suicidal (the latter portion of which is curious and not predicted). Given that men succeed at committing suicide more often than women, this relationship is probably worth further examination.  

“Not today, crippling existential dread”

Taken together with findings from Sell et al (2009) – where men, but not women, who possessed greater strength reported being quicker to anger and more successful in physical conflicts – the emerging picture is one in which women tend to (not consciously) “use” depression as a means of social bargaining because it tends to work better for them than anger, whereas the reverse holds true for men. To be clear, both anger and depression are triggered by adversity, but those events interact with an individual’s condition and their social environment in determining the precise response. As the authors note, the picture is likely to be a dynamic one; not one that’s as simple as “more strength = less depression” across the board. Of course, other factors that co-vary with physical strength and health – like attractiveness – could also be playing a role in the relationship with depression, but since such matters aren’t spoken to directly by the data, the extent and nature of those other factors is speculative.

What I find very persuasive about this adaptive hypothesis, however – in addition to the reported data – is that many existing theories of depression would not make the predictions tested by Hagen & Rosenstrom (2016) in the first place. For example, those who claim something like, “depressed people perceive the world more accurately” would be at a bit of a loss to explain why those who perceive the world more accurately also seem to have lower upper-body strength (they might also want to explain why depressed people don’t perceive the world more accurately, either). A plausible adaptive hypothesis, on the other hand, is useful for guiding our search for, and understanding of, the proximate causes of depression.

References: Hagen, E.H. (2003). The bargaining model of depression. In: Genetic and Cultural Evolution of Cooperation, P. Hammerstein (ed.). MIT Press, 95-123

Hagen, E. & Rosenstrom, T. (2016). Explaining the sex difference in depression with a unified bargaining model of anger and depression. Evolution, Medicine, & Public Health, 117-132.

Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, 106, 15073-78.

Smoking Hot

If the view counts on previous posts have been any indication, people really do enjoy reading about, understanding, and – perhaps more importantly – overcoming the obstacles found on the dating terrain; understandably so, given its greater personal relevance to their lives. In the interests of adding some value to the lives of others, then, today I wanted to discuss some research examining the connection between recreational drug use and sexual behavior in order to see if any practical behavioral advice can be derived from it. The first order of business will be to try and understand the relationship between recreational drugs and mating from an evolutionary perspective; the second will be to take a more direct look at whether drug use has positive and negative effects when it comes to attracting a partner, and in what contexts those effects might exist. In short, will things like drinking and smoking make you smoking hot to others?

So far selling out has been unsuccessful, so let’s try talking sex

We can begin by considering why people care so much about recreational drug use in general: from historical prohibitions on alcohol to modern laws prohibiting the possession, use, and sale of drugs, many people express a deep concern over who gets to put what into their body at what times and for what reasons. The ostensibly obvious reason for this concern that most people will raise immediately is that such laws are designed to save people from themselves: drugs can cause a great degree of harm to users, and people are, essentially, too stupid to figure out what’s really good for them. While perceptions of harm to drug users themselves no doubt play a role in these intuitions, they are unlikely to actually be the whole story, for a number of reasons, chief among which is that they would have a hard time explaining the connection between sexual strategies and drug use (and that putting people in jail probably isn’t all that good for them either, but that’s another matter). Sexual strategies, in this case, refer roughly to an individual’s degree of promiscuity: some people preferentially enjoy engaging in one or more short-term sexual relationships (where investment is often funneled to mating efforts), while others are more inclined towards single, long-term ones (where investment is funneled to parental efforts). While people do engage in varying degrees of both at times, the distinction captures the general idea well enough. Now, if one is the type who prefers long-term relationships, it might benefit you to condemn behaviors that encourage promiscuity; it doesn’t help your relationship stability to have lots of people around who might try to lure your mate away or reduce a man’s confidence in the paternity of his children. To the extent that recreational drug use does that (e.g., those who go out drinking in the hopes of hooking up with others owing to their reduced inhibitions), it will be condemned by the more long-term maters in turn. Conversely, those who favor promiscuity should be more permissive towards drug use, as it makes enacting their preferred strategy easier.

This is precisely the pattern of results that Quintelier et al (2013) report: in a cross-cultural sample of Belgian (N = 476), Dutch (N = 298), and Japanese (N = 296) college students who did not have children, even after controlling for age, sex, personality variables, political ideology, and religiosity, attitudes towards drug use were still reliably predicted by participants’ sexual attitudes: the more sexually permissive one was, the more one tended to approve of drug use. In fact, sexual attitudes were the best predictors of people’s feelings about recreational drugs both before and after the controls were added (findings which replicated a previous US sample). By contrast, while the non-sexual variables were sometimes significant predictors of drug views after controlling for sexual attitudes, they were not as reliable and their effects were not as large. This pattern of results, then, should yield some useful predictions about how drug use affects your attractiveness to other people: those who are looking for short-term sexual encounters might find drug use more appealing (or at least less off-putting), relative to those looking for long-term relationships.
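For the statistically inclined, the kind of incremental test described above – does a predictor still matter after the controls are already in the model? – looks something like the following sketch. The file and variable names are hypothetical stand-ins, not those of Quintelier et al (2013).

```python
# Sketch of an incremental regression: do sexual attitudes predict drug
# attitudes over and above demographic and personality controls?
# "survey.csv" and all column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")
controls = "age + C(sex) + openness + conscientiousness + ideology + religiosity"

base = smf.ols(f"drug_attitudes ~ {controls}", data=df).fit()
full = smf.ols(f"drug_attitudes ~ {controls} + sexual_attitudes", data=df).fit()

print(base.rsquared, full.rsquared)     # gain in variance explained
print(full.params["sexual_attitudes"])  # the reported pattern: reliably positive
```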

“I pronounce you man and wife. Now it’s time to all get high”

Thankfully, I happen to have a paper on hand that speaks to the matter somewhat more directly. Vincke (2016) sought to examine how attractive women rated brief behavioral descriptions of men for either short- or long-term relationships. Of interest, these descriptions included the fact that the man in question either (a) never, (b) occasionally, or (c) frequently smoked cigarettes or drank alcohol. A sample of 240 Dutch women was recruited and asked to rate these profiles with respect to how attractive the men in question would be for either a casual or a committed relationship, and whether they thought the men themselves were more likely to be interested in short- or long-term relationships.

Taking these in reverse order, the women rated the men who never smoked as somewhat less sexually permissive (M = 4.31, scale from 1 to 7) than those who either occasionally or frequently did (Ms = 4.83 and 4.98, respectively; these two values did not significantly differ). By contrast, those who never drank or occasionally did were rated as comparably less permissive (Ms = 4.04) than the men who drank frequently (M = 5.17). Drug use, then, did affect women’s perceptions of men’s sexual interests (and those perceptions happened to match reality, as a second study with men confirmed). If you’re interested in managing what other people think your relationship intentions are, then, managing your drug use accordingly can make something of a difference. Whether that ended up making the men more attractive is a different matter, however.

As it turns out, smoking and drinking look distinct in that regard: in general, smoking tended to make men look less attractive, regardless of whether the mating context was short- or long-term, and frequent smoking was worse than occasional smoking. However, the decline in attractiveness from smoking was not as large in short-term contexts. (Oddly, Vincke (2016) frames smoking as an attractiveness benefit in short-term contexts within her discussion when it’s really just less of a cost; the slight bump seen in the data is neither statistically nor practically significant.) This pattern can be seen in the left half of the author’s graph. By contrast – on the right side – occasional drinkers were generally rated as more attractive than men who never or frequently drank, in both short- and long-term contexts. However, in the context of short-term mating, frequent drinking was rated as more attractive than never drinking, whereas this pattern reversed itself for long-term relationships. As such, if you’re looking to attract someone for a serious relationship, you probably won’t be impressing them much with your ability to do keg stands, but if you’re looking for someone to hook up with that night, it might be better to show that off than to sip water all evening.
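If you had the ratings in hand, the frequency-by-context pattern described above is the sort of thing a simple table of cell means would reveal. Here is a sketch, with a hypothetical long-format data file standing in for the real dataset:

```python
# Sketch: mean attractiveness rating per substance x frequency x context cell.
# "attractiveness_ratings.csv" and its columns are hypothetical.
import pandas as pd

df = pd.read_csv("attractiveness_ratings.csv")
# Assumed columns: substance ("smoking"/"drinking"), frequency
# ("never"/"occasional"/"frequent"), context ("short"/"long"), rating (1-7).

cell_means = (df.groupby(["substance", "frequency", "context"])["rating"]
                .mean()
                .unstack("context"))
print(cell_means)  # e.g., occasional drinkers rated highest in both contexts
```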

Cigarettes and alcohol look different from one another in the attractiveness domain, even though both might be considered recreational drug use. It is probable that what differentiates them here is their effects on encouraging promiscuity, as previously discussed. While people are often motivated to go out drinking in order to get intoxicated, lose their inhibitions, and have sex, the same cannot usually be said about smoking cigarettes. Singles don’t usually congregate at smoking bars to meet people and start relationships, short-term or otherwise (setting aside for the moment that smoking bars aren’t usually a thing, unless you count the rare hookah lounge). Smoking might thus make men appear more interested in casual encounters because it cues a more general interest in short-term rewards, rather than anything specifically sexual; in this case, if one is willing to risk adverse health effects in the future for the pleasure cigarettes provide today, one is unlikely to be risk-averse in other areas of life.

If you want to examine sex specifically, you might have picked the wrong smoke

There are some limitations here, namely that this study did not separate women by what they were personally seeking in a relationship, or by their own interests and behaviors when it comes to recreational drug use. Perhaps these results would look different if you were to account for women’s smoking and drinking habits. Even if frequent drinking is a bad thing for long-term attractiveness in general, a mismatch with the particular person you’re looking to date might be worse. It is also possible that a different pattern might emerge if men were assessing women’s attractiveness, but what those differences would be is speculative. It is unfortunate that the intuitions of the other sex didn’t appear to be assessed. I think this is a function of Vincke (2016) looking for confirmatory evidence for her hypothesis that recreational drug use is attractive to women in short-term contexts because it entails risk, and women value risk-taking more in short-term male partners than long-term ones. (There is a point to make about that theory as well: while some risky activities might indeed be more attractive to women in short-term contexts, I suspect those activities are not preferred because they’re risky per se, but rather because the risks send some important cue about the mate quality of the risk-taker. Also, I suspect the risks need to have some kind of payoff; I don’t think women prefer men who take risks and fail. Anyone can smoke, and smoking itself doesn’t seem to send any honest signal of quality on the part of the smoker.)

In sum, the usefulness of these results for making any decisions in the dating world is probably at its peak when you don’t really know much about the person you’re about to meet. If you’re a man and you’re meeting a woman who you know almost nothing about, this information might come in handy; on the other hand, if you have information about that woman’s preferences as an individual, it’s probably better to use that instead of the overall trends. 

References: Quintelier, K., Ishii, K., Weeden, J., Kurzban, R., & Braeckman, J. (2013). Individual differences in reproductive strategy are related to views about recreational drug use in Belgium, the Netherlands, and Japan. Human Nature, 24, 196-217.

Vincke, E. (2016). The young male cigarette and alcohol syndrome: Smoking and drinking as a short-term mating strategy. Evolutionary Psychology, 1-13.