Semen Quality And The Menstrual Cycle

One lesson I always try to drive home in any psychology course I teach is that biology (and, by extension, psychology) is itself costly. The usual estimate on offer is that our brains consume about 20% of our daily caloric expenditure, despite making up a small portion of our bodily mass. That’s only the cost of running the brain, mind you; growing and developing it adds further metabolic costs into the mix. When you consider the extent of those costs over a lifetime, it becomes clear that – ideally – our psychology should only be expected to exist in an active state to the extent it offers adaptive benefits that tend to outweigh those costs. Importantly, we should also expect that cost/benefit analysis to be dynamic over time. If a component of our biology/psychology is useful during one point in our lives but not at another, we might predict that it would switch on or off accordingly. This line of thought could help explain why humans are prolific language learners early in life but struggle to learn a second language in their teens and beyond; a language-learning mechanism active during development would be useful up to a certain age for learning a native tongue, but later becomes inactive when its services are no longer liable to be required, so to speak (which they often wouldn’t be in an ancestral environment in which people didn’t travel far enough to encounter speakers of other languages).

“Good luck. Now get to walking!”

The two key points to take away from this idea, then, are (a) that biological systems tend to be costly and, because of that, (b) the amount of physiological investment in any one system should be doled out only to the extent it is likely to deliver adaptive benefits. With those two points as our theoretical framework, we can explain a lot about behavior in many different contexts. Consider mating as a for instance. Mating effort intended to attract and/or retain a partner is costly to engage in (in terms of time, resource investment, risk, and opportunity costs), so people should only be expected to put effort into the endeavor to the extent they view it as likely to produce benefits. As such, if you happen to be a hard “5” on the mating market, it’s not worth your time pursuing a mate who’s a “9” because you’re probably wasting your effort; similarly, you don’t want to pursue a “3” if you can avoid it, because there are better options you might be able to achieve if you invest your efforts elsewhere.

Speaking of mating effort, this brings us to the research I wanted to discuss today. Sticking to mammals just for the sake of discussion, males of most species endure lower obligate parenting costs than females. What this means is that if a copulation between a male and female results in conception, the female bears the brunt of the biological costs of reproduction. Many males provide nothing more than the sperm required for conception, while the females must provide the egg, gestate the fetus, birth it, and nurse/care for it for some time. Because the required female investment is substantially larger, females tend to be more selective about which males they’re willing to mate with. That said, even though the male’s typical investment is far lower than the female’s, it’s still a metabolically-costly investment: the males need to generate the sperm and seminal fluid required for conception. Testicles need to be grown, resources need to be invested into sperm/semen production, and that fluid needs to be rationed out on a per-ejaculation basis (a drop may be too little, while a cup may be too much). Put simply, males cannot afford to just produce gallons of semen for fun; it should only be produced to the extent that the benefits outweigh the costs.

For this reason, you tend to see that male testicle size varies between species, contingent on the degree of sperm competition typically encountered. For those not familiar, sperm competition refers to the probability that a female will have sperm from more than one male in her reproductive tract at a time when she might conceive. In a concrete sense, this translates into a fertile female mating with two or more males during her fertile window. This creates a context that favors the evolution of greater male investment into sperm production mechanisms, as the more of your sperm are in the fertilization race, the greater your probability of beating the competition and reproducing. When sperm competition is rare (or absent), however, males need not invest as many resources into sperm-production mechanisms, and their testes are, accordingly, smaller.

Find the sperm competition

This logic can be extended to matters other than sperm competition. Specifically, it can be applied to cases where a male is (metaphorically) deciding how much to invest into any given ejaculate, even if he’s the female’s only sexual partner. After all, if the female you’re mating with is unlikely to get pregnant at the time, whatever resources are being invested into an ejaculate are correspondingly more likely to represent wasted effort; a case where the male would be better off investing those resources in things other than his loins. What this means is that – in addition to between-species differences in average investment in sperm/semen production – there might also exist within-individual differences in the amount of resources devoted to a given ejaculate, contingent on the context. This idea falls under the lovely-sounding name of the theory of ejaculate economics. Put into a sentence, it is metabolically costly to “buy” ejaculates, so males shouldn’t be expected to invest in them irrespective of their adaptive value.

A prediction derived from this idea, then, is that males might invest more in semen quality when the opportunity to mate with a fertile female is presented, relative to when that same female is not as likely to conceive. This very prediction happens to have been recently examined by Jeannerat et al. (2017). Their sample for this research consisted of 16 adult male horses and two adult females, each of which had been living in a single-sex barn. Over the course of seven weeks, the females were brought into a new building (one at a time) and the males were brought in to ostensibly mate with them (also one at a time). The males would be exposed to the female’s feces on the ground for 15 seconds (to potentially help them detect pheromones, we are told), after which the males and females were held about 2 meters from each other for 30 seconds. Finally, the males were led to a dummy they could mount (which had also been scented with the feces). The semen sample from that mount was then collected from the dummy and the dummy refreshed for the next male.

This procedure was repeated over the following weeks, such that each stallion eventually provided several semen samples after exposure to each mare. The crucial manipulation, however, involved the mares: each male provided a semen sample for each mare once when she was ovulating (estrous) and two to three times when she was not (dioestrous). These samples were then compared against each other, yielding a within-subjects analysis of semen quality.

The results suggested that the stallions could – to some degree – accurately detect the females’ ovulatory status: when exposed to estrous mares, the stallions were somewhat quicker to achieve erections, mount the dummy, and ejaculate, demonstrating a consistent pattern of heightened arousal. When the semen samples themselves were examined, another interesting set of patterns emerged: relative to when exposed to dioestrous mares, stallions exposed to estrous mares left behind larger volumes of semen (46.8 mL vs 43.6 mL) and a greater percentage of motile (active, moving) sperm (about 66% vs 59%). Moreover, after 48 hours, the sperm samples obtained following estrous exposure showed less of a decline in viability (66% to 65%) than those obtained following dioestrous exposure (64% to 61%). The estrous samples also showed reduced membrane degradation relative to the dioestrous samples. By contrast, sperm count and velocity did not significantly differ between conditions.
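For readers curious about the logic of that analysis, here is a minimal Python sketch of a within-subjects (paired) comparison. The numbers are simulated to roughly echo the motility figures above; this is not the paper’s data or analysis code, just an illustration of why comparing each stallion against himself controls for individual differences:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_stallions = 16  # sample size from the study

# Hypothetical per-stallion motility percentages (simulated, not the real data)
dioestrous = rng.normal(59, 6, n_stallions)            # non-fertile context
estrous = dioestrous + rng.normal(7, 3, n_stallions)   # same animal, fertile context

# Within-subjects: each stallion serves as his own control, so stable
# individual differences in baseline semen quality cancel out of the difference
t, p = stats.ttest_rel(estrous, dioestrous)
print(f"mean paired difference = {np.mean(estrous - dioestrous):.1f} points, "
      f"t({n_stallions - 1}) = {t:.2f}, p = {p:.4f}")
```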

“So what if it was with a plastic collection pouch? I still had sex”

While these differences appear slight in the absolute sense, they are nevertheless fascinating, as they suggest males were capable of (rather quickly) adjusting the quality of the ejaculate they provided, depending on the fertility status of their mate. Again, this was a within-subjects design, meaning the males were being compared against themselves, which helps control for individual differences. The same male seemed to invest somewhat less in an ejaculate when the corresponding probability of successful fertilization was low.

Though there are many other questions to think about (such as whether males might also make long-term adjustments to semen characteristics depending on context, or what the presence of other males might do, to name a few), one that no doubt pops into the minds of people reading this is whether other species – namely, humans – do something similar. While it is certainly possible, from the present results we clearly cannot say; we’re not horses. An important point to note is that this ability to adjust semen properties depends (in part) on the male’s ability to accurately detect female fertility status. To the extent human males have access to reliable cues regarding fertility status (beyond obvious ones, like pregnancy or menstruation), it seems at least plausible that this might hold true for us as well. Certainly an interesting matter worth examining further.   

References: Jeannerat, E., Janett, F., Sieme, H., Wedekind, C., & Burger, D. (2017). Quality of seminal fluids varies with type of stimulus at ejaculation. Scientific Reports, 7, 44339. DOI: 10.1038/srep44339


Academic Perversion

As an instructor, I have made it my business to enact a unique kind of assessment policy for my students. Specifically, all tests are short-essay style, and revisions are allowed after a grade has been received. This ensures that students always have some motivation to figure out what they got wrong and improve on it. In other words, I design my assessment to incentivize learning. From the standpoint of the value of education, this seems like a reasonable policy to adopt (at least to me, though I haven’t heard any of my colleagues argue with the method). It’s also, for lack of a better word, a stupid thing for me to do from a professional perspective. What I mean here is that – on the job market – my ability to get students to learn successfully is not exactly incentivized, or at least that’s the impression that others with more insight have passed on to me. Not only are people on hiring committees not particularly interested in how much time I’m willing to devote to my students’ learning (it’s not the first thing they look at, or even in the top 3, I think), but the time I do invest in this method of assessment is time I’m not spending doing other things they value, like seeking out grants or trying to publish as many papers as I can in the most prestigious outlets available.

“If you’re so smart, how come you aren’t rich?”

And my method of assessment does involve quite a bit of time. When each test takes about 5-10 minutes to grade and make comments on and you’re staring down a class of about 100 students, some quick math tells you that each round of grading will take up about 8 to 16 hours. By contrast, I could instead offer my students a multiple choice test which could be graded almost automatically, cutting my time investment down to mere minutes. Over the course of a semester, then, I could devote 24 to 48 hours to helping students learn (across three tests) or I could instead provide grades for them in about 15 minutes using other methods. As far as anyone on a hiring committee will be able to tell, those two options are effectively equivalent. Sure, one helps students learn better, but being good at getting students to learn isn’t exactly incentivized on a professional level. Those 24 to 48 hours could have instead been spent seeking out grant funding or writing papers and – importantly – that’s per 100 students; if you happen to be teaching three or more classes a semester, that number goes up.
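For the skeptical, the back-of-the-envelope arithmetic is easy to check. A trivial Python sketch using only the figures from the paragraph above:

```python
# Grading-time arithmetic: 5-10 minutes per short-essay test, ~100 students
students = 100
tests_per_semester = 3
minutes_per_test = (5, 10)  # low and high grading estimates per test

# Integer division rounds down, matching the rough figures quoted in the text
hours_per_round = [m * students // 60 for m in minutes_per_test]        # [8, 16]
hours_per_semester = [h * tests_per_semester for h in hours_per_round]  # [24, 48]

print(f"per grading round: {hours_per_round[0]}-{hours_per_round[1]} hours")
print(f"per semester: {hours_per_semester[0]}-{hours_per_semester[1]} hours")
```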

These incentives don’t just extend to tests and grading, mind you. If hiring committees aren’t all that concerned with my students’ learning outcomes, that has implications for how much time I should spend designing my lecture material as well. Let’s say I was faced with the task of having to teach my students about information I was not terribly familiar with, be that the topic of the class as a whole or a particular novel piece of information within that otherwise-familiar topic. I could take the time-consuming route and familiarize myself with the information first: tracking down relevant primary sources, reading them in depth, assessing their strengths and weaknesses, and seeking out follow-up research on the matter. I could also take the quick route and simply read the abstract/discussion section of the paper, or just report on the summary of the research provided by textbook writers or publishers’ materials.

If your goal is to prep about 12 weeks’ worth of lecture material, it’s quite clear which method saves the most time. If having well-researched courses full of information you’re an expert on isn’t properly incentivized, then why would we expect professors to take the former path? Pride, perhaps – many professors want to be good at their job and helpful to their students – but it seems other incentives push against devoting time to quality education if one is looking to make themselves an attractive hire*. I’ve heard teaching referred to as a distraction by more than one instructor, hinting strongly at where they perceive the incentives to exist.

The implications of these concerns about incentives extend beyond any personal frustrations I might have, and they’re beginning to get a larger share of the spotlight. One of the more recent events highlighting this issue was dubbed the replication crisis, where many published findings did not show up again when independent research teams sought them out. This wasn’t some negligible minority, either; in psychology it was well over 50% of them. There’s little doubt that a healthy part of this state of affairs owes its existence to researchers purposefully using questionable methods to find publishable results, but why would they do so in the first place? Why are they so motivated to find these results? Again, pride factors into the equation but, as is usually the case, another part of that answer revolves around the incentive structure of academia: if academics are judged, hired, promoted, and funded on their ability to publish results, then they are incentivized to publish as many of those results as they can, even if the results themselves aren’t particularly trustworthy (they’re also disincentivized from trying to publish negative results, in many instances, which causes other problems).

Incentives so perverse I’m sure they’re someone’s fetish

A new paper has been making the rounds discussing these incentives in academia (Edwards & Roy, 2017), and it begins with a simple premise: academic researchers are humans. Like other humans, we tend to respond to particular incentives. While the incentive structures within academia might have been created with good intentions in mind, there is always a looming threat from the law of unintended consequences. In this case, those unintended consequences are referred to as Goodhart’s Law, which can be expressed as follows: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes,” or, “when a measure becomes a target, it ceases to be a good measure.” In essence, this idea means that people will follow the letter of the law, rather than the spirit.

Putting that into an academic example, a university might want to hire intelligent and insightful professors. However, intelligence and insight are difficult to assess, so, rather than assess those traits, the university assesses proxy measures of them: something that tends to be associated with intelligence and insight, but is not itself either of those things. In this instance, it might be noticed that intelligent, insightful professors tend to publish more papers than their peers. Because the number of papers someone publishes is much easier to measure, the university simply measures that variable instead in determining whom to hire and promote. While publication records are initially good predictors of performance, once they become the target of assessment, that correlation begins to decline. As publishing papers per se becomes the target behavior people are assessed on, they begin to maximize that variable rather than the thing it was intended to measure in the first place. Instead of publishing fewer, higher-quality papers full of insight, they publish many papers that do a worse job of helping us understand the world.

In much the same vein, student grades on a standardized test might be a good measure of a teacher’s effectiveness; more effective teachers tend to produce students that learn more and subsequently do better on the test. However, if the poor teachers are then penalized and told to improve their performance or find a new job, the teachers might try to game the system. Now, instead of teaching their students about a subject in a holistic fashion that results in real learning, they just start teaching to the test. Rather than being taught, say, chemistry, students begin to get taught how to take a chemistry test, and the two are decidedly not the same thing. So long as teachers are only assessed on the grades of their students that take those tests, this is the incentive structure that ends up getting created.

Pictured: Not actual chemistry

Beyond just impacting the number of papers that academics might publish, a number of other potential unintended consequences of incentive structures are discussed. One of these involves measures of the quality of published work. We might expect that theoretically and empirically meaningful papers will receive more citations than weaker work. However, because the meaningfulness of a paper can’t be assessed directly, we look at proxy measures, like citation count (how often a paper is cited by other papers or authors). The consequence? People cite their own work more often, and peer reviewers request that their work be cited by people seeking to publish in the field. The number of pointless citations is inflated. There are also incentives for publishing in “good” or prestigious journals: those that are thought to preferentially publish meaningful work. Again, we can’t just assess how “good” a journal is, so we use other metrics, like how often papers from that journal are cited. The net result here is much the same, with journals preferring to publish papers that cite papers they have previously published. Going a step further, when universities are ranked on certain metrics, they are incentivized to game those metrics or simply misreport them. Apparently a number of colleges have been caught just lying on that front to get their rankings up, while others can improve their rankings without really improving their institution.

There are many such examples we might run through (and I recommend you check out the paper itself for just that reason), but the larger point I wanted to discuss was what all this means on a broader scale. To the extent that those who are more willing to cheat the system are rewarded for their behavior, those who are less willing to cheat will be crowded out, and there we have a real problem on our hands. For perspective, Fanelli (2009) reports that 2% of scientists admit to fabricating data and 10% report engaging in less overt, but still questionable, practices, on average; he also reports that when asked whether they know of cases of their peers doing such things, those numbers rise to around 14% and 30%, respectively. While those numbers aren’t straightforward to interpret (it’s possible that some people cheat a lot, that several people know of the same cases, or that one might be willing to cheat if the opportunity presented itself even if it hasn’t yet, for instance), they should be taken very seriously as a cause for concern.

(It’s also worth noting that Edwards & Roy misreport the Fanelli findings by citing his upper bounds as if they were the averages, making the problem of academic misconduct seem as bad as possible. This is likely just a mistake, but it highlights the possibility that mistakes follow the incentive structure as well, not just cheating. Just as researchers have incentives to overstate their own findings, they also have incentives to overstate the findings of others to help make their points convincingly.)

Which is ironic for a paper complaining about incentives to overstate results

When it’s not just a handful of bad apples within academia contributing to a problem of, say, cheating with their data, but rather an appreciable minority of them, this has the potential for at least two major consequences. First, it can encourage more non-cheaters to become cheaters. If I were to observe my colleagues cheating the system and getting rewarded for it, I might be encouraged to cheat myself just to keep up when faced with (very) limited opportunities for jobs or funding. Parallels can be drawn to steroid use in sports, where those who do not initially want to use steroids might be encouraged to if enough of their competitors did.

The second consequence is that, as more people take part in that kind of culture, public faith in universities – and perhaps scientific research more generally – erodes. With eroding public faith comes reduced funding and increased skepticism towards research findings; both responses are justified (why would you fund researchers you can’t trust?) and worrying, as there are important problems that research can help solve, but only if people are willing to listen.    

*To be fair, it’s not that my ability as a teacher is entirely irrelevant to hiring committees; it’s that not only is this ability secondary to other concerns (i.e., my teaching ability might be looked at only after they narrow the search down by grant funding and publications), but my teaching ability itself isn’t actually assessed. What is assessed are my student evaluations and that is decidedly not the same thing.

References: Edwards, M. & Roy, S. (2017). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34, 51-61.

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4, e5738.

Courting Controversy

“He says true but unpopular things. If you can’t talk about problems, you can’t fix them.”

The above quote comes to us from an interview with Trump supporters. Regardless of what one thinks about Trump and the truth of what he says, that idea holds a powerful truth itself: the world we live in can be a complicated one, and if we want to figure out how to best solve the problems we face, we need to be able to talk about them openly; even if the topics are unpleasant or the ideas incorrect. That said, there are some topics that people tend to purposefully avoid talking about. Not because the topics themselves are in some way unimportant or uninteresting, but rather because the mere mention of them is not unlike the prodding of a landmine. They are taboo thoughts: things that are made difficult to even think without risking moral condemnation and social ostracism. As I’m no fan of taboos, I’m going to cross one of them today myself, but in order to talk about those topics with some degree of safety, one needs to begin by talking about other topics which are safe. I want to first talk about something that is not dangerous, and slowly ramp up the danger. As a fair warning, this does require that this post be a bit longer than usual, but I think it’s a necessary precaution. 

“You have my attention…and it’s gone”

Let’s start by talking about driving. Driving is a potentially dangerous task, as drivers are controlling heavy machinery traveling at speeds that regularly break 65 mph. The scope of that danger can be highlighted by estimates that put the odds of pedestrian death – were they to be struck by a moving vehicle – at around 85% at only 40 mph. Because driving can have adverse consequences for both the driver and those around them, we impose restrictions on who is allowed to drive what, where, when, and how. The goal we are trying to accomplish with these restrictions is to minimize harm while balancing benefits. After all, driving isn’t only risky; it’s also useful and something people want to do. So how are we going to – ideally – determine who is allowed to drive and who is not? The most common solution, I would think, is to determine what risks we are trying to minimize and then ensure that people are able to surpass some minimum threshold of demonstrated ability. Simply put, we want to know people are good drivers.

Let’s make that concrete. In order to safely operate a vehicle, you need to: (a) be able to see out of the windows, (b) know how to operate the car mechanically, (c) have the physical strength and size to operate the car, (d) understand the “rules of the road” and all associated traffic signals, (e) have adequate visual acuity to see the world you’ll be driving through, (f) possess adequate reaction time to respond to the ever-changing road environment, and (g) possess the psychological restraint not to take excessive risks, such as traveling at unreasonably high speeds or cutting people off. This list is non-exhaustive, but it’s a reasonable place to start.

If you want to drive, then, you need to demonstrate that you can see out of the car while still being able to operate it. This would mean that those who are too small to accomplish both tasks at once – like young children or very short adults – shouldn’t be allowed to drive. Those who are physically large enough to see out of the windows but possess exceptionally poor eyesight should similarly be barred from driving, as we cannot trust they will respond appropriately. If they can see but not react in time, we don’t want them on the road either. If they can operate the car, can see, and know the rules but refuse to obey them and drive recklessly, we either don’t grant them a license or revoke it if they already have one.

In the service of assessing these skills we subject people to a number of tests: there are written tests that must be completed to determine knowledge of the rules of the road; there are visual tests; there are tests of driving ability. Once these tests are passed, drivers are still reviewed from time to time, and a buildup of infractions can lead to a revocation of driving privileges.

However, we do not test everyone for these abilities. All of these things that we want a driver’s license to reflect – like every human trait – need to develop over time. In other words, they tend to fall within some particular distribution – often a normal one – with respect to age. As such, younger drivers are thought to pose more risk than adult drivers along a number of these desired traits. For instance, while not every person who is 10 years old is too small to operate a vehicle, the large majority of them are. Similarly, your average 15-year-old might not appropriately understand the risks of reckless driving and avoid it as we would hope. Moreover, the benefits that these young individuals can obtain from driving are lower as well; it’s not common for 12-year-olds to need a car to commute to work.

Accordingly, we also set minimum age laws regarding when people can begin to be considered for driving privileges. These laws are not set because it is impossible that anyone below the specified age might have need of a car and be able to operate it safely and responsibly, but rather in recognition that a small enough percentage of them can that it’s not really worth thinking about (in the case of two-year-olds, for instance, that percentage is 0, as none could physically operate the vehicle; in the case of 14-year-olds it’s non-zero, but judged to be sufficiently low all the same). There are even proposals floating around concerning something like a maximum driving age, as driving abilities appear to deteriorate appreciably in older populations. As such, it’s not that we’re concerned about the age per se of drivers – we don’t just want anyone over the age of 18 on the road – but age is still a good correlate of other abilities and allows us to save a lot of time in not having to assess every single individual for driving abilities from birth to death under every possible circumstance.

Don’t worry; he’s watched plenty of Fast & Furious movies

This brings us to the first point of ramping up the controversy. Let’s talk a bit about drunk driving. We have laws against operating vehicles while drunk because of the effects that drinking has: reduced attention and reaction time, reduced inhibitions resulting in more reckless driving, and impaired ability to see or stay awake, all of which amount to a reduction in driving skill and an increase in the potential for harmful accidents. Reasonable as these laws sound, imagine, if you would, two hypothetical drivers: the worst driver legally allowed to get behind a wheel, and the best driver. Sober, we should expect the former to pose a much greater risk to himself and others than the latter but, because they both pass the minimum threshold of ability, both are allowed to drive. It is possible, however, that the best driver’s abilities while he is drunk still exceed those of the worst driver while he is sober.

Can we recognize that exception to the spirit of the law against drunk driving without saying it is morally or legally acceptable for the best driver to drive drunk? I think we can. There are two reasons we might do so. The first is that we might say even if the spirit of the rule seems to be violated in this particular instance, the rule is still one that holds true more generally and should be enforced for everyone regardless. That is, sometimes the rule will make a mistake (in a manner of speaking), but it is right often enough that we tolerate the mistake. This seems perfectly reasonable, and is something we accept in other areas of life, like medicine. When we receive a diagnosis from a doctor, we accept that it might not be right 100% of the time, but (usually) believe it to be right often enough that we act as if it were true. Further, the law is efficient: it saves us the time and effort in testing every driver for their abilities under varying levels of intoxication. Since the consequences of making an error in this domain might outweigh the benefits of making a correct hit, we work on maximizing the extent to which we avoid errors. If such methods of testing driving ability were instantaneous and accurate, however, we might not need this law against drunk driving per se because we could just be looking at people’s ability, rather than blood alcohol content. 

The second argument you might make to uphold the drunk driving rule is to say that even if the best drunk driver is still better than the worst sober one, the best drunk driver is nevertheless a worse driver than he is while sober. As such, he would be imposing more risk on himself and others than he reasonably needs to, and should not be allowed to engage in the behavior because of that. This argument is a little weaker – as it sets up a double standard – but it could be defensible in the right context. So long as you’re explicit about it, driving laws could be set such that people need to pass a certain threshold of ability and need to be able to perform within a certain range of their maximum ability. This might do things like make driving while tired illegal, just like drunk driving. 

The larger point I hope to hit on here is the following, which I hope we all accept: there are sometimes exceptions (in spirit) to rules that generally hold true and are useful. It is usually the case that people below a certain minimum driving age shouldn’t be trusted with the privilege, but it’s not like something magical happens at that age where an ability appears fully-formed in their brain. People don’t entirely lack the ability to drive at 17.99 years old and possess it fully at 18.01 years. That’s just not how development works for any trait in any species. We can recognize that some young individuals possess exceptional driving abilities (at least for their age, if not in the absolute sense, like this 14-year-old NASCAR driver) without suggesting that we change the minimum age driving law or even grant those younger people the ability to drive yet. It’s also not the case (in principle) that every drunk driver is incapable of operating their vehicle at or above the prescribed threshold of minimum safety and competency. We can recognize those exceptional individuals as being unusual in ability while still believing that the rule against drunk driving should be enforced (even for them) and be fully supportive of it.

That said, 14-year-old drunk drivers are a recipe for disaster

Now let’s crank up the controversy meter further and talk about sex. Rather than talking about when we allow people to drive cars and under what circumstances, let’s talk about when we accept their ability to consent to sex. Much like driving, sex can carry potential costs, including pregnancy, emotional harm, and the spread of STIs. Also like driving, sex tends to carry benefits, like physical pleasure, emotional satisfaction and, depending on your perspective, pregnancy. Further, much like driving, there are laws setting the minimum age at which someone can be said to legally consent to sex. These laws seem to be set by balancing the costs and benefits of the act; we do not trust that individuals below certain ages are capable of making responsible decisions about when to engage in the act, with whom, in what contexts, and so on. There is a real risk that younger individuals can be exploited by older ones in this realm. In other words, we want to ensure that people are at least at a point in their physical and psychological development that allows them to make an informed choice. Much like driving (or signing contracts), we want people to possess a requisite level of skills before they are allowed to give consent for sex.

This is where the matter begins to get complicated because, as far as I have seen throughout discussions on the matter, people are less than clear about what skills or bodies of knowledge people should possess before they are allowed to engage in the act. While just about everyone appears to believe that people should possess a certain degree of psychological maturity, what that precisely means is not outlined. In this regard, consent is quite unlike driving: people do not need to obtain licenses to have sex (excepting some areas in which sex outside of marriage is not permitted) and do not need to demonstrate particular skills or knowledge. They simply need to reach a certain age. This is (sort of) like giving everyone over the age of, say, 16 a license to drive regardless of their abilities. This lack of clarity regarding what skills we want people to have is no doubt at least partially responsible for the greater variation in age of consent laws, relative to age of driving laws, across the globe.

The matter of sex is complicated by a host of other factors, but the main issue is this: it is difficult for people to outline what psychological traits we need to have in order to be deemed capable of engaging in the behavior. For driving, this is less of a problem: pretty much everyone can agree on what skills and knowledge they want other drivers to have; for sex, concerns are much more strategic. Here’s a great for instance: one potential consequence (intended, for some) of sex is pregnancy and children. Because sex can result in children and those children need to be cared for, some might suggest that people who cannot reasonably be expected to provide well enough for said children should be barred from consenting to sex. This proposal is frequently invoked to justify the position that non-adults shouldn’t be able to consent to sex because they often do not have access to child-rearing resources. It’s an argument that has intuitive appeal, but it’s not applied consistently. That is, I don’t see many people suggesting that the age of consent should be lowered for rich individuals who could care for children, nor that people who fall below a certain poverty line be barred from having sex because they might not be able to care for any children it produced.

There are other arguments one might consider on that front as well: because the biological consequences of sex fall on men and women differently, might we actually hold different standards for men and women when considering whether they are allowed to engage in the behavior? That is, would it be OK for a 12-year-old boy to consent to sex with a 34-year-old woman because she can bear the costs of pregnancy, but not allow the same relationship when the sexes are reversed? Legally we have the answer: no, it’s not acceptable in either case. However, there are some who would suggest that the former relationship is actually acceptable. Even in the realm of law, it would seem, a sex-dependent standard has been upheld in the past.

Sure hope that’s his mother…

This is clearly not an exhaustive list of questions regarding how age of consent laws might be set, but the point should be clear enough: without a clear standard about what capabilities one needs to possess to be able to engage in sex, we end up with rather unproductive discussions. Making things even trickier, sex is more of a strategic act than driving, yielding greater disagreements over the matter and inflamed passions. It is very difficult to make explicit what abilities we want people to demonstrate in order to be able to consent to sex and reach consensus on them for just this reason. Toss in the prospect of adults taking advantage of teenagers and you have all the makings of a subject people really don’t want to talk about. As such, we are sometimes left in a bit of an awkward spot when thinking about whether exceptions to the spirit of age of consent laws exist. Much like driving, we know that nothing magical happens to someone’s body and brain when they hit a certain age: development is a gradual process that, while exhibiting regularities, does not occur identically for all people. Some people will possess the abilities we’d like them to have before the age of consent; some people won’t possess those abilities even after it.

Importantly – and this is the main point I’ve been hoping to make – this does not mean we need to change or discard these laws. We can recognize that these laws do not fit every case like a glove while still behaving as if they do and intuitively judging them as being about right. Some 14-year-olds do possess the ability to drive, but they are not allowed to legally; some 14-year-olds possess whatever requisite abilities we hope those who consent to sex will have, but we still treat them as if they do not. At least in the US: in Canada, the age of consent is currently 16, up from 14 a few years ago; in some areas of Europe it is still 14, and in some areas of Mexico it can be lower than that.

“Don’t let that distract from their lovely architecture or beaches, though”

Understanding the variation in these intuitions between countries, between individuals, and over time is an interesting matter in its own right. However, there are some who worry about the consequences of even discussing the issue. That is, if we acknowledge that even a single individual is an exception to the general rule, we would be threatening the validity of the rule itself. Now I don’t think this is the case, as I have outlined above, but it is worth adding the following point to that concern: recognizing possible exceptions to the rule is an entirely different matter from the consequences of doing so. Even if there are negative consequences to discussing the matter, that doesn’t change the reality of the situation. If your argument requires that you fail to recognize parts of reality because it might upset people – or that you decree, from the get-go, that certain topics cannot be discussed – then your argument should be refined.

There is a fair bit of danger in accepting these taboos: while it might seem all well and good when the taboo is directed against a topic you feel shouldn’t be discussed, a realization needs to be made that your group is not always going to be in charge of what topics fall under that umbrella, and to accept it as legitimate when it benefits you is to accept it as legitimate when it hurts you as well. For instance, not wanting to talk about sex with children out of fear it would cause younger teens to become sexually active yielded the widely-ineffective abstinence-only sex education (and, as far as I can tell, comprehensive sex education does not result in worse outcomes, but I’m always open to evidence that it does). There is a real hunger in people to understand the world and to be able to voice what is on their mind; denying that comes with very real perils.

The Connection Between Economics and Promiscuity

When it comes to mating, humans are a rather flexible species. In attempting to make sense of this variation, a natural starting point for many researchers is to try and tackle what might be seen as the largest question: why are some people more inclined to promiscuity or monogamy than others? Though many answers can be given to that question, a vital step in building towards a plausible and useful explanation of the variance is to consider the matter of function (as it always is). That is, we want to be asking ourselves the question, “what adaptive problems might be solved by people adopting long- or short-term mating strategies?” By providing answers to this question we can, in turn, develop expectations for what kind of psychological mechanisms exist to help solve these problems, explanations for how they could solve them, and then go examine the data more effectively for evidence of their presence or absence.

It will help until the research process is automated, anyway

The current research I wanted to talk about today begins to answer the question of function by considering (among other things) the matter of resource acquisition. Specifically, women face greater obligate biological costs when it comes to pregnancy than men. Because of this, men tend to be the more eager sex when it comes to mating and are often willing to invest resources to gain favor with potential mates (i.e., men are willing to give up resources for sex). Now, if you’re a woman, receiving this investment is an adaptive benefit, as it can be helpful in ensuring the survival and well-being of both yourself and your offspring. The question then becomes, “how can women most efficiently extract these resources from men?” As far as women are concerned, the best answer – in an ideal world – is to extract the maximum amount of investment from the maximum amount of men.

However, men have their own interests too; while they might be willing to pay to play, as it were, the amount they’re willing to give up depends on what they’re getting in return. What men are looking for (metaphorically or literally speaking) is what women already have: a guarantee of sharing genes with their offspring. In other words, men are looking for paternity certainty. Having sex with a woman a single time increases the odds of being the father of one of her children, but only by a small amount. As such, men should be expected to prefer extended sexual access over limited access. Paternity confidence can also be reduced if a woman is having sex with one or more other men at the same time. This leads us to expect that men adjust their willingness to invest in women upwards if that investment can help them obtain one or both of those valued outcomes (extended and exclusive sexual access).

This line of reasoning led the researchers to develop the following hypothesis: as female economic dependence on male investment increases, so too should anti-promiscuity moralization. That is, men and women should both increase their moral condemnation of short-term sex when male investment is more valuable to women. For women, this expectation arises because promiscuity threatens paternity confidence, and so their mating with multiple males should make it more difficult for them to obtain substantial male investment. Moreover, other women engaging in short-term sex similarly makes it more difficult for even monogamous women to demand male investment, and so they would be condemned for their behavior as well. Conversely, since men value paternity certainty, they too should condemn promiscuity to a greater degree when their investment is more valuable, as they are effectively in a better position to bargain for what they want.

In sum, the expectation in the present study was that as female economic dependence increases, men and women should become more opposed to promiscuous mating.

“Wanted: Looking for paternity certainty. Will pay in cash”

This was tested in two different ways: in the first study, 656 US residents answered questions about their perceptions of female economic dependence on male investment in their social network, as well as their attitudes about promiscuity and promiscuous people. The correlation between the measures ended up being r = .28, which is a good proof of concept, though not a tremendous relationship (which is perhaps to be expected, given that multiple factors likely impact attitudes towards promiscuity). When economic dependence was placed into a regression to predict this sexual moralization, controlling for age, sex, religiosity, and conservatism in the first step, it was found that female economic dependence accounted for approximately 2% of the remaining variance in the wrongness of promiscuity ratings. That’s not nothing, to be sure, but it’s not terribly substantial either.

In the second study, 4,626 participants from across the country answered these same basic questions, along with additional questions, like their (and their partner’s) personal income. Again, there was a small correlation (r = .23) between female economic dependence and wrongness of promiscuity judgments. Also again, when entered into a regression, as before, an additional 2% of the variance in these wrongness judgments was predicted by economic dependence measures. However, this effect became more substantial when the analysis was conducted at the level of the states, rather than at the level of individuals. At the state level, the correlation between female economic dependence and attitudes towards promiscuity now rose to r = .66, with the dependence measure predicting 9% of the variance of promiscuity judgments in the regression with the other control factors.
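For those unfamiliar with this kind of two-step (hierarchical) regression, the following Python sketch shows what “an additional 2% of variance beyond the controls” means. The data and effect sizes are simulated for illustration only; none of this is the authors’ code or data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4626  # sample size of the second study

# Simulated stand-ins for the study's variables
controls = rng.normal(size=(n, 4))   # age, sex, religiosity, conservatism
dependence = rng.normal(size=n)      # perceived female economic dependence
wrongness = controls @ np.full(4, 0.4) + 0.2 * dependence + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

step1 = r_squared(controls, wrongness)                                 # controls only
step2 = r_squared(np.column_stack([controls, dependence]), wrongness)  # + dependence
print(f"variance explained beyond the controls: {step2 - step1:.1%}")
```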

Worth noting is that, though a woman’s personal income was modestly predictive of her attitudes towards promiscuity, it was not as good a predictor as her perception of the dependence of the women she knows. There are two ways to explain this, though they are not mutually exclusive: first, it’s possible that women are adjusting their attitudes so as to avoid condemnation from others. If lots of women rely on this kind of investment, then a woman could be punished for being promiscuous even if promiscuity was in her personal interests. As such, she adopts anti-promiscuity attitudes as a way of preemptively avoiding punishment. The second explanation is that, given our social nature, our allies are important to us, and adjusting our moral attitudes so as to gain and maintain social support is also a viable strategy. It’s something of the other side of the same social-support coin, and so both explanations can work together.

The dual-purpose marriage/friendship ring

Finally, I wanted to discuss a theoretical contradiction I find myself struggling to reconcile. Specifically, in the beginning of the paper, the authors mention that females will sometimes engage in promiscuous behavior in the service of obtaining resources from multiple males. A common example of this kind of behavior is prostitution, where a woman will engage in short-term intercourse with men in explicit exchange for money, though the strategy need not be that explicit or extreme. Rather than obtaining lots of investment from a single male, then, a viable female strategy should be to obtain several smaller investments from multiple males. Following this line of reasoning, then, we might end up predicting that female economic dependence on males might increase promiscuity and, accordingly, lower moral condemnation of it, at least in some scenarios.

If that were the case, the pattern of evidence we might predict is that, when female economic dependence is high, attitudes towards promiscuity should become more bimodal, with some women more strongly disapproving of it while others become more strongly approving. As such, looking at the mean impact of these economic factors might be something of a wash (as they kind of were at the individual level). One might instead look at deviations from the mean, and see whether areas in which female economic dependence is greatest show a larger standard deviation around the average moralization value than areas of lower dependence. Perhaps there are some theoretical reasons that this is implausible, but none are laid out in the paper.
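The statistical point is easy to demonstrate: a bimodal split can leave the mean untouched while inflating the standard deviation. A toy Python illustration with entirely made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Low-dependence area: attitudes cluster around a moderate score
low_dependence = rng.normal(50, 10, n)

# High-dependence area: same mean, but attitudes split into two camps
camp = rng.random(n) < 0.5
high_dependence = np.where(camp,
                           rng.normal(30, 10, n),   # strongly approving
                           rng.normal(70, 10, n))   # strongly disapproving

for label, scores in (("low dependence", low_dependence),
                      ("high dependence", high_dependence)):
    print(f"{label}: mean = {scores.mean():.1f}, SD = {scores.std():.1f}")
# Means are nearly identical (~50), but the SDs differ sharply (~10 vs ~22):
# exactly the deviation-based signature proposed above.
```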

References: Price, M., Pound, N., & Scott, I. (2014). Female economic dependence and the morality of promiscuity. Archives of Sexual Behavior, 43, 1289-1301.

Why Non-Violent Protests Work

It’s a classic saying: The pen is mightier than the sword. While this saying communicates some valuable information, it needs to be qualified in a significant way to be true. Specifically, in a one-on-one fight, the metaphorical pens do not beat swords. Indeed, as another classic saying goes: Don’t bring a knife to a gun fight. If knives aren’t advisable against guns, then pens are probably even less advisable. This raises the question as to how – and why – pens can triumph over swords in conflicts. These questions are particularly relevant, given some recent happenings in California at Berkeley where a protest against a speaking engagement by Milo Yiannopoulos took a turn for the violent. While those who initiated the violence might not have been students of the school, and while many people who were protesting might not engage in such violence themselves when the opportunity arises, there does appear to be a sentiment among some people who dislike Milo (like those leaving comments over on the Huffington Post piece) that such violence is to be expected, is understandable, and sometimes even morally justified or praiseworthy. The Berkeley riot was not the only such incident lately, either.

The Nazis shooting guns is a very important detail here

So let’s discuss why such violent behavior is often counterproductive for the swords in achieving their goals. Non-violent political movements, like those associated with leaders like Martin Luther King Jr. and Gandhi, appear to yield results, at least according to the only bit of data on the matter I’ve come across (for the link-shy: nonviolent campaigns’ combined complete and partial success rate was about 73%, while the comparable violent rate was about 33%). I even came across a documentary recently, which I intend to watch, about a black man who purportedly got over 200 members of the KKK to leave the organization without force or the threat of it; he simply talked to them. That these nonviolent methods work at all seems rather unusual, at least if you were to frame it in terms of any other nonhuman species. Imagine, for instance, that a chimpanzee doesn’t like how he is being treated by the resident dominant male (who is physically aggressive), and so attempts to dissuade that individual from his behavior by nonviolently confronting him. No matter how many times the dominant male struck him, the protesting chimp would remain steadfastly nonviolent until he won over the other chimps in his group and they all turned against the dominant male (or until the dominant male saw the error of his ways). As this would likely not work out well for our nonviolent chimp, hopefully nonviolent protests are sounding a little stranger to you now; yet they often seem to work better than violence, at least for humans. We want to know why.

The answer to that question involves turning our attention back to the foundation of our moral sense: why do we perceive a dimension of right and wrong in the world in the first place? The short answer to this question, I think, is that when a dispute arises, those involved in the dispute find themselves in a state of transient need for social support (since numbers can decide the outcome of the conflict). Third parties (those not initially involved in the dispute) can increase their value as a social asset to one of the disputants by filling that need and assisting them in the fight against the rival. This allows third parties to leverage the transient needs of the disputants to build future alliances or defend existing allies. However, not all behaviors generate the same degree of need: the theft of $10 generates less need than a physical assault. Accordingly, our moral psychology represents a cognitive mechanism for determining what degree of need tends to be generated by behaviors in the interests of guiding where one’s support can best be invested (you can find the longer answer here). That’s not to say our moral sense will be the only input for deciding what side we eventually take – factors like kinship and interaction history matter too – but it’s an important part of the decision.

The applications of this idea to nonviolent protest ought to be fairly apparent: when property is destroyed, people are attacked, and the ability of regular citizens to go about their lives is disrupted by violent protests, this generates a need for social support on the part of those targeted or affected by the violence. It also generates worries in those who feel they might be targeted by similar groups in the future. So, while the protesters might be rioting because they feel they have important needs that aren’t being met (seeking to achieve them via violence, or the threat of it), third parties might come to view the damage inflicted by the protest as more important or harmful (as it generates a larger, or more legitimate, need). The net result of that violence is that third parties side against the protesters, rather than with them. By contrast, a nonviolent protest does not create as large a need on the part of those it targets; it doesn’t destroy property or harm people. If the protesters have needs they want to see met and they aren’t inflicting costs on others, this can yield more support for the protesters’ side.

I’m sure the owner of that car really had this coming…

This brings us to our third classic saying of the post: While I disagree with what you have to say, I will defend to the death your right to say it. Though such a sentiment might be seldom expressed these days, it highlights another important point: even if third parties agree with the grievances of the protesters (or, in this case, disagree with the behavior of the people being protested), the protesters can make themselves seem like suitably poor social assets by inflicting inappropriately-large costs (as disagreeing with someone generates less harm than stifling their speech through violence). Violence can alienate existing social support (since they don’t want to have to defend you from future revenge, as people who pick fights tend to initiate and perpetuate conflicts, rather than end them) and make enemies of allies (as the opposition now offers a better target of social investment, given their relative need). The answer as to why pens can beat swords, then, is not that pens are actually mightier (i.e., capable of inflicting greater costs), but rather that pens tend to be better at recruiting other swords to do their fighting for them (or, in more mild cases, pens can remove the social support from the swords, making them less dangerous). The pen doesn’t actually beat the sword; it’s the two or more swords the pen has persuaded to fight for it – and not the opposing sword – that do.

Appreciating the power of social support helps bolster our understanding of other possible interactions between pens and swords. For instance, when groups are small, swords will likely tend to be more powerful than pens, as large numbers of third parties aren’t around to be persuaded. This is why our nonviolent chimp example didn’t work well: chimps don’t reliably join disputes as third parties on the basis of behavior the way humans do. Without that third-party support, non-violence will fail. The corollary point here is that pens might find themselves in a bit of a bind when it comes to confrontations with other pens. Put in plain terms: nonviolence is a useful rallying cry for drawing social support if the other side of the dispute is being violent. If both sides abstain from violence, however, nonviolence per se no longer persuades people. You can’t convince someone to join your side in a dispute by pointing out something your side shares with the other. This should result in the expectation that people will frequently over-represent the violence of the opposition, perhaps even fabricating it completely, in the interests of persuading others. 

Yet another point that can be drawn from this analysis is that even “bad” ideas or groups (whether labeled as such for moral or factual reasons) can recruit swords to their side if they are targeted by violence. Returning to the cases we began with – the riot at UC Berkeley and the incident where Richard Spencer got punched – if you hope to exterminate people who hold disagreeable views, then violence might seem like the answer. However, as we have seen, violence against others, even disagreeable others, who are not themselves behaving violently can rally support from third parties, as they might begin to worry that threats to free speech (or other important issues) are more harmful than the opinions and words they find disagreeable (again, hitting someone creates more need than talking does). On the other hand, if you hope to persuade people to join your side (or at least not join the opposition), you will need to engage with arguments and reasoning. Importantly, you need to treat those you hope to persuade as people and engage with the ideas and values they actually hold. If the goal in these disputes really is to make allies, you need to convince others that you have their best interests at heart. Calling those who disagree “baskets of deplorables,” suggesting they’re too stupid to understand the world, or anything to that effect doesn’t tend to win their hearts and minds. If anything, it sends a signal to them that you do not value them, giving them all the more reason to not spend their time helping you achieve your goals.

“Huh; I guess I really am a moron and you’re right. Well done,” said no one, ever.

As a final matter, we could also discuss the idea that violence is useful at snuffing out threats preemptively. In other words, better to stop someone before they can try to attack you, rather than after their knife is already in your back. There are several reasons preemptive violence is just as suspect, so let’s run through a few: first, there are different legal penalties for acts like murder and attempted murder, as attempted – but incomplete – acts generate less need than completed ones. As such, they garner less social support. Second, absent very strong evidence that the people targeted for violence would have eventually become violent, the preemptive attacks will not look defensive; they will simply look aggressive, returning us to the initial problems violent protests face. Relatedly, preemptive violence is unlikely to ever make allies of enemies; if anything, it will make deeper enemies of existing ones and their allies. Remember: when you hurt someone, you indirectly inflict costs on their friends, families, and other relations as well. Finally, some people will likely develop reasonable concerns about the probability of being attacked for holding other opinions or engaging in behaviors people find unpleasant or dangerous. With speech already being equated to violence among certain groups, this concern doesn’t seem unfounded.

In the interests of persuading others – actors and third parties alike – nonviolence is usually the better first step. However, nonviolence alone is not enough, especially if your opposition is nonviolent as well. Not being violent does not mean you’ve already won the dispute; just that you haven’t lost it. It is at that point you need to persuade others that your needs are legitimate, your demands reasonable, and your position in their interests as well, all while your opposition attempts to be persuasive themselves. It’s not an easy task, to be sure, and it’s one many of us are worse at than we’d like to think; it’s just the best way forward.

On The Need To Evolutionize Memory Research

This semester I happen to be teaching a course on human learning and memory. Part of the territory that comes with designing and teaching any class is educating yourself on the subject: brushing up on what you do know and learning about what you do not. For this course, much of my preparation has involved the latter. Memory isn’t my main specialty, so I’ve been spending a lot of time reading up on it. Wandering into a relatively new field is always an interesting experience, and on that front I consider myself fortunate: I have a theoretical guide to help me think about and understand the research I’m encountering – evolution. Rather than just viewing the field of memory as a disparate collection of facts and findings, evolutionary theory allows me to better synthesize and explain, in a satisfying way, all these novel (to me) findings and tie them to one another. It strikes me as unfortunate that, as with much of psychology, there appears to be a distinct lack of evolutionary theorizing on matters of learning and memory, at least as far as the materials I’ve come across would suggest. That’s not to say there has been none (indeed, I’ve written about some before), but rather that there certainly doesn’t seem to have been enough. It’s not the foundation of the field, as it should be.

“How important could a solid foundation really be?”

To demonstrate what I’m talking about, I wanted to consider an effect I came across during my reading: the generation effect in memory. In this case, generation refers not to a particular age group (e.g., people in my generation), but rather to the creation of information, as in to generate. The finding itself – which appears to replicate well – is that, if you give people a memory task, they tend to be better at remembering information they generated themselves, relative to remembering information that was generated for them. To run through a simple example, imagine I was trying to get you to remember the word “bat.” On the one hand, I could just have the word pop up on a screen and tell you to read and remember it. On the other hand, I could give you a different word, say, “cat,” and ask you to come up with a word that rhymes with “cat” that can complete the blanks in “B _ _.” Rather than my telling you the word “bat,” then, you would generate the word on your own (even if the task nudges you towards generating it rather strongly). As it turns out, you should have a slight memory advantage for the words you generated, relative to the words you were just given.
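For concreteness, here’s a minimal sketch of how those two encoding conditions might be laid out in an experiment script. The stimuli and fragment scheme are invented for illustration; this is just the shape of the task, not any particular study’s materials.

```python
# Toy illustration of the two encoding conditions (stimuli invented).
# "Read" trials present the target directly; "generate" trials present a
# rhyme cue plus a word fragment, so the participant produces the target.

read_trials = ["bat", "lamp", "rose"]           # targets shown verbatim
generate_trials = [
    ("cat", "b__"),    # rhymes with "cat"  -> participant produces "bat"
    ("damp", "l___"),  # rhymes with "damp" -> "lamp"
    ("hose", "r___"),  # rhymes with "hose" -> "rose"
]

for word in read_trials:
    print(f"Read and remember: {word}")

for rhyme_cue, fragment in generate_trials:
    print(f"Produce a word rhyming with '{rhyme_cue}' that fits '{fragment}'")

# The generation effect: on a later memory test, targets from the second
# loop tend to be recalled somewhat better than targets from the first.
```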

Now that’s a neat finding and all – likely one that people would read about and thoughtfully nod their heads at – but we want to explain it: why is memory better for words you generate? On that front, the textbook I was using was of no use, offering nothing beyond the name of the effect and a handful of examples. If you’re trying to understand the finding – much less explain it to a class full of students – you’ll be on your own. Textbooks are always incomplete, though, so I turned to some of the referenced source material to see how the researchers in the field were thinking about it. These papers seemed to predominantly focus on how information was being processed, but not necessarily on why it was being processed that way. As such, I wanted to advance a little bit of speculation on how an evolutionary approach could help inform our understanding of the finding (I say could because this is not the only possible answer to the question one could derive from evolutionary theory; what I hope to focus on is the approach to answering the question, rather than the specific answer I will float. Too often people treat an evolutionary hypothesis that turned out to be wrong as a reflection on the field as a whole, neglecting that how an issue was thought through is a somewhat separate matter from the answer that eventually got produced).

To explain the generation effect I want to first take it out of an experimental setting and into a more naturalistic one. That is, rather than figuring out why people can remember arbitrary words they generated better than ones they just read, let’s think about why people might have a better memory for information they’ve created in general, relative to information they heard. The initial point to make on that front is that our memory systems will only retain a (very) limited amount of the information we encounter. The reason for this, I suspect, is that if we retained too much information, cognitively sorting through it for the most useful pieces would be less efficient, relative to a case where only the most useful information was retained in the first place. You don’t want a memory (which is metabolically costly to maintain) chock-full of pointless information, like what color shirt your friend wore when you hung out 3 years ago. As such, we ought to expect that we have a better memory for events or facts that carry adaptively-relevant consequences.

“Yearbooks; helping you remember pointless things your brain would otherwise forget”

Might information you generate carry different consequences than information you just hear about? I think there’s a solid case to be made that, at least socially, this can be true. In a quick example, consider the theory of evolution itself. This idea is generally considered to be one of the better ones people (collectively) have had. Accordingly, it is perhaps unsurprising that most everyone knows the name of the man who generated this idea: Charles Darwin. Contrast Darwin with someone like me: I happen to know a lot about evolutionary theory, and that does grant me some amount of social prestige within some circles. However, knowing a lot about evolutionary theory does not afford me anywhere near the amount of social acclaim that Darwin receives. There are reasons we should expect this state of affairs to hold as well, such as that generating an idea can signal more about one’s cognitive talents than simply memorizing it does. Whatever the reasons for this, however, if ideas you generate carry greater social benefits, our memory systems should attend to them more vigilantly; better to not forget that brilliant idea you had than the one someone else did.

Following this line of reasoning, we could also predict that there would be circumstances in which information you generated is recalled less-readily than if you had just read about it: specifically, in cases when the information would carry social costs for the person who generated it.

Imagine, for instance, that you’re trying to think up reasons to support your pet theory (call it theory A). Initially, your memory for that reasoning might be better if you think you came up with an argument yourself than if you had read about someone else putting forth the same idea. However, suppose it later turns out that a different theory (call it theory B) shows your theory to be wrong and, worse yet, theory B is also better supported and more widely accepted. At that point, you might actually observe that a person’s memory for the initial information supporting theory A is worse if they generated those reasons themselves, as that reflects more negatively on them than if they had just read about someone else being wrong (and memory would be worse, in this case, because you don’t want to advertise the fact that you were wrong to others, while you might care less about discussing why someone who wasn’t you was wrong).

In short, people might selectively forget potentially embarrassing information they generated but was wrong, relative to times they read about someone else being wrong. Indeed, this might be why it’s said truth passes through three stages: ridicule, opposition, and acceptance. This can be roughly translated to someone saying of a new idea, “That’s silly,” to, “That’s dangerous,” to, “That’s what I’ve said all along.” This is difficult to test, for sure, but it’s a possibility worth mulling over.

How you should feel reading over old things you forgot you wrote

With the general theory described, we can now try to apply that line of thinking back to the unnatural environment of memory research labs in universities. One study I came across (deWinstanley & Bjork, 1997) shows that generating doesn’t always hold an advantage over reading information. In their first experiment, the researchers had conditions where participants would either read cue-word pairs (like “juice” – “orange” and “sweet” – “pineapple”) or read a cue and then generate a word (e.g., “juice” – “or_n_ _”). The participants would later be tested on how many of the target words (the second one in each pair) they could recall. When participants were just told there would be a recall task later, but not the nature of that test, the generate group had a memory advantage. However, when both groups were told to focus on the relationship between the targets (such as them all being fruits), the read group’s ability to remember now matched that of the generate group.

In their second experiment, the researchers then changed the nature of the memory task: instead of asking participants to just freely recall the target words, they would be given the cue word and asked to recall the associated target (e.g., they see “juice” and need to remember “orange”). In this case, when participants were instructed to focus on the relationship between the cue and the target, it was the read participants with the memory advantage; not the generate group.
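To keep the two designs straight, here’s the structure of both experiments encoded as a small summary in code. The condition labels are my own shorthand for the manipulations described above, not the authors’ terminology.

```python
# Shorthand encoding of the two deWinstanley & Bjork (1997) experiments as
# described above; each entry records which encoding condition showed the
# memory advantage on the later test.

experiments = {
    1: {
        "test": "free recall of targets",
        "outcomes": {
            "told only that a memory test would follow": "generate > read",
            "told to focus on target-target relations": "generate == read",
        },
    },
    2: {
        "test": "cued recall (see cue, recall target)",
        "outcomes": {
            "told to focus on cue-target relations": "read > generate",
        },
    },
}

for number, experiment in experiments.items():
    print(f"Experiment {number} ({experiment['test']}):")
    for instructions, outcome in experiment["outcomes"].items():
        print(f"  {instructions}: {outcome}")
```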

One might explain these findings within the framework I discussed as follows: in the first experiment, participants in the “read” condition were actually also in an implicit generate condition; they were being asked to generate a relationship between the targets to be remembered and, as such, their performance improved on the associated memory task. By contrast, in the second experiment, participants in the read condition were again in an implicit “generate” condition: being asked to generate connections between the cues and targets. However, those in the explicit generate condition were only generating the targets, not their cues. As such, it’s possible participants tended to selectively attend to the information they had created over the information they did not. Put simply, the generate participants’ better recall of the words they created was interfering with their ability to remember the associations with the words they did not create. Their memory systems were focusing on the former over the latter.

A more memorable meal than one you go out and buy

If one wanted to increase the performance of those in the explicit generate condition for experiment two, then, all a researcher might have to do would be to get their participants to generate both the cue and the target. In that instance, the participants should feel more personally responsible for the connections – it should reflect on them more personally – and, accordingly, remember them better. 

Now whether the answer I put forth gets it all the way (or even partially) right is beside the point. It’s possible that the predictions I’ve made here are completely wrong. What I have been noticing, though, is that words like “adaptive” and “relevance” are all but absent from this book (and these papers) on memory. As I hope this post (and my last one) illustrates, evolutionary theory can help guide our thinking to areas it might not otherwise reach, allowing us to more efficiently think up profitable avenues for understanding existing research and creating future projects. It doesn’t hurt that it helps students understand the material better, either.

References: deWinstanley, P. & Bjork, E. (1997). Processing instructions and the generation effect: a test of the multifactor transfer-appropriate processing theory. Memory, 5, 401-421.

 

The Adaptive Significance Of Priming

One of the more common words you’ll come across in the psychological literature is priming, which is defined as an instance where exposure to one stimulus influences the reaction to a subsequent one. There are plenty of examples one might think of to demonstrate this effect, one of which might be asking participants to tell you whether a string of letters they see pop up on a screen is a word or a non-word. They would be quicker to respond to the word “nurse” if it were preceded by the prime “doctor” relative to being preceded by “chair,” owing to the relative association (or lack thereof) between the words. While a great deal of psychological literature deals with priming, very few papers I have come across actually attempt to give some kind of adaptive, functional account of what priming is and, accordingly, why we should expect it to behave the way it does. Because of that absence of theoretical grounding, some research that utilizes priming ends up generating hypotheses that aren’t just strange; they’re biologically implausible.
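To make the lexical decision example concrete, here’s a toy model of the task. The association strengths and timing values are invented for illustration; the only point is that a stronger prime-target association predicts a faster “word” response.

```python
# Toy lexical decision setup: hypothetical prime-target association
# strengths translate into predicted response-time savings.

ASSOCIATIONS = {
    ("doctor", "nurse"): 0.8,   # strongly related pair
    ("chair", "nurse"): 0.05,   # essentially unrelated pair
}

BASE_RT_MS = 600      # assumed baseline response time, in milliseconds
MAX_SAVINGS_MS = 100  # assumed maximum facilitation from a related prime

def predicted_rt(prime, target):
    """Stronger prime-target association -> faster predicted response."""
    strength = ASSOCIATIONS.get((prime, target), 0.0)
    return BASE_RT_MS - MAX_SAVINGS_MS * strength

print(predicted_rt("doctor", "nurse"))  # 520.0 -- primed, faster
print(predicted_rt("chair", "nurse"))   # 595.0 -- unprimed, slower
```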

Pictured: Something strange, but at least biologically plausible

To give a more concrete sense of what I mean, I wanted to briefly summarize some of that peculiar research using priming that I’ve covered before. In this case, the research either focuses on how priming can affect perceptions about the world or how priming can affect people’s behavior. These lines of inquiry often seem to try to demonstrate how people are biased, inaccurate, or otherwise easily manipulated by seemingly-minor environmental influences. To see what I mean, let’s consider some research findings on both fronts. In terms of perceptions about the world, a few findings are highlighted in this piece, including the prospect that holding a warm drink (instead of a cold one) can lead you to judge other people as more caring and generous; a finding that falls under the umbrella of embodied cognition. Why would such a finding arise? If I understand correctly, the line of thought is that holding a warm drink activates some part of your brain that holds the concept “warm”; as that concept is tied indirectly to personality (e.g., “He’s a really warm and friendly person”), you end up thinking the person is nicer than you otherwise would. Warm drinks prime the concept of emotional warmth.

It doesn’t take much thinking about this explanation to see why it seems wrong: a mind structured in such a way would be making a mistake about the world. Specifically, because holding a warm object in your hand should have no effect on the personality and behavior of someone else, if you use that temperature information to influence your judgments, you will be more likely to misjudge the probable intentions of others. If you’re not nice, but I think you are (or at least you’re not as nice as I think you are), I will behave in sub-optimal ways, perhaps by putting more trust in you than I should or generating other expectations of you that won’t be fulfilled. Because there are real costs to being wrong about others – as it opens you up to risks of exploitation, for instance – a cognitive system wired this way should be outperformed by one that ignores such irrelevant information.

Other such examples of the effects of priming posit something similar, except on a behavioral level instead of a perceptual one. For instance, research on stereotype threat suggests that if you remind women of their gender before a math test (in other words, you prime gender), they will tend to perform worse than women who were not primed, because the concept of “woman” is related to stereotypes about being worse at math. This should be maladaptive for precisely the same reason that the perceptual variety is: actively making yourself worse at a task than you actually are will tend to carry costs. To the extent this effect ostensibly runs in the opposite direction – a case where someone gets better at a task because of a prime, as is the case in the work on power poses – one would wonder why an individual should wait for a prime rather than just get on with the task at hand. Now sure, these effects don’t replicate well and probably aren’t real (see stereotype threat, power poses, and elderly-prime effects on walking speed), but that they would even be proposed in the first place is indicative of a problem in the way people think about psychology. They seem like ideas that couldn’t even possibly be correct, given what they would imply about the probable fitness costs to their bearers. Positing maladaptive psychological mechanisms seems rather perverse.

Now I suspect some might object at this point and remind me that not everything our brains do is adaptive. In fact, priming might simply be an occasionally maladaptive byproduct of activating certain portions of our brain by proximity. This is referred to as spreading activation, and it might just be an unfortunate byproduct of brain wiring. “Sure,” you might say, “this kind of spreading activation isn’t adaptive, but it’s just a function of how the brain gets wired up. Our brains can’t help it.” Well, as it turns out, it seems they certainly can.
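To make that byproduct account concrete before arguing against it, here is a minimal spreading-activation sketch over a made-up semantic network. The connections, decay rate, and hop count are all arbitrary assumptions; the point is only that activation leaks outward from a presented concept, indiscriminately, to whatever happens to be nearby.

```python
# Minimal spreading-activation sketch: activation from a presented word
# leaks to its neighbors in a toy semantic network, decaying per hop.

NETWORK = {
    "doctor": ["nurse", "hospital"],
    "nurse": ["doctor", "hospital"],
    "hospital": ["doctor", "nurse", "building"],
    "building": ["hospital"],
}

def spread(source, start=1.0, decay=0.5, hops=2):
    activation = {source: start}
    frontier = {source}
    for _ in range(hops):
        next_frontier = set()
        for node in frontier:
            for neighbor in NETWORK.get(node, []):
                boost = activation[node] * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return activation

print(spread("doctor"))
# {'doctor': 1.0, 'nurse': 0.5, 'hospital': 0.5, 'building': 0.25}
```

Note what this byproduct story predicts: everything connected to the prime gets activated, related or not, useful or not. That indiscriminate pattern is exactly what the research below fails to find.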

“Don’t give me that look; you knew better!”

This brings me to some research on memory and priming by Klein et al (2002). These researchers begin with an adaptive framework for priming, suggesting that priming reflects a feature of our cognitive systems – rather than a byproduct – that helps speed up the delivery of appropriate information. In short, priming represents something of a search engine that uses one’s present state to try to predict what information will be needed in the near future. It is crucial to emphasize the word “appropriate” in that hypothesis: the benefit of accessing information stored in our memories, for instance, is to help guide our current behavior. As there are many more ways of guiding your behavior towards some maladaptive end than an adaptive one, information stored in memory needs to be accessed selectively. If you spend too much time accessing irrelevant or distracting information, the function of the priming itself – to deliver relevant information quickly – would be thwarted. To put that into a simple example, if you’re trying to quickly solve a problem related to whether you should trust someone, accessing information about what you had for breakfast that morning will not only fail to help you, but will actively slow your completion of the task. You’d be wasting time processing irrelevant information.

To demonstrate this point, Klein et al (2002) decided to look at trait judgments: essentially asking people a question like, “how well does the word ‘kind’ describe [you/someone else]?” Without going into too much detail on the matter, our brains seem to store information relevant to these tasks in two different formats: in a summary form and an episodic form. This means one memory system contains information about particular behavioral instances (e.g., a time someone was kind or mean) while another derives summaries of that behavior (e.g., that person, overall, is kind or mean). Broadly speaking, these different memory systems exist owing to a cognitive trade-off between speed and accuracy in judgments: if you want to know how to behave towards someone, it’s quicker to consult the summary information than process every individual memory of their behaviors. However, the summary information tends to be less complete and accurate than the sum of the individual memories. That is, knowing that someone is “often nice” doesn’t give you insight into the conditions during which they are mean. 

As such, if someone was trying to make a judgment about whether you were nice, and they have a summary of “often nice,” they don’t need to spend time consulting memories of every nice thing you’ve done; that would be redundant processing. Instead, they would want to selectively consult the information about times you were not nice, as this would help them figure out the boundary conditions of their judgment; when the “often nice” label doesn’t apply. This led Klein et al (2002) to the following prediction: retrieving a trait summary of a person should prime trait-inconsistent episodes from memory, rather than trait-consistent ones. In short, priming effects ought to be functionally specific.
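Here’s that functional logic as a small sketch, with invented names and episodes: the summary answers the trait question quickly, and it is the trait-inconsistent episodes – the exceptions – that get primed as boundary conditions.

```python
# Sketch of the functional account: a trait judgment is answered from the
# stored summary, and retrieving that summary primes trait-INCONSISTENT
# episodes (boundary conditions), not trait-consistent ones.

PERSON = {
    "summary": {"kind": True},  # derived, fast-access trait summary
    "episodes": [
        {"desc": "helped a stranger move", "kind": True},
        {"desc": "snapped at a waiter", "kind": False},
    ],
}

def judge_trait(person, trait):
    judgment = person["summary"][trait]  # fast summary lookup answers the question
    # Prime the exceptions: episodes inconsistent with the summary judgment.
    primed = [e["desc"] for e in person["episodes"] if e[trait] != judgment]
    return judgment, primed

print(judge_trait(PERSON, "kind"))
# (True, ['snapped at a waiter']) -- the inconsistent episode is primed
```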

“The cat is usually friendly, except for those times you shaved him”

And that is exactly what they found: when participants were asked to judge whether a trait described them (or their mother), they were quicker to subsequently recall a time they (or their mother) behaved in an inconsistent manner. To put that in context, if participants were asked whether the word “polite” described them, they would be quicker to recall a specific instance in which they were rude, relative to a time they were polite. Moreover, just being asked to define the terms (e.g., rude or polite) didn’t appear to prime trait-consistent episodes in memory either: participants were not quicker to recall a time they were polite after having defined the term. This would be a function of the fact that defining a term does not require you to make a trait judgment about it, so episodic memories wouldn’t be relevant information.

These results are important because if priming were truly just a byproduct of spreading neural activation, then trait judgments (Are you kind?) should prime trait-consistent episodes (a time you were kind); they just don’t seem to do that. As such, we could conclude that priming does not appear to be a mere byproduct of neural activation. If priming isn’t just a biological necessity, then studies which make use of this paradigm would have to better justify their expectations. If it’s not justifiable to expect indiscriminate neural activation, researchers would need to put in more time to explain and understand the particular patterns of priming they find. Ideally they would do this in advance of conducting the research (as Klein et al did), as that would likely save people a lot of time publishing papers on priming that subsequently fail to replicate.

References: Klein, S., Cosmides, L., Tooby, J., & Chance, S. (2002). Decisions and the evolution of memory: multiple systems, multiple functions. Psychological Review, 109, 306-329.

Overperception Of Sexual Interest Or Overeager Researchers?

Though I don’t make a habit of watching many shows, I do often catch some funny clips of them that have been posted online. One that I saw semi-recently (which I feel relates to the present post) is a clip from Portlandia. In this video, people are writing a magazine issue about a man living a life that embodies manhood. In this case, they select a man who used to work at an office, but then left his job and now makes furniture. While everyone is really impressed with the idea, it eventually turns out that the man in question does make furniture…but it’s terrible. Faced with the revelation that the man’s work isn’t good – that it probably wasn’t worth leaving his job to do something he’s bad at – the people in question aren’t impressed by his over-confidence in pursuing his furniture work. They don’t seem to find him more attractive because he was overconfident; quite the opposite, in fact. The key determinant of his attractiveness was the actual quality of his work. In other words, since he couldn’t back up his confidence with his efforts, the ratings of his attractiveness appeared to drop precipitously.

It might not be comfortable, but at least it’s hand-made

What we can learn from an example like this is that something like overconfidence per se – being more confident than one should be and behaving in ways one rightfully shouldn’t because of it – doesn’t appear to be impressive to potential mates. As such, we might expect that people who pursue activities they aren’t well suited for tend to do worse in the mating domain than those who are able to exist within a niche they more suitably fill: if you can’t cut it as a craftsman, better to keep that steady, yet less-interesting office job. This is largely because the overconfident invest their time and effort into pursuits that do not yield positive benefits for them or others. You can think of it like playing the lottery, in a sense: if you are overly confident that you’ll win the lottery, you might incorrectly invest money into lottery tickets that you could otherwise spend on pursuits that don’t amount to lighting it on fire.

This is likely why the research on the (over)perception of sexual intent turns out the way it does. I’ve written about the topic before, but to give you a quick overview of the main points: researchers have uncovered that men tend to perceive more sexual interest in women than women themselves report having. To put that in a simple example, if you were to ask a woman, “given you were holding a man’s hand, how sexually interested in him are you?” you’ll tend to get a different answer than if you ask a man, “given that a woman was holding your hand, how sexually interested do you think she is in you?” In particular, men tend to think behaviors like hand-holding signal more sexual intent than women report. These kinds of results have been chalked up to men overperceiving sexual intent, but more recent research puts a different spin on the answers: specifically, if you were to ask a woman, “given that another woman (who is not you) is holding a man’s hand, how sexually interested in him do you think she is?” the answers from the women now align with those of the men. Women (as well as men) seem to believe that other women will underreport their sexual intent, while believing their own self-reports are accurate. Taken together, then, both men and women seem to perceive more sexual intent in a woman’s behavior than the woman herself reports. Rather than everyone else overperceiving sexual intent, it seems a bit more probable that women themselves tend to underreport their own sexual intent. In a sentence, women might play a little coy, rather than everyone else in the world being wrong about them.

Today, I wanted to talk about a very recent paper (Murray et al, 2017) by some of the researchers who seem to favor the overperception hypothesis. That is, they seem to suggest that women honestly and accurately report their own sexual interest, but everyone else happens to perceive it incorrectly. In particular, their paper represents an attempt to respond to the point that women overperceive the sexual intent of other women as well. At the outset, I will say that I find the title of their research rather peculiar and their interpretation of the data rather strange (points I’ll get to below). Indeed, I found both things so strange that I had to ask around a little before I started writing this post, to ensure that I wasn’t misreading something; I know smart people wrote the paper in question, and the issues seemed rather glaring to me (and if a number of smart people seem to be mistaken, I wanted to make sure the error wasn’t simply something on my end first). So let’s get into what was done in the paper and why I think there are some big issues.

“…Am I sure I’m not the crazy one here, because this seems real strange”

Starting out with what was done, Murray et al (2017) collected data from 414 heterosexual women online. These women answered questions about 15 different behaviors which might signal romantic interest. They were first asked these questions about themselves, and then about other women. So, for instance, a woman might be asked, “If you held hands with a man, how likely is it you intend to have sex with him?” They would then be asked the same hand-holding question, but about other women: “If a woman (who is not you) held hands with a man, how likely is it that she would…” and then end with something along the lines of, “say she wants to have sex with him,” or “actually want to have sex with him.” The researchers were trying to tap the difference between perceptions of “what women will say” and “what women actually want.” They also wanted to see what happened when you ask the “say” or “want” question first.

Crucially, the previous research these authors are responding to found that both men and women report that, in general, women want more than they say. The present research was only looking at women, but it found that same pattern: regardless of whether you ask the “say” or “want” questions first, women seem to think that other women will say they are less interested than they actually are. In short, women believe other women to be at least somewhat coy. In that sense, these results are a direct replication of the previous findings.

One of the things I find strange about the paper, then, is the title: “A Preregistered Study of Competing Predictions Suggests That Men Do Overestimate Women’s Sexual Intent.” Since this study was only looking at women, the use of “men” in the title seems poorly thought out. I assume the intention of the authors was to say that these results are consistent with the idea that men also overperceive, but even in that case it really ought to say “People Overestimate,” rather than “Men Do.” I earnestly can’t think of a reason to single out the male gender in the title other than the possibility that the authors seem to have forgotten they were measuring something other than what they actually wanted to measure. That is, they wanted their results to speak to the idea that male perceptions are biased upwards (in order to support their own prior work), but they seem to have been a bit overeager to do so and jumped the gun.

“Well women are basically men anyway, right? Close enough”

Another point I find rather curious from the paper – some data the authors highlight – is that the women’s responses did depend (somewhat) on whether you ask the “say” or “want” questions first. Specifically, the responses to both the “say” and “want” scales are a little lower when the “want” question is asked first. However, the relative pattern of the data – the effect of perceived coyness – exists regardless of the order. I’ve attached a slightly-modified version of their graph below so you can see what their results look like.

What this suggests to me is that something of an anchoring effect exists, whereby the question order might affect how people interpret the values on the scale of sexual intent (which goes from 1 to 7). What it does not suggest to me is what Murray et al (2017) claim it does:

“These results support the hypothesis that women’s differential responses to the “say” and “want” questions in Perilloux and Kurzban’s study were driven by question-order effects and language conventions, rather than by women’s chronic underreporting of their sexual intentions.”

As far as I can tell, they do nothing of the sort. Their results – to reiterate – are effectively direct replications of what Perilloux & Kurzban found. Regardless of the order in which you ask the questions, women believed other women wanted more than they would let on. How that is supposed to imply that the previously (and presently) observed effects are due to question ordering is beyond me. To convince me this was simply an order effect, data would need to be presented that either (a) showed the effect goes away when the order of the questions is changed or (b) showed the effect changes direction when the question order is changed. Since neither of those things happened, I’m hard pressed to see how the results can be chalked up to order effects.
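To make that contrast concrete, here’s a toy computation with invented means (chosen only to mirror the qualitative pattern reported, not the paper’s actual values). Both scales drop a little when “want” is asked first, but the want-minus-say gap stays positive in either order – which is a replication of the effect, not its disappearance or reversal.

```python
# Hypothetical mean ratings (1-7 scale) of other women's sexual intent,
# split by which question was asked first. Values are invented to mirror
# the qualitative pattern described in the text.

means = {
    "say asked first":  {"say": 3.0, "want": 3.8},
    "want asked first": {"say": 2.6, "want": 3.4},
}

for order, ratings in means.items():
    gap = ratings["want"] - ratings["say"]
    print(f"{order}: want - say = {gap:+.1f}")

# Both orders print a positive gap: the perceived-coyness effect survives
# the order manipulation, even though absolute levels shift.
```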

For whatever reason, Murray et al (2017) seem to make a strange contrast on that front:

“We predicted that responses to the “say” and “want” questions would be equivalent when they were asked first, whereas Perilloux and Kurzban confirmed their prediction that ratings for the “want” question would be higher than ratings for the “say” question regardless of the order of the questions.”

Rather than being competing hypotheses, these two seem like hypotheses that could both be true: you could predict that people interpret the values on the scales differently depending on which question you ask first, while also predicting that the ratings for “want” questions will be higher than those for “say” questions, regardless of the order. Basically, I have no idea why the word “whereas” was inserted into that passage, as if to suggest both of those things could not be true (or false) at the same time (I also have no idea why the word “competing” was inserted into the title of their paper, as it seems equally inappropriate there as the word “men”). Both of those hypotheses clearly can be true and, indeed, seem to be if these results are taken at face value.

“They predict this dress contains the color black, whereas we predict it contains white”

To sum up, the present research by Murray et al (2017) doesn’t seem to suggest that women (and, even though they didn’t look at them in this particular study, men) overperceive other women’s sexual intentions. If anything, it suggests the opposite. Indeed, as their supplementary file points out, “…across both experimental conditions women reported their own sexual intentions to be significantly lower than both what other women say and what they actually want,” and, “…it seems that when reporting on their own behavior, the most common responses for acting either less or more interested are either never engaging in this behavior or sometimes doing so, whereas women seem to believe that other women most commonly engage in both behaviors some of the time.”

So, not only do women believe that other women (who aren’t them) engage in this coy behavior more often than they themselves do (which would be impossible, as not everyone can think that about everyone else and be right), but women also even admit to, at least sometimes, acting less interested than they actually are. When women are actually reporting that, “Yes, I have sometimes underreported my sexual interest,” well, that seems to make the underreporting hypothesis sound a bit more plausible. The underreporting hypothesis would also be consistent with the data finding that women tend to underreport their number of sexual partners when they think others might see that report or that lies will not be discovered; by contrast, male reports of partner numbers are more consistent (Alexander & Fisher, 2003).

Perhaps the great irony here, then, is that Murray et al (2017) might have been a little overeager to interpret their results in a certain fashion, and so ended up misinterpreting their study as speaking to men (when it only looks at women) and their hypothesis as being a competing one (when it is not). There are costs to being overeager, just as there are costs to being overconfident; better to stick with appropriate eagerness or confidence to avoid those pitfalls.

References: Alexander, M. & Fisher, T. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. Journal of Sex Research, 40(1), 27-35.

Murray, D., Murphy, S., von Hippel, W., Trivers, R., & Haselton, M. (2017). A preregistered study of competing predictions suggests that men do overestimate women’s sexual intent. Psychological Science. 

 

Intergenerational Epigenetics And You

Today I wanted to cover a theoretical matter I’ve discussed before but apparently not on this site: the idea of epigenetic intergenerational transmission. In brief, epigenetics refers to chemical markers attached to your DNA that regulate how it’s expressed without changing the DNA itself. You could imagine your DNA as a book full of information, and each cell in your body contains the same book. However, not every cell expresses the full genome; each cell only expresses part of it (which is why skin cells are different from muscle cells, for instance). The epigenetic portion, then, could be thought of as black tape placed over certain passages in the book so they are not read. As this tape is added or removed by environmental influences, different portions of the DNA will become active. From what I understand about how this works (which is admittedly very little at this juncture), usually these markers are not passed on to offspring from parents. The life experiences of your parents, in other words, will not be passed on to you via epigenetics. However, there has been some talk lately of people hypothesizing that not only are these changes occasionally (perhaps regularly?) passed on from parents to offspring; the implication also seems to be present that they might be passed on in an adaptive fashion. In short, organisms might adapt to their environment not just through genetic factors, but also through epigenetic ones.
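The book-and-tape metaphor can be made concrete with a toy data structure. The gene names, products, and cell types below are invented for illustration; the point is only that every cell holds the same “book” while different passages are taped over in different cell types.

```python
# Every cell carries the same genome (the book); epigenetic silencing (the
# tape) determines which genes actually get read out in a given cell type.
# All names here are illustrative, not real annotations.

GENOME = {
    "keratin": "skin structural protein",
    "myosin": "muscle motor protein",
    "insulin": "metabolic hormone",
}

EPIGENETIC_TAPE = {
    "skin cell": {"myosin", "insulin"},    # passages taped over (silenced)
    "muscle cell": {"keratin", "insulin"},
}

def expressed_genes(cell_type):
    silenced = EPIGENETIC_TAPE[cell_type]
    return {gene: product for gene, product in GENOME.items()
            if gene not in silenced}

print(expressed_genes("skin cell"))    # {'keratin': 'skin structural protein'}
print(expressed_genes("muscle cell"))  # {'myosin': 'muscle motor protein'}
```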

Who would have guessed Lamarckian evolution was still alive?

One of the examples given in the target article on the subject concerns periods of feast and famine. While rare in most first-world nations these days, these events probably used to be more recurrent features of our evolutionary history. The example there involves the following context: during some years in early-1900s Sweden, food was abundant, while during other years it was scarce. Boys who were hitting puberty just at the time of a feast season tended to have grandchildren who died six years earlier than the grandchildren of boys who had experienced a famine season during the same developmental window. The causes of death, we are told, often involved diabetes. Another case involves the children of smokers: men who smoked right before puberty tended to have children who were fatter, on average, than the children of fathers who smoked habitually but didn’t start until after puberty. The speculation, in this case, is that development was in some way affected in a permanent fashion by food availability (or smoking) during a critical window of development, and those developmental changes were passed on to their sons and the sons of their sons.

As I read about these examples, there were a few things that stuck out to me as rather strange. First, it seems odd that no mention was made of daughters or granddaughters in that case, whereas in the food example there wasn’t any mention of the in-between male generation (they only mentioned grandfathers and grandsons there; not fathers). Perhaps there’s more to the data than is let on there but – in the event that no effects were found for fathers or daughters of any kind – it is also possible that a single data set was sliced up into a number of different pieces until the researchers found something worth talking about (e.g., didn’t find an effect in general? Try breaking the data down by gender and testing again). Now that might or might not be the case here, but as we’ve learned from the replication troubles in psychology, one way of increasing your false-positive rate is to divide your sample into a number of different subgroups. For the sake of this post, I’m going to assume that is not the case and treat the data as representing something real, rather than a statistical fluke.

Assuming this isn’t just a false-positive, there are two issues with the examples as I see them. I’m going to focus predominately on the food example to highlight these issues: first, passing on such epigenetic changes seems maladaptive and, second, the story behind it seems implausible. Let’s take the issues in turn.

To understand why this kind of inter-generational epigenetic transmission seems maladaptive, consider two hypothetical children born one year apart (in, say, the years 1900 and 1901). At the time the first child’s father was hitting puberty, there was a temporary famine taking place and food was scarce; at the time of the second child, the famine had passed and food was abundant. According to the logic laid out, we should expect that (a) both children will have their genetic expression altered due to the epigenetic markers passed down by their parents, affecting their long-term development, and (b) the children will, in turn, pass those markers on to their own children, and their children’s children (and so on).

The big Thanksgiving dinner that gave your grandson diabetes

The problems here should become apparent quickly enough. First, let’s begin by assuming these epigenetic changes are adaptive: they are passed on because they are reproductively useful at helping a child develop appropriately. Specifically, a famine or feast at or around the time of puberty would need to be a reliable cue as to the type of environments their children could expect to encounter. If a child is going to face shortages of food, they might want to develop in a different manner than if they’re expecting food to be abundant.

Now that sounds well and good, but in our example these two children were born just a year apart and, as such, should be expected to face (broadly) the same environment, at least with respect to food availability (since feasts and famines tend to be more global). Clearly, if the children were adopting different developmental plans in response to that feast or famine, both of them (plan A affected by the famine and plan B not so affected) cannot be adaptive. Specifically, if this epigenetic inheritance is trying to anticipate children’s future conditions by those present around the time of their father’s puberty, at least one of the children’s developmental plans will be anticipating the wrong set of conditions. That said, both developmental plans could be wrong, and conditions could look different than either anticipated. Trying to anticipate the future conditions one will encounter over their lifespan (and over their children’s and grandchildren’s lifespans) using only information from the brief window of time around puberty seems like a plan doomed for failure, or at least suboptimal results.

A second problem arises because these changes are hypothesized to be intergenerational: capable of transmission across multiple generations. If that is the case, why on Earth would the researchers in this study pay any mind to the conditions the grandparents were facing around the time of puberty per se? Shouldn’t we be more concerned with the conditions being faced a number of generations back, rather than the more immediate ones? To phrase this in terms of a chicken/egg problem, shouldn’t the grandparents in question have inherited epigenetic markers of their own from their grandparents, and so on down the line? If that were the case, the conditions they were facing around their puberty would either be irrelevant (because they already inherited such markers from their own parents) or would have altered the epigenetic markers as well.

If we opt for the former possibility, then studying grandparents’ puberty conditions shouldn’t be too impactful. However, if we opt for the latter possibility, we are again left in a bit of a theoretical bind: if the conditions faced by the grandparents altered their epigenetic markers, shouldn’t those same markers also have been altered by the parents’ experiences, and by their grandsons’ experiences as well? If they are being altered by the environment each generation, then they are poor candidates for intergenerational transmission (just as DNA that was constantly mutating would be). There is our dilemma, then: if epigenetic markers change across one’s lifespan, they are unlikely candidates for transmission between generations; if epigenetic changes can be passed down across generations stably, why look at the specific period pre-puberty for grandparents? Shouldn’t we be concerned with their grandparents, and so on down the line?

“Oh no you don’t; you’re not pinning this one all on me”

Now, to be clear, a famine around the time of conception could affect development in other, more mundane ways. If a child isn’t receiving adequate nutrition at the time they are growing, then it is likely certain parts of their developing body will not grow as they otherwise would. When you don’t have enough calories to support your full development, trade-offs need to be made, just like if you don’t have enough money to buy everything you want at the store you have to pass up on some items to afford others. Those kinds of developmental outcomes can certainly have downstream effects on future generations through behavior, but they don’t seem like the kind of changes that could be passed on the way genetic material can. The same can be said about the smoking example provided as well: people who smoked during critical developmental windows could do damage to their own development, which in turn impacts the quality of the offspring they produce, but that’s not like genetic transmission at all. It would be no more surprising than finding out that parents exposed to radioactive waste tend to have children of a different quality than those not so exposed.

To the extent that these intergenerational changes are real and not just statistical oddities, it doesn’t seem likely that they could be adaptive; they would instead likely reflect developmental errors. Basically, the matter comes down to the following question: are the environmental conditions surrounding a particular developmental window good indicators of future conditions, to the point you’d want to not only focus your own development around them, but also the development of your children and their children in turn? To me, the answer seems like a resounding, “No, and that seems like a prime example of developmental rigidity, rather than plasticity.” Such a plan would not allow offspring to meet the demands of their unique environments particularly well. I’m not hopeful that this kind of thinking will lead to any revolutions in evolutionary theory, but I’m always willing to be proven wrong if the right data comes up.

Mistreated Children Misbehaving

None of us are conceived or born as full adults; we all need to grow and develop from single cells into fully-formed adults. Unfortunately – for the sake of development, anyway – the future world you will find yourself in is not always predictable, which makes development a tricky matter at times. While there are often regularities in the broader environment (such as the presence or absence of sunlight, for instance), not every individual will inhabit the same environment or, more precisely, the same place in their environment. Consider two adult males, one of whom is six feet tall and 230 pounds of muscle, the other five feet tall and 110 pounds. While the dichotomy here is stark, it serves to make a simple point: if both of these males developed in a psychological manner that led them to pursue precisely the same strategies in life – in this case, say, one involving aggressive contests for access to females – it is quite likely that the weaker male will lose out to the stronger one most (if not all) of the time. As such, in order to be more consistently adaptive, development must be something of a fluid process that helps tailor an individual’s psychology to the unique position they find themselves in within a particular environment. Thus, if an organism is able to use some cues within their environment to predict their likely place in it in the future (in this case, whether they would grow large or small), their development could be altered to encourage their pursuit of alternate routes to eventual reproductive success.

Because pretending you’re cut out for that kind of life will only make it worse

Let’s take that initial example and adapt it to a new context: rather than trying to predict whether one will grow up weak or strong, a child is trying to predict the probability of receiving parental investment in the future. If parental investment is unlikely to be forthcoming, children may need to take a different approach to their development to help secure the needed resources on their own, sometimes requiring their undertaking risky behaviors; by contrast, those children who are likely to receive consistent investment might be relatively less inclined to take such risky and costly matters into their own hands, as the risk vs. reward calculations don’t favor such behavior. Placed in an understandable analogy, a child who estimates they won’t be receiving much investment from their parents might forgo a college education (and, indeed, even much of a high-school one) because they need to work to make ends meet. When you’re concerned about where your next meal is coming from, there’s less room in your schedule for studying, and taking out loans in order to spend four years not working isn’t a real option. By contrast, the child from a richer family has the luxury of pursuing an education likely to produce greater future rewards, because certain obstacles have been removed from their path.

Now obviously going to college is not something that humans have psychological adaptations for – it wasn’t a recurrent feature of our evolutionary history as a species – but there are cognitive systems we might expect to follow different developmental trajectories contingent on such estimations of one’s likely place in the environment; these could include systems judging the relative attractiveness of short- vs. long-term rewards, willingness to take risks, pursuit of aggressive resolutions to conflicts, and so on. If the future is uncertain, saving for it makes less sense than taking a smaller reward in the present; if you lack social or financial support, being willing to fight to defend what little you do have might sound more appealing (as losing that little bit is more impactful when you won’t have anything left). The question of interest thus becomes, “what cues in the environment might a developing child use to determine what their future will look like?” This brings us to the current paper by Abajobir et al (2016).

One potential cue might be your experiences with maltreatment while growing up, specifically at the hands of your caregivers. Though Abajobir et al (2016) don’t make the argument I’ve been sketching out explicitly, that seems to be the direction their research takes. They seem to reason (implicitly) that parental mistreatment should be a reliable cue to the future conditions you’re liable to encounter and, accordingly, one that children could use to alter their development. For instance, abusive or neglectful parents might lead to children adopting faster life history strategies involving risk-taking, delinquency, and violence themselves (or, if they’re going the maladaptive explanatory route, the failure of parents to provide supportive environments could in some way hinder development from proceeding as it usually would, in a similar fashion to how not having enough food growing up might lead to one being shorter as an adult. I don’t know which line the authors would favor from their paper). That said, there is a healthy (and convincing) literature consistent with the hypothesis that parental behavior per se is not the cause of these developmental outcomes (Harris, 2009), but rather that it simply co-occurs with them. Specifically, abusive parents might be genetically different from non-abusive ones, and those tendencies could get passed on to the children, accounting for the correlation. Alternatively, maltreatment might just happen to co-occur with children growing up around peer groups more prone to violence and delinquency. In both cases, the correlation is driven by third variables.

Your personality usually can’t be blamed on them; you’re you all on your own

Whatever the nature of that correlation, Abajobir et al (2016) sought to use parental maltreatment from ages 0 to 14 as a predictor of later delinquent behaviors in the children by age 21. To do so, they used a prospective cohort of children and their mothers visiting a hospital between 1981-83. The cohort was then tracked for substantiated cases of child maltreatment reported to government agencies up to age 14, and at age 21 the children themselves were surveyed (the mothers being surveyed at several points throughout that time). Of the 7200 initial participants, 3800 completed the 21-year follow-up. At that follow-up point, the children were asked questions concerning how often they did things like get excessively drunk, use recreational drugs, break the law, lie, cheat, steal, destroy the property of others, or fail to pay their debts. The mothers were also surveyed on matters concerning their age when they got pregnant, their arrest records, marital stability, and the amount of supervision they gave their children (all of these factors, unsurprisingly, predicting whether or not people continued in the study for its full duration).

In total, of the 512 eventual cases of reported child maltreatment, only 172 remained in the sample at the 21-year follow-up. As one might expect, maternal factors like education, arrest record, economic status, and marital instability all predicted an increased likelihood of eventual child maltreatment. Further, of the 3,800 participants, only 161 met the criteria for delinquency at 21 years. All of the previous maternal factors predicted delinquency as well: mothers who had been arrested, got pregnant earlier, had unstable marriages, less education, and less money tended to produce more delinquent offspring. After adjusting for those maternal factors, however, childhood maltreatment still predicted delinquency, but only for the male children. Specifically, maltreated males displayed approximately 2-to-3.5 times as much delinquency as non-maltreated males. For female offspring, there didn’t seem to be any notable correlation.

Now, as I mentioned, there are some genetic confounds here. It seems probable that parents who maltreat their children are, in some very real sense, different from parents who do not, and those tendencies can be inherited. This also doesn’t necessarily point a causal finger directly at parents, as it is also likely that maltreatment correlates with other social factors, like the peer group a child is liable to have or the neighborhood they grow up in. The authors also mention that it is possible their measures of delinquency might not capture whatever effects childhood maltreatment (or its correlates) have on females, and that’s the point I wanted to wrap up by discussing. To really put these findings in context, we would need to understand what adaptive role these delinquent behaviors – or rather the psychological mechanisms underlying them – serve. For instance, frequent recreational drug use and problems fulfilling financial obligations might both signal that the person in question favors short-term rewards over long-term ones; frequent trouble with the law or destroying other people’s property could signal something about how the individual in question competes for social status. Maltreatment does seem to predict (even if it might not cause) different developmental courses, perhaps reflecting an active adjustment of development to deal with local environmental demands.

The kids at school will all think you’re such a badass for this one

As we reviewed in the initial example, however, the same strategies will not always work equally well for every person. Those who are physically weaker are less likely to successfully enact aggressive strategies, all else being equal, for reasons which should be clear. Accordingly, we might expect men and women to show different patterns of delinquency to the extent they face unique adaptive problems. For instance, we might expect that females who find themselves in particularly hostile environments preferentially seek out male partners capable of enacting and defending against such aggression, as males tend to be more physically formidable (which is not to say that the women themselves might not become more physically aggressive as well). Any hypothetical shifts in mating preferences like these would not be captured particularly well by the present research, but it is nice to see the authors are at least thinking about what sex differences in patterns of delinquency might exist. It would be preferable if they had been asking about those differences using this kind of functional framework from the beginning, as that’s likely to yield more profitable insights and refine what questions get asked, but it’s good to see this kind of work all the same.

References: Abajobir, A., Kisely, S., Williams, G., Strathearn, L., Clavarino, A., & Najman, J. (2016). Gender differences in delinquency at 21 years following childhood maltreatment: A birth cohort study. Personality and Individual Differences, 106, 95-103.

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.