About Jesse Marczyk

An Evolutionary-Minded Psychologist, of All Things

Why Non-Violent Protests Work

It’s a classic saying: The pen is mightier than the sword. While this saying communicates some valuable information, it needs to be qualified in a significant way to be true. Specifically, in a one-on-one fight, the metaphorical pens do not beat swords. Indeed, as another classic saying goes: Don’t bring a knife to a gun fight. If knives aren’t advisable against guns, then pens are probably even less advisable. This raises the question as to how – and why – pens can triumph over swords in conflicts. These questions are particularly relevant given some recent happenings at Berkeley in California, where a protest against a speaking engagement by Milo Yiannopoulos took a turn for the violent. While those who initiated the violence might not have been students of the school, and while many people who were protesting might not engage in such violence themselves when the opportunity arises, there does appear to be a sentiment among some people who dislike Milo (like those leaving comments over on the Huffington Post piece) that such violence is to be expected, is understandable, and is sometimes even morally justified or praiseworthy. The Berkeley riot was not the only such incident lately, either.

The Nazis shooting guns is a very important detail here

So let’s discuss why such violent behavior is often counterproductive for those wielding the swords in achieving their goals. Non-violent political movements, like those associated with leaders like Martin Luther King Jr. and Gandhi, appear to yield results, at least according to the only bit of data on the matter I’ve come across (for the link-shy: nonviolent campaigns’ combined complete and partial success rate was about 73%, while the comparable violent rate was about 33%). I even came across a documentary recently I intend to watch about a black man who purportedly got over 200 members of the KKK to leave the organization without force or the threat of it; he simply talked to them. That these nonviolent methods work at all seems rather unusual, at least if you were to frame it in terms of any nonhuman species. Imagine, for instance, that a chimpanzee doesn’t like how he is being treated by the resident dominant male (who is physically aggressive), and so attempts to dissuade that individual from his behavior by nonviolently confronting him. No matter how many times the dominant male struck him, the protesting chimp would remain steadfastly nonviolent until he won over the other chimps in his group, and they all turned against the dominant male (or until the dominant male saw the error of his ways). As this would likely not work out for our nonviolent chimp, hopefully nonviolent protests are sounding a little stranger to you now; yet they often seem to work better than violence, at least for humans. We want to know why.

The answer to that question involves turning our attention back to the foundation of our moral sense: why do we perceive a dimension of right and wrong in the world in the first place? The short answer to this question, I think, is that when a dispute arises, those involved in the dispute find themselves in a state of transient need for social support (since numbers can decide the outcome of the conflict). Third parties (those not initially involved in the dispute) can increase their value as a social asset to one of the disputants by filling that need and assisting them in the fight against the rival. This allows third parties to leverage the transient needs of the disputants to build future alliances or defend existing allies. However, not all behaviors generate the same degree of need: the theft of $10 generates less need than a physical assault. Accordingly, our moral psychology represents a cognitive mechanism for determining what degree of need tends to be generated by behaviors in the interests of guiding where one’s support can best be invested (you can find the longer answer here). That’s not to say our moral sense will be the only input for deciding what side we eventually take – factors like kinship and interaction history matter too – but it’s an important part of the decision.

The applications of this idea to nonviolent protest ought to be fairly apparent: when property is destroyed, people are attacked, and the ability of regular citizens to go about their lives is disrupted by violent protests, this generates a need for social support on the part of those targeted or affected by the violence. It also generates worries in those who feel they might be targeted by similar groups in the future. So, while the protesters might be rioting because they feel they have important needs that aren’t being met (seeking to achieve them via violence, or the threat of it), third parties might come to view the damage inflicted by the protest as more important or harmful (as it generates a larger, or more legitimate, need). The net result of that violence is that third parties side against the protesters, rather than with them. By contrast, a nonviolent protest does not create as large a need on the part of those it targets; it doesn’t destroy property or harm people. If the protesters have needs they want to see met and they aren’t inflicting costs on others, this can yield more support for the protesters’ side.

I’m sure the owner of that car really had this coming…

This brings us to our third classic saying of the post: While I disagree with what you have to say, I will defend to the death your right to say it. Though such a sentiment might be seldom expressed these days, it highlights another important point: even if third parties agree with the grievances of the protesters (or, in this case, disagree with the behavior of the people being protested), the protesters can make themselves look like poor social assets by inflicting inappropriately large costs (as disagreeing with someone generates less harm than stifling their speech through violence). Violence can alienate existing social support (supporters don’t want to have to defend you from future revenge, as people who pick fights tend to initiate and perpetuate conflicts, rather than end them) and make enemies of allies (as the opposition now offers a better target of social investment, given their relative need). The answer as to why pens can beat swords, then, is not that pens are actually mightier (i.e., capable of inflicting greater costs), but rather that pens tend to be better at recruiting other swords to do their fighting for them (or, in milder cases, pens can remove the social support from the swords, making them less dangerous). The pen doesn’t actually beat the sword; it’s the two or more swords the pen has persuaded to fight for it – and not the opposing sword – that do.

Appreciating the power of social support helps bolster our understanding of other possible interactions between pens and swords. For instance, when groups are small, swords will likely tend to be more powerful than pens, as large numbers of third parties aren’t around to be persuaded. This is why our nonviolent chimp example didn’t work well: chimps don’t reliably join disputes as third parties on the basis of behavior the way humans do. Without that third-party support, non-violence will fail. The corollary point here is that pens might find themselves in a bit of a bind when it comes to confrontations with other pens. Put in plain terms: nonviolence is a useful rallying cry for drawing social support if the other side of the dispute is being violent. If both sides abstain from violence, however, nonviolence per se no longer persuades people. You can’t convince someone to join your side in a dispute by pointing out something your side shares with the other. This should result in the expectation that people will frequently over-represent the violence of the opposition, perhaps even fabricating it completely, in the interests of persuading others. 

Yet another point that can be drawn from this analysis is that even “bad” ideas or groups (whether labeled as such for moral or factual reasons) can recruit swords to their side if they are targeted by violence. Returning to the cases we began with – the riot at UC Berkeley and the incident where Richard Spencer got punched – if you hope to exterminate people who hold disagreeable views, then violence might seem like the answer. However, as we have seen, violence against others, even disagreeable others, who are not themselves behaving violently can rally support from third parties, as those parties might begin to worry that threats to free speech (or other important issues) are more harmful than the opinions and words we find disagreeable (again, hitting someone creates more need than talking does). On the other hand, if you hope to persuade people to join your side (or at least not join the opposition), you will need to engage with arguments and reasoning. Importantly, you need to treat those you hope to persuade as people and engage with the ideas and values they actually hold. If the goal in these disputes really is to make allies, you need to convince others that you have their best interests at heart. Calling those who disagree “baskets of deplorables,” suggesting they’re too stupid to understand the world, or anything to that effect doesn’t tend to win their hearts and minds. If anything, it sends a signal to them that you do not value them, giving them all the more reason not to spend their time helping you achieve your goals.

“Huh; I guess I really am a moron and you’re right. Well done,” said no one, ever.

As a final matter, we could also discuss the idea that violence is useful for snuffing out threats preemptively. In other words, better to stop someone before they can try to attack you, rather than after their knife is already in your back. There are several reasons preemptive defense is just as suspect, so let’s run through a few: first, there are different legal penalties for acts like murder and attempted murder, as attempted – but incomplete – acts generate less need than completed ones. As such, they garner less social support. Second, absent very strong evidence that the people targeted for violence would have eventually become violent, the preemptive attacks will not look defensive; they will simply look aggressive, returning to the initial problems violent protests face. Relatedly, preemptive violence is unlikely to ever make allies of enemies; if anything, it will make deeper enemies of existing ones and their allies. Remember: when you hurt someone, you indirectly inflict costs on their friends, families, and other relations as well. Finally, some people will likely develop reasonable concerns about the probability of being attacked for holding other opinions or engaging in behaviors people find unpleasant or dangerous. With speech already being equated to violence among certain groups, this concern doesn’t seem unfounded.

In the interests of persuading others – actors and third parties alike – nonviolence is usually the better first step. However, nonviolence alone is not enough, especially if your opposition is nonviolent as well. Not being violent does not mean you’ve already won the dispute; just that you haven’t lost it. It is at that point that you need to persuade others that your needs are legitimate, your demands reasonable, and your position in their interests as well, all while your opposition attempts to be persuasive themselves. It’s not an easy task, to be sure, and it’s one many of us are worse at than we’d like to think; it’s just the best way forward.

On The Need To Evolutionize Memory Research

This semester I happen to be teaching a course on human learning and memory. Part of the territory that comes with designing and teaching any class is educating yourself on the subject: brushing up on what you do know and learning about what you do not. For the purposes of this course, much of my preparation falls into the latter category. Memory isn’t my main specialty, so I’ve been spending a lot of time reading up on it. Wandering into a relatively new field is always an interesting experience, and on that front I consider myself fortunate: I have a theoretical guide to help me think about and understand the research I’m encountering – evolution. Rather than just viewing the field of memory as a disparate collection of facts and findings, evolutionary theory allows me to better synthesize and explain, in a satisfying way, all these novel (to me) findings and tie them to one another. It strikes me as unfortunate that, as with much of psychology, there appears to be a distinct lack of evolutionary theorizing on matters of learning and memory, at least as far as the materials I’ve come across would suggest. That’s not to say there has been none (indeed, I’ve written about some before), but rather that there certainly doesn’t seem to have been enough. It’s not the foundation of the field, as it should be.

“How important could a solid foundation really be?”

To demonstrate what I’m talking about, I wanted to consider an effect I came across during my reading: the generation effect in memory. In this case, generation refers not to a particular age group (e.g., people in my generation), but rather to the creation of information, as in to generate. The finding itself – which appears to replicate well – is that, if you give people a memory task, they tend to be better at remembering information they generated themselves, relative to remembering information that was generated for them. To run through a simple example, imagine I was trying to get you to remember the word “bat.” On the one hand, I could just have the word pop up on a screen and tell you to read and remember it. On the other hand, I could give you a different word, say, “cat,” and ask you to come up with a word that rhymes with “cat” that can complete the blanks in “B _ _.” Rather than my telling you the word “bat,” then, you would generate the word on your own (even if the task nudges you towards generating it rather strongly). As it turns out, you should have a slight memory advantage for the words you generated, relative to the words you were just given.

Now that’s a neat finding and all – likely one that people would read about and thoughtfully nod their heads in agreement with – but we want to explain it: why is memory better for words you generate? On that front, the textbook I was using was of no use, offering nothing beyond the name of the effect and a handful of examples. If you’re trying to understand the finding – much less explain it to a class full of students – you’ll be on your own. Textbooks are always incomplete, though, so I turned to some of the referenced source material to see how the researchers in the field were thinking about it. These papers seemed to predominantly focus on how information was being processed, but not necessarily on why it was being processed that way. As such, I wanted to advance a little bit of speculation on how an evolutionary approach could help inform our understanding of the finding. (I say could because this is not the only possible answer to the question one could derive from evolutionary theory; what I hope to focus on is the approach to answering the question, rather than the specific answer I will float. Too often people treat an evolutionary hypothesis that was wrong as a reflection on the field, neglecting that how an issue was thought through is a somewhat separate matter from the answer that eventually got produced.)

To explain the generation effect I want to first take it out of an experimental setting and into a more naturalistic one. That is, rather than figuring out why people can remember arbitrary words they generated better than ones they just read, let’s think about why people might have a better memory for information they’ve created in general, relative to information they heard. The initial point to make on that front is that our memory systems will only retain a (very) limited amount of the information we encounter. The reason for this, I suspect, is that if we retained too much information, cognitively sorting through it for the most useful pieces of information would be less efficient, relative to a case where only the most useful information was retained in the first place. You don’t want a memory (which is metabolically costly to maintain) chock-full of pointless information, like what color shirt your friend wore when you hung out 3 years ago. As such, we ought to expect that we have a better memory for events or facts that carry adaptively-relevant consequences.

“Yearbooks; helping you remember pointless things your brain would otherwise forget”

Might information you generate carry different consequences than information you just hear about? I think there’s a solid case to be made that, at least socially, this can be true. For a quick example, consider the theory of evolution itself. This idea is generally considered to be one of the better ones people (collectively) have had. Accordingly, it is perhaps unsurprising that most everyone knows the name of the man who generated this idea: Charles Darwin. Contrast Darwin with someone like me: I happen to know a lot about evolutionary theory, and that does grant me some amount of social prestige within some circles. However, knowing a lot about evolutionary theory does not afford me anywhere near the amount of social acclaim that Darwin receives. There are reasons we should expect this state of affairs to hold as well, such as that generating an idea can signal more about one’s cognitive talents than simply memorizing it does. Whatever the reasons, however, if ideas you generate carry greater social benefits, our memory systems should attend to them more vigilantly; better not to forget that brilliant idea you had than the one someone else did.

Following this line of reasoning, we could also predict that there would be circumstances in which information you generated is recalled less-readily than if you had just read about it: specifically, in cases when the information would carry social costs for the person who generated it.

Imagine, for instance, that you’re a person who is trying to think up reasons to support your pet theory (call that theory A). Initially, your memory for that reasoning might be better if you think you’ve come up with an argument yourself than if you had read about someone else who put forth that same idea. Suppose, however, that a different theory (call that theory B) later ends up saying your theory is wrong and, worse yet, theory B is also better supported and widely accepted. At that point, your memory for the initial information supporting theory A might actually be worse if you generated those reasons yourself, as that reflects more negatively on you than if you had just read about someone else being wrong (and memory would be worse, in this case, because you don’t want to advertise the fact that you were wrong to others, while you might care less about talking about why someone who wasn’t you was wrong).

In short, people might selectively forget potentially embarrassing information they generated but was wrong, relative to times they read about someone else being wrong. Indeed, this might be why it’s said truth passes through three stages: ridicule, opposition, and acceptance. This can be roughly translated to someone saying of a new idea, “That’s silly,” to, “That’s dangerous,” to, “That’s what I’ve said all along.” This is difficult to test, for sure, but it’s a possibility worth mulling over.

How you should feel reading over old things you forgot you wrote

With the general theory described, we can now try and apply that line of thinking back to the unnatural environment of memory research labs in universities. One study I came across (deWinstanley & Bjork, 1997) reports that generating information doesn’t always hold an advantage over reading it. In their first experiment, the researchers had conditions where participants would either read cue-word pairs (like “juice” – “orange”, and “sweet” – “pineapple”) or read a cue and then generate a word (e.g., “juice” – “or_n_ _”). The participants would later be tested on how many of the target words (the second one in each pair) they could recall. When participants were just told there would be a recall task later, but not the nature of that test, the generate group had a memory advantage. However, when both groups were told to focus on the relationship between the targets (such as them all being fruits), the read group’s ability to remember now matched that of the generate group.

In their second experiment, the researchers then changed the nature of the memory task: instead of asking participants to just freely recall the target words, they would be given the cue word and asked to recall the associated target (e.g., they see “juice” and need to remember “orange”). In this case, when participants were instructed to focus on the relationship between the cue and the target, it was the read participants with the memory advantage; not the generate group.

One might explain these findings within the framework I discussed as follows: in the first experiment, participants in the “read” condition were actually also in an implicit generate condition; they were being asked to generate a relationship between the targets to be remembered and, as such, their performance improved on the associated memory task. In the second experiment, participants in the read condition were likewise in an implicit “generate” condition: they were being asked to generate connections between the cues and targets. However, those in the explicit generate condition were only generating the targets; not their cues. As such, it’s possible participants tended to selectively attend to the information they had created over the information they did not. Put simply, the generate participants’ ability to better recall the words they created was interfering with their ability to remember those words’ associations with the cues they did not create. Their memory systems were focusing on the former over the latter.

A more memorable meal than one you go out and buy

If one wanted to increase the performance of those in the explicit generate condition for experiment two, then, all a researcher might have to do would be to get their participants to generate both the cue and the target. In that instance, the participants should feel more personally responsible for the connections – it should reflect on them more personally – and, accordingly, remember them better. 

Now, whether the answers I put forth get things all the way (or even partially) right is beside the point. It’s possible that the predictions I’ve made here are completely wrong. What I have been noticing, though, is that words like “adaptive” and “relevance” are all but absent from this book (and these papers) on memory. As I hope this post (and my last one) illustrates, evolutionary theory can help guide our thinking to areas it might not otherwise reach, allowing us to more efficiently think up profitable avenues for understanding existing research and creating future projects. It doesn’t hurt that it helps students understand the material better, either.

References: deWinstanley, P. & Bjork, E. (1997). Processing instructions and the generation effect: a test of the multifactor transfer-appropriate processing theory. Memory, 5, 401-421.

 

The Adaptive Significance Of Priming

One of the more common words you’ll come across in the psychological literature is priming, which is defined as an instance where exposure to one stimulus influences the reaction to a subsequent one. There are plenty of examples one might think of to demonstrate this effect, one of which might be if you were to ask participants to tell you whether a string of letters they see pop up on a screen is a word or a non-word. They would be quicker to respond to the word “nurse” if it were preceded by the prime of “doctor” relative to being preceded by “chair,” owing to the relative association (or lack thereof) between the words. While a great deal of psychological literature deals with priming, very few papers I have come across actually attempt to give some kind of adaptive, functional account of what priming is and, accordingly, why we should expect it to behave the way it does. Because of that absence of theoretical grounding, some research that utilizes priming ends up generating hypotheses that aren’t just strange; they’re biologically implausible.

Pictured: Something strange, but at least biologically plausible

To give a more concrete sense of what I mean, I wanted to briefly summarize some of that peculiar research using priming that I’ve covered before. In this case, the research either focuses on how priming can affect perceptions about the world or how priming can affect people’s behavior. These lines of inquiry often seem to try to demonstrate how people are biased, inaccurate, or otherwise easily manipulated by seemingly-minor environmental influences. To see what I mean, let’s consider some research findings on both fronts. In terms of perceptions about the world, a few findings are highlighted in this piece, including the prospect that holding a warm drink (instead of a cold one) can lead you to judge other people as more caring and generous; a finding that falls under the umbrella of embodied cognition. Why would such a finding arise? If I understand correctly, the line of thought is that holding a warm drink activates some part of your brain that holds the concept “warm”; as that concept is tied indirectly to personality (e.g., “He’s a really warm and friendly person”), you end up thinking the person is nicer than you otherwise would. Warm drinks prime the concept of emotional warmth.

It doesn’t take much thinking about this explanation to see why it seems wrong: a mind structured in such a way would be making a mistake about the world. Specifically, because holding a warm object in your hand should have no effect on the personality and behavior of someone else, if you use that temperature information to influence your judgments, you will be more likely to misjudge the probable intentions of others. If you’re not nice, but I think you are (or at least if you’re not as nice as I think you are), I will behave in sub-optimal ways, perhaps by putting more trust in you than I should or by generating other expectations of you that won’t be fulfilled. Because there are real costs to being wrong about others – as it opens you up to risks of exploitation, for instance – a cognitive system wired this way should be outperformed by one which ignores such irrelevant information.

Other such examples of the effects of priming posit something similar, except on a behavioral level instead of a perceptual one. For instance, research on stereotype threat suggests that if you remind women of their gender before a math test (in other words, you prime gender), they will tend to perform worse than women who were not primed, because the concept of “woman” is related to stereotypes of “being worse at math.” This should be maladaptive for precisely the same reason that the perceptual variety is: actively making yourself worse at a task than you actually are will tend to carry costs. To the extent this effect ostensibly runs in the opposite direction – a case where someone gets better at a task because of a prime, as in the work on power poses – one would wonder why an individual should wait for a prime rather than just get on with the task at hand. Now sure, these effects don’t replicate well and probably aren’t real (see stereotype threat, power poses, and elderly-prime effects on walking speed), but that they would even be proposed in the first place is indicative of a problem in the way people think about psychology. They seem like ideas that couldn’t even possibly be correct, given what they would imply about the probable fitness costs to their bearers. Positing maladaptive psychological mechanisms seems rather perverse.

Now I suspect some might object at this point and remind me that not everything our brains do is adaptive. In fact, priming might simply be an occasionally maladaptive byproduct of activating certain portions of our brain by proximity. This is referred to as spreading activation, and it might just be an unfortunate byproduct of brain wiring. “Sure,” you might say, “this kind of spreading activation isn’t adaptive, but it’s just a function of how the brain gets wired up. Our brains can’t help it.” Well, as it turns out, it seems they certainly can.

“Don’t give me that look; you knew better!”

This brings me to some research on memory and priming by Klein et al (2002). These researchers begin with an adaptive framework for priming, suggesting that priming reflects a feature of our cognitive systems – rather than a byproduct – that helps speed up the delivery of appropriate information. In short, priming represents something of a search engine that uses one’s present state to try to predict what information will be needed in the near future. It is crucial to emphasize the word “appropriate” in that hypothesis: the benefit of accessing information stored in our memories, for instance, is to help guide our current behavior. As there are many more ways of guiding your behavior towards some maladaptive end than towards an adaptive one, information stored in memory needs to be accessed selectively. If you spend too much time accessing irrelevant or distracting information, the function of the priming itself – to deliver relevant information quickly – would be thwarted. To put that into a simple example, if you’re trying to quickly solve a problem related to whether you should trust someone, accessing information about what you had for breakfast that morning will not only fail to help you, but it will actively slow down your completion of the task. You’d be wasting time processing irrelevant information.

To demonstrate this point, Klein et al (2002) decided to look at trait judgments: essentially asking people a question like, “how well does the word ‘kind’ describe [you/someone else]?” Without going into too much detail on the matter, our brains seem to store information relevant to these tasks in two different formats: in a summary form and an episodic form. This means one memory system contains information about particular behavioral instances (e.g., a time someone was kind or mean) while another derives summaries of that behavior (e.g., that person, overall, is kind or mean). Broadly speaking, these different memory systems exist owing to a cognitive trade-off between speed and accuracy in judgments: if you want to know how to behave towards someone, it’s quicker to consult the summary information than process every individual memory of their behaviors. However, the summary information tends to be less complete and accurate than the sum of the individual memories. That is, knowing that someone is “often nice” doesn’t give you insight into the conditions during which they are mean. 

As such, if someone was trying to make a judgment about whether you were nice, and they have a summary of “often nice,” they don’t need to spend time consulting memories of every nice thing you’ve done; that would be redundant processing. Instead, they would want to selectively consult information about the times you were not nice, as this would help them figure out the boundary conditions of their judgment; when the “often nice” label doesn’t apply. This led Klein et al (2002) to the following prediction: retrieving a trait summary of a person should prime trait-inconsistent episodes from memory, rather than trait-consistent ones. In short, priming effects ought to be functionally specific.

“The cat is usually friendly, except for those times you shaved him”

And that is exactly what they found: when participants were asked to judge whether a trait described them (or their mother), they were quicker to subsequently recall a time they (or their mother) behaved in an inconsistent manner. To put that in context, if participants were asked whether the word “polite” described them, they would be quicker to recall a specific instance they were rude, relative to a time they were polite. Moreover, just being asked to define the terms (e.g., rude or polite) didn’t appear to prime trait-consistent episodes in memory either: participants were not quicker to recall a time they were polite after having defined the term. This would be a function of the fact that defining a term does not require you to make a trait judgment, so episodic memories wouldn’t be relevant information.

These results are important because if priming were truly just a byproduct of spreading neural activation, then trait judgments (Are you kind?) should prime trait-consistent episodes (a time you were kind); they just don’t seem to do that. As such, we could conclude that priming does not appear to be a mere byproduct of neural activation. If priming isn’t just a biological necessity, then studies which make use of this paradigm would have to better justify their expectations. If it’s not justifiable to expect indiscriminate neural activation, researchers would need to put in more time to explain and understand the particular patterns of priming they find. Ideally they would do this in advance of conducting the research (as Klein et al did), as that would likely save people a lot of time publishing papers on priming that subsequently fail to replicate.

References: Klein, S., Cosmides, L., Tooby, J., & Chance, S. (2002). Decisions and the evolution of memory: Multiple systems, multiple functions. Psychological Review, 109, 306-329.

Overperception Of Sexual Interest Or Overeager Researchers?

Though I don’t make a habit of watching many shows, I do often catch some funny clips of them that have been posted online. One that I saw semi-recently (which I feel relates to the present post) is a clip from Portlandia. In this video, people are writing a magazine issue about a man living a life that embodies manhood. In this case, they select a man who used to work at an office, but then left his job and now makes furniture. While everyone is really impressed with the idea, it eventually turns out that the man in question does make furniture…but it’s terrible. Faced with the revelation that the man’s work isn’t good – that it probably wasn’t worth leaving his job to do something he’s bad at – the people in question aren’t impressed by his overconfidence in pursuing his furniture work. They don’t seem to find him more attractive because he was overconfident; quite the opposite, in fact. The key determinant of his attractiveness was the actual quality of his work. In other words, since he couldn’t back up his confidence with his efforts, the ratings of his attractiveness appeared to drop precipitously.

It might not be comfortable, but at least it’s hand-made

What we can learn from an example like this is that something like overconfidence per se – being more confident than one should be and behaving in ways one rightfully shouldn’t because of it – doesn’t appear to be impressive to potential mates. As such, we might expect that people who pursue activities they aren’t well suited for tend to do worse in the mating domain than those who are able to exist within a niche they more suitably fill: if you can’t cut it as a craftsman, better to keep that steady, yet less-interesting office job. This is largely a factor of the overconfident investing their time and effort into pursuits that do not yield positive benefits for them or others. You can think of it like playing the lottery, in a sense: if you are overly confident that you’ll win the lottery, you might pour money into lottery tickets that you could otherwise spend on pursuits that don’t amount to lighting it on fire.

This is likely why the research on the (over)perception of sexual intent turns out the way it does. I’ve written about the topic before, but to give you a quick overview of the main points: researchers have uncovered that men tend to perceive more sexual interest in women than women themselves report having. To put that in a simple example, if you were to ask a woman, “given you were holding a man’s hand, how sexually interested in him are you?” you’ll tend to get a different answer than if you ask a man, “given that a woman was holding your hand, how sexually interested do you think she is in you?” In particular, men tend to think behaviors like hand-holding signal more sexual intent than women report. These kinds of results have been chalked up to men overperceiving sexual intent, but more recent research puts a different spin on the answers: specifically, if you were to ask a woman, “given that another woman (who is not you) is holding a man’s hand, how sexually interested in him do you think she is?” the answers from the women now align with those of the men. Women (as well as men) seem to believe that other women will underreport their sexual intent, while believing their own self-reports are accurate. Taken together, then, both men and women seem to perceive more sexual intent in a woman’s behavior than the woman herself reports. Rather than everyone else overperceiving sexual intent, it seems a bit more probable that women themselves tend to underreport their own sexual intent. In a sentence, women might play a little coy, rather than everyone else in the world being wrong about them.

Today, I wanted to talk about a very recent paper (Murray et al, 2017) by some of the researchers who seem to favor the overperception hypothesis. That is, they seem to suggest that women honestly and accurately report their own sexual interest, but everyone else happens to perceive it incorrectly. In particular, their paper represents an attempt to respond to the point that women overperceive the sexual intent of other women as well. At the outset, I will say that I find the title of their research rather peculiar and their interpretation of the data rather strange (points I’ll get to below). Indeed, I found both things so strange that I had to ask around a little first before I started writing this post to ensure that I wasn’t misreading something, as I know smart people wrote the paper in question and the issues seemed rather glaring to me (and if a number of smart people seem to be mistaken, I wanted to make sure the error wasn’t simply something on my end first). So let’s get into what was done in the paper and why I think there are some big issues.

“…Am I sure I’m not the crazy one here, because this seems real strange”

Starting out with what was done, Murray et al (2017) collected data from 414 heterosexual women online. These women answered questions about 15 different behaviors which might signal romantic interest. They were first asked these questions about themselves, and then about other women. So, for instance, a woman might be asked, “If you held hands with a man, how likely is it you intend to have sex with him?” They would then be asked the same hand-holding question, but about other women: “If a woman (who is not you) held hands with a man, how likely is it that she would…” and then end with something along the lines of, “say she wants to have sex with him,” or “actually want to have sex with him.” They were trying to tap the difference between perceptions of “what women will say” and “what women actually want.” They also wanted to see what happened when you asked the “say” or “want” question first.

Crucially, the previous research these authors are responding to found that both men and women tend to report that, in general, women tend to want more than they say. The present research was only looking at women, but it found that same pattern: regardless of whether you ask the “say” or “want” questions first, women seem to think that other women will say they are less interested than they actually are. In short, women believe other women to be at least somewhat coy. In that sense, these results are a direct replication of the previous findings.

One of the things I find strange about the paper, then, is the title: “A Preregistered Study of Competing Predictions Suggests That Men Do Overestimate Women’s Sexual Intent.” Since this study was only looking at women, the use of “men” in the title seems poorly thought out. I assume the intentions of the authors were to say that these results are consistent with the idea that men also overperceive, but even in that case it really ought to say “People Overestimate,” rather than “Men Do.” I earnestly can’t think of a reason to single out the male gender in the title other than the possibility that the authors seem to have forgotten they were measuring something other than what they actually wanted to. That is, they wanted their results to speak to the idea that male perceptions are biased upwards (in order to support their own, prior work), but they seem to be a bit overeager to do so and jumped the gun.

“Well women are basically men anyway, right? Close enough”

Another point I find rather curious from the paper – some data the authors highlight – is that the women’s responses did depend (somewhat) on whether you ask the “say” or “want” questions first. Specifically, the responses to both the “say” and “want” scales are a little lower when the “want” question is asked first. However, the relative pattern of the data – the effect of perceived coyness – exists regardless of the order. I’ve attached a slightly-modified version of their graph below so you can see what their results look like.

What this suggests to me is that something of an anchoring effect exists, whereby the question order might affect how people interpret the values on the scale of sexual intent (which goes from 1 to 7). What it does not suggest to me is what Murray et al (2017) claim it does:

“These results support the hypothesis that women’s differential responses to the “say” and “want” questions in Perilloux and Kurzban’s study were driven by question-order effects and language conventions, rather than by women’s chronic underreporting of their sexual intentions.”

As far as I can tell, they do nothing of the sort. Their results – to reiterate – are effectively direct replications of what Perilloux & Kurzban found. Regardless of the order in which you ask the questions, women believed other women wanted more than they would let on. How that is supposed to imply that the previously (and presently) observed effects are due to question ordering is beyond me. To convince me this was simply an order effect, data would need to be presented that either (a) showed the effect goes away when the order of the questions is changed or (b) showed the effect changes direction when the question order is changed. Since neither of those things happened, I’m hard pressed to see how the results can be chalked up to order effects.

For whatever reason, Murray et al (2017) seem to make a strange contrast on that front:

“We predicted that responses to the “say” and “want” questions would be equivalent when they were asked first, whereas Perilloux and Kurzban confirmed their prediction that ratings for the “want” question would be higher than ratings for the “say” question regardless of the order of the questions”

Rather than being competing hypotheses, these two seem like hypotheses that could both be true: you could see that people interpret the values on the scales differently, depending on which question you ask first, while also predicting that the ratings for “want” questions will be higher than those for “say” questions, regardless of the order. Basically, I have no idea why the word “whereas” was inserted into that passage, as if to suggest both of those things could not be true (or false) at the same time (I also have no idea why the word “competing” was inserted into the title of their paper, as it seems as inappropriate there as the word “men”). Both of those hypotheses can clearly be true and, indeed, seem to be if these results are taken at face value.

“They predict this dress contains the color black, whereas we predict it contains white”

To sum up, the present research by Murray et al (2017) doesn’t seem to suggest that women (and, even though they didn’t look at them in this particular study, men) overperceive other women’s sexual intentions. If anything, it suggests the opposite. Indeed, as their supplementary file points out, “…across both experimental conditions women reported their own sexual intentions to be significantly lower than both what other women say and what they actually want,” and, “…it seems that when reporting on their own behavior, the most common responses for acting either less or more interested are either never engaging in this behavior or sometimes doing so, whereas women seem to believe that other women most commonly engage in both behaviors some of the time.”

So, not only do women believe that other women (who aren’t them) engage in this coy behavior more often than they themselves do (which would be impossible, as not everyone can think that about everyone else and be right), but women also even admit to, at least sometimes, acting less interested than they actually are. When women are actually reporting that, “Yes, I have sometimes underreported my sexual interest,” well, that seems to make the underreporting hypothesis sound a bit more plausible. The underreporting hypothesis would also be consistent with the data finding that women tend to underreport their number of sexual partners when they think others might see that report, or when they believe their lies will not be discovered; by contrast, male reports of partner numbers are more consistent (Alexander & Fisher, 2003).
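Why that belief pattern is impossible for everyone to hold accurately can be made concrete with a little arithmetic: if every woman’s self-report were accurate, the population average of self-reports and the average belief about “other women” would describe the same population and so would have to match. A toy sketch with invented numbers on a 1-7 intent scale (the values are purely illustrative, not data from any of the studies discussed):

```python
# Invented ratings on a 1-7 sexual-intent scale (purely illustrative).
self_reports = [2.0, 3.0, 2.5, 3.5, 2.0]          # "my own intent is..."
beliefs_about_others = [3.5, 4.0, 3.5, 4.5, 3.0]  # "other women's intent is..."

avg_self = sum(self_reports) / len(self_reports)
avg_others = sum(beliefs_about_others) / len(beliefs_about_others)

# "Other women" are just the same population viewed from the outside, so if
# both the self-reports and the beliefs about others were accurate, these
# two averages would have to be equal. When they diverge, at least one set
# of reports must be wrong.
print(avg_self)    # 2.6
print(avg_others)  # 3.7
```

Since the observed pattern is that the “others” average sits above the “self” average, either everyone is overperceiving everyone else, or self-reports are running low; the admissions of sometimes acting less interested favor the latter reading.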

Perhaps the great irony here, then, is that Murray et al (2017) might have been a little overeager to interpret their results in a certain fashion, and so ended up misinterpreting their study as speaking to men (when it only looked at women) and their hypothesis as being a competing one (when it is not). There are costs to being overeager, just as there are costs to being overconfident; better to stick with appropriate eagerness or confidence to avoid those pitfalls.

References: Alexander, M. G., & Fisher, T. D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. Journal of Sex Research, 40(1), 27-35.

Murray, D., Murphy, S., von Hippel, W., Trivers, R., & Haselton, M. (2017). A preregistered study of competing predictions suggests that men do overestimate women’s sexual intent. Psychological Science. 

 

Intergenerational Epigenetics And You

Today I wanted to cover a theoretical matter I’ve discussed before but apparently not on this site: the idea of epigenetic intergenerational transmission. In brief, epigenetics refers to chemical markers attached to your DNA that regulate how it is expressed without changing the underlying sequence itself. You could imagine your DNA as a book full of information, and each cell in your body contains the same book. However, not every cell expresses the full genome; each cell only expresses part of it (which is why skin cells are different from muscle cells, for instance). The epigenetic portion, then, could be thought of as black tape placed over certain passages in the book so they are not read. As this tape is added or removed by environmental influences, different portions of the DNA will become active. From what I understand about how this works (which is admittedly very little at this juncture), these markers are usually not passed on to offspring from parents. The life experiences of your parents, in other words, will not be passed on to you via epigenetics. However, there has been some talk lately of people hypothesizing that not only are these changes occasionally (perhaps regularly?) passed on from parents to offspring; the implication also seems to be present that they might be passed on in an adaptive fashion. In short, organisms might adapt to their environment not just through genetic factors, but also through epigenetic ones.

Who would have guessed Lamarckian evolution was still alive?

One of the examples given in the target article on the subject concerns periods of feast and famine. While rare in most first-world nations these days, these events probably used to be more recurrent features of our evolutionary history. The example there involves the following context: during some years in early-1900s Sweden food was abundant, while during other years it was scarce. Boys who were hitting puberty just at the time of a feast season tended to have grandchildren who died six years earlier than the grandchildren of boys who had experienced a famine season during the same developmental window. The causes of death, we are told, often involved diabetes. Another case involves the children of smokers: men who smoked right before puberty tended to have children who were fatter, on average, than the children of fathers who smoked habitually but didn’t start until after puberty. The speculation, in this case, is that development was in some way permanently affected by food availability (or smoking) during a critical window of development, and those developmental changes were passed on to their sons and the sons of their sons.

As I read about these examples, there were a few things that stuck out to me as rather strange. First, it seems odd that no mention was made of daughters or granddaughters in the smoking case, whereas in the food example there wasn’t any mention of the in-between male generation (they only mentioned grandfathers and grandsons there; not fathers). Perhaps there’s more to the data than is let on there but – in the event that no effects were found for fathers or daughters of any kind – it is also possible that a single data set might have been sliced up into a number of different pieces until the researchers found something worth talking about (e.g., didn’t find an effect in general? Try breaking the data down by gender and testing again). Now that might or might not be the case here, but as we’ve learned from the replication troubles in psychology, one way of increasing your false-positive rate is to divide your sample into a number of different subgroups. For the sake of this post, I’m going to assume that is not the case and treat the data as representing something real, rather than a statistical fluke.
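That inflation of the false-positive rate is easy to demonstrate by simulation. In the quick sketch below (my own illustration, not from the article), every simulated “study” examines pure noise; running more subgroup tests per study makes it increasingly likely that at least one comes up significant by chance alone, roughly following 1 − 0.95^k for k tests:

```python
import random

random.seed(42)

ALPHA = 0.05        # conventional significance threshold
N_STUDIES = 10_000  # simulated studies, each with NO real effect present

def null_test_is_significant():
    # A well-calibrated test applied to pure noise comes up "significant"
    # with probability ALPHA purely by chance.
    return random.random() < ALPHA

false_positive_rate = {}
for k in (1, 2, 4):  # number of subgroup tests run per study
    hits = sum(
        any(null_test_is_significant() for _ in range(k))
        for _ in range(N_STUDIES)
    )
    false_positive_rate[k] = hits / N_STUDIES
    print(f"{k} test(s) per study: {false_positive_rate[k]:.1%} "
          f"of null studies 'find' an effect")
```

With one test per study, the false-positive rate sits near the nominal 5%; splitting the same data into four subgroups pushes the chance of at least one spurious “finding” toward 19% (1 − 0.95⁴ ≈ 0.185), which is why unplanned subgroup slicing is worth worrying about.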

Assuming this isn’t just a false positive, there are two issues with the examples as I see them. I’m going to focus predominantly on the food example to highlight these issues: first, passing on such epigenetic changes seems maladaptive and, second, the story behind it seems implausible. Let’s take the issues in turn.

To understand why this kind of inter-generational epigenetic transmission seems maladaptive, consider two hypothetical children born one year apart (in, say, the years 1900 and 1901). At the time the first child’s father was hitting puberty, there was a temporary famine taking place and food was scarce; at the time of the second child, the famine had passed and food was abundant. According to the logic laid out, we should expect that (a) both children will have their genetic expression altered due to the epigenetic markers passed down by their parents, affecting their long-term development, and (b) the children will, in turn, pass those markers on to their own children, and their children’s children (and so on).

The big Thanksgiving dinner that gave your grandson diabetes

The problems here should become apparent quickly enough. First, let’s begin by assuming these epigenetic changes are adaptive: they are passed on because they are reproductively useful at helping a child develop appropriately. Specifically, a famine or feast at or around the time of puberty would need to be a reliable cue as to the type of environments their children could expect to encounter. If a child is going to face shortages of food, they might want to develop in a different manner than if they’re expecting food to be abundant.

Now that sounds well and good, but in our example these two children were born just a year apart and, as such, should be expected to face (broadly) the same environment, at least with respect to food availability (since feasts and famines tend to be more global). Clearly, if the children were adopting different developmental plans in response to that feast or famine, both of them (plan A affected by the famine and plan B not so affected) cannot be adaptive. Specifically, if this epigenetic inheritance is trying to anticipate children’s future conditions using those present around the time of their father’s puberty, at least one of the children’s developmental plans will be anticipating the wrong set of conditions. That said, both developmental plans could be wrong, and conditions could look different than either anticipated. Trying to anticipate the future conditions one will encounter over a lifespan (and over one’s children’s and grandchildren’s lifespans) using only information from the brief window of time around puberty seems like a plan doomed to failure, or at least suboptimal results.

A second problem arises because these changes are hypothesized to be intergenerational: capable of transmission across multiple generations. If that is the case, why on Earth would the researchers in this study pay any mind to the conditions the grandparents were facing around the time of puberty per se? Shouldn’t we be more concerned with the conditions being faced a number of generations back, rather than the more immediate ones? To phrase this in terms of a chicken/egg problem, shouldn’t the grandparents in question have inherited epigenetic markers of their own from their grandparents, and so on down the line? If that were the case, the conditions they were facing around their puberty would either be irrelevant (because they had already inherited such markers from their own parents) or would have altered the epigenetic markers as well.

If we opt for the former possibility, then studying grandparents’ puberty conditions shouldn’t be too impactful. However, if we opt for the latter possibility, we are again left in a bit of a theoretical bind: if the conditions faced by the grandparents altered their epigenetic markers, shouldn’t those same markers also have been altered by the parents’ experiences, and their grandsons’ experiences as well? If they are being altered by the environment each generation, then they are poor candidates for intergenerational transmission (just as DNA that was constantly mutating would be). There is our dilemma, then: if epigenetics change across one’s lifespan, they are unlikely candidates for transmission between generations; if epigenetic changes can be passed down across generations stably, why look at the specific period pre-puberty for the grandparents? Shouldn’t we be concerned with their grandparents, and so on down the line?

“Oh no you don’t; you’re not pinning this one all on me”

Now, to be clear, a famine around the time of conception could affect development in other, more mundane ways. If a child isn’t receiving adequate nutrition at the time they are growing, then it is likely certain parts of their developing body will not grow as they otherwise would. When you don’t have enough calories to support your full development, trade-offs need to be made, just like if you don’t have enough money to buy everything you want at the store you have to pass up on some items to afford others. Those kinds of developmental outcomes can certainly have downstream effects on future generations through behavior, but they don’t seem like the kind of changes that could be passed on the way genetic material can. The same can be said about the smoking example provided as well: people who smoked during critical developmental windows could do damage to their own development, which in turn impacts the quality of the offspring they produce, but that’s not like genetic transmission at all. It would be no more surprising than finding out that parents exposed to radioactive waste tend to have children of a different quality than those not so exposed.

To the extent that these intergenerational changes are real and not just statistical oddities, it doesn’t seem likely that they could be adaptive; they would instead likely reflect developmental errors. Basically, the matter comes down to the following question: are the environmental conditions surrounding a particular developmental window good indicators of future conditions, to the point that you’d want to not only focus your own development around them, but also the development of your children and their children in turn? To me, the answer seems like a resounding, “No, and that seems like a prime example of developmental rigidity, rather than plasticity.” Such a plan would not allow offspring to meet the demands of their unique environments particularly well. I’m not hopeful that this kind of thinking will lead to any revolutions in evolutionary theory, but I’m always willing to be proven wrong if the right data comes up.

Mistreated Children Misbehaving

None of us are conceived or born as full adults; we all need to grow and develop from single cells to fully-formed adults. Unfortunately – for the sake of development, anyway – the future world you will find yourself in is not always predictable, which makes development a tricky matter at times. While there are often regularities in the broader environment (such as the presence or absence of sunlight, for instance), not every individual will inhabit the same environment or, more precisely, the same place in their environment. Consider two adult males, one of whom is six-feet tall and 230 pounds of muscle, and the other being five-feet tall and 110 pounds. While the dichotomy here is stark, it serves to make a simple point: if both of these males developed in a psychological manner that led them to pursue precisely the same strategies in life – in this case, say, one involving aggressive contests for access to females – it is quite likely that the weaker male will lose out to the stronger one most (if not all) of the time. As such, in order to be more-consistently adaptive, development must be something of a fluid process that helps tailor an individual’s psychology to the unique positions they find themselves in within a particular environment. Thus, if an organism is able to use some cues within their environment to predict their likely place in it in the future (in this case, whether they would grow large or small), their development could be altered to encourage their pursuit of alternate routes to eventual reproductive success. 

Because pretending you’re cut out for that kind of life will only make it worse

Let’s take that initial example and adapt it to a new context: rather than trying to predict whether one will grow up weak or strong, a child is trying to predict the probability of receiving parental investment in the future. If parental investment is unlikely to be forthcoming, children may need to take a different approach to their development to help secure the needed resources on their own, sometimes requiring their undertaking risky behaviors; by contrast, those children who are likely to receive consistent investment might be relatively less inclined to take such risky and costly matters into their own hands, as the risk-versus-reward calculations don’t favor such behavior. Placed in an understandable analogy, a child who estimates they won’t be receiving much investment from their parents might forgo a college education (and, indeed, even much of a high-school one) because they need to work to make ends meet. When you’re concerned about where your next meal is coming from, there’s less time in your schedule for studying, and taking out loans so you can spend four years not working isn’t a realistic option. By contrast, the child from a richer family has the luxury of pursuing an education likely to produce greater future rewards because certain obstacles have been removed from their path.

Now obviously going to college is not something that humans have psychological adaptations for – it wasn’t a recurrent feature of our evolutionary history as a species – but there are cognitive systems we might expect to follow different developmental trajectories contingent on such estimations of one’s likely place in the environment; these could include systems judging the relative attractiveness of short- vs long-term rewards, willingness to take risks, pursuit of aggressive resolutions to conflicts, and so on. If the future is uncertain, saving for it makes less sense than taking a smaller reward in the present; if you lack social or financial support, being willing to fight to defend what little you do have might sound more appealing (as losing that little bit is more impactful when you won’t have anything left). The question of interest thus becomes, “what cues in the environment might a developing child use to determine what their future will look like?” This brings us to the current paper by Abajobir et al (2016).

One potential cue might be your experiences with maltreatment while growing up, specifically at the hands of your caregivers. Though Abajobir et al (2016) don’t make the argument I’ve been sketching out explicitly, that seems to be the direction their research takes. They seem to reason (implicitly) that parental mistreatment should be a reliable cue to the future conditions you’re liable to encounter and, accordingly, one that children could use to alter their development. For instance, abusive or neglectful parents might lead to children adopting faster life history strategies involving risk-taking, delinquency, and violence themselves (or, if they’re going the maladaptive explanatory route, the failure of parents to provide supportive environments could in some way hinder development from proceeding as it usually would, in a similar fashion to how not having enough food while growing up might lead to one being shorter as an adult; I don’t know which line the authors would favor from their paper). That said, there is a healthy (and convincing) literature consistent with the hypothesis that parental behavior per se is not the cause of these developmental outcomes (Harris, 2009), but rather that it simply co-occurs with them. Specifically, abusive parents might be genetically different from non-abusive ones, and those tendencies could get passed on to the children, accounting for the correlation. Alternatively, parents who maltreat their children might just happen to have children whose peer groups, growing up, are more prone to violence and delinquency. In both cases, the two outcomes would be caused by third variables.

Your personality usually can’t be blamed on them; you’re you all on your own

Whatever the nature of that correlation, Abajobir et al (2016) sought to use parental maltreatment from ages 0 to 14 as a predictor of later delinquent behaviors in the children by age 21. To do so, they used a prospective cohort of children and their mothers visiting a hospital between 1981-83. The cohort was then tracked for substantiated cases of child maltreatment reported to government agencies up to age 14, and at age 21 the children themselves were surveyed (the mothers being surveyed at several points throughout that time). Out of the 7200 initial participants, 3800 completed the 21-year follow up. At that follow up point, the children were asked questions concerning how often they did things like get excessively drunk, use recreational drugs, break the law, lie, cheat, steal, destroy the property of others, or fail to pay their debts. The mothers were also surveyed on matters concerning their age when they got pregnant, their arrest records, marital stability, and the amount of supervision they gave their children (all of these factors, unsurprisingly, predicting whether or not people continued on in the study for its full duration).

In total, of the 512 eventual cases of reported child maltreatment, only 172 remained in the sample at the 21-year follow-up. As one might expect, maternal factors like education status, arrest record, economic status, and unstable marriage all predicted an increased likelihood of eventual child maltreatment. Further, of the 3,800 participants, only 161 met the criteria for delinquency at 21 years. All of the previous maternal factors predicted delinquency as well: mothers who had been arrested, got pregnant earlier, had unstable marriages, less education, and less money tended to produce more delinquent offspring. Adjusting for the maternal factors, however, childhood maltreatment still predicted delinquency, but only for the male children. Specifically, maltreated males showed approximately 2-to-3.5 times the delinquency of non-maltreated males. For female offspring, there didn’t seem to be any notable correlation.
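Figures like that 2-to-3.5 range are typically reported as odds ratios. As a minimal sketch of how such a number falls out of a 2x2 table (the counts below are hypothetical, not the study’s data):

```python
# Odds ratio from a 2x2 table: the odds of the outcome among the exposed
# divided by the odds among the unexposed. Counts here are made up.

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# Hypothetical male sample: maltreated vs. not, delinquent by 21 vs. not
or_males = odds_ratio(exposed_cases=12, exposed_noncases=60,
                      unexposed_cases=80, unexposed_noncases=1200)
print(round(or_males, 2))  # -> 3.0: roughly 3x the odds of delinquency
```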

Now, as I mentioned, there are some genetic confounds here. It seems probable that parents who maltreat their children are, in some very real sense, different from parents who do not, and those tendencies can be inherited. This also doesn’t necessarily point a causal finger directly at parents, as it is also likely that maltreatment correlates with other social factors, like the peer group a child is liable to have or the neighborhood they grow up in. The authors also mention that it is possible their measures of delinquency might not capture whatever effects childhood maltreatment (or its correlates) have on females, and that’s the point I wanted to wrap up discussing. To really put these findings in context, we would need to understand what adaptive role these delinquent behaviors – or rather the psychological mechanisms underlying them – serve. For instance, frequent recreational drug use and problems fulfilling financial obligations might both signal that the person in question favors short-term rewards over long-term ones; frequent trouble with the law or destroying other people’s property could signal something about how the individual in question competes for social status. Maltreatment does seem to predict (even if it might not cause) different developmental courses, perhaps reflecting an active adjustment of development to deal with local environmental demands.

The kids at school will all think you’re such a badass for this one

As we reviewed in the initial example, however, the same strategies will not always work equally well for every person. Those who are physically weaker are less likely to successfully enact aggressive strategies, all else being equal, for reasons which should be clear. Accordingly, we might expect that men and women show different patterns of delinquency to the extent they face unique adaptive problems. For instance, we might expect that females who find themselves in particularly hostile environments preferentially seek out male partners capable of enacting and defending against such aggression, as males tend to be more physically formidable (which is not to say that the women themselves might not be more physically aggressive as well). Any hypothetical shifts in mating preferences like these would not be captured by the present research particularly well, but it is nice to see the authors are at least thinking about what sex differences in patterns of delinquency might exist. It would be preferable if they were asking about those differences using this kind of a functional framework from the beginning, as that’s likely to yield more profitable insights and refine what questions get asked, but it’s good to see this kind of work all the same.

References: Abajobir, A., Kisely, S., Williams, G., Strathearn, L., Clavarino, A., & Najman, J. (2016). Gender differences in delinquency at 21 years following childhood maltreatment: A birth cohort study. Personality & Individual Differences, 106, 95-103.

Harris, J. (2009). The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press.

If No One Else Is Around, How Attractive Are You?

There’s an anecdote that I’ve heard a few times about a man who goes to a diner for a meal. After finishing his dinner, the waitress asks him if he’d like some dessert. When he inquires as to what flavors of pie they have, the waitress tells him they have apple and cherry. The man says cherry and the waitress leaves to get it. She returns shortly afterwards and tells him she had forgotten they actually also had a blueberry pie. “In that case,” the man replies, “I’ll have apple.” Breaking this story down into a more abstract form, the man was presented with two options: A and B. Since he prefers A to B, he naturally selected A. However, when presented with A, B, and C, he now appears to reverse his initial preference, favoring B over A. Since he appears to prefer both A and B over C, it seems strange that C would affect his judgment at all, yet here it does. Now that’s just a funny little story, but there does appear to be some psychological literature suggesting that people’s preferences can be modified in similar ways.

“If only I had some more pointless options to help make my choice clear”

The general phenomenon might not be as strange as it initially sounds for two reasons. First, when choosing between A and B, the two items might be rather difficult to compare directly. Both A and B could have some upsides and downsides, but since these don’t necessarily all fall in the same domains, weighing one against the other isn’t always simple. For instance, if you’re looking to buy a new car, one option might have good gas mileage and great interior features (option A) while the other looks more visually appealing and comes with a lower price tag (option B). Pitting A against B here doesn’t always yield a straightforward choice, but if an option C rolls around that gets good gas mileage, looks visually appealing, and comes with a lower price tag, this car can look better than either of the previous options by comparison. This third option need not even be more appealing than both alternatives, however; simply being clearly preferable to one of them is usually enough (Mercier & Sperber, 2011).

Related to this point, people might want to maintain some degree of justifiability in their choices as well. After all, we don’t just make choices in a vacuum; the decisions we make often have wider social ramifications, so making a choice that can be easily justified to others can make them accept your decisions more readily (even if the choice you make is overall worse for you). Sticking with our car example, if you were to select option A, you might be praised by your environmentally-conscious friends while mocked by your friends more concerned with the look of the car; if you chose option B, a similar outcome might obtain, but the friends doing the praising and mocking would switch. However, option C might be a crowd-pleaser for both groups, yielding a decision with greater approval (you miss out on the interior features you want, but that’s the price you pay for social acceptance). The general logic of this example should extend to a number of different domains, both in terms of the things you might select and the features you might use as the basis for selecting them. So long as your decisions need to be justified to others, the individual appeal of certain features can be trumped.

Whether these kinds of comparison effects exist across all domains is an open question, however. The adaptive problems a species needs to solve often require specific sets of cognitive mechanisms, so the mental algorithms leveraged to solve problems relating to selecting a car (a rather novel issue at that) might not be the same ones that help solve other problems. Given that different learning mechanisms appear to underlie seemingly similar problems – like learning the location of food versus water – there are good theoretical reasons to suspect that these kinds of comparison effects might not exist in domains where decisions require less justification, such as selecting a mate. This brings us to the present research by Tovee et al (2016), which examined how attractive people perceive the bodies of others (in this case, women) to be.

“Well, that’s not exactly how the other participants posed, but we can make an exception”

Tovee et al (2016) were interested in finding out whether judging bodies among a large array of other bodies might influence the judgment of any individual body’s attractiveness. The goal here was to find out whether people’s bodies have an attractiveness value independent of the range of bodies they happen to be around, or whether attractiveness judgments are made relative to immediate circumstances. To put that another way, if you’re a “5-out-of-10” on your own, might standing next to a three (or several threes) make you look more like a six? This is a matter of clear empirical importance as, when studies of this nature are conducted, it is fairly common for participants to rate a large number of targets for attractiveness one after the other. If attractiveness judgments are, in some sense, contaminated by previously viewed images, there are implications for both past and future research that make use of such methods.

So, to get at the matter, the researchers employed a straightforward strategy: first, they asked one group of 20 participants (10 males and 10 females) to judge 20 images of female bodies for attractiveness (these bodies varied in their BMI and waist-to-hip ratio; all clothing was standardized and all faces blurred out). Following that, a group of 400 participants rated the same images, but this time each rating only a single image rather than all 20, again providing 10 male and 10 female ratings per picture. The logic of this method is simple: if ratings of attractiveness tend to change contingent on the array of bodies available, then the between-subjects ratings should be expected to differ in some noticeable way from those of the within-subjects group.
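The underlying comparison can be sketched with simulated data: under the null hypothesis that context doesn’t matter, per-image averages from raters who saw all 20 images and raters who each saw only one should agree up to sampling noise. Everything below is made up for illustration, not the study’s data:

```python
# Simulate the within- vs. between-subjects comparison under the null
# hypothesis that an image's attractiveness does not depend on context.
import random
random.seed(1)

true_scores = [random.uniform(2, 8) for _ in range(20)]  # latent score per image

def rate(score):
    return score + random.gauss(0, 0.5)  # rater noise around the true score

# Within-subjects design: 20 raters each rate all 20 images
within_means = [sum(rate(s) for _ in range(20)) / 20 for s in true_scores]
# Between-subjects design: 20 fresh raters per image, one image each
between_means = [sum(rate(s) for _ in range(20)) / 20 for s in true_scores]

max_gap = max(abs(w - b) for w, b in zip(within_means, between_means))
print(f"largest per-image difference: {max_gap:.2f}")  # small gap -> designs agree
```

A large, systematic gap between the two sets of means would instead be the signature of context effects.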

Turning to the results, there was a very strong correspondence between male and female judgments of attractiveness (r = .95) as well as within sex agreement (Cronbach’s alphas of 0.89 and 0.95). People tended to agree that as BMI and WHR increased, the women’s bodies became less attractive (at least within the range of values examined; the results might look different if women with very low BMIs were examined). As it turns out, however, there were no appreciable differences when comparing the within- and between-groups attractiveness ratings. When people were making judgments of just a single picture, they delivered similar judgments to those presented with many bodies. The authors conclude that perceptions of attractiveness appear to be generated by (metaphorically) consulting an internal reference template, rather than such judgments being influenced by the range of available bodies.
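One of the agreement statistics above, Cronbach’s alpha, treats each rater as an “item” and asks how consistently raters rank the same set of images. A minimal sketch, with made-up ratings:

```python
# Cronbach's alpha over raters: k/(k-1) * (1 - sum of rater variances /
# variance of the per-image totals). Ratings below are hypothetical.

def cronbach_alpha(ratings):
    """ratings: one list of scores per rater, all over the same images."""
    k = len(ratings)                 # number of raters
    n = len(ratings[0])              # number of images
    totals = [sum(r[i] for r in ratings) for i in range(n)]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    return (k / (k - 1)) * (1 - sum(var(r) for r in ratings) / var(totals))

# Three hypothetical raters who largely agree on four images:
raters = [[2, 4, 6, 8], [3, 4, 5, 8], [2, 5, 6, 7]]
print(round(cronbach_alpha(raters), 2))  # -> 0.97
```

Values near 1 (like the 0.89 and 0.95 reported) mean the raters are effectively interchangeable.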

Which is not to say that being the best looking member of a group will hurt

These findings make quite a bit of sense in light of the job that judgments of physical attractiveness are supposed to accomplish; namely, assessing traits like physical health, fertility, strength, and so on. If one is interested in assessing the probable fertility of a given female, that value should not be expected to change as a function of whom she happens to be standing next to. As a simple example, a male copulating with a post-menopausal female should not be expected to achieve anything useful (in the reproductive sense of the word), and the fact that she happened to be around women who are even older or less attractive shouldn’t be expected to change that fact. Indeed, on a theoretical level we shouldn’t expect the independent attractiveness value of a body to change based on the other bodies around it; at least there doesn’t seem to be any obvious adaptive advantage to (incorrectly) perceiving a five as a six because she’s around a bunch of threes, rather than just (accurately) perceiving that five as a five and nevertheless concluding she’s the most attractive of the current options. However, if you were to incorrectly perceive that five as a six, it might have some downstream consequences when future options present themselves (such as not pursuing a more attractive alternative because the risk-vs-reward calculations are being made with inaccurate information). As usual, acting on accurate information tends to have more benefits than changing your perceptions of the world.

References: Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34, 57-111.

Tovee, M., Taylor, J., & Cornelissen, P. (2016). Can we believe judgments of human physical attractiveness? Evolution & Human Behavior, doi: 10.1016/j.evolhumbehav.2016.10.005

More About Race And Police Violence

A couple months back, I offered some thoughts on police violence. The most important, take-home message from that piece was that you need to be clear about what your expectations about the world are – as well as why they are that way – before you make claims of discrimination about population level data. If, for instance, you believe that men and women should be approximately equally likely to be killed by police – as both groups are approximately equal in the US population – then the information that approximately 95% or so of civilians killed by police are male might look odd to you. It means that some factors beyond simple representation in the population are responsible for determining who is likely to get shot and killed. Crucially, that gap cannot be automatically chalked up to any other particular factor by default. Just because men are overwhelmingly more likely to be killed by police, that assuredly does not mean police are biased against men and have an interest in killing them simply because of their sex.

“You can tell they just hate men; it’s so obvious”

Today, I wanted to continue on the theme from my last post and ask about what patterns of data we ought to expect with respect to police killing civilians and race. If we wanted to test the hypothesis that police killings tend to be racially-motivated (i.e., driven by anti-black prejudice), I would think we should expect a different pattern of data from the hypothesis that such killings are driven by race-neutral practices (e.g., cases in which the police are defending against perceived lethal threats, regardless of race). In this case, if police killings are driven by anti-black prejudice, we might propose the following hypothesis: all else being equal, we ought to expect white officers to kill black civilians in greater numbers than black officers. This expectation could be reasonably driven by the prospect that members of a group are less likely to be biased against their in-group than out-group members, on average (in other words, the real-world Clayton Bigsbys and Uncle Ruckuses ought to be rare).

If there were good evidence in favor of the racially-motivated hypothesis for police killings, there would be real implications for the trust people – especially minority groups – should put in the police, as well as for particular social reforms. By contrast, if the evidence is more consistent with the race-neutrality hypothesis, then a continued emphasis on the importance of race could prove a red herring, distracting people from the other causes of police violence and preventing more effective interventions from being discussed. The issue is basically analogous to a doctor trying to treat an infection with a correct or incorrect diagnosis. It is unfortunate (and rather strange, frankly), then, that good data on police killings is apparently difficult to come by. One would think this is the kind of thing that people would have collected more information on, but apparently that’s not exactly the case. Thankfully, we now have some fresh data on the topic, recently published by Lott & Moody (2016).

The authors compiled their own data set of police killings from 2013 to 2015 by digging through Lexis/Nexis, Google, Google Alerts, and a number of other online databases, as well as directly contacting police departments. In total, they were able to collect information on 2,700 police killings. Compared with the FBI’s information, the authors found about 1,300 more cases, about 741 more than the CDC, and 18 more than the Washington Post. Importantly, the authors were also able to collect a number of other pieces of information not consistently included in the other sources, including the number of officers on the scene and their age, sex, and race, among a number of other factors. In a demonstration of the importance of having good data, whereas the FBI had been reporting a 6% decrease in police killings over that period, the current data actually found a 29% increase. For those curious – and this is a preview of what’s to come – the largest increase was attributed to white citizens being killed (312 in 2013 up to 509 in 2015; the comparable numbers for black citizens were 198 and 257).

“Good data is important, you say?”

In general, black civilians represented 25% of those killed by police, but only 12% of the overall population. Many people take this fact to reflect racial bias, but there are other things to consider, perhaps chief among them that crime rates were substantially higher in black neighborhoods. The reported violent crime rates were 758 per 100,000 in cities where black citizens were killed, compared with 480 where white citizens were killed (the corresponding murder rates were 11.2 and 4.6). Thus, to the extent that police are only responding to criminal activity and not race, we should expect a greater representation of the black population relative to its overall population share (just as we should expect more males than females to be shot, and more young people than older ones).
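The base-rate logic here can be made concrete with a toy model: if killings track some exposure measure (say, involvement in violent crime) rather than population share, a group’s expected share of killings is its share of total exposure. The group labels and rates below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Toy model: expected share of killings under a race-neutral,
# exposure-proportional process. All numbers are illustrative.

pop_share = {"group_a": 0.12, "group_b": 0.88}      # share of total population
exposure_rate = {"group_a": 5.0, "group_b": 2.0}    # hypothetical per-capita exposure

total = sum(pop_share[g] * exposure_rate[g] for g in pop_share)
expected_killing_share = {g: pop_share[g] * exposure_rate[g] / total
                          for g in pop_share}
print(expected_killing_share)
# group_a's expected share (~25%) exceeds its population share (12%)
# even though the process itself is blind to group membership
```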

Turning to the matter of whether the race of the officer mattered, that data was available for 904 cases (whereas the race of all those who were killed was known). When that information was entered into a number of regressions predicting the odds of an officer killing a black suspect, black officers were actually quite a bit more likely than white officers to have killed a black suspect in all cases (consistent with other data I’ve talked about before). It should be noted at this point, however, that for 67% of the cases the race of the officers was unknown, whereas only 2% of the shootings for which race is known involved a black officer. As the discrepancies between data sources mentioned earlier highlight, this unknown factor can be a big deal; perhaps black officers are actually less likely to have shot black suspects but we just can’t see it here. Since the killings of black citizens by officers from the unknown-race group did not differ from those by white officers, however, it seems unlikely that white officers would end up being unusually likely to shoot black suspects. Moreover, the racial composition of the police force was unrelated to those killings.

A number of other interesting findings cropped up as well. First, there was no effect of body cameras on police killings. This might suggest that when officers do kill someone – given the extremity and possible consequences of the action – it is something they tend to undertake earnestly out of fear for their lives. Consistent with that idea, the greater the number of officers on the scene, the lower the odds of the police killing anyone (about a 14-18% decline per additional officer present). Further, white female officers (though their numbers were low in the data) were also quite a bit more likely to shoot unarmed citizens (79% more), likely as a byproduct of their reduced capability to prevail in a physical conflict during which their weapon might be taken or they could get killed. To the extent these shootings are being driven by legitimate fears on the parts of the officers, all this data would appear to fit together consistently.

“Unarmed” does not always equal “Not Dangerous”

In sum, there doesn’t appear to be particularly strong empirical evidence that white officers are killing black citizens at higher rates than black officers; quite the opposite, in fact. While such information might be viewed as a welcome relief, to those who have wedded themselves to the idea that black populations are being targeted for lethal violence by police, this data will likely be shrugged off. It will almost always be possible for someone seeking to find racism to push their expectations into the realm of empirical unfalsifiability. For example, given the current data showing a lack of bias against black civilians by white officers, the racism hypothesis could be pushed one step back to some population-level bias whereby all officers, even black ones, are impacted by anti-black prejudice in their judgments (regardless of the department’s racial makeup, the presence of cameras, or any other such factor). It is also entirely possible that racial biases don’t show up in the patterns of police killings, but might well show up in other patterns of less-lethal aggression or harassment. After all, there are very real consequences for killing a person – even when the killings are deemed justified and lawful – and many people would rather not subject themselves to such complications. Whatever the case, white officers do not appear unusually likely to shoot black suspects.

References: Lott, J. & Moody, C. (2016). Do white officers unfairly target black suspects? (November 15, 2016). Available at SSRN: https://ssrn.com/abstract=2870189

When It’s Not About Race Per Se

We can use facts about human evolutionary history to understand the shape of our minds; using it to understand people’s reactions to race is no exception. As I have discussed before, it is unlikely that ancestral human populations ever traveled far enough, consistently enough throughout our history as a species to have encountered members of other races with any regularity. Different races, in other words, were unlikely to be a persistent feature of our evolutionary history. As such, it seems correspondingly unlikely that human minds contain any modules that function to attend to race per se. Yet we do seem to automatically attend to race on a cognitive level (just as we do with sex and age), so what’s going on here? The best hypothesis I’ve seen as of yet is that people aren’t paying attention to race itself as much as they are using it as a proxy for something else that likely was recurrently relevant during our history: group membership and social coalitions (Kurzban, Tooby, & Cosmides, 2001). Indeed, when people are provided with alternate visual cues to group membership – such as different color shirts – the automaticity of race being attended to appears to be diminished, even to the point of being erased entirely at times.

Bright colors; more relevant than race at times

If people attend to race as a byproduct of our interest in social coalitions, then there are implications here for understanding racial biases as well. Specifically, it would seem unlikely for widespread racial biases to exist simply because of superficial differences like skin color or facial features; instead, it seems more likely that racial biases are a product of other considerations, such as the possibility that different groups – racial or otherwise – simply hold different value as social associates to others. For instance, if the best interests of group X are opposed to those of group Y, then we might expect those groups to hold negative opinions of each other on the whole, since the success of one appears to handicap the success of the other (for an easy example of this, think about how more monogamous individuals tend to come into conflict with promiscuous ones). Importantly, to the extent that those best interests just so happen to correlate with race, people might mistake a negative bias due to varying social values or best interests for one due to race.

In case that sounds a bit too abstract, here’s an example to make it immediately understandable: imagine an insurance company that is trying to set its premiums only in accordance with risk. If someone lives in an area at high risk of some negative outcome (like flooding or robbery), it makes sense for the insurance company to set a higher premium for them, as there’s a greater chance it will need to pay out; conversely, those in low-risk areas can pay reduced premiums for the same reason. In general, people have no problem with this kind of discrimination: it is morally acceptable to charge different rates for insurance based on risk factors. However, if that high-risk area just so happens to be one in which a particular racial group lives, then people might mistake a risk-based policy for a race-based one. In fact, in previous research, certain groups (specifically liberal ones) generally said it is unacceptable for insurance companies to require those living in high-risk areas to pay higher premiums if those areas happen to be predominantly black (Tetlock et al, 2000).

Returning to the main idea at hand, previous research in psychology has tended to associate conservatives – but not liberals – with prejudice. However, there has been something of a confounding factor in that literature (which might be expected, given that academics in psychology are overwhelmingly liberal): specifically, much of the literature on prejudice asks about attitudes towards groups whose values tend to lean towards the liberal side of the political spectrum, like homosexual, immigrant, and black populations (groups that might tend to support things like affirmative action, which conservative groups would tend to oppose). When that confound is present, it’s not terribly surprising that conservatives would look more prejudiced, but that prejudice might ultimately have little to do with the target’s race or sexual orientation per se. More specifically, if animosity between different racial groups is due primarily to a factor like race itself, then you might expect those negative feelings to persist even in the face of compatible values. That is, if a white person happens to not like black people because they are black, then the views of a particular black person shouldn’t be liable to change those racist sentiments much. However, if those negative attitudes are instead more of a product of a perceived conflict of values, then altering those political or social values should dampen or remove the effects of race altogether.

Shaving the mustache is probably a good place to start

This idea was tested by Chambers et al (2012) over the course of three studies. The first of these involved 170 Mturk participants who indicated their own ideological position (strongly liberal to strongly conservative, on a 5-point scale), their impressions of 34 different groups (in terms of whether those groups are usually liberal or conservative on the same scale, as well as how much they liked the target group), and a few other measures related to the prejudice construct, like system justification and modern racism. As it turns out, liberals and conservatives tended to agree with one another about how liberal or conservative the target groups were (r = .97), so their ratings were averaged. Importantly, when the target group in question tended to be liberal (such as feminists or atheists), liberals tended to have higher favorability ratings of them (M = 3.48) than did conservatives (M = 2.57; d = 1.23); conversely, when the target group was perceived as conservative (such as business people or the elderly), liberals tended to have lower favorability ratings of them (M = 2.99) than conservatives (M = 3.86; d = 1.22). In short, liberals tended to feel positive about liberals, and conservatives tended to feel positive about conservatives. The more extreme the perceived political differences of the target were, the larger these biases were (r = .84). Further, when group memberships were chosen rather than involuntary, the biases were larger (e.g., as a group, “feminists” generated more bias from liberals and conservatives than “women”).
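The d values reported above are Cohen’s d: the gap between two group means scaled by their pooled standard deviation. A minimal sketch with hypothetical ratings (not the study’s raw data):

```python
# Cohen's d for two independent groups: mean difference divided by the
# pooled standard deviation. Sample ratings below are made up.
import statistics

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * statistics.variance(group1) +
                  (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_var ** 0.5

liberal_raters = [4, 3, 4, 3, 4]       # hypothetical favorability toward a liberal group
conservative_raters = [2, 3, 2, 3, 2]  # hypothetical favorability from conservatives
print(round(cohens_d(liberal_raters, conservative_raters), 2))  # -> 2.19
```

By convention, a d around 0.8 or above counts as a large effect, so the 1.2-range values in the study reflect substantial ideological asymmetries in liking.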

Since that was all correlational, studies 2 and 3 took a more experimental approach. Here, participants were exposed to a target whose race (white/black) and positions (conservative or liberal) were manipulated across six different issues (welfare, affirmative action, wealth redistribution, abortion, gun control, and the Iraq war). In study 2 this was done on a within-subjects basis with 67 participants, and in study 3 it was done between-subjects with 152 participants. In both cases, however, the results were similar: in general, while the target’s attitudes mattered when it came to how much the participants liked them, the target’s race did not. Liberals didn’t like black targets who disagreed with them any more than conservatives did. Conservatives happened to like the targets who expressed conservative views more, whereas liberals tended to like targets who expressed liberal views more. The participants had also provided scores on measures of system justification, modern racism, and attitudes towards blacks. Even when these factors were controlled for, however, the pattern of results remained: people tended to react favorably towards those who shared their views and unfavorably towards those who did not. The race of the person with those views seemed beside the point for both liberals and conservatives. Not to hammer the point home too much, but perceiving ideological agreement – not race – was doing the metaphorical lifting here.

Now perhaps these results would have looked different if the samples in question were comprised of people who held more or less extreme and explicit racist views; the type of people who wouldn’t want to live next to someone of a different race. While that’s possible, there are a few points to make about that suggestion: first, it’s becoming increasingly difficult to find people who hold such racist or sexist views, despite certain rhetoric to the contrary; that’s the reason researchers ask about “symbolic” or “modern” or “implicit” racism, rather than just racism. Such openly-racist individuals are clearly the exception, rather than the rule. This brings me to the second point, which is that, even if biases did look different among hardcore racists (we don’t know if they do), for more average people, like the kind in these studies, there doesn’t appear to be a widespread problem with race per se; at least not if the current data have any bearing on the matter. Instead, it seems possible that people might be inferring a racial motivation where it doesn’t exist because of correlations with race (just like in our insurance example).

Pictured: unusual people; not everyone you disagree with

For some, the reaction to this finding might be to say that it doesn’t matter. After all, we want to reduce racism, so being incredibly vigilant for it should ensure that we catch it where it exists, rather than miss it or make it seem permissible. Now that’s likely true enough, but there are other considerations to add to that equation. One of them is that by reducing your type-two errors (failing to see racism where it exists), you increase your type-one errors (seeing racism where there is none). As long as accusations of being a racist are tied to social condemnation (not praise; a fact which alone ought to tell you something), you will be harming people by overperceiving the issue. Moreover, if you perceive racism where it doesn’t exist too often, you will end up with people who don’t take your claims of racism seriously anymore. Another point is that if you’re actually serious about addressing a social problem you see, accurately understanding its causes will go a long way. That is to say, time and energy invested in interventions to reduce racism is time not spent trying to address other problems. If you have misdiagnosed the issue you seek to tackle as being grounded in race, then your efforts to address it will be less successful than they otherwise could be, not unlike a doctor prescribing the wrong medication to treat an infection.
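That type-one/type-two tradeoff is a general property of any imperfect detector: lowering the detection threshold to catch more true cases necessarily flags more non-cases. A toy simulation of that point, with entirely made-up distributions:

```python
# Toy signal-detection model: overlapping score distributions for cases
# where racism is absent vs. present. Distributions are illustrative only.
import random
random.seed(0)

non_cases = [random.gauss(0, 1) for _ in range(10000)]   # no racism present
true_cases = [random.gauss(2, 1) for _ in range(10000)]  # racism present

def error_rates(threshold):
    false_pos = sum(x > threshold for x in non_cases) / len(non_cases)    # type one
    false_neg = sum(x <= threshold for x in true_cases) / len(true_cases) # type two
    return false_pos, false_neg

strict_fp, strict_fn = error_rates(2.0)   # guarded against false accusations
lenient_fp, lenient_fn = error_rates(0.0) # guarded against missed racism
# Lowering the threshold cuts misses but inflates false accusations:
assert lenient_fn < strict_fn and lenient_fp > strict_fp
```

Since the two distributions overlap, no threshold eliminates both error types at once; you can only choose which kind of mistake to make more often.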


What Might Research Ethics Teach Us About Effect Size?

Imagine for a moment that you’re in charge of overseeing medical research approval for ethical concerns. One day, a researcher approaches you with the following proposal: they are interested in testing whether a foodstuff that some portion of the population occasionally consumes for fun is actually quite toxic, like spicy chilies. They think that eating even small doses of this compound will cause mental disturbances in the short term – like paranoia and suicidal thoughts – and might even cause those negative changes permanently in the long term. As such, they intend to test their hypothesis by bringing otherwise-healthy participants into the lab, providing them with a dose of the possibly-toxic compound (either just once or several times over the course of a few days), and then seeing whether any negative effects emerge. What would your verdict on the ethical acceptability of this research be? If I had to guess, I suspect that many people would not allow the research to be conducted, because one of the major tenets of research ethics is that harm should not befall your participants, except when absolutely necessary. In fact, I suspect that were you the researcher – rather than the person overseeing the research – you probably wouldn’t even propose the project in the first place, because you might have some reservations about possibly poisoning people, either harming them directly or harming those around them indirectly.

“We’re curious if they make you a danger to yourself and others. Try some”

With that in mind, I want to examine a few other research hypotheses I have heard about over the years. The first of these is the idea that exposing men to pornography will cause a number of harmful consequences, such as increasing how appealing rape fantasies are, bolstering the belief that women would enjoy being raped, and decreasing the perceived seriousness of violence against women (as reviewed by Fisher et al, 2013). Presumably, the effect on those beliefs over time is serious, as it might lead to real-life behavior on the part of men to rape women or approve of such acts on the part of others. Other, less-serious harms have also been proposed, such as the possibility that exposure to pornography might have harmful effects on the viewer’s relationship, reducing their commitment and making it more likely that they would do things like cheat on or abandon their partner. Now, if a researcher earnestly believed they would find such effects, that the effects would be appreciable in size to the point of being meaningful (i.e., large enough to be reliably detected by statistical tests in relatively small samples), and that their implications could be long-term in nature, could this researcher even ethically test such issues? Would it be ethically acceptable to bring people into the lab, randomly expose them to this kind of (in a manner of speaking) psychologically-toxic material, observe the negative effects, and then just let them go?

Let’s move onto another hypothesis that I’ve been talking a lot about lately: the effects of violent media on real life aggression. Now I’ve been specifically talking about video game violence, but people have worried about violent themes in the context of TV, movies, comic books, and even music. Specifically, there are many researchers who believe that exposure to media violence will cause people to become more aggressive through making them perceive more hostility in the world, view violence as a more acceptable means of solving problems, or by making violence seem more rewarding. Again, presumably, changing these perceptions is thought to cause the harm of eventual, meaningful increases in real-life violence. Now, if a researcher earnestly believed they would find such effects, that the effects would be appreciable in size to the point of being meaningful, and that their implications could be long-term in nature, could this researcher even ethically test such issues? Would it be ethically acceptable to bring people into the lab, randomly expose them to this kind of (in a manner of speaking) psychologically-toxic material, observe the negative effects, and then just let them go?

Though I didn’t think much of it at first, the criticisms I read about the classic Bobo doll experiment are actually kind of interesting in this regard. In particular, researchers were purposefully exposing young children to models of aggression, the hope being that the children would come to view violence as acceptable and engage in it themselves. The reason I didn’t pay it much mind is that I didn’t view the experiment as causing any kind of meaningful, real-world, or lasting effects on the children’s aggression; I don’t think mere exposure to such behavior will have meaningful impacts. But if one truly believed that it would, I can see why that might cause some degree of ethical concern.

Since I’ve been talking about brief exposure, one might also worry about what would happen were researchers to expose participants to such material – pornographic or violent – for weeks, months, or even years on end. Imagine a study that asked people to smoke for 20 years to test the negative effects in humans; that’s probably not getting past an IRB. As an aside on that point, though, it’s worth noting that as pornography has become more widely available, rates of sexual offending have gone down (Fisher et al, 2013); as violent video games have become more available, rates of youth violent crime have gone down too (Ferguson & Kilburn, 2010). Admittedly, it is possible that such declines would be even steeper if such media weren’t in the picture, but the effects of this media – if they cause violence at all – are clearly not large enough to reverse those trends.

I would have been violent, but then this art convinced me otherwise

So what are we to make of the fact that this research was proposed, approved, and conducted? There are a few possibilities to kick around. The first is that the research was proposed because the researchers themselves don’t give much thought to the ethical concerns, happy enough if it means they get a publication out of it regardless of the consequences; but that wouldn’t explain why it got approved by other bodies like IRBs. It is also possible that the researchers and those who approved the work believe it to be harmful, but view the benefits of such research as outstripping the costs, working under the assumption that once the harmful effects are established, further regulation of such products might follow, ultimately reducing the prevalence or use of such media (not unlike the warnings and restrictions placed on the sale of cigarettes). Since any declines in availability or censorship of such media have yet to manifest – especially given how access to the internet provides means for circumventing bans on the circulation of information – whatever practical benefits might have arisen from this research are hard to see (again, assuming that things like censorship would yield benefits at all).

There is another aspect to consider as well: during discussions of this research outside of academia – such as on social media – I have not noted a great deal of outrage expressed by consumers of these findings. Anecdotal as this is, when people discuss such research, they do not appear to be raising the concern that the research itself was unethical to conduct because it will do harm to people’s relationships or to women more generally (in the case of pornography), or because it will result in making people more violent and accepting of violence (in the video game studies). Perhaps those concerns exist en masse and I just haven’t seen them yet (always possible), but I see another possibility: people don’t really believe that the participants are being harmed in this case. People generally aren’t afraid that the participants in those experiments will dissolve their relationship or come to think rape is acceptable because they were exposed to pornography, or will get into fights because they played 20 minutes of a video game. In other words, they don’t think those negative effects are particularly large, if they even really believe they exist at all. While this point would be a rather implicit one, the lack of consistent moral outrage expressed over the ethics of this kind of research does speak to the matter of how serious these effects are perceived to be: at least in the short-term, not very.

What I find very curious about these ideas – pornography causes rape, video games cause violence, and their ilk – is that they all seem to share a certain assumption: that people are effectively acted upon by information, placing human psychology in a distinctly passive role while information takes the active one. Indeed, in many respects, this kind of research strikes me as remarkably similar to the underlying assumptions of the research on stereotype threat: the idea that you can, say, make women worse at math by telling them men tend to do better at it. All of these theories seem to posit a very exploitable human psychology capable of being readily manipulated by information, rather than a psychology which interacts with, evaluates, and transforms the information it receives.

For instance, a psychology capable of distinguishing between reality and fantasy can play a video game without thinking it is being threatened physically, just like it can watch pornography (or, indeed, any videos) without actually believing the people depicted are present in the room with the viewer. Now clearly some part of our psychology does treat pornography as an opportunity to mate (else there would be no sexual arousal generated in response to it), but that part does not necessarily govern other behaviors (generating arousal is biologically cheap; aggressing against someone else is not). The adaptive nature of a behavior depends on context.

Early hypotheses of the visual-arousal link were less successful empirically

As such, expecting something like a depiction of violence to translate consistently into some general perception that violence is acceptable and useful in all sorts of interactions throughout life is inappropriate. Learning that you can beat up someone weaker than you doesn’t mean it’s suddenly advisable to challenge someone stronger than you; relatedly, seeing a depiction of people who are not you (or your future opponent) fighting shouldn’t make it advisable for you to change your behavior either. Whatever the effects of this media, they will ultimately be assessed and manipulated internally by psychological mechanisms and tested against reality, rather than just accepted as useful and universally applied.

I have seen similar thinking about information manipulating people another time as well: during discussions of memes. Memes are posited to be similar to infectious agents that will reproduce themselves at the expense of their host’s fitness; information that literally hijacks people’s minds for its own reproductive benefits. I haven’t seen much in the way of productive and successful research flowing from that school of thought quite yet – which might be a sign of its effectiveness and accuracy – but maybe I’m just still in the dark there. 

References: Ferguson, C. & Kilburn, J. (2010). Much ado about nothing: The misestimation and overinterpretation of violent video game effects in eastern and western nations: Comment on Anderson et al (2010). Psychological Bulletin, 136, 174-178.

Fisher, W., Kohut, T., Di Gioacchino, L., & Fedoroff, P. (2013). Pornography, sex crime, and paraphilia. Current Psychiatry Reports, 15, 362.