Conscience Does Not Explain Morality

“We may now state the minimum conception: Morality is, at the very least, the effort to guide one’s conduct by reason…while giving equal weight to the interests of each individual affected by one’s decision” (emphasis mine).

The above quote comes to us from Rachels and Rachels’ (2010) introductory chapter, entitled “What is morality?” It is readily apparent that their account of what morality is happens to be a conscience-centric one, focusing on self-regulatory behaviors (i.e., what you, personally, ought to do). These conscience-based accounts are exceedingly popular, among academics and non-academics alike, perhaps owing to their intuitive appeal: it certainly feels as though we refrain from doing certain things because they feel morally wrong, so understanding morality through conscience seems like the natural starting point. With all due respect to the philosopher pair and the intuitions of people everywhere, they seem to have begun their analysis of morality on entirely the wrong foot.

So close to the record too…

Now, without a doubt, understanding conscience can help us more fully understand morality, and no account of morality would be complete without explaining conscience; it’s just not an ideal starting point for our analysis (DeScioli & Kurzban, 2009; 2013). This is because moral conscience does not, in and of itself, explain our moral intuitions well. Specifically, it fails to highlight the difference between what we might consider ‘preferences’ and ‘moral rules’. To better understand this distinction, consider the following two statements: (1) “I have no interest in having homosexual intercourse”, and (2) “Homosexual intercourse is immoral”. These are distinct utterances, aimed at expressing different thoughts. The first expresses a preference, and that preference would appear sufficient for guiding one’s behavior, all else being equal; the latter statement, however, appears to express a different sentiment altogether. That second sentiment implies that others ought not to have homosexual intercourse, regardless of whether you (or they) want to engage in the act.

This is the key distinction, then: moral conscience (regulating one’s own behavior) does not appear to straightforwardly explain moral condemnation (regulating the behavior of others). Despite this, almost every expressed moral rule or law involves punishing others for how they behave – at least implicitly. While the specifics of what gets punished and how much punishment is warranted vary to some degree from individual to individual, the general form of moral rules does not. Were I to say I do not wish to have homosexual intercourse, I would only be expressing a preference, a bit like stating whether I would like my sandwich on white or wheat bread. Were I to say homosexuality is immoral, I would be expressing the idea that those who engage in the act ought to be condemned for doing so. By contrast, I would not be interested in punishing people for making the ‘wrong’ choice about bread, even if I think they could have made a better choice.

While we cannot necessarily learn much about moral condemnation via moral conscience, the reverse is not true: we can understand moral conscience quite well through moral condemnation. Provided that there are groups of people who will tend to punish you for doing something, this provides ample motivation to avoid engaging in that act, even if you otherwise highly desire to do so. Murder is a simple example here: there tend to be some benefits to removing specific conspecifics from one’s world. Whether because those others inflict costs on you or prevent the acquisition of benefits, there is little question that murder might occasionally be adaptive. If, however, the would-be target of your homicidal intentions happens to have friends and family members who would rather not see them dead, thank you very much, the potential costs those allies might inflict need to be taken into account. Provided those costs are appreciably great, and certain actions are punished with sufficient frequency over time, a system for representing those condemned behaviors and their potential costs – so as to avoid engaging in them – could easily evolve.

“Upon further consideration, maybe I was wrong about trying to kill your mom…”

That is likely what our moral conscience represents. To the extent that behaviors like stealing from or physically harming others tended to be condemned and punished, we should expect to have a cognitive system that represents that fact. Now perhaps that all seems a bit perverse. After all, many of us simply experience the sensation that an act is morally wrong or not; we don’t necessarily think about our actions in terms of the likelihood and severity of punishment (we do think such things some of the time, but that’s typically not what appears to be responsible for our feeling of “that’s morally wrong”; people think things are morally wrong regardless of whether anyone is caught doing them). That all may be true enough but, remember, the point is to explain why we experience those feelings of moral wrongness, not just to note that we do experience them and that they seem to have some effect on our behavior. While our behavior might be proximately motivated by those feelings of moral wrongness, those feelings came to exist because they were useful in guiding our behavior in the face of punishment. That does raise a rather important question, though: why do we still feel certain acts are immoral even when the probability of detection or punishment is rather close to zero?

There are two ways of answering that question, neither of which is mutually exclusive with the other. The first is that the cognitive systems which compute things like the probability of being detected, and estimate the likely punishment that will ensue, are always working under conditions of uncertainty. Because of this uncertainty, it is inevitable that the system will, on occasion, make mistakes: sometimes one could get away with behaving immorally without repercussions, and one would be better off taking those chances than not. One also needs to consider the reverse error, though: if you assess that you will not be caught or punished when you actually will be, you would have been better off not behaving immorally. Provided the costs of punishment are sufficiently high (the loss of social allies, abandonment by sexual partners, the potential loss of your life, etc.), it might pay in some situations to still avoid behaving in morally unacceptable ways even when you’re almost positive you could get away with it (Delton et al., 2012). The point here is that it doesn’t just matter whether you’re right or wrong about whether you’re likely to be punished: the costs of making each mistake need to be factored into the cognitive equation as well, and those costs are often asymmetric.
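To make that asymmetry concrete, here is a minimal sketch of the kind of expected-value calculation at issue. The numbers and function name are hypothetical, chosen purely for illustration; they are not drawn from Delton et al. (2012).

```python
# Hypothetical numbers for illustration only: the point is the asymmetry
# between the benefit of cheating and the cost of being punished.
def expected_value_of_cheating(p_caught, benefit, punishment_cost):
    """Expected payoff of an immoral act given some chance of being punished."""
    return (1 - p_caught) * benefit - p_caught * punishment_cost

benefit = 10           # gain from the immoral act if it goes undetected
punishment_cost = 500  # loss of allies, mates, or life if condemned

for p_caught in (0.01, 0.05, 0.10):
    ev = expected_value_of_cheating(p_caught, benefit, punishment_cost)
    print(f"P(caught) = {p_caught:.2f} -> expected value = {ev:+.2f}")

# Even at a 5% detection probability, the expected value is negative
# (0.95 * 10 - 0.05 * 500 = -15.5), so a decision rule that simply "feels"
# the act is wrong regardless of detection odds can outperform one that
# gambles on getting away with it.
```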

The second way of approaching that question is to suggest that the conscience system is just one cognitive system among many, and these systems don’t always need to agree with one another. That is, a conscience system might still represent an act as morally unacceptable while other systems (those designed to obtain certain benefits and assess costs) might output an incompatible behavioral choice (e.g., cheating on your committed partner despite knowing that doing so is morally condemned, because the potential benefits are perceived as being greater than the costs). To the extent that these systems are independent, then, it is possible for each to hold opposing representations about what to do at the same time. Examples of this happening in other domains are not hard to find: the checkerboard illusion, for instance, allows us to hold the representation that A and B are different colors and the representation that A and B are the same color in our mind at once. We need not be of one mind about all such matters because our mind is not one thing.

“Well, shoot; I’ll get the glue gun…”

Now, to be sure, there are plenty of instances where people will behave in ways deemed immoral by others (or even by themselves, at different times) without feeling the slightest sensation of their conscience telling them “what you’re doing is wrong”. How the conscience develops, and which input conditions are likely to trigger it – or fail to do so – are interesting matters. In order to make better progress on researching them, however, it would benefit researchers to begin with an understanding of why moral conscience exists. Once the function of conscience – avoiding condemnation – has been determined, figuring out what questions to ask about conscience becomes an altogether easier task. We might expect, for instance, that moral conscience is less likely to be triggered when others (the target and their allies) are perceived to be incapable of effective retaliation. While such a prediction appears eminently sensible when beginning with condemnation, it is not entirely clear how one could deliver it if one began the analysis with conscience instead.

References: Delton, A., Krasnow, M., Cosmides, L., & Tooby, J. (2012). Evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters. Proceedings of the National Academy of Sciences, 108, 13335-13340.

DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281-299.

DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139(2), 477-496.

Rachels, J., & Rachels, S. (2010). The Elements of Moral Philosophy. New York, NY: McGraw Hill.

More About Memes

Some time ago, I briefly touched on why I felt the concept of a meme didn’t help us understand some apparent human (and nonhuman) predispositions for violence. I don’t think my concerns about the idea that memes are analogs of genes – both being replicators that undergo a selective process, resulting in what one might call evolution by natural selection – were done full justice there. Specifically, I only scratched the surface of one issue, without explicitly getting down to the deeper, theoretical concerns with the ‘memes-as-replicators’ idea. As far as I can see at the moment, memetics proves too underspecified in many key regards to profitably help us understand human cognition and behavior. By extension, the concept of cultural group selection faces many of the same challenges. None of what I’m about to say discredits the notion that people can often end up with similar ideas in their heads: I didn’t think up the concepts of evolution by natural selection, genes, or memes, yet here I am discussing them (with people who will presumably understand them to some degree as well). The point is that those ideas probably didn’t end up in our heads because the ideas themselves were good replicators.

Good luck drawing the meme version of the tree of life.

The first of these conceptual issues concerns the problem of discreteness: what, exactly, are the particulate units of inheritance being replicated? Let’s use the example provided by Wikipedia:

A meme has no given size. Susan Blackmore writes that melodies from Beethoven’s symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven’s Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.

So, are those first four notes to be regarded as an independent meme or as part of a larger meme? The answer, unhelpfully, seems to be “yes”. To see why this answer is unhelpful, consider a biological context: organisms are collections of traits, traits are collections of proteins, proteins are coded for by genes, and genes are made up of alleles. By contrast, this post (a meme) is made up of paragraphs (memes), which are made up of sentences (memes), which are made up of words (memes), which are made up of letters (memes), all of which are intended to express abstract ideas (also memes). In the biological sense, then, the units of heredity (alleles/genes) can be conceived of and spoken about in a manner distinct from their products (proteins, traits, and organisms). The memetic sense blurs this distinction: the hypothetical units of heredity (memes) are the same as their products (memes), and can be broken down into effectively limitless combinations (words, letters, notes, songs, speeches, cultures, etc.). If the definition of a meme can be stretched to accommodate almost anything, it adds nothing to our understanding of ideas.

This definitional obscurity has other conceptual downsides as well that begin to tip the idea that ‘memes replicate’ into the realm of unfalsifiability. Let’s return to the biological domain: here, two organisms can have identical sets of genes, yet display different phenotypes, as their genetic relatedness is a separate concept from their phenotypic relatedness. The reverse can also hold: two organisms can have phenotypically similar traits – like wings – despite not inheriting that trait from a genetic common ancestor (think bats and pigeons). What these examples tell us is that phenotypic resemblance – or lack thereof – is not necessarily a good cue for determining biological relatedness. In the case of memes, there is no such conceptual dividing line using parallel concepts: the phenotype of a meme is its genotype. This makes it very difficult to do things like measure relatedness between memes or determine if they have a common ancestor. To make this example more concrete, imagine you have come up with a great idea (or a truly terrible one; the example works regardless of quality). When you share this idea with your friend, your friend appears stunned, for just the other day they had precisely the same idea.

Assuming both of you have identical copies of this idea in your respective heads, does it make sense to call one idea a replication of the other? It would seem not. Though they might resemble one another in every regard, one is not the offspring of the other. To shift the example back to biology, were a scientist to create a perfect clone of you, that clone would not be a copy of you by descent; you would not share any common ancestors, despite your similarities. The conceptualization of memes appears to blur this distinction, as there is currently no way of separating descent from a common ancestor from separate creation events where ideas are concerned. Without this distinction, the potential application of natural selection to memes is weakened substantially. One could make the argument that memes, like adaptations, are too improbably organized to arise spontaneously, which would imply they represent replications with mutations/modifications, rather than independent creation events. That argument would be deficient on at least two counts.

One case in which there is a real controversy.

The first problem with that potential counterargument is that there are two competing accounts for special design: evolution and creationism. In the case of biology, that debate is (or at least ought to be) largely over. In the case of memes, however, the creationism side has a lot going for it; not in the supernatural sense, mind you, but rather in the information-processing sense. Our minds are not passive receptors for sensory information, attempting to bring perceptions from ‘out there’ inside; they actively process incoming information, structuring it in predictable ways to create our subjective experience of the world (Michael Mills has an excellent post on that point). Brains are designed to organize and represent incoming information in particular ways and, importantly, this organization is often not recoverable from the information itself. There is nothing about certain wavelengths of light that would lead to their automatic perception as “green” or “red”, and nothing intrinsic about speech that makes it grammatical. This would imply that at least some memes (like grammatical rules) need to be created in a more or less de novo fashion; others need to be given meaning not found in the information itself: while a parrot can be taught to repeat certain phrases, it is unlikely that the phrases trigger the same set of representations inside the parrot’s head as they do in ours.

The second response to the potential rebuttal concerns the design features of memes more generally, and again returns us to their definitional obscurity. Biological replicators which create more copies of themselves become more numerous, relative to replicators that do a worse job; that much is a tautology. The question of interest is how they manage to do so. There are scores of adaptive problems that need to be successfully solved for biological organisms to reproduce. When we look for evidence of special design, we are looking for evidence of adaptations designed to solve those kinds of problems. To do so requires (a) the identification of an adaptive problem, (b) a trait that solves the problem, and (c) an account of how it does so. As the basic structure of memes has not been formally laid out, it becomes impossible to pick out evidence of memetic design features that came to be because they solved particular adaptive problems. I’m not even sure whether proper adaptive problems faced by memes specifically, rather than adaptive problems faced by their host organism, have even been articulated.

One final fanciful example that highlights both these points is the human ability to (occasionally) comprehend scrambled words with ease:

I cdn’uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae.

In the above passage, what is causing some particular meme (the word ‘taht’) to be transformed into a different meme (the word ‘that’)? Is there some design feature of the word “that” which is particularly good at modifying other memes to make copies of itself? Probably not, since no one read “cluod” in the above passage as “that”. Perhaps the meme ‘taht’ is actually composed of four different memes – ‘t’, ‘a’, ‘h’, and ‘t’ – which have some affinity for each other. Then again, probably not, since I doubt non-English speakers would spontaneously turn the four into the word ‘that’. The larger points here are that (a) our minds are not passive recipients of information, but rather actively represent and create it, and (b) if one cannot speak meaningfully about different features of memes (like design features, or heritable units) beyond “I know it when I see it”, the enterprise of discussing memes more closely resembles a post hoc fitting of any observed set of data to the theory, rather than the theory driving predictions about unknown data.

“Oh, it’ll fit alright…”

All of this isn’t to say that memetics will forever be useless in furthering our understanding of how ideas are shaped and spread; but in order to be useful, a number of key concepts would, at a minimum, need to be substantially clarified. A similar analysis applies to other types of explanations, such as cultural ones: it’s beyond doubt that local conditions – like cultures and ideas – can shape behavior. The key issue, however, is not noting that these things can have effects, but rather developing theories that deliver testable predictions about the ways in which those effects are realized. Adaptationism and evolution by natural selection fit the bill well, and in that respect I would praise memetics: it recognizes the potential power of such an approach. The problem, however, lies in the execution. If these biological theories are used so loosely as to become metaphor, their conceptual power to guide research wanes substantially.

Why Would Bad Information Lead To Better Results?

There are some truly strange arguments made in the psychological literature from time to time. Some might even be so bold as to call that frequency “often”, while others might dismiss the field of psychology as a variety of pseudoscience and call it a day. Now, were I to venture some guesses as to why strange arguments seem so popular, I’d have two main possibilities in mind: first, there’s the lack of a well-grounded theoretical framework from which most psychologists tend to suffer and, second, there’s a certain pressure put on psychologists to find and publish surprising results (surprising in that they document something counter-intuitive or some human failing; I blame this one for the lion’s share of these strange arguments). These two factors might come together to result in rather nonsensical arguments being put forth fairly regularly without being spotted for what they are. One of these strange arguments that has come across my field of vision fairly frequently in the past few weeks is the following: that our minds are designed to actively create false information, and because of that false information we are supposed to be able to make better choices. Though it comes in various guises across different domains, the underlying logic is always the same: false beliefs are good. On the face of it, such an argument seems silly. In all fairness, however, it only seems that way because, well, it is that way.

If only all such papers came with gaudy warning hats…

Given the strangeness of these arguments, it’s refreshing to come across papers critical of them that don’t pull any rhetorical punches. For that reason, I was immediately drawn towards a recent paper entitled, “How ‘paternalistic’ is spatial perception? Why wearing a heavy backpack doesn’t – and couldn’t – make hills look steeper” (Firestone, 2013; emphasis his). The general idea that the paper argues against is the apparently-popular suggestion that our perception essentially tells us – the conscious part of us, anyway – many little lies to get us to do or not do certain things. As the namesake of the paper implies, one argument goes that wearing a heavy backpack will make hills actually look steeper. Not just feel harder to climb, mind you, but actually look visually steeper. The reason some researchers posited this might be the case is because they realized, correctly, that wearing a heavy backpack makes hills harder to climb. In order to dissuade us from climbing them under such conditions, then, our perceptual system is thought to make the hill look harder to climb than it actually is, so we don’t try. Additionally, such biases are said to make decisions easier by reducing the cognitive processing required to make them.

Suggestions like these do violence to our intuitive experience of the world. Were you looking down the street unencumbered, for instance, your perception of the street would not visibly lengthen before your eyes were you to put on a heavy backpack, despite the distance now being harder to travel. Sure, you might be less inclined to take that walk down the street with the heavy backpack on, but that’s a different matter from whether you would see the world any differently. Those who favor the embodied model might (and did) counter that it’s not that the distances themselves change, but rather the units on the ruler used to measure one’s position relative to them (Proffitt, 2013). In other words, since our measuring tool looks different, the distances look different. I find such an argument wanting, as it appears akin to suggesting that we should come to a different measurement of a 12-foot room contingent on whether we’re using foot-long or yard-long measuring sticks, but perhaps I’m missing some crucial detail.

In any case, there are many other problems with the embodied account that Firestone (2013) goes through, such as the magnitude of the effect sizes – which can be quite small – being insufficient to accurately adjust behavior, there being little to no objective way of scaling one’s relative abilities to certain kinds of estimates, and, perhaps most damningly, that many of these effects fail to replicate or can be eliminated by altering the demand characteristics of the experiments in which they’re found. Apparently, subjects in these experiments made some connection – often explicitly – between the fact that they had just been asked to put on a heavy backpack and the subsequent request to estimate the steepness of a hill. They were inferring what the experimenter wanted and then adjusting their estimates accordingly.

While Firestone (2013) makes many good points in suggesting why the paternalistic (or embodied) account probably isn’t right, there are some I would like to add to the list. The first of these additions is that, in many cases, the embodied account seems to be useless for discriminating between even directly-comparable actions. Consider the following example in which such biases might come into play: you have a heavy load to transport from point A to point B, and you want to figure out the easiest way of doing so. One route takes you over a steep hill; another route takes you the longer distance around the hill. How should we expect perceptual estimates to be biased in order to help you solve the task? On the one hand, they might bias you to avoid the hill, as the hill now looks steeper; on the other hand, they might bias you to avoid the more circuitous route, as distances now look longer. It would seem the perceptual bias resulting from the added weight wouldn’t help you make a seemingly simple decision. At best, such biases might make you decide to not bother carrying the load in the first place, but the moment you put it down, the perceptions of these distances ought to shrink, making the task seem more manageable. All such a biasing system would seem to do in cases like this, then, is add extra cognitive processing into the mix in the form of whatever mechanisms are required to bias your initial perceptions.

“It’s symbolic; things don’t always have to “do” things. Now help me plug it into the wall”

The next addition I’d like to make also concerns the embodied account not being useful: the embodied account, at least at times, would seem to get causality backwards. Recall that the hypothesized function of these ostensible perceptual distortions is to guide actions. Provided I’m understanding the argument correctly, then, these perceptual distortions ought to occur before one decides what to do; not after the decision has already been made. The problem is that they don’t seem to be able to work in that fashion, and here’s why: these biasing systems would be unable to know in which direction to bias perceptions prior to a decision being made. If, for instance, some part of your mind is trying to bias your perception of the steepness of a hill so as to dissuade you from climbing it, that would seem to imply that some part of your mind has already made the decision as to whether or not to try to make the climb. If the decision hadn’t been made, the direction or extent of the bias would remain undetermined. Essentially, these biasing rules are being posited to turn your perceptual systems into superfluous yes-men.

On that point, it’s worth noting that we are talking about biasing existing perceptions. The proposition on the table seems to be the following chain of events: first, we perceive the world as it is (or at least as close to that state as possible; what I’ll call the true belief). This leaves most of the cognitive work already done, as I mentioned above. Then, from those perceptions, an action is chosen based on some expected cost/benefit analysis (i.e., don’t climb the hill because it will be too hard). Following this, our mind takes the true belief it already made the action decision with and turns it into a false one. This false belief then biases our behavior so as to get us to do what we were going to do anyway. Since the decision can be made on the basis of the initially-calculated true information, the false belief seems to have no apparent benefit for your immediate decision. The real effect of these false beliefs, then, ought to be seen in subsequent decisions. This raises yet another troubling possibility for the model: in the event that some perception – like steepness – is used to generate estimates of multiple variables (such as energy expenditure, risk, and so on), a biased perception will similarly bias all of those estimates.

A quick example should highlight some of the potential problems with this. Let’s say you’re a camper returning home with a heavy load of gear on your back. Because you’re carrying a heavy load, you mistakenly perceive that your camping group is farther away than it actually is. Suddenly, you notice a rather hungry-looking predator approaching you. What do you do? You could try to run back to the safety of your group, or you could try to fight the predator off (forgoing other behavioral options for the moment). Unfortunately, because you mistakenly believe that your group is farther away than they are, you miscalculate the probability of making it to them before the predator catches up with you and opt to fight it off instead. Since the basis for that decision is false information, the odds of it being the best choice are diminished. This analysis works in the opposite direction as well. There are two types of errors you might make: thinking you can make the distance when you can’t, or thinking you can’t make it when you can. Both of these are errors to be avoided, and avoiding errors is awfully hard when you’re working with bad information.
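A minimal sketch of that logic, with entirely made-up numbers and a hypothetical decision rule, shows how a distorted input can flip the choice to the worse option even when nothing about the underlying decision process changes:

```python
# Toy numbers, purely illustrative: the point is that a decision rule fed a
# distorted input can flip to the worse option even when the true state of
# the world favors running.
def best_option(perceived_distance_to_group, predator_distance,
                your_speed=6.0, predator_speed=8.0):
    """Choose 'run' if you would reach the group before the predator reaches you."""
    time_to_group = perceived_distance_to_group / your_speed
    time_for_predator = predator_distance / predator_speed
    return "run" if time_to_group < time_for_predator else "fight"

true_distance = 55.0          # actual meters to your group
biased_distance = 55.0 * 1.3  # backpack-inflated estimate (~71.5 m)

print(best_option(true_distance, predator_distance=80.0))    # 'run'   (9.2 s < 10 s)
print(best_option(biased_distance, predator_distance=80.0))  # 'fight' (11.9 s > 10 s)
```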

Especially when you just disregarded the better information you had

It seems hard to find the silver lining in these false-belief models. They don’t seem to save any cognitive load, as they require the initially true beliefs to already be present in the mind somewhere. They don’t seem to help us make a decision either. At best, false beliefs lead us to do the same thing we would do in the presence of true beliefs anyway; at worst, false beliefs lead us to make worse decisions than we otherwise would. These models appear to require that our minds take the best possible state of information they have access to and then add something else to it. Despite these (perhaps not-so) clear shortcomings, false belief models appear to be remarkably popular, and are used to explain topics from religious beliefs to ostensible misperceptions of sexual interest. Given that people generally seem to understand that it’s beneficial to see through the lies of others and not be manipulated with false information, it seems peculiar that they have a harder time recognizing that it’s similarly beneficial to avoid lying to ourselves.

References: Firestone, C. (2013). How “Paternalistic” Is Spatial Perception? Why Wearing a Heavy Backpack Doesn’t – and Couldn’t – Make Hills Look Steeper. Perspectives on Psychological Science, 8, 455-473.

Proffitt, D. (2013). An Embodied Approach to Perception: By What Units Are Visual Perceptions Scaled? Perspectives on Psychological Science, 8, 474-483.

PZ Myers; Again…

Since my summer vacation is winding itself to a close, it’s time to relax with a fun, argumentative post that doesn’t deal directly with research. PZ Myers, an outspoken critic of evolutionary psychology – or at least of an imaginary version of the field, which may bear little or no resemblance to the real thing – is back at it again. After a recent defense of the field against PZ’s rather confused comments by Jerry Coyne and Steven Pinker, PZ has now responded to Pinker’s comments. Now, presumably, PZ feels like he did a pretty good job here. This is somewhat unfortunate, as PZ’s response basically plays by every rule in the pop anti-evolutionary-psychology game: he asserts, incorrectly, what evolutionary psychology holds to as a discipline, fails to mention any examples of this going on in print (though he does reference blogs, so there’s that…), and then expresses wholehearted agreement with many of the actual theoretical commitments put forth by the field. So I wanted to take this time to briefly respond to PZ’s recent response and defend my field. This should be relatively easy, since it takes PZ a full two sentences into his response proper to say something incorrect.

Gotta admire the man’s restraint…

Kicking off his reply, PZ has this to say about why he dislikes the methods of evolutionary psychology:

PZ: “That’s my primary objection, the habit of evolutionary psychologists of taking every property of human behavior, assuming that it is the result of selection, building scenarios for their evolution, and then testing them poorly.”

Familiar as I am with the theoretical commitments of the field, I find it strange that I overlooked the part that demands evolutionary psychologists assume that every property of human behavior is the result of selection. It might have been buried amidst all those comments about things like “byproducts”, “genetic drift”, “maladaptiveness”, and “randomness” by the very people who, more or less, founded the field. Nearly every paper using the framework in the primary literature I’ve come across, strangely, seems to write things like “…the current data is consistent with the idea that [trait X] might have evolved to [solve problem Y], but more research is needed”, or might posit that “…if [trait X] evolved to [solve problem Y], we ought to expect [design feature Z]”. There is, however, a grain of truth to what PZ writes, and it is this: hypotheses about adaptive function tend to make better predictions than non-adaptive ones. I highlighted this point in my last response to a post by PZ, but I’ll recreate the quote by Tooby and Cosmides here:

“Modern selectionist theories are used to generate rich and specific prior predictions about new design features and mechanisms that no one would have thought to look in the absence of these theories, which is why they appeal so strongly to the empirically minded….It is exactly this issue of predictive utility, and not “dogma”, that leads adaptationists to use selectionist theories more often than they do Gould’s favorites, such as drift and historical contingency. We are embarrassed to be forced, Gould-style, to state such a palpably obvious thing, but random walks and historical contingency do not, for the most part, make tight or useful prior predictions about the unknown design features of any single species.”

All of that seems to be besides the point, however, because PZ evidently doesn’t believe that we can actually test byproduct claims in the first place. You see, it’s not enough to just say that [trait X] is a byproduct; you need to specify what it’s a byproduct of. Male nipples, for instance, seem to be a byproduct of functional female nipples; female orgasm may be a byproduct of a functional male orgasm. Really, a byproduct claim is more a negative claim than anything else: it’s a claim that [trait X] has (or rather, had) no adaptive function. Substantiating that claim, however, requires one to be able to test for and rule out potential adaptive functions. Here’s what PZ had to say in his comments section about doing so:

PZ: “My argument is that most behaviors will NOT be the product of selection, but products of culture, or even when they have a biological basis, will be byproducts or neutral. Therefore you can’t use an adaptationist program as a first principle to determine their origins.”

Overlooking the peculiar contrasting of “culture” and “biological basis” for the moment, if one cannot use an adaptationist paradigm to test for possible functions in the first place, then it seems one would be hard-pressed to make any claim at all about function – whether that claim is that there is or isn’t one. One could, as PZ suggests, assume that all traits are non-functional until demonstrated otherwise, but, again, since we apparently cannot use an adaptationist analysis to determine function, this would leave us assuming things like “language is a byproduct”. This is somewhat at odds with PZ’s suggestion that “there is an evolved component of human language”, but since he doesn’t tell us how he reached that conclusion – presumably not through some kind of adaptationist program – I suppose we’ll all just have to live with the mystery.

Methods: Concentrated real hard, then shook five times.

Moving on, PZ raises the following question about modularity in the next section of his response:

PZ: “…why talk about ‘modules’ at all, other than to reify an abstraction into something misleadingly concrete?”

Now this isn’t really a criticism of the field so much as a question about it, but that’s fine; questions are generally welcomed. In fact, I happen to think that PZ answers this question himself, without any awareness of it, in his earlier discussion of spleen function:

PZ: “What you can’t do is pick any particular property of the spleen and invent functions for it, which is what I mean by arbitrary and elaborate.”

While PZ is happy with the suggestion that the spleen itself serves some adaptive function, he overlooks the fact – and, indeed, would probably take it for granted – that it’s meaningful to talk about the spleen as a distinct part of the body in which it’s found. To put PZ’s comment in context, imagine some anti-evolutionary physiologist suggesting that it’s nonsensical to try and “pick any particular” part of the body and talk about “its specific function” as if it’s distinct from any other part (I imagine the exchange might go like this: “You’re telling me the upper half of the chest functions as a gas exchanger and the lower half functions to extract nutrients from food? What an arbitrary distinction!”). Of course, we know it does make sense to talk about different parts of the body – the heart, the lungs, and the spleen – and we do so because each is viewed as having different functions. Modularity essentially does the same thing for the brain. Though the brain might outwardly appear to be a single organ, it is actually a collection of functionally-distinct pieces. The parts of your brain that process taste information aren’t good at solving other problems, like vision. Similarly, a system that processes sexual arousal might do terribly at generating language. This is why brain damage tends to cause rather selective deficits in cognitive abilities, rather than global or unpredictable ones. We insist on modularity of the mind for the same reason PZ insists on modularity of the body.

PZ also brings the classic trope of dichotomizing “learned/cultural” and “evolved/genetic” to bear, writing:

PZ: “…I suspect it’s most likely that they are seeing cultural variations, so trying to peg them to an adaptive explanation is an exercise in futility.”

I will only give the fairly-standard reply to such sentiments, since they’ve been voiced so often before that it’s not worth spending much time on. Yes, cultures differ, and yes, culture clearly has effects on behavior and psychology. I don’t think any evolutionary psychologist would tell you differently. However, these cultural differences do not just come from nowhere, and neither do our consistent patterns of responses to those differences. If, for instance, local sex-ratios have some predictable effects on mating behavior, one needs to explain why that is the case. This is like the byproduct point above: it’s not enough to say “[trait X] is a product of culture” and leave it at that if you want an explanation of trait X that helps you understand anything about it. You need to explain why that particular bit of environmental input is having the effect that it does. Perhaps the effect is the result of psychological adaptation for processing that particular input, or perhaps the effect is a byproduct of mechanisms not designed to process it (which still requires identifying the responsible psychological adaptations), or perhaps the consistent effect is just a rather-unlikely run of random events all turning out the same. In any case, to reach any of these conclusions, one needs an adaptationist approach – or PZ’s magic 8-ball.

Also acceptable: his magic Ouija board.

The final point I want to engage with involves two rather interesting comments from PZ. The first comment comes from his initial reply to Coyne and the second from his reply to Pinker:

PZ: “I detest evolutionary psychology, not because I dislike the answers it gives, but on purely methodological and empirical grounds…Once again, my criticisms are being addressed by imagining motives.”

While PZ continues to stress that, of course, he could not possibly have ulterior, conscious or unconscious, motives for rejecting evolutionary psychology, he then makes a rather strange comment in the comments section:

PZ: “Evolutionary psychology has a lot of baggage I disagree with, so no, I don’t agree with it. I agree with the broader principle that brains evolved.”

Now it’s hard to know precisely what PZ meant to imply with the word “baggage” there because, as usual, he’s rather light on the details. When I think of the word “baggage” in that context, however, my mind immediately goes to unpleasant social implications (as in, “I don’t identify as a feminist because the movement has too much baggage”). Such a conclusion would imply there are non-methodological concerns that PZ has about something related to evolutionary psychology. Then again, perhaps PZ simply meant some conceptual, theoretical baggage that can be remedied with some new methodology that evolutionary psychology currently lacks. Since I like to assume the best (you know me), I’ll be eagerly awaiting PZ’s helpful suggestions as to how the field can be improved by shedding its baggage as it moves into the future.

The Inferential Limits Of Economic Games

Having recently returned from the Human Behavior & Evolution Society’s (HBES) conference, I would like to take a moment to let everyone know what an excellent time I had there. Getting to meet some of my readers in person was a fantastic experience, as was the pleasure of being around the wider evolutionary research community and reconnecting with old friends. The only negative parts of the conference involved making my way through the flooded streets of Miami on the first two mornings (which very closely resembled this scene from the Simpsons) and the pool party at which I way over-indulged in drinking. Though there was a diverse array of research presented spanning many different areas, I ended up primarily in the seminars on cooperation, as the topic tends most towards my current research projects. I would like to present two of my favorite findings from those seminars, which serve as excellent cautionary tales concerning what conclusions one can draw from economic games. Despite the popular impression, there’s a lot more to evolutionary psychology than sex research.

Though the Sperm-Sun HBES logo failed to adequately showcase that diversity.

The first game to be discussed is the classic dictator game. In this game, two participants are brought into the lab and assigned the role of either ‘dictator’ or ‘recipient’. The dictator is given a sum of money (say, $10) and the option to divide it however they want between the pair. If the dictator were maximally selfish – as standard economic rationality might suggest – they would consistently keep all the money and give none to the recipient. Yet this is not what we frequently see: dictators tend to give at least some of the money to the other person, and an even split is often made. While giving these participants anonymity from one another does tend to reduce offers, even ostensibly anonymous dictators continue to give. This result clashes somewhat with our everyday experiences: after all, provided we have money in our pockets, we’re faced with possible dictator-like situations every time we pass someone on the street, whether they’re homeless and begging for money or apparently well-off. Despite the near-constant opportunities during which we could transfer money to others, we frequently do not. So how do we reconcile the experimental and everyday results?

One possibility is to suggest that the giving in dictator games is largely induced by experimental demand effects: subjects are being placed into a relatively odd situation and are behaving rather oddly because of it (more specifically, because they are inferring what the experimenter “wants” them to do). Of course, it’s not so easy to replicate the context of the dictator game (a sudden windfall of a divisible asset and a potential partner to share it with) without subjects knowing they’re taking part in an experiment. Winking & Mizer (2013) managed to find a way around these problems in Las Vegas. In this field experiment, a confederate would be waiting at a bus stop when the unwitting subject approached. Once the subject was waiting for the bus as well, the confederate would pretend to take a phone call and move slightly away from the area with their back turned to the subject. It was at this point that the experimenter approached, on his cell phone and ostensibly in a hurry. As the experimenter passed the subject, he gave them $20 in poker chips, saying that he was late for his ride to the airport and didn’t have time to cash them in. These casino chips were an excellent stimulus, as they provided a good cover story for why they were being handed over: they only have value when cashed in, and the experimenter didn’t have time to do so. Using actual currency wouldn’t have worked as well, as it might have raised suspicions about the setup, since currency travels perfectly well from place to place.

In the first condition, the experimenter left and the confederate returned without further instruction; in the second condition, the experimenter said, “I don’t know. You can split them with that guy however you want” while gesturing at the confederate before he ran off. A third condition involved an explicit version of the dictator game experiment with poker chips, during which anonymity was granted. In the standard version of the experiment – when the subjects knew about the game explicitly – 83% of subjects offered at least some of the chips to the other person, with a median offer around $5, resembling previous experimental results fairly well. How about the other two conditions? Well, of the 60 participants who were not explicitly told they were taking part in the game, every one of them kept all the money. This suggests very strongly that all – or at least most – of the giving we observe in dictator games is grounded in the nature of the experiment itself. Indeed, many of the subjects in the first condition, where the instruction to split was not given, seemed rather perplexed by the purpose of the study during the debriefing, wondering precisely why in the world they would split the money with the confederate in the first place. Like all of us walking down the street with money on our person, the idea that they would just give that money to other people seemed rather strange.

“I’m still not following: you want to do what with all this money, again?”

The second paper of interest looked at behavior in another popular game: the public goods game. In these games, subjects are typically placed together in groups of four and provided with a sum of money. During each round, players can invest any amount of their money in the public pot and keep the rest. All the money in the pot is then multiplied by some amount and divided equally amongst all the participants. In this game, the rational economic move is typically to put no money in, as for each dollar you put in, you receive less than a dollar back (since the multiplier is below the number of subjects in the group); not a great investment. On the other hand, the group-maximizing outcome is for all the subjects to donate all their money, so everyone ends up richer than when they started. Again, we find that subjects in these games tend to donate some of their money to the public pot, and many researchers have inferred from this giving that people have prosocial preferences (i.e., that making other people better off per se increases my subjective welfare). If such an inference were correct, then we ought to expect subjects to give more money to the public good provided they know how much good they’re doing for others.
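For readers unfamiliar with the payoff structure, here is a minimal sketch of it; the endowment, group size, and multiplier are assumed values chosen to match the typical setup described above, not the parameters of any particular study:

```python
# Illustrative only: the endowment, group size, and multiplier are assumed
# values matching the typical public goods setup described above.
def public_goods_payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player's payoff: whatever they kept of their endowment plus an
    equal share of the multiplied public pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# If all four players contribute everything, everyone ends up richer:
print(public_goods_payoffs([20, 20, 20, 20]))  # [32.0, 32.0, 32.0, 32.0]

# ...but a lone free-rider does better still, since each contributed unit
# returns only multiplier / group_size (1.6 / 4 = 0.4) to the contributor:
print(public_goods_payoffs([20, 20, 20, 0]))   # [24.0, 24.0, 24.0, 44.0]
```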

Towards examining this inference, Burton-Chellew & West (2013) put subjects into a public goods game in three different conditions. First, there was the standard condition, described above. Second was a condition like the standard game, except subjects received an additional piece of information in the form of how much the other players in the game earned. Finally, there was a third condition in which subjects didn’t even know the game was being played with other people; subjects were merely told they could donate some fraction of their money (from 0 to 40 units) to a “black box” which would perform a transformation on the money received and give them a non-negative payoff (which was the same average benefit they received in the game when playing with other people, but they didn’t know that). In total, 236 subjects played in one of the first two conditions and also in the black box condition, counterbalancing the order of the games (they were informed the two were entirely different experiments).

How did contributions change between the standard condition and the black box condition over time? They didn’t. Subjects who knew they were playing a public goods game donated approximately as much during each round as the subjects who were just putting payments into the black box and getting some payment out: donations started out relatively high and declined over time (presumably as subjects learned they tended to get less money by contributing). The one notable difference was in the additional-information condition: when subjects could see the earnings of others relative to their contributions, they started to contribute less money to the public good. As a control, all three of the above games were replicated with a multiplication rule that made the profit-maximizing strategy to donate all of one’s available money, rather than none. In these conditions, the change in donations between the standard and black box conditions again failed to differ significantly, and contributions were still lower in the enhanced-information condition. Further, in all these games subjects tended to fail to make the profit-maximizing decision, irrespective of whether that decision was to donate all their money or none of it. Despite this strategy being deemed relatively “easy” to figure out by the researchers, it apparently was not.

Other people not included, or required

Both of these experiments pose some rather stern warnings about the inferences we might draw from the behavior of people playing economic games. Some of our experiments might end up inducing certain behaviors and preferences, rather than revealing them. We’re putting people into evolutionarily-strange situations in these experiments, and so we might expect some evolutionarily-strange outcomes. It is also worth noting that just because you observe some prosocial outcome – like people giving money apparently altruistically or contributing to the good of others – it doesn’t follow that these outcomes are the direct result of cognitive modules designed to bring them about. Sure, my behavior in some of these games might end up reducing inequality, for instance, but it doesn’t follow that people’s psychology was selected to do such things. There are definite limits to how far these economic games can take us inferentially, and it’s important to be aware of them. Do these studies show that such games are worthless tools? I’d say certainly not, as behavior in them is certainly not random. We just need to be mindful of their limits when we try to draw conclusions from them.

References: Burton-Chellew, M. N., & West, S. A. (2013). Prosocial preferences do not explain human cooperation in public-goods games. Proceedings of the National Academy of Sciences, 110(1), 216-221.

Winking, J., & Mizer, N. (2013). Natural-field dictator game shows no altruistic giving. Evolution and Human Behavior. http://dx.doi.org/10.1016/j.evolhumbehav.2013.04.002

How Hard Is Psychology?

The scientific method is a pretty useful tool for assisting people in testing hypotheses and discerning truth – or coming as close to it as one can. Like the famous Churchill quote about democracy, the scientific method is the worst system we have for doing so, except for all the others. That said, the scientists who use the method are often not doing so in the single-minded pursuit of truth. Perhaps phrased more aptly, testing hypotheses is generally not done for its own sake: people testing hypotheses are typically doing so for other reasons, such as raising their status and furthering their careers in the process. So, while the scientific method could be used to test any number of hypotheses, scientists tend to use it for certain ends and to test certain types of ideas: those perceived to be interesting, novel, or useful. I imagine that none of that is particularly groundbreaking information to most people: science in theory is different from science in practice. A curious question, then, is this: given that we ought to expect scientists from all fields to use the method for similar reasons, why are some fields to which the scientific method is applied viewed as “soft” and others as “hard” (like psychology and physics, respectively)?

Very clever, Chemistry, but you’ll never top Freud jokes.

One potential reason for this impression is that these non-truth-seeking (what some might consider questionable) uses to which people attempt to put the scientific method could simply be more prevalent in some fields than in others. The further one strays from science in theory to science in practice, the softer one’s field might be seen as being. If, for instance, psychology were particularly prone to biases that compromise the quality or validity of its data, relative to other fields, then people would be justified in taking a more critical stance towards findings from it. One of those possible biases involves tending to report only the data consistent with one hypothesis or another. As the scientific method requires reporting the data that is both consistent and inconsistent with one’s hypothesis, if only one of those is being done, then the validity of the method can be compromised and you’re no longer doing “hard” science. A 2010 paper by Fanelli provides us with some reason to worry on that front. In that paper, Fanelli examined approximately 2500 papers randomly drawn from various disciplines to determine the extent to which positive results (those which statistically support one or more of the hypotheses being tested) dominate the published literature. The Psychology/Psychiatry category sat at the top of the list, with 91.5% of all published papers reporting positive results.

While that number may seem high, it is important to put the figure into perspective: the field at the bottom of that list – the one which reported the fewest positive results overall – was the Space Sciences, with 70.2% of all the sampled published work reporting positive results. Other fields ran a relatively smooth line between the upper and lower limits, so the extent to which the fields differ in the dominance of positive results is a matter of degree, not kind. Physics and Chemistry, for instance, both ran about 85% in terms of positive results, despite both being considered “harder” sciences than psychology. Now that the 91% figure might seem a little less worrying, let’s add some more context to reintroduce the concern: those percentages only consider whether any positive results were reported, so papers that tested multiple hypotheses tended to have a better chance of reporting something positive. It also happened that papers within psychology tended to test more hypotheses on average than papers in other fields. When correcting for that issue, positive results in psychology were approximately five times more likely than positive results in the space sciences. By comparison, positive results in physics and chemistry were only about two-and-a-half times more likely. How much cause for concern should this bring us?
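As a rough aside, the arithmetic behind that comparison can be approximately reconstructed from the raw percentages by converting proportions to odds; Fanelli’s reported figures come from an analysis that adjusts for the number of hypotheses tested per paper, so this back-of-the-envelope version only approximates the gap:

```python
# Back-of-the-envelope only: Fanelli's actual estimates come from a regression
# that adjusts for the number of hypotheses tested per paper, so these raw
# odds merely approximate the reported gap.
def odds(p):
    """Convert a proportion of positive results into odds."""
    return p / (1 - p)

psychology = 0.915  # proportion of sampled Psychology/Psychiatry papers reporting positive results
space_sci = 0.702   # proportion for the Space Sciences

print(round(odds(psychology), 2))                    # ~10.76
print(round(odds(space_sci), 2))                     # ~2.36
print(round(odds(psychology) / odds(space_sci), 2))  # odds ratio ~4.57, i.e., roughly five-fold
```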

There are two questions to consider, before answering that last question: (1) what are the causes of these different rates of positive results and (2) are these differences in positive results driving the perception among people that some sciences are “softer” than others? Taking these in order, there are still more reasons to worry about the prevalence of positive results in psychology: according to Fanelli, studies in psychology tend to have lower statistical power than studies in physical science fields. Lower statistical power means that, all else being equal, psychological research should find fewer – not greater – percentages of positive results overall. If psychological studies tend to not be as statistically powerful, where else might the causes of the high-proportion of positive results reside? One possibility is that psychologists are particularly likely to be predicting things that happen to be true. In other words, “predicting” things in psychology tends to be easy because hypotheses tend to only be made after a good deal of anecdata has been “collected” by personal experience (incidentally, personal experience is a not-uncommonly cited reason for research hypotheses within psychology). Essentially, then, predictions in psychology are being made once a good deal of data is already in, at least informally, making them less predictions and more restatements of already-known facts.

“I predict that you would like a psychic reading, on the basis of you asking for one, just now.”

A related possibility is that psychologists might be more likely to engage in outright-dishonest tactics, such as actually collecting their data formally first (rather than just informally) and then making up "predictions" that restate those data after the fact. To the extent that publishers within different fields are more or less interested in positive results, we ought to expect researchers within those fields to attempt this kind of dishonesty on a greater scale (it should be noted, however, that the data are still the data, regardless of whether they were predicted ahead of time, so the effects on truth-value ought to be minimal). Though greater amounts of outright dishonesty are a possibility, it is unclear why psychology would be particularly prone to this relative to any other field, so it might not be worth worrying too much about. Another possibility is that psychologists are particularly prone to using questionable statistical practices that tend to boost their false-positive rates substantially, an issue which I've discussed before.

There are two issues above all others that stand out to me, though, and they might help to answer the second question – why psychology is viewed as "soft" and physics as "hard". The first has to do with what Fanelli refers to as the distinction between the "core" and the "frontier" of a discipline. The core of a field represents the agreed-upon theories and concepts on which the field rests; the frontier, by contrast, is where most of the new research is being conducted and new concepts are being minted. Psychology, as it currently stands, is largely frontier-based. This lack of a core can be exemplified by a recent post concerning "101 great insights from psychology 101". In the list, you'll find the word "theory" used a collective three times, and two of those mentions concern Freud. If you consider the plural – "theories" – instead, you'll find five further uses of the term, four of which mention no specific theory. The extent to which the remaining uses represent actual theories, as opposed to redescriptions of findings, is another matter entirely. If one is left with only a core-less frontier of research, that could well send the message that the people within the field don't have a good handle on what it is they're studying – thus the "soft" reputation.

The second issue involves the subject matter itself. The "soft" sciences – psychology and its variants (like sociology and economics) – seem to dabble in human affairs. This can be troublesome for more than one reason. A first reason might involve the fact that the other humans reading about psychological research are all intuitive psychologists, so to speak. We all have an interest in understanding the psychological factors that motivate other people in order to predict what they're going to do. This seems to give many people the impression that psychology, as a field, doesn't have much new information to offer them. If they can already "do" psychology without needing explicit instruction, they might come to view psychology as "soft" precisely because it's perceived as being easy. I would also note that this suggestion ties neatly into the point about psychologists possibly tending to make many predictions based on personal experience and intuitions. If the findings psychologists deliver tend to give people the impression of "Why did you need research? I could have told you that," that ease of inference might cause people to give psychology less credit as a science.

“We go to the moon because it is hard, making physics a real science”

The other standout reason why psychology might strike people as soft is that, on top of trying to understand other people's psychological goings-on, we also try to manipulate them. It's not just that we want to understand why people support or oppose gay marriage, for instance; it's that we might also want to change their points of view. Accordingly, findings from psychology tend to speak more directly to issues people care a good deal about (like sex, drugs, and moral goals; most people don't seem to argue over the latest implications of chemistry research), which might make people either (a) relatively resistant to the findings or (b) relatively accepting of them, contingent more on their personal views and less on the scientific quality of the work itself. This means that, in addition to many people having a reaction of "that is obvious" to a good deal of psychological work, they also have the reaction of "that is obviously wrong", neither of which makes psychology look terribly important.

It seems likely to me that many of these issues could be mitigated by the addition of a core to psychology. If results needed to fit into theory, various statistical manipulations might become somewhat easier to spot. If students were learning how to think about psychology, rather than memorizing lists of findings they often feel are trivial or obviously wrong, they might come away with a better impression of the field. Now if only a core could be found…

References: Fanelli, D. (2010). "Positive" results increase down the hierarchy of the sciences. PLoS ONE, 5(4). PMID: 20383332

When (And Why) Is Discrimination Acceptable?

As a means of humble-bragging, I like to tell people that I have been rejected from many prestigious universities; the University of Pennsylvania, Harvard, and Yale are all on that list. Also on that list happens to be the University of New Mexico, home of one Geoffrey Miller. Very recently, Dr. Miller found himself in a bit of moral hot water over what seems to be an ill-conceived tweet. It reads as follows: "Dear obese PhD applicants: if you don't have enough willpower to stop eating carbs, you won't have the willpower to do a dissertation #truth". Miller subsequently deleted the tweet and apologized for it in two follow-up tweets. Now, as I mentioned, I've previously been rejected from Miller's lab – on more than one occasion, mind you (I forget whether it was 3 or 4 times now) – so clearly, I was discriminated against. Indeed, discrimination policies are vital to anyone, university or otherwise, with open positions to fill. When you have 10 slots open and get approximately 750 applications, you need some way of discriminating between them (and whatever method you use will disappoint roughly 740 of them). Evidently, being obese is one characteristic that people found morally unacceptable to even jokingly suggest discriminating on the basis of. This raises the question: why?

Oh no; someone’s going to get a nasty email…

Let's start with a related situation: it's well-known that many universities make use of standardized test scores, such as the SAT or GRE, in order to screen out applicants. As a general rule, this doesn't tend to cause much moral outrage, though it does cause plenty of frustration. One could – and many do – argue that using these scores is not only morally acceptable but appropriate, given that they predict some facets of performance at school-related tasks. While there might be some disagreement over whether the tests are good enough predictors of performance (or whether they're predicting something conceptually important), there doesn't appear to be much disagreement about whether they may be used, from a moral standpoint. That's a good principle with which to start the discussion of the obesity comment, isn't it? If you have a measure that's predictive of some task-relevant skill, it's OK to use it.

Well, not so fast. Let's say, for the sake of this argument, that obesity was actually a predictor of graduate school performance. I don't know whether there's any predictive value there, but assume, just for the sake of this example, that being obese was indicative of doing slightly worse at school, as Geoffrey suggested; why it might have that effect is, for the moment, of no importance. So, given that obesity could, to some extent, predict graduate school performance, should schools be morally allowed to use it to discriminate between potential applicants?

I happen to think the matter is not nearly so simple as predictive value. For starters, there doesn't seem to be any widely agreed-upon rule as to precisely how predictive some variable needs to be before its use is deemed morally acceptable. If obesity could, controlling for all other variables, predict an additional 1% of the variance in graduate performance, should applications start including boxes for height and weight? While 1% might not seem like a lot, if you could give yourself a 1% better chance at succeeding at some task for free (landing a promotion, getting hired, avoiding being struck by a car or, in this case, admitting a productive student), it seems like almost everyone would be interested in doing so; ignoring or avoiding useful information would be a very curious route to opt for, as it only ensures that, on the whole, you make worse decisions than you would have made had you considered it. One could play around with the numbers to find some threshold of acceptability, if so inclined (e.g., what if it could predict 10%, or only 0.1%?), to help drive the point home. In any case, there are a number of different factors which could predict graduate school performance in different respects: previous GPAs, letters of recommendation, other reasoning tasks, previous work experience, and so on. However, to the best of my knowledge, no one is arguing that it would be immoral to use only some of those predictors rather than the single best one (or the top X predictors, or the second best if you aren't using the first, and so on). The core of the issue seems to center on obesity, rather than predictive validity per se.
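For a sense of what even a "mere" 1% of variance buys in a selection context, here is a minimal sketch under a toy normal model; the validities and the top-10% admission cut are hypothetical numbers chosen for illustration, not estimates from any real admissions data.

```python
# Toy normal model: if a predictor correlates r with later performance, then the
# applicants you admit by taking the top slice on that predictor will outperform
# randomly admitted applicants by r * (mean z-score of that slice), in SD units.
from scipy.stats import norm

def expected_gain(r, top_fraction):
    """Expected performance advantage (SD units) of selecting the top fraction
    of applicants on a predictor with validity r, versus admitting at random."""
    cutoff = norm.ppf(1 - top_fraction)
    mean_z_of_selected = norm.pdf(cutoff) / top_fraction
    return r * mean_z_of_selected

print(expected_gain(0.10, 0.10))  # r^2 = 1% of variance -> ~0.18 SD better than chance
print(expected_gain(0.32, 0.10))  # r^2 ~ 10% of variance -> ~0.56 SD better than chance
```

Small validities translate into small but entirely real expected gains, which is the sense in which throwing the information away guarantees slightly worse decisions on the whole.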

*May also apply to PhD applications.

Thankfully, there is some research we can bring to bear on the matter. It comes from a paper by Tetlock et al. (2000), who were examining what they called "forbidden base rates" – an issue I have touched on once before. In one study, Tetlock et al. presented subjects with an insurance-related case: an insurance executive had been tasked with deciding how to charge people for coverage. Three towns had been classified as high-risk (a 10% chance of experiencing fires or break-ins), while another three had been classified as low-risk (less than a 1% chance). Naturally, you would expect that anyone trying to maximize their risk-to-profit ratio would charge different premiums, contingent on risk. If one is not allowed to do so, one is left with the choice of offering coverage at a price that's too low to be sustainable for the insurer or too high to be viable for some of the customers. While you don't want to charge low-risk people more than you need to, you also don't want to under-charge the high-risk ones and risk losing money. Price discrimination in this example is a good thing.
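To see why a single flat price can't work here, consider the expected payout per policy in each kind of town. The dollar figures below are made up for illustration; only the 10% and roughly-1% risk rates come from the scenario described above.

```python
# Expected-cost arithmetic behind the executive's dilemma: with one flat premium,
# you either lose money on every high-risk policy or overcharge every low-risk one.
HIGH_RISK, LOW_RISK = 0.10, 0.01      # chance of a claim (fire / break-in); low-risk rate assumed
PAYOUT = 100_000                      # hypothetical cost per claim

break_even_high = HIGH_RISK * PAYOUT  # $10,000 per high-risk policy
break_even_low  = LOW_RISK  * PAYOUT  # $1,000 per low-risk policy

flat_premium = 5_000                  # one price for everyone
print(flat_premium - break_even_high) # -$5,000 expected loss on each high-risk policy
print(flat_premium - break_even_low)  # +$4,000 overcharge on each low-risk policy
```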

The twist was that these high- and low-risk classifications either happened to correlate along racial lines or they did not, despite there being no a priori interest in discriminating against any one race. When faced with this situation, something interesting happens: compared to conservatives and moderates, liberals confronted with data suggesting black people tended to live in the high-risk areas tended to advocate disallowing the use of that data for making profit-maximizing economic choices. This effect was not present, however, when the people being discriminated against in the high-risk areas happened to be white.

In other words, people don't seem to have an issue with the idea of using useful data to discriminate amongst groups of people per se, but if that discrimination ends up affecting the "wrong" group, it can be deemed morally problematic. As Tetlock et al. (2000) argued, people view certain types of discrimination not as "tricky statistical issues" but as moral ones. The parallels to our initial example are apparent: even if discriminating on the basis of obesity could provide us with useful information, the act itself is not morally acceptable in some circles. Why people might view discrimination against obese people as morally offensive is itself a separate matter. After all, as previously mentioned, people tend to have no moral problem with tests like the GRE that discriminate not on weight but on other characteristics, such as working memory, information-processing speed, and a number of other difficult-to-change factors. Unfortunately, people tend not to have much in the way of conscious insight into how their moral judgments are arrived at and what variables those judgments make use of (Hauser et al., 2007), so we can't just ask people about their judgments and expect compelling answers.

Though I have no data bearing on the subject, I can make some educated guesses as to why obesity might have moral protection. First, and perhaps most obvious, people with moral qualms about discrimination along the weight dimension might themselves tend to be fat or obese and would prefer not to have that count against them. In much the same way, I'm fairly confident we could expect people who scored low on tests like the GRE to downplay their validity as a measure and suggest that schools really ought to be looking at other factors to determine admission criteria. Relatedly, one might have friends or family members who are obese, and so adopt moral stances against discrimination that would ultimately harm one's social ingroup. If such groups become prominent enough, siding against them becomes progressively costlier. A moral rule disallowing discrimination on the basis of weight can spread in those cases, even if enforcing that rule is personally costly, because not adopting the rule can carry an even greater cost (as evidenced by Geoffrey currently being hit with a wave of moral condemnation for his remarks).

Hopefully it won’t crush you and drag you to your death. Hang ten.

As to one final matter, one could be left wondering why the moralization of judgments concerning certain traits – like obesity – can be successful, whereas the moralization of judgments based on other traits – like whatever the GRE measures – doesn't obtain. My guess is that some traits simply affect more people, or affect them in much larger ways, and that can have major effects on the value to an individual of adopting certain moral rules. For instance, being obese affects many areas of one's life, such as mating prospects and mobility, and weight cannot easily be hidden. On the other hand, something like a GRE score affects very little (really, only graduate school admissions) and is not readily observable. Accordingly, the former trait makes for a "better" victim of discrimination: one proportionately more in need of assistance and, because of that, more likely to reciprocate any given assistance in the future (all else being equal). Such a line of thought might well explain the aforementioned difference in judgments, with racial discrimination deemed unacceptable when it predominantly harmed blacks but fine when it predominantly harmed whites. So long as the harm isn't perceived as great enough to generate an appropriate amount of need, we can expect people to be relatively indifferent to it. It just doesn't create the same social-investment potential in all cases.

References: Hauser, M., Cushman, F., Young, L., Kang-Xing Jin, R., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22, 1-21.

Tetlock, P., Kristel, O., Elson, S., Green, M., & Lerner, J. (2000). The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78(5), 853-870. DOI: 10.1037//0022-3514.78.5.853

Why Are They Called “Spoilers”?

Imagine you are running experiments with mice. You deprive the mice of food until they get hungry and then you drop them into a maze. Now, obviously, the hungry mice are pretty invested in the idea of finding the food; you have been starving them and all. You're not really that evil of a researcher, though: in one group, you color-code the maze so the mice always know where to go to find the reward. The mice, I expect, would not be terribly bothered by your providing them with information and, if they could talk, I doubt many of them would complain about your "spoiling" the adventure of finding the food themselves. In fact, I would expect most people to respond the same way when hungry: they would rather you provide them with the information they sought directly instead of having to make their own way through the pain of a maze (or some equally-annoying psychological task) before they could eat. We ought to expect this because, in this instance as in many others, having access to greater quantities of accurate information allows you to do more useful things with your time. Knowing where food is cuts down on your required search time, which allows you to spend that time in other, more fruitful ways (like doing pretty much anything else undergraduates can do that doesn't involve serving as a participant for psychologists). So what are we to make of cases where people seem to actively avoid such information and claim they find it aversive?

Spoiler warning: If you would rather formulate your own ideas first, stop reading now.

The topic arose for me lately in the context of the upcoming E3 event, where the next generation of video games will be previewed. There happens to be one game in particular I find myself heavily invested in and, for whatever reason, I find myself wary of tuning into E3 due to the risk of inadvertently exposing myself to any more content from it. I don't want to know what the story is; I don't want to see any more gameplay; I want to remain as ignorant as possible until I can experience the game firsthand. I'm also far from alone in that experience: of the approximately 40,000 people who have voiced their opinions, a full half reported that they found spoilers unpleasant. Indeed, the word that refers to the leaking of crucial plot details itself implies that learning them can ruin the pleasure of finding them out for yourself, in much the same way that microorganisms make food unpalatable or dangerous to ingest. Am I, along with the other 20,000, simply mistaken? That is, do spoilers actually make the experience of reading some book or playing some video game any less pleasant? At least two people think the answer is "yes".

Leavitt & Christenfeld (2011) suggest that spoilers, in fact, do not make the experience of a story any less pleasant. After all, the authors note, people are perfectly willing to experience stories again, such as by rereading a book, without any apparent loss of pleasure (curiously, they cite no empirical evidence on this front, making it an untested assumption). Leavitt & Christenfeld also suggested that perceptual fluency with a story (in the form of familiarity) might make it more pleasant because the information subsequently becomes easier to process. Finally, the pair appear all but entirely uninterested in positing any reasons why so many people might find spoilers unpleasant. The most they offer is the possibility that suspense might have something to do with it, but we'll return to that point later. The authors, like your average person discussing spoilers, didn't offer anything resembling a compelling reason why people might not like them; they simply note that many people think spoilers are unpleasant and move on.

In any case, to test whether spoilers really spoil things, they recruited approximately 800 subjects to read a series of short stories: some came with a spoiler, some without, and in some the spoiler was presented as the opening paragraph of the story itself. These stories were short indeed – between 1,400 and 4,200 words apiece, or roughly the length of this post up to about three times that – which I think is another important detail to which I'll return later (as I have no intention of spoiling my ideas fully yet). After the subjects had read each story, they rated how much they enjoyed it on a scale of 1 to 10. Across all three types of stories presented – mysteries, ironic twists, and literary ones – subjects actually reported liking the spoiled stories somewhat more than the non-spoiled ones. The difference was slight, but significant, and certainly not in the spoilers-are-ruining-things direction. From this, the authors suggest that people are, in fact, mistaken in their beliefs about whether spoilers have any adverse impact on the pleasure one gets from a story. They also suggest that people might like birthday presents more if they were wrapped in clear cellophane.

Then you can get the disappointment over with much quicker.

Is this widespread avoidance of spoilers just another example of quirky, "irrational" human behavior, then, born from the fact that people tend not to have side-by-side exposure to both spoiled and non-spoiled versions of a story? I think Leavitt & Christenfeld are being rather hasty in their conclusion, to put it mildly. Let's start with the first issue: when it comes to my concern over watching the E3 coverage, I'm not worried about getting spoilers for any and all games. I'm worried about getting spoilers for one specific game, and it's a game from a series I already have a deep emotional commitment to (Dark Souls, for the curious reader). When Harry Potter fans were eagerly awaiting the moment they got to crack open the next book in the series, I doubt they would have cared much one way or the other if you told them the plot of the latest Die Hard movie. Similarly, a hardcore Star Wars fan would probably not have enjoyed someone leaving the theater in 1980 blurting out that Darth Vader was Luke's father; by comparison, someone who didn't know anything about Star Wars probably wouldn't have cared. In other words, the subjects likely had absolutely no emotional attachment to the stories they were reading and, as such, the information they were being given was not exactly a spoiler. If the authors weren't studying what people would typically consider aversive spoilers in the first place, then their conclusions about spoilers more generally are misplaced.

One of the other issues, as I hinted at before, is that the stories themselves were all rather short; it would take no more than a few minutes to read even the longest of them. This lack of time investment could pose a major issue for the study but, as the authors didn't posit any good reasons why people might not like spoilers in the first place, they didn't appear to give the point much, if any, consideration. Those who care about spoilers, though, seem to be those who consider themselves part of some community surrounding the story – people who have made some lasting emotional connection with it, along with at least a moderately deep investment of time and energy. At the very least, people have generally selected the story to which they're about to be exposed themselves (which is quite unlike being handed a preselected story by an experimenter).

If the phenomenon we're considering appears to be a costly act with no apparent compensating benefits – like actively avoiding information that would otherwise require a great deal of time to obtain – then it seems we're venturing into the realm of costly signaling theory (Zahavi, 1975). Perhaps people are avoiding the information ahead of time so that, by obtaining it personally, they can display their dedication to some person or group, or signal something about themselves. If the signal is too cheap, its information value can be undermined, and that's certainly something people might be bothered by.

So, given the length of these stories, there didn't seem to be much that one could actually spoil. If one doesn't need to invest any real time or energy in obtaining the relevant information, spoilers would not be likely to cause much distress, even in cases where someone was already deeply committed to the story. At worst, the spoilers have ruined what would have been five minutes of effort. Further, as I previously mentioned, people don't seem to dislike receiving all kinds of information ("spoilers" about the location of food, or plot details from stories they don't care about, for instance). In fact, we ought to expect people to crave these "spoilers" with some frequency, as information gained cheaply or for free is, on the whole, generally a good thing. It is only when people are attempting to signal something with their conspicuous ignorance that we ought to expect "spoilers" to actually be spoilers, because it is only then that they have the potential to spoil anything. In this case, they would be ruining an attempt to signal some underlying quality of the person who wants to find things out for themselves.

Similar reasoning helps explain why it’s not enough for them to just hate people privately.

In two short pages, then, the paper by Leavitt & Christenfeld (2011) demonstrates a host of problems that can be found in the field of psychological research. In fact, this might be the largest number of problems I've seen crammed into such a small space. First, the authors appear to fundamentally misunderstand the topic they're ostensibly researching. It seems to me, anyway, as if they're simply trying to find a new "irrational belief" that people hold, point it out, and say, "isn't that odd?" Of course, simply finding a bias or mistaken belief doesn't explain anything about it, and there's little to no apparent effort made to understand why people might hold said odd belief. The best the authors offer is that the tension in a story might be heightened by spoilers, but that comes only after they had suggested that such suspense might detract from enjoyment by diverting a reader's attention. While these two claims aren't necessarily opposed, they seem at least somewhat in conflict and, in any case, neither is ever tested.

There’s also a conclusion that vastly over-reaches the scope of the data and is phrased without the necessary cautions. They go from saying that their data “suggest that people are wasting their time avoiding spoilers” to intuitions about spoilers just being flat-out “wrong”. I will agree that people are most definitely wasting their time by avoiding spoilers. I would just also add that, well, that waste is probably the entire point.

References: Leavitt, J.D., & Christenfeld, N.J. (2011). Story spoilers don't spoil stories. Psychological Science, 22(9), 1152-1154. PMID: 21841150

Zahavi, A. (1975). Mate selection – a selection for a handicap. Journal of Theoretical Biology, 53, 205-214.

Two Fallacies From Feminists

Being that it's summer, I've decided to pretend I'm going to kick back from work once more for a bit and write about a more leisurely subject. The last time I took a break for some philosophical play, the topic was Tucker Max's failed donation to Planned Parenthood. To recap that debacle, many people were so put off by Tucker's behavior and views that they suggested Planned Parenthood accepting his money ($500,000) and putting his name on a clinic would be too terrible to contemplate. Today, I'll be examining two fallacies that likely come from a largely-overlapping set of people: those who consider themselves feminists. While I have no idea how common these views are among the general population, or even among feminists themselves, they've crossed my field of vision enough times to warrant a discussion. It's worth noting up front that these lines of reasoning are by no means limited strictly to feminists; they just come to us from feminists in these instances. Also, I like the alliteration that singling out that group brings in this case. So, without any further ado, let's dive right in with our first fallacy.

Exhibit A: Colorful backgrounds do not a good argument make.

For those of you not in the know, the above meme is known as the "Critical Feminist Corgi". The sentiment expressed by it – if you believe in equal rights, then you're a feminist – has been routinely expressed by many others. Perhaps the most notable instance of the expression is the ever-quotable "feminism is the radical notion that women are people", but it comes in more than one flavor. The first clear issue with the view expressed here is reality. One doesn't have to look very far to find people who do not think men can be feminists. Feminist allies, maybe, but not true feminists; that label is reserved strictly for women, since it is a "woman's movement". If feminism were simply a synonym for a belief in equal rights or the notion that women are people, then the fact that this disagreement even exists seems rather strange. In fact, were feminism a synonym for a belief in equal rights, one would need to conclude that anyone who doesn't think men can be feminists cannot be a feminist themselves (in much the same way that someone who believes in a god cannot also be an atheist; it's simply definitional). If those who feel men cannot be feminists can themselves still be considered feminists (perhaps some off-brand feminist, but feminist nonetheless), then it would seem clear that the equal-rights definition can't be right.

A second issue with this line of reasoning is more philosophical in nature. Let's use the structure of the corgi quote, but replace the specifics: if you believe in personal freedom, then you are a Republican. Here, the problems become apparent more readily. First, a belief in freedom is neither necessary nor sufficient for calling oneself a Republican (unlike the previous atheist example, where a lack of belief is both necessary and sufficient). Second, the belief itself is massively underspecified: the boundary conditions on what "freedom" refers to are so vague that the statement becomes all but meaningless. The same notions can be said to apply to the feminism meme: a belief in equal rights is apparently neither necessary nor sufficient, and what "equal rights" means depends on who you ask and what you ask about. Finally, and most importantly, the labels "Republican" and "Feminist" appear to represent approximate group identifications, not a single belief or goal, let alone a number of them. The meme attempts to blur the line between a belief (like atheism) and a group identification (some atheist movement; perhaps the Atheism+ people, who routinely try to blur such lines).

That certainly does raise the question of why people would try to blur that line, as well as why people would resist the blurring. I feel the former can be explained in a similar manner to why a cat's threat display involves puffed-up fur and an arched back: it's an attempt to look larger and more intimidating than one actually is. All else being equal, aggressing against a larger or more powerful individual is costlier than the same aggression directed towards a less-intimidating one. Accordingly, it would seem to follow that aggressing against larger alliances is costlier than aggressing against smaller ones. So, being able to suggest that approximately 62% of people are feminists makes a big difference, relative to suggesting that only 19% of people independently adopt the label. Of course, the 43% of people who didn't initially identify as feminists might take some issue with their social support being co-opted: it forces an association upon them that may be detrimental to their interests. Further still, some within the feminist camp might also wish that others would not adopt the label, for related reasons. The more feminists there are, the less social status can be derived from the label. If, for instance, feminism were defined as the belief that women are people, then pretty much every single person would be a feminist, and being a feminist wouldn't tell you much about a person. The signal value of the label gets weakened, and the specific goals of certain feminists might become harder to achieve amongst the sea of new voices. This interaction between relative status within a group and signal value may well help us understand the contexts in which this blurring behavior should be expected to be deployed and resisted.

Exhibit B: Humor does not a good argument make either.

The second fallacy comes to us from Saturday Night Live, though they were hardly the innovators of this line of thought. The underlying idea seems to be that men and women have different and relatively non-overlapping sets of best interests, and that men are only willing to support things that personally inconvenience them. Abortion falls on the female side of those interests, naturally. Again, this argument falters on both the fronts of reality and philosophy, but I'll take them in reverse order this time. The philosophical fallacy being committed here is known as the ecological fallacy, in which, essentially, each individual is viewed as a small representative of the larger group to which they belong. An easy example is the classic one about height: just because men are taller than women on average, it does not follow that any given male you pull from the population will be taller than any given female. A more complicated example could involve IQ. Let's say you tested a number of men and women on an IQ test and found that men, on average, performed better. That gap, however, may be due to some particularly well-performing outlier males. If that's the case, the "average" man may actually score worse than the "average" woman by and large, but the skewed group distribution tells a different story.
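To make that outlier scenario concrete, here is a minimal simulation with made-up numbers (they are not real IQ data; the point is only the shape of the distributions): a small high-scoring tail gives one group the higher mean even though its typical member scores lower.

```python
# Illustration of the outlier version of the ecological fallacy: group A's mean
# is pulled up by a 5% tail of very high scorers, yet its median (typical) member
# scores below the typical member of group B. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Group A: mostly centred at 98, plus a small tail of very high scorers.
bulk     = rng.normal(98, 15, size=int(n * 0.95))
outliers = rng.normal(160, 10, size=int(n * 0.05))
group_a  = np.concatenate([bulk, outliers])

# Group B: centred at 100, no unusual tail.
group_b = rng.normal(100, 15, size=n)

print(group_a.mean(), group_b.mean())          # A's mean is higher (~101 vs ~100)
print(np.median(group_a), np.median(group_b))  # ...but A's median is lower (~99 vs ~100)
```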

Now, on to the reality issues. When it comes to the question of whether gender is the metaphorical horse pulling the cart of abortion views, the answer is "no". In terms of explaining the variance in support for abortion, gender has very little to do with it, with approximately equal numbers of men and women supporting and opposing it. A variable that seems to do a much better job of explaining the variance in views towards abortion is sexual strategy: whether one is more interested in short-term or long-term sexual relationships. Those pursuing the more short-term strategy are less interested in investing in relationships and their associated costs – like the burdens of pregnancy – and accordingly tend to favor policies and practices that reduce said costs, like available contraceptives and abortion. However, those playing a more long-term strategy are faced with a problem: if the costs of sex are sufficiently low and people are more promiscuous because of that, the value of long-term relationships declines. This leads those attempting to invest in long-term strategies to support policies and practices that make promiscuity costlier, such as outlawing abortion and making contraceptives difficult to obtain. To the extent that gender can predict views on abortion (which is not very well to begin with), that connection is likely driven predominantly by other variables not exclusive to gender.

We are again faced with the matter of why these fallacies are committed. My feeling is that the tactic being used here is, as before, the manipulation of association values. By attempting to turn abortion into a gendered issue – one which benefits women, no less – the message being sent is that if you oppose abortion, you also oppose most women. In essence, it attempts to make opposition to abortion appear to be a more powerfully negative signal: it's not just that you don't favor abortion; it's that you also hate women. The often-unappreciated irony of this tactic is that it serves, at least in part, to discredit the idea that we live in a deeply misogynistic society biased against women. If the message here is that being a misogynist is bad for your reputation – which it would seem to be – that state of affairs would only hold in a society where the majority of people are, in fact, opposed to misogyny. To use a sports analogy, being a Yankees fan is generally tolerated or celebrated in New York. If that same fan travels to Boston, however, their fandom might become a distinct cost, as not only are most people there not Yankees fans, but many actively despise their baseball rivals. The appropriateness and value of an attitude depend heavily on one's social context. So, if the implication that one is a misogynist is negative, that tells you something important about the values of the wider culture in which the accusation is made.

Unlike that degree in women’s studies.

I suppose the positive message to take from all this is that attitudes towards women aren't nearly as negative as some feminists make them out to be. People tend to believe in equality – in the vague sense, anyway – whether or not they consider themselves feminists, and misogyny – again, in the vague sense – is considered a bad thing. However, if perceptions about those things are open to manipulation, and if those perceptions can be used to persuade people to help you achieve your personal goals, we ought to expect people – feminist and non-feminist alike – to try to take advantage of that state of affairs. The point of these arguments, so to speak, is to be persuasive, not to be accurate (Mercier & Sperber, 2011). Accuracy helps only insofar as it's easier to persuade people of true things than of false ones.

References: Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57-74. DOI: 10.1017/S0140525X10000968

Why Psychology 101 Should Be Evolutionary Psychology

In two recent posts, I have referenced a relatively-average psychologist (again, this psychologist need not bear any resemblance to any particular person, living or dead). I found this relatively-average psychologist to be severely handicapped in their ability to think about psychology – human and non-human alike – because they lacked a theoretical framework for doing so. What this psychologist knows about one topic, such as self-esteem, doesn't, by and large, help them think about any other topic which is not self-esteem. Even if this psychologist managed to be an expert on the voluminous literature on that subject, it would probably not tell them much about, say, learning or sexual behavior (save the few times those topics directly overlapped as measured or correlated variables). The problem became magnified when topics shifted outside of humans into other species. Accordingly, I find the idea of teaching students an evolutionary framework to be more important than teaching them about any particular topic within psychology. Today, I want to consider a paper from one of my favorite side-interests: Darwinian medicine – the application of evolutionary theory to understanding disease. I feel this paper will serve as a fine example for driving the point home.

As opposed to continuing to drive with psychology as it usually does.

The paper, by Smallegange et al. (2013), examined malarial transmission between humans and mosquitoes. Malaria is a vector-borne parasite, meaning that it travels from host to host by means of an intermediate source. That intermediate source is known as a vector and, in this case, the vector is the biting mosquito. Humans are infected with malaria by mosquitoes and the parasite reproduces in its human host; that host is subsequently bitten by other mosquitoes, which transmit some of the new parasites to future hosts. One nasty side-effect of vector-borne diseases is that they don't require their hosts to be mobile in order to spread. In the case of other parasites like, say, HIV, the host needs to be active in order to spread the disease to others, so the parasite has a vested interest in not killing or debilitating its host too rapidly. If the disease is spread through mosquito bites, on the other hand, the host doesn't need to be moving to spread it. In fact, it might even be better – from the point of view of the parasite – if the host were relatively disabled; it's harder to defend against mosquitoes if one is unable to swat them away. Accordingly, malaria (along with other vector-borne diseases) ends up being a rather nasty killer.

Since malaria is transmitted from human to human by way of mosquito bites, it stands to reason that the malaria parasite would prefer, so to speak, that mosquitoes preferentially target humans as food sources: more bites equal more chances to spread. The problem, from the malaria's perspective, is that mosquitoes might not be as inclined to preferentially feed on humans as the malaria would like. So, if the malaria parasite could alter the mosquitoes' behavior in some way – assisting its spread by making the mosquitoes preferentially target humans – this would be highly adaptive from the malaria's point of view. To test whether the malaria parasites do so, Smallegange et al. (2013) collected human odor samples using a nylon matrix. This matrix, along with a control matrix, was presented to caged mosquitoes, and the researchers measured how frequently the mosquitoes – either infected with malaria or not – landed on each. The results showed that mosquitoes, whether infected or uninfected, didn't seem particularly interested in the control matrix. When it came to the human odor matrix, however, the mosquitoes infected with malaria were substantially more likely to land on it and attempt to probe it than the non-infected ones (the human odor matrix received about four times the attention from infected mosquitoes that it did from the uninfected).

While this result is pretty neat, what can it tell us about the field of psychology? For starters, in order to alter mosquito behavior, the malaria parasite would need to do so via some aspect of the mosquitoes' psychology. One could imagine a mosquito infected with malaria suddenly feeling overcome with the urge to have human for dinner (if it is proper to talk about mosquitoes having such experiences, that is) without having the faintest idea why. A mosquito psychologist, unaware of the infection-behavior link, might posit that preferences for food sources naturally vary along a continuum in mosquitoes, and that there's nothing particularly strange about mosquitoes that seem to favor humans excessively; it's just part of normal mosquito variation (the parallels to human sexual orientation seem apparent, in some respects). This mosquito psychologist might also suggest that there was something in mosquito culture that made some mosquitoes more likely to seek out humans. Maybe the mosquitoes that prefer humans were insecurely attached to their mothers. Maybe they have particularly high self-esteem. We know such explanations are likely wrong – it seems to be the malaria driving the behavior here – but without reference to evolutionary theory and an understanding of pathogen-host relationships, our mosquito psychologists would be at a relative loss to understand what's going on.

Perhaps mosquitoes are just deficient in empathy towards human hosts and should go vegan.

What this example boils down to (for my purposes here, anyway) is that thinking about the function(s) of behavior – and of psychology by extension – helps us understand it immensely. Imagine mosquito psychologists who insisted on not "limiting themselves" to evolutionary theory for understanding what they're studying. They might have a hard time understanding food preferences and aversions (like, say, pregnancy-related ones) in general, much less variations in them. The same would seem likely to hold for sexual behavior and preferences. Mosquito doctors who failed to try to understand function might occasionally (or frequently) try to "treat" natural bodily defense mechanisms against infections and toxins (like, say, by reducing fever or pregnancy sickness, respectively) and inadvertently end up harming their patients. Mosquito-human-preference advocates might declare the malaria hypothesis purporting to explain their behavior insulting, morally offensive, and not worthy of consideration. After all, if it were true, the preferences might be alterable by treating some infection, resulting in the loss of some part of their rich and varied culture.

If, however, doctors and psychologists were trained to think about evolved functions from day one, some of these issues might be avoidable. Someone versed in evolutionary theory could quickly see the relevance of findings in one field to the other. The doctors would be able to consider findings from psychology, and the psychologists findings from medicine, because both would be working within the same conceptual framework, playing by the same rules. On top of that, the psychologists would be better able to communicate with each other, picking out possible errors or strengths in each other's research projects, as well as making additions, without having to be experts in those fields first (though that certainly wouldn't hurt). A perspective that offers satisfactory explanations within a discipline and between disciplines, tying them all together, is far more valuable than any set of findings within those fields. It's more interesting, too, especially when considered against the islands-of-findings model that currently seems to predominate in the teaching of psychology. At this point, I feel those who would make a case for not starting with evolutionary theory ought to be burdened by, well, making that case and making it forcefully. That we currently don't start teaching psychology with evolution is, to my mind, no argument for continuing not to do so.

References: Smallegange, R., van Gemert, G., van de Vegte-Bolmer, M., Gezan, S., Takken, W., Sauerwein, R., & Logan, J. (2013). Malaria infected mosquitoes express enhanced attraction to human odor. PLoS ONE, 8(5). DOI: 10.1371/journal.pone.0063602