Why Psychology 101 Should Be Evolutionary Psychology

In two recent posts, I have referenced a relatively-average psychologist (again, this psychologist need not bear any resemblance to any particular person, living or dead). I found this relatively-average psychologist to be severely handicapped in their ability to think about psychology – human and non-human psychology alike – because they lacked a theoretical framework for doing so. What this psychologist knows about one topic, such as self-esteem, doesn’t help this psychologist think about any other topic which is not self-esteem, by and large. Even if this psychologist managed to be an expert on the voluminous literature on the subject, it would probably not tell them much about, say, learning, or sexual behavior (save the few times where those topics directly overlapped as measured or correlated variables). The problem became magnified when topics shifted outside of humans into other species. Accordingly, I find the idea of teaching students an evolutionary framework to be more important than teaching them about any particular topic within psychology. Today, I want to consider a paper from one of my favorite side-interests: Darwinian Medicine – the application of evolutionary theory to understanding diseases. I feel this paper will serve as a fine example for driving the point home.

As opposed to continuing to drive with psychology as it usually does.

The paper, by Smallegange et al (2013), examined malarial transmission between humans and mosquitoes. Malaria is a vector-borne parasite, meaning that it travels from host to host by means of an intermediate source. The source by which the disease is spread is known as a vector and, in this case, that vector is the mosquito. Humans are infected with malaria by mosquito bites, and the malaria reproduces in its human host. That host is subsequently bitten by other mosquitoes, which transmit some of the new parasites to future hosts. One nasty side-effect of vector-borne diseases is that they don’t require their hosts to be mobile to spread. In the case of other pathogens like, say, HIV, the host needs to be active in order to spread the disease to others, so the pathogen has a vested interest in not killing or debilitating its host too rapidly. On the other hand, if the disease is spread through mosquito bites, the host doesn’t need to be moving to spread it. In fact, it might even be better – from the point of view of the parasite – if the host was relatively disabled; it’s harder to defend against mosquitoes if one is unable to swat them away. Accordingly, malaria (along with other vector-borne diseases) ends up being a rather nasty killer.

Since malaria is transmitted from human to human by way of mosquito bites, it would stand to reason that the malaria parasites would prefer, so to speak, that mosquitoes preferentially target humans as food sources: more bites equals more chances to spread. The problem, from the malaria’s perspective, is that mosquitoes might not be as inclined to preferentially feed on humans as the malaria would like. So, if the malaria parasite could alter the mosquitoes’ behavior in some way, making them preferentially target humans and thereby assisting in its spread, this would be highly adaptive from the malaria’s point of view. In order to test whether the malaria parasites did so, Smallegange et al (2013) collected some human odor samples using a nylon matrix. This matrix, along with a control matrix, was presented to caged mosquitoes, and the researchers measured how frequently the mosquitoes – either infected with malaria or not – landed on each. The results showed that mosquitoes, whether infected or uninfected, didn’t seem particularly interested in the control matrix. When it came to the human odor matrix, however, the mosquitoes infected with malaria were substantially more likely to land on it and attempt to probe it than the non-infected ones (the human odor matrix received about four times the attention from infected mosquitoes that it did from the uninfected).

While this result is pretty neat, what can it tell us about the field of psychology? For starters, in order to alter mosquito behavior, the malaria parasite would need to do so via some aspect of the mosquitoes’ psychology. One could imagine a mosquito infected with malaria suddenly feeling overcome with the urge to have human for dinner (if it is proper to talk about mosquitoes having similar experiences, that is) without having the faintest idea why. A mosquito psychologist, unaware of the infection-behavior link, might posit that preferences for food sources naturally vary along a continuum in mosquitoes, and there’s nothing particularly strange about mosquitoes that seem to favor humans excessively; it’s just part of normal mosquito variation (the parallels to human sexual orientation seem to be apparent, in some respects). This mosquito psychologist might also suggest that there was something present in mosquito culture that made some mosquitoes more likely to seek out humans. Maybe the mosquitoes that prefer humans were insecurely attached to their mothers. Maybe they have particularly high self-esteem. We know such explanations are likely wrong – it seems to be the malaria driving the behavior here – but without reference to evolutionary theory and an understanding of pathogen-host relationships, our mosquito psychologists would be relatively at a loss to understand what’s going on.

Perhaps mosquitoes are just deficient in empathy towards human hosts and should go vegan.

What this example boils down to (for my purposes here, anyway) is that thinking about the function(s) of behavior – and of psychology by extension – helps us understand it immensely. Imagine mosquito psychologists who insisted on not “limiting themselves” to evolutionary theory for understanding what they’re studying. They might have a hard time understanding food preferences and aversions (like, say, pregnancy-related ones) in general, much less variations in them. The same would seem probable to hold for sexual behavior and preferences. Mosquito doctors who failed to try to understand function might occasionally (or frequently) try to “treat” natural bodily defense mechanisms against infections and toxins (like, say, reducing fever or pregnancy sickness, respectively) and end up inadvertently harming their patients. Mosquito-human-preference advocates might deem the malaria hypothesis purporting to explain their behavior insulting, morally offensive, and not worthy of consideration. After all, if it were true, preferences might be alterable by treating some infection, resulting in a loss of some part of their rich and varied culture.

If, however, doctors and psychologists were trained to think about evolved functions from day one, some of these issues might be avoidable. Someone versed in evolutionary theory could quickly grasp the relevance of findings across the two fields. The doctors would be able to consider findings from psychology, and psychologists findings from doctors, because they were both working within the same conceptual framework; playing by the same rules. On top of that, the psychologists would be better able to communicate with each other, picking out possible errors or strengths in each others’ research projects, as well as making additions, without having to be experts in those fields first (though it certainly wouldn’t hurt). A perspective that offers satisfactory explanations within a discipline and between disciplines, tying them all together, is far more valuable than any set of findings within those fields. It’s more interesting too, especially when considered against the islands-of-findings model that currently seems to predominate in the teaching of psychology. At this point, I feel those who would make a case for not starting with evolutionary theory ought to be burdened by, well, making that case and making it forcefully. That we currently don’t start teaching psychology with evolution is, in my mind, no argument to continue not doing so.

References: Smallegange, R., van Gemert, G., van de Vegte-Bolmer, M., Gezan, S., Takken, W., Sauerwein, R., & Logan, J. (2013). Malaria infected mosquitoes express enhanced attraction to human odor. PLoS ONE, 8(5). DOI: 10.1371/journal.pone.0063602

Moral Outrage At Disney World

A few months ago, I took a trip down to Florida. I happen to know people who work for both Disney and Universal and, as a result, ended up getting to experience the parks for free. Prior to my visit to Disney, however, my future benefactor for that day, who worked for the company, had been injured during a performance. Because of the injury to his leg, his ability to walk around the park and stand in the long lines was understandably compromised. However, Disney happened to have a policy that sends disabled people – along with their parties – to the front of the line in recognition of that issue. While the leg injury was no doubt a cost to the person who got me into the park, it ended up being a bonus for our visit. Rather than needing to wait in lines for upwards of two hours, we were able to basically stroll to the front of every ride we wanted, and saw most of the park’s attractions in no time at all. Overall, having a disabled person in our party made the experience all the better – minus pushing around the unwieldy wheelchair, that is. A recent article in the NY Post pointed out that such a policy is, apparently, open to exploitation: some wealthy individuals are “renting” the services of disabled people who act as “tour guides” for families at Disney. For a respectable $130 an hour, disabled individuals will join families to help them cut to the front of the lines.

“How much an hour? Alright; I’m in. Just aim for the left leg…”

This article has garnered a significant amount of attention, but something about the reactions to it seems a bit strange. The reactions take one of three basic forms: (1) “I wish I had thought of that”, (2) “I don’t see the big deal”, and (3) “The rich people are morally condemnable for doing this”. Those reactions themselves, however, are not the strange part. The first response appears to be an acknowledgement that people would want to gain the benefits of skipping the long lines at Disney by exploiting some loophole in the rules. I found the experience of avoiding the long lines to be rather refreshing, and I imagine the majority of people would prefer not having to wait. Cheating pays off, and when people can cheat safely, many seem to prefer it. The second type of response acknowledges that the disabled “tour guides” are making a nice amount of money in exchange for their services, with both the rich buyers and disabled sellers ending up better off than they were before. The rich people save some money relative to buying VIP passes that allow for a similar type of line-skipping ability, get to skip the lines more efficiently, and the disabled people are much better off at the end of the day after being paid over $1,000 to go to Disney.

However, not everyone is better off from that trade: those who now have to wait in line several extra seconds per disabled party are worse off. Understandably, many people experience moral outrage at the thought of the rich line-jumpers’ rule exploitation. The curious part is the moral outrage I did not see much of: outrage directed at the disabled people similarly exploiting the system for their own benefit. It would seem the disabled people selling their services are fully aware of what they’re doing and are engaging in the exploitative behavior intentionally, so why is there not much (if any) moral anger directed their way? By way of analogy, let’s say I wanted to kill someone, but I didn’t want to take the risk of being the one who pulled the trigger myself. So, instead of killing the person, I hired a contract killer to do the job instead. In the event the plot was uncovered, not only would the killer go to jail, but I would likely share their fate as well on a charge of conspiracy (as the vocalist for As I Lay Dying is all too familiar with at the moment). As I’ve discussed before, moral judgments are by no means restricted to the person who committed an act themselves; friends or allies may also suffer as a result of their mere perceived association. Assisting someone in committing a crime is oftentimes considered morally blameworthy, so why not here?

This raises the question as to why we see these patterns of inconsistent condemnation. Solving that problem requires both identifying one or more factors that differ between the contract-killer cases and the disabled-Disney cases, and explaining why that difference is relevant. One possible difference could be the income disparity between the parties. There seems to be a fair share of anger leveled at the richer among us, sometimes giving the impression that distaste for the rich can be built on the basis of their wealth alone. To the extent that the disabled individuals are perceived to be poor or in need of the money, this might soften any condemnation they would face. This factor is unlikely to get us all the way to where we want to go on its own, though: I don’t think people would be terribly forgiving of a contract killer who just happened to be poor and was only killing because they really needed (or really wanted) the money. Further, there’s no evidence presented in the coverage of the article to suggest that the disabled people serving as tour guides were distinctly poor.

Especially when they can make more in a day than I do in two weeks.

Wealth, in the form of money, however, only serves as a proxy measure of some other characteristic that people might wish to assess: that characteristic is likely the perception of need. Why might people care about the neediness of others? All else being equal, people who are perceived as being in need might make more valuable social investments than people whose needs appear relatively satiated. For instance, someone who hasn’t eaten in a day will value the same amount of food more than someone who just finished a large meal and, accordingly, the hungry individual may well be more grateful to the person who provides them with a meal; the meal has a higher marginal value to the hungry individual, so the investment value of it is likely higher as well. So, in this case, disabled individuals may be viewed as more needy than the rich, making them the more valuable potential social investment of the two. While this explanation has a certain degree of plausibility to it, there are some complicating factors. One of those issues is that one needs to not only be grateful for the help, but also capable of returning the favor at a later time in order for the investment to pay off. Though disabled people might be viewed as particularly needy, my intuition is that they’re also viewed as less likely to be able to reciprocate the assistance for the same reasons. Similarly, while the rich people may be judged as less needy, they’d also be viewed as more likely to be able to return on an investment given. The extent to which the need and ability issues trade off with each other and affect the judgments, I can’t definitively say.

Another possible difference between contract killers and disabled guides concerns the nature of the rules themselves. Perhaps since killing for money is generally frowned upon morally, but cutting the line if you’re disabled isn’t, people don’t register the disabled person as breaking a moral rule; only the rich one. Again, this explanation might hold some degree of plausibility, but it only gets us so far. After all, the disabled people are most certainly willing accomplices in helping the rich break the moral rule. Without the help of the disabled, the rich individuals would be unable to exploit Disney’s policy and bypass the lines. Further still, Disney’s policy does allow for the disabled individuals to bring up to six guests with them. No part of the rule seems to say that those guests need to be family members, or even people the disabled individual likes; just up to six guests. While such an act may feel like it is breaking some part of the rule, it’s difficult to say precisely what part is being broken. Was I breaking the rule when my friend took me to Disney with him and we skipped the lines because he was injured? How do those cases differ? In any case, while this rule-based explanation might explain why people are more morally upset with the rich people than the disabled people, it would not explain why people, by and large, don’t seem upset with the disabled tour guides to any real extent. One could well be more upset with the people who hire killers than the contract killers themselves, but still morally condemn both parties substantially.

There is also the matter of how to deal with the issue if one wishes to condemn it. If one thinks the rule allowing disabled people to skip to the front of the line should be done away with entirely, doing so would certainly stem the issue of the rich hiring the disabled, but it would also come at a cost to the disabled people who aren’t giving the tours. Alternatively, one might wish to keep the rule helping disabled people around, but stop the rich from exploiting it. If some method could successfully prevent the rich from hiring the disabled, one also needs to realize that it would come at a distinct cost to the disabled tour guides as well: now, instead of having the option to earn a fantastic salary for a cushy job, that possibility will be foreclosed on. Rich tourists would instead have to spend more money on the inferior VIP tours offered by Disney, allowing them to still cut the line but without also directing the money towards the disabled. Since the policy at Disney seemed to have been put in place to benefit the disabled in the first place, combating the issue seems like something of a double-edged sword for them.

Incidentally, double-edged swords are also a leading cause of disability.

The tour guide issue, in some important ways, seems to parallel the moral rules surrounding sex: one is free to give away sex to whomever one wants, but one is often morally condemned for selling it. Similarly, disabled people can go to the parks with whomever they want and get them to the front of the line (even if those friends or family members are exploiting them for that benefit), but if they go to the parks with someone buying their services, it then becomes morally unacceptable. I find that very interesting. Unfortunately, I don’t have more than speculations as to this curious pattern of moral condemnation at the current time. What one can say about the judgments and their justifications is that, at the least, they tend towards being relatively inconsistent. This ought to be expected on the basis of morality being deployed strategically to achieve useful outcomes, rather than consistently to achieve impartiality. Thinking about the possible functions of moral judgments – that is, what useful outcomes they might be designed to bring about – can help us begin to think about what factors the cognitive mechanisms that generate them are using as inputs. It can also help us figure out why it’s morally unacceptable to sell some things that it’s acceptable to give away.

Welcome To Introduction To Psychology

In my last post, I mentioned a hypothetical relatively-average psychologist (caveat: the term doesn’t necessarily apply to any specific person, living or dead). I found him to be a bit strange, since he tended to come up with hypotheses that were relatively theory-free; there was no underlying conceptual framework he was using to draw his hypotheses. Instead, most of his research was based on some hunch or personal experience. Perhaps this relatively-average psychologist might also have made predictions on the basis of what previous research had found. For instance, if one relatively-average psychologist found that priming people to think about the elderly made them walk marginally slower, another relatively-average psychologist might predict that priming people to think of a professor would make them marginally smarter. I posited that these relatively-average psychologists might run into some issues when it comes to evaluating published research because, without a theoretical framework with which to understand the findings, all one can really consider are the statistics; without a framework, relatively-average psychologists have a harder time thinking about why some finding might make sense or not.

If you’re not willing to properly frame something, it’s probably not wall-worthy.

So, if a population of these relatively-average psychologists is looking to evaluate research, what are they supposed to evaluate it against? I suppose they could check and see if the results of some paper jibe with their set of personal experiences, hunches, or knowledge of previous research, but that seems to be a bit dissatisfying. Those kinds of practices would seem to make evaluations of research look more like Justice Stewart trying to define pornography: “I know [good research] when I see it”. Perhaps good research would involve projects that delivered results highly consistent with people’s general experiences; perhaps good research would be a project that found highly counter-intuitive or surprising results; perhaps good research would be something else still. In any case, such a practice – if widespread enough – would make the field of psychology look like a grab bag of seemingly scattered and random findings. Learning how to think about one topic in psychology (say, priming) wouldn’t be very helpful when it came to learning how to think about another topic (say, learning). That’s not to say that the relatively-average psychologists have nothing helpful at all to add, mind you; just that their additions aren’t being driven by anything other than those same initial considerations, such as hunches or personal experience. Sometimes people have good guesses; in the land of psychology, however, it can be difficult to differentiate between good and bad ones a priori in many cases.

It seems like topic-to-topic issues would be hard enough for our relatively-average psychologists to deal with, but the problem becomes magnified once topics shift outside of what’s typical for one’s local culture, and further still when topics shift outside of one’s species. Sure; maybe male birds will abandon a female partner after a mating season if the pair is unable to produce any eggs because the male birds feel a threat to their masculinity that they defend against by reasserting their virility elsewhere. On the flip side, maybe female birds leave the pair because their sense of intrinsic motivation was undermined by the extrinsic reward of a clutch of eggs. Maybe male ducks force copulations on seemingly unwilling female ducks because male ducks use rape as a tactic to keep female ducks socially subordinate and afraid. Maybe female elephant seals aren’t as combative as their male counterparts because of sexist elephant seal culture. Then again, maybe female elephant seals don’t fight as much as males because of their locus of control or stereotype threat. Maybe all of that is true, but my prior on such ideas is that they’re unlikely to hold much explanatory water. Applied to non-human species, their conceptual issues pop out a bit more clearly. Your relatively-average psychologist, then, ends up being rather human-centric, if not a little culture- and topic-centric as well. Their focus is on what’s familiar to them, largely because what they know doesn’t help them think much about what they do not.

So let’s say that our relatively-average psychologist has been tasked with designing a college-level introduction to psychology course. This course will be the first time many of the students are being formally exposed to psychology; for the non-psychology majors in the class, it may also be their last time. This limits what the course is capable of doing, in several regards, as there isn’t much information you can take for granted. The problems don’t end there, however: the students, having a less-than-perfect memory, will generally forget many, if not the majority, of the specifics they will be taught. Further, students may never again in their life encounter the topics they learned about in the intro course, even if they do retain the knowledge about them. If you’re like most of the population, knowing the structure of a neuron or who William James was will probably never come up in any meaningful way unless you find yourself at a trivia night (and even then it’s pretty iffy). Given these constraints, how is our relatively-average psychologist supposed to give their students an education of value? Our relatively-average psychologist could just keep pouring information out, hoping some of it sticks and is relevant later. They could also focus on some specific topics, boosting retention, but at the cost of breadth and, accordingly, chance of possible relevance. They could even try to focus on a series of counter-intuitive findings in the hopes of totally blowing their students’ minds (to encourage students’ motivation to show up and stay awake), or perhaps some intended to push a certain social agenda – they might not learn much about psychology, but at least they’ll have some talking points for the next debate they find themselves in. 
Our relatively-average psychologist could do all that, but what they can’t seem to do well is help students learn how to think about psychology; even if the information is retained, relevant, and interesting, it might not be applicable to any other topics not directly addressed.

“Excuse me, professor: how will classical conditioning help me get laid?”

I happen to feel that we can do better than our relatively-average psychologists when designing psychology courses – especially introductory-level ones. If we can successfully provide students with a framework for thinking about psychology, we don’t necessarily have to concern ourselves with whether one topic or another was covered or whether students remember some specific list of research findings, as such a framework can be applied to any topic they may subsequently encounter. Seeing how findings “fit” into something bigger will also make the class seem that much more interesting. Granted, covering more topics in the same amount of depth is generally preferable to covering fewer, but there are very real time constraints to consider. With that limited time, I feel that giving students tools for thinking about psychological material is more valuable than providing them findings within various areas of psychology. Specific topics or findings within psychology should be used predominantly as vehicles for getting students to understand that framework; trying to do things the other way around simply isn’t viable. This will not come as a surprise to any regular reader, but the framework that I feel we ought to be teaching students is the functionalist perspective guided by an understanding of evolution by natural selection. Teaching students how to ask and evaluate questions of “what is this designed to do?” is a far more valuable skill than teaching them about who Freud was or some finding that failed to replicate but is still found in the introductory textbooks.

On that front, there is reason to be both optimistic and disappointed. According to a fairly exhaustive review of introductory psychology textbooks available from 1975 to 2004 (Cornwell et al, 2005), evolutionary psychology has been gaining greater and more accurate representation: whereas the topic was almost non-existent in 1975, in the 2000s, approximately 80% of all introductory texts discussed the subject at some point. Further, the tone that the books take towards the subject has become more neutral or positive as well, with approximately 70% of textbooks treating the topic as such. My enthusiasm about the evolutionary perspective’s representation is dampened somewhat by a few other complicating factors, however. First, many of the textbooks analyzed contained inaccurate information when the topic was covered (approximately half of them overall, and the vast majority of the more recent texts that were considered, even if those inaccuracies appear to have become more subtle over the years). Another concern is that, even when representations of evolutionary psychology were present within the textbooks, the discussion of the topic appeared relatively confined. Specifically, it didn’t appear that many important concepts (like kin selection or parental investment theory) received more than one or two paragraphs on average, if they even got that much space. In fact, the only topic that received much coverage seemed to be David Buss’s work on mating strategies; his citation count alone was greater than that of all other authors within evolutionary psychology combined. As Cornwell et al (2005) put it:

These data are troubling when one considers undergraduates might conclude that EP is mainly a science of mating strategies studied by David Buss. (p. 366)

So, the good news is that introductory psychology books are acknowledging that evolutionary psychology exists in greater and greater numbers. The field is also less likely to be harshly criticized for being something it isn’t (like genetic determinism). That’s progress. The bad news is that this information is, like many topics in introductory books appear to be, cursory, often inaccurate in at least some regards, and largely restricted to the work of one researcher within the field. Though Cornwell et al (2005) don’t specifically mention it, another factor to consider is where the information is presented within the texts. Though I have no data on hand beyond my personal sample of introductory books I’ve seen in recent years (I’d put that number around a dozen or so), evolutionary psychology is generally found somewhere in the middle of the book when it is found at all (remember, approximately 1-in-5 texts didn’t seem to even acknowledge the topic). Rather than being presented as a framework that can help students understand any topic within psychology, it seems to be presented as just another island within psychology. In other words, it doesn’t tend to stand out.

So not exactly the portrayal I had hoped for…

Now, I have heard some people who aren’t exactly fans (though not necessarily opponents, either) of evolutionary psychology suggest that we wouldn’t want to prematurely close off any alternative avenues of theoretical understanding in favor of evolutionary psychology. The sentiment seems to suggest that we really ought to be treating evolutionary psychology as just another lonely island in the ocean of psychology. Of course, I would agree in the abstract: we wouldn’t want to prematurely foreclose on any alternative theoretical frameworks. If a perspective existed that was demonstrably better than evolution by natural selection and the functionalist view in some regards – perhaps for accounting for the data, understanding it, and generating predictions – I’d be happy to make use of it. I’m trying to further my academic career as much as the next person, and good theory can go a long way. However, psychology, as a field, has had about 150 years with which to come up with anything resembling a viable alternative theoretical framework – or really, a framework at all that goes beyond description – and seems to have resoundingly failed at that task. Perhaps that shouldn’t be surprising, since evolution is currently the only good theory we have for explaining complex biological design, and psychology is biology. So, sure, I’m on board with not foreclosing on alternative ideas, just as soon as those alternatives can be said to exist.

References: Cornwell, R., Palmer, C., Guinther, P., & Davis, H. (2005). Introductory Psychology Texts as a View of Sociobiology/Evolutionary Psychology’s Role in Psychology. Evolutionary Psychology, 3, 355-374.

I Find Your Lack Of Theory (And Replications) Disturbing

Let’s say you find yourself in charge of a group of children. Since you’re a relatively-average psychologist, you have a relatively strange hypothesis you want to test: you want to see whether wearing a red shirt will make children better at dodge ball. You happen to think that it will. I say this hypothesis is strange because you derived it from, basically, nothing; it’s just a hunch. Little more than a “wouldn’t it be cool if it were true?” idea. In any case, you want to run a test of your hypothesis. You begin by lining the students up, then you walk past them and count aloud: “1, 2, 1, 2, 1…”. All the children with a “1” go and put on a red shirt and are on a team together; all the children with a “2” go and pick a new shirt to put on from a pile of non-red shirts. They serve as your control group. The two teams then play each other in a round of dodge ball. The team wearing the red shirts comes out victorious. In fact, they win by a substantial margin. This must mean that wearing the red shirts made students better at dodge ball, right? Well, since you’re a relatively-average psychologist, you would probably conclude that, yes, the red shirts clearly have some effect. Sure, your conclusion is, at the very least, hasty and likely wrong, but you are only an average psychologist: we can’t set the bar too high.

“Jump was successful (p < 0.05)”

A critical evaluation of the research could note that just because the children were randomly assigned to groups, it doesn’t mean that both groups were equally matched to begin with. If the children in the red shirt group were just better beforehand, that could drive the effect. It’s also likely that the red shirts had very little to do with which team ended up winning. The pressing question here would seem to be why we would expect red shirts to have any effect in the first place. It’s not as if a red shirt makes a child quicker, stronger, or better able to catch or throw than before; at least not for any theoretical reason that comes to mind. Again, this hypothesis is a strange one when you consider its basis. Let’s assume, however, that wearing red shirts actually did make children perform better, because it helped children tap into some preexisting skill set. This raises the somewhat obvious question: why would children require a red shirt to tap into that previously-untapped resource? If being good at the game is important socially – after all, you don’t want to get teased by the other children for your poor performance – and children could do better, it seems, well, odd that they would ever do worse. One would need to posit some kind of trade-off effected by shirt color, which sounds like kind of an odd variable for some cognitive mechanism to take into account.
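The baseline-imbalance worry is easy to make concrete. As a toy sketch (all numbers invented, not from any real study), the following simulation gives each child a random pre-existing skill level, assigns teams completely at random, and counts how often the teams still end up noticeably mismatched before any shirts are involved:

```python
import random

# A toy sketch of the baseline-imbalance problem; all numbers are invented.
# Each child gets a random pre-existing "skill" level, teams are assigned
# completely at random, and we count how often the teams still end up
# noticeably mismatched before any shirts enter the picture.
random.seed(1)

def team_gap_rate(n_per_team=10, trials=10_000, gap_threshold=0.5):
    """Fraction of random assignments where the teams' mean skill
    differs by more than gap_threshold standard deviations."""
    lopsided = 0
    for _ in range(trials):
        skills = [random.gauss(0, 1) for _ in range(2 * n_per_team)]
        random.shuffle(skills)
        red = skills[:n_per_team]
        control = skills[n_per_team:]
        gap = abs(sum(red) / n_per_team - sum(control) / n_per_team)
        if gap > gap_threshold:
            lopsided += 1
    return lopsided / trials

print(round(team_gap_rate(), 3))
```

With ten children per team, roughly a quarter of random splits produce a gap of more than half a standard deviation in average skill. Random assignment only equalizes groups in expectation, not in any single split, which is exactly the opening a hasty researcher needs.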

Nevertheless, like any psychologist hoping to further their academic career, you publish your results in the Journal of Inexplicable Findings. The “Red Shirt Effect” becomes something of a classic, reported in Intro to Psychology textbooks. Published reports start cropping up from different people who have had other children wear red shirts and perform various athletic tasks relatively better. While none of these papers are direct replications of your initial study, they also have children wearing red shirts outperforming their peers, so they get labeled “conceptual replications”. After all, since the concepts seem to be in order, they’re likely tapping the same underlying mechanism. Of course, these replications still don’t deal with the theoretical concerns discussed previously, so some other researchers begin to get somewhat suspicious about whether the “Red Shirt Effect” is all it’s made out to be. Part of these concerns is based around an odd facet of how publication works: positive results – those that find effects – tend to be favored for publication over studies that don’t find effects. This means that there may well be other researchers who attempted to make use of the Red Shirt Effect, failed to find anything and, because of their null or contradictory results, also failed to publish anything.
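This file-drawer dynamic is simple enough to simulate. In the sketch below (a hypothetical illustration; the lab count and sample sizes are made up), every lab studies a truly null Red Shirt Effect, but only the results that clear the conventional significance threshold get published:

```python
import random

# A sketch of the file-drawer problem; hypothetical numbers throughout.
# Every lab studies a truly null Red Shirt Effect, but only results that
# clear the conventional significance threshold get published.
random.seed(2)

def simulate_published_literature(n_labs=500, n_per_group=20, z_crit=1.96):
    published = []
    for _ in range(n_labs):
        red = [random.gauss(0, 1) for _ in range(n_per_group)]
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        diff = sum(red) / n_per_group - sum(control) / n_per_group
        se = (2 / n_per_group) ** 0.5  # known-variance z-test, for simplicity
        if abs(diff / se) > z_crit:    # "significant", so it gets published
            published.append(diff)
    return published

pubs = simulate_published_literature()
print(len(pubs), "published studies, all 'finding' a nonexistent effect")
```

About 5% of labs clear the threshold by chance alone, so a few hundred attempts can easily yield a couple dozen published “confirmations” of nothing at all. Worse, every published estimate is inflated, since only the large chance differences clear the bar.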

Eventually, word reaches you of a research team that attempted to replicate the Red Shirt Effect a dozen times in the same paper and failed to find anything. More troubling still, for your academic career anyway, their results saw publication. Naturally, you feel pretty upset by this. Clearly the research team was doing something wrong: maybe they didn’t use the proper shade of red shirt; maybe they used a different brand of dodge balls in their study; maybe the experimenters behaved in some subtle way that was enough to counteract the Red Shirt Effect entirely. Then again, maybe the journal the results were published in doesn’t have good enough standards for their reviewers. Something must be wrong here; you know as much because your Red Shirt Effect was conceptually replicated many times by other labs. The Red Shirt Effect just must be there; you’ve been counting the hits in the literature faithfully. Of course, you also haven’t been counting the misses which were never published. Further, you were counting the slightly-altered hits as “conceptual replications” but not the slightly-altered misses as “conceptual disconfirmations”. You still haven’t managed to explain, theoretically, why we should expect to see the Red Shirt Effect anyway, either. Then again, why would any of that matter to you? Part of your reputation is at stake.

And these colors don’t run!  (p < 0.05)

In somewhat-related news, there have been some salty comments from social psychologist Ap Dijksterhuis aimed at a recent study (and coverage of the study, and the journal it was published in) concerning nine failures to replicate some work Ap did on intelligence priming, as well as work done by others on intelligence priming (Shanks et al, 2013). The initial idea of intelligence priming, apparently, was that priming subjects with professor-related cues made them better at answering multiple-choice, general-knowledge questions, whereas priming subjects with soccer-hooligan-related cues made them perform worse (and no; I’m not kidding. It really was that odd). Intelligence itself is a rather fuzzy concept, and it seems that priming people to think about professors – people typically considered higher in some domains of that fuzzy concept – is a poor way to make them better at multiple-choice questions. As far as I can tell, there was no theory surrounding why primes should work that way or, more precisely, why people should lack access to such knowledge in the absence of some vague, unrelated prime. At the very least, none was discussed.

It wasn’t just that the failures to replicate reported by Shanks et al (2013) were non-significant but in the right direction, mind you; they often seemed to go in the wrong direction. Shanks et al (2013) even looked for demand characteristics explicitly, but couldn’t find them either. Nine consecutive failures are surprising in light of the fact that the intelligence priming effects were previously reported as being rather large. It seems rather peculiar that large effects can disappear so quickly; they should have had a very good chance of replicating, were they real. Shanks et al (2013) rightly suggest that many of the confirmatory studies of intelligence priming, then, might represent publication bias, researcher degrees of freedom in analyzing data, or both. Thankfully, the salty comments of Ap reminded readers that: “the finding that one can prime intelligence has been obtained in 25 studies in 10 different labs”. Sure; and when a batter in the MLB only counts the times he hit the ball while at bat, his batting average would be a staggering 1.000. Counting only the hits and not the misses will surely make it seem like hits are common, no matter how rare they are. Perhaps Ap should have thought about professors more before writing his comments (though I’m told thinking about primes ruins them as well, so maybe he’s out of luck).
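The hit-counting problem can be put in back-of-the-envelope terms. Treating the quoted figure of 25 studies as given and assuming, purely for illustration, independent tests of a truly null effect at the conventional threshold:

```python
# A hypothetical back-of-the-envelope calculation, assuming (purely for
# illustration) independent tests of a truly null effect at alpha = .05.
alpha = 0.05            # conventional false-positive rate per study
reported_hits = 25      # the figure quoted in Dijksterhuis's comment
expected_attempts = reported_hits / alpha
unpublished_misses = expected_attempts - reported_hits
print(expected_attempts, unpublished_misses)  # 500.0 475.0
```

On these assumptions, 25 hits are exactly what around 500 total attempts at a null effect would be expected to produce, with the other 475 or so sitting in file drawers. The point is not that this is what happened, only that a count of hits alone cannot distinguish a real effect from a prolific field testing nothing.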

I would like to add that there were similarly salty comments leveled by another social psychologist, John Bargh, when his work on priming elderly stereotypes to slow walking speed failed to replicate (though John has since deleted his posts). The two cases bear some striking similarities: claims of other “conceptual replications”, but no claims of “conceptual failures to replicate”; personal attacks on the credibility of the journal publishing the results; personal attacks on the researchers who failed to replicate the finding; even personal attacks on the people reporting about the failures to replicate. More interestingly, John also suggested that the priming effect was apparently so fragile that even minor deviations from the initial experiment could throw the entire thing into disarray. Now it seems to me that if your “effect” is so fleeting that even minor tweaks to the research protocol can cancel it out completely, then you’re really not dealing with an effect of much importance, even were it real. That’s precisely the kind of shooting-yourself-in-the-foot a “smarter” person might have considered leaving out of their otherwise persuasive tantrum.

“I handled the failure to replicate well (p < 0.05)”

I would also add, for the sake of completeness, that priming effects of stereotype threat haven’t replicated well either. Oh, and the effects of depressive realism don’t show much promise. This brings me to my final point on the matter: given the risks posed by researcher degrees of freedom and publication bias, it would be wise to enact better safeguards against this kind of problem. Replications, however, can only take us so far in fixing the problem: they require researchers willing to do them (and they can be low-reward, discouraged activities) and journals willing to publish them with sufficient frequency (which many do not, currently). A simple – though only partial – remedy for the issue is, I feel, to require the inclusion of actual theory in psychological research; evolutionary theory in particular. While it does not stop false positives from being published, it at least allows other researchers and reviewers to more thoroughly assess the claims being made in papers. This allows poor assumptions to be better weeded out and better research projects crafted to address them directly. Further, updating old theory and providing new material is a personally-valuable enterprise. Without theory, all you have is a grab bag of findings, some positive, some negative, and no idea what to do with them or how they are to be understood. Without theory, things like intelligence priming – or Red Shirt Effects – sound valid.

References: Shanks, D., Newell, B., Lee, E., Balakrishnan, D., Ekelund, L., Cenac, Z., Kavvadia, F., & Moore, C. (2013). Priming Intelligent Behavior: An Elusive Phenomenon. PLoS ONE, 8(4). DOI: 10.1371/journal.pone.0056515

He’s Climbing In Your Windows; He’s Snatching Your People Up

One topic addressed by evolutionary psychologists that managed to draw a good deal of ire was rape. Given the sensitive nature of the issue, the criticisms that theorizing about it drew were largely undeserved, reflecting, perhaps, a human tendency to mistake explanation for exculpation. Needless to say, at this point, sexual assault will be the topic for examination today, so if it’s the kind of thing that bothers you to read about, I suggest clicking away. Now that the warning has been made, if you’re still reading, we can move forward. There has been some debate among evolutionary-minded researchers as to whether or not there are any rape-specific cognitive adaptations in humans, or whether rape represents a byproduct of other mating mechanisms. The debate remains unresolved for lack of unambiguous predictions or data. As the available data could be interpreted as consistent with both sides of the debate, the question remains a slippery and contentious one.

So do be careful if you decide to try and pick it up.

A paper by Felson & Cundiff (2012) claims to have found some data that the authors say support the byproduct view of rape. While I find myself currently favoring the byproduct explanation, I also find their interpretation of the evidence they bring to bear on the matter underwhelming. I actually find their interpretation of several matters off, but we’ll get to that later. First, let’s consider the research itself. The authors sought to examine existing data on robberies committed by lone males 12 years or older where a lone female was present at the time. From the robbery data, the authors were further interested in examining the subset that also involved a report of sexual assault. Towards this end, Felson & Cundiff (2012) reported data from approximately 45,000 robberies spanning 2000-2007. Of those robberies, roughly 2% also involved a sexual assault, yielding about 900 cases for examination. As an initial note, the 2% figure would seem to suggest, to me, anyway, that in most instances of robbery/sexual assault, the assaults tended not to be preplanned; they look more opportunistic.

From this sample, the authors first examined what effect the female victim’s age had on the likelihood of a sexual assault being reported during the robbery. As it turns out, the age of the woman was a major determinant: women at the highest risk of being assaulted were in the 15-29 age range (with the peak falling within the 20-24 year old range), where the average risk of a sexual assault was around 2.5%. Before this age range, the risk of assault is substantially lower, around 1.3%. After 29 years, the rate begins to decline, dropping markedly after 40, down to around an average of 0.5%. In terms of opportunistic sexual assaults, then, male robbers appear to target women in their fertile years at disproportionate frequencies, presumably partially or largely on the basis of the victims’ physical attractiveness. This finding appears consistent with previous work which had found that the average age of a female victim of a robbery alone was 35, while the average age of a robbery/assault victim was 27.9; about 7 years of difference. Any theory of rape that assumes the act is motivated by power and not by sex would seem to have a very difficult time accounting for this pattern in the data.

Next, the authors turned their attention towards characteristics of the male robbers that predict whether or not an assault was reported. The results showed that the likelihood of a sexual assault increased as the males reached sexual maturity and steadily increased further until about their mid-thirties, after which it began to decline. Further, regardless of their age, the robbers didn’t show much in the way of variance in terms of the age of women they tended to target. That is to say, whether the man was in his late teens or his late forties, they all seemed to preferentially target younger women nearer to their peak fecundity. The one exception to this pattern was the males aged 12-17, who seemed to even more disproportionately prefer women in their teens and early twenties. Felson & Cundiff (2012) note that this pattern of preferences is not typically observed in consensual relationships, where men and women tend to pair up around similar ages. This suggests that older men’s patterns of engaging in relationships with older women likely represent the relative aversion of younger women to the older males; not a genuine preference on the part of men for older women per se.

Though it’s difficult to imagine why older men aren’t preferred…

That’s not to say that older men may not have a preference for pursuing relatively older women, just that such a preference wouldn’t be driven by the woman’s age. Such a preference might well be driven by other factors, however, such as the relative willingness of a woman to enter into a relationship with the man in question. There’s not much point for a man in pursuing women he’s unlikely to ever attain success with, even if those women are highly attractive; better to spend that time and energy in domains more liable to pay off. Louis C.K. sums the issue up neatly in one of his stand-up routines: “to me, you’re not a woman until you’ve had a couple of kids and your life is in the toilet…[if you're a younger girl] I don’t want to fuck you…[alright] I do want to fuck you, but you won’t fuck me, so fuck you”. When such tradeoffs can be circumvented – as is the case in sexual assault – a person’s underlying preferences for certain characteristics can be more readily assessed.

This brings us to my complaints with the paper. As I mentioned initially, there’s an ongoing debate as to whether or not men have cognitive mechanisms designed for rape specifically, or whether rape is generated as a byproduct of mechanisms designed for other purposes. Felson & Cundiff (2012) suggest that their data support the byproduct interpretation. Why? Because they found that women in the 15-29 age range who were sexually assaulted were less likely to be raped than older women. This pattern of data is supposed to support the byproduct hypothesis because, I think, the authors are positing some specific motivation for sex acts that could result in conception, rather than some more general interest in sexual behavior. It’s hard to say, since the authors fail to lay out the theory behind their hypothesis with precision. This strikes me as somewhat of a strange argument, though, as it would essentially posit that sexual acts unlikely to result in conception (such as oral or anal sex) are motivated by a different set of cognitive mechanisms than an interest in vaginal sex. While that might potentially be the case, I’ve never seen a case made for it, and there isn’t a strong one to be found in the paper.

The other complaint I have is that the authors use a phrase that’s a particular pet peeve of mine: “…our results are consistent with the predictions from evolutionary psychology”. This phrase always troubles me because evolutionary psychology, as a field, does not make a set of uniform predictions about sexual behavior. Their results may well be consistent with some sub-theories derived by psychologists using an evolutionary framework – such as sexual strategies theory – but they are not derived from evolutionary psychology more broadly. To say that a result is consistent or inconsistent with evolutionary psychology is to imply that such a finding supports or fails to support the foundational assumptions of the field; assumptions which have to do with the nature of information-processing mechanisms. While this might seem like a minor semantic point at first, I feel it’s actually a rather deep issue. It’s a frequent mistake that many of evolutionary psychology’s critics make when attempting to write off the entire field on the basis of a single idea they don’t like. To the extent that such inaccurate generalizations serve to hinder people’s engagement with the field, there’s a problem to be addressed.

And if you’re not willing to engage with me, I’d like the ring back.

As evolutionary psychology more broadly doesn’t deliver specific predictions about rape, neither the hypothesis that rape is an adaptation nor the hypothesis that it is a byproduct should rightly be considered the official evolutionary psychology perspective on the topic; this would be the case regardless of whether the evidence strongly supported one side or the other, I might add. While the current research doesn’t speak to either of these possibilities distinctly, it does manage to speak against the idea that rape isn’t about sex, adding to the already substantial evidence that such a view is profoundly mistaken. Of course, the not-sex explanation was always more of a political slogan than a scientific one, so the lack of empirical support for it might not prove terribly troubling for its supporters.

References: Felson, R., & Cundiff, P. (2012). Age and sexual assault during robberies. Evolution and Human Behavior, 33(1), 10-16. DOI: 10.1016/j.evolhumbehav.2011.04.002

Understanding Understanding

“The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge.” – Stephen Hawking

Many researchers in the field of psychology don’t appear to understand that restating a finding is not the same as explaining that finding. For instance, if you found that men are more likely to gamble than women, a typical form of “explanation” of this finding would be to say that men have more of a “risk bias” than women, resulting in them gambling more. Clearly this explanation doesn’t add anything that stating the finding didn’t; all it manages to do is add a label to the finding. Now some psychologists might understand this shortcoming and take the next step: they might say something along the lines of men perceiving gambling to be more fun or more likely to pay off than women do. While that might well be true, it still falls short of a complete explanation. Instead, it merely pushes the explanation back a step, to a question about why men might perceive gambling differently than women do. If the researchers understand this further shortcoming and take the next step, they’ll reference some cause of that feeling. If we’re lucky, that cause will be non-circular and amount to more than the phrase “culture did it”.

The smart money is on betting against that outcome, though…

A good explanation needs to focus on some outcome of a behavior; some plausible function of that outcome that can account for the emotion or feeling itself. This is notably easier in some cases than others: hunger motivates people to seek out and consume food, avoiding starvation; fear motivates people to escape from or avoid threatening situations, avoiding danger; guilt motivates people to make amends and repair relationships with wronged parties, avoiding condemnation and punishment while reaping the benefits of social interaction. Recently, I found myself posing that functional question about a feeling that is not often discussed: understanding. Teasing out the function of understanding is by no means a straightforward task. Before undertaking the task, however, I need to make a key distinction concerning precisely what I mean by “understanding”. After all, if wikipedia has a hard time defining the term, I can’t just assume that we’ll all be on the same page despite using the same word.

The distinction I would like to draw is between understanding per se and the feeling of understanding. The examples given on wikipedia reflect understanding per se: the ability to draw connections among mental representations. Understanding per se, then, represents the application of knowledge. If a rat has learned to press a bar for food, for instance, we would say that the rat understands something about the connection between bar pressing and receiving food, in that the former seems to cause the latter. The degree of understanding per se can vary in terms of accuracy and completeness. To continue on with the rat example, a rat can understand that pressing the bar generally leads to it receiving food without understanding the mechanisms through which the process works. Similarly, a person might understand that taking an allergy pill will result in their allergy symptoms being reduced, but their understanding of how that process works might be substantially less detailed or accurate than the understanding of the researchers responsible for developing the pill.

Understanding per se is to be distinguished from the feeling of understanding. While understanding per se refers to the actual connections among your mental representations, the feeling of understanding refers to your mental representations about the state of those other mental representations. The feeling of understanding, then, is a bit of a metacognitive sensation: thinking about your thinking. Much like understanding per se, the feeling of understanding comes in varying degrees: one can feel as if they don’t understand something at all, feel as if they understand it completely, or anything in between. With this distinction made, we can begin to consider some profitable questions: what is the connection between understanding per se and the feeling of understanding? What behaviors are encouraged by the feeling of understanding? What functional outcome(s) are those behaviors aimed at achieving? Given these functional outcomes, what predictions can we draw about how people experiencing various degrees of feeling as if they understand something will react in certain contexts?

Maybe even what Will Smith meant when he wrote “Parents Just Don’t Understand”

To begin to answer these questions, let’s return to the initial quote. The enemy of knowledge is not ignorance, but rather the illusion of knowledge; the feeling of understanding. While a bit on the dramatic and poetic sides of things, the quote brings to light an important idea: there is not necessarily a perfect correlation between understanding per se and the feeling of understanding. Sure, understanding per se might tend to trigger feelings of understanding, but we ought to be concerned with matters of degree. It is clear that increased feelings of understanding do not require a tight connection to degrees of understanding per se. In much the same way, one’s judgment of how attractive they are need not perfectly correlate with how attractive they actually are. This is a partial, if relatively underspecified, answer to our first question. Thankfully, it is all my account of understanding requires: a less than perfect correlation between understanding per se and feelings of understanding.
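The less-than-perfect correlation the account requires can be illustrated with a toy model (every parameter here is invented for illustration): treat actual understanding and the feeling of understanding as standardized variables correlated at, say, r = 0.5, and count how often someone reports a strong feeling of understanding while their actual understanding is below average:

```python
import random

# An illustrative toy model; the correlation and cutoffs are invented.
# Actual understanding and the feeling of understanding are standardized
# variables correlated at r, and we count how often a strong feeling of
# understanding co-occurs with below-average actual understanding.
random.seed(3)

def illusion_rate(r=0.5, n=100_000):
    confident_but_mistaken = 0
    for _ in range(n):
        actual = random.gauss(0, 1)
        # Construct "feeling" so that it correlates with "actual" at r
        feeling = r * actual + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
        if feeling > 1 and actual < 0:  # strong feeling, below-average actual
            confident_but_mistaken += 1
    return confident_but_mistaken / n

print(round(illusion_rate(), 3))
```

Even with a respectable correlation between the two, a nontrivial fraction of strong feelings of understanding belong to people whose actual understanding is below average – which is all the illusion-of-knowledge point requires.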

This brings us to the second question: what behaviors are motivated by the feeling of understanding? If you’re a particularly astute reader, you’ll have noticed that the term “understanding” appeared several times in the first paragraph. In each instance, it referred to researchers feeling that their understanding per se was incomplete. What did this feeling motivate researchers to do? Continue trying to build their understanding per se. In the cases where researchers lack the feeling that their understanding per se is incomplete, they seem to do one thing: stop. That is to say that reaching a feeling of understanding appears to act as a stopping rule for learning. That people stop investing in learning when they feel they understand is likely what Hawking was hinting at in his quote. The feeling of understanding is the enemy of knowledge because it motivates you to stop acquiring the stuff. It might even motivate you to begin to share that information with others, opting to speak on a topic rather than defer to whom you perceive to be an expert, but I won’t deal with that point here.

Given that people often do not ever seem to reach complete understanding per se, why should we ever expect people to stop trying to improve? Part of the reason is that there’s a tradeoff between investing time in one aspect of your life and investing it in any other. Time spent learning more about one skill is time not spent doing other potentially useful things. Further still, were you to plot a learning curve, charting how much new knowledge is gained per unit of time invested in learning, you’d likely see diminishing returns over time. Let’s say you were trying to learn how to play a song on some musical instrument. The first hour you spend practicing will result in you gaining more information than, say, the thirtieth hour. At some point in your practicing, you’ll reach a point where the value added by each additional hour simply isn’t worth the investment anymore. It is at this point, when some cognitive balance shifts away from investing time in learning one task and towards doing other things, that we should predict people to reach a strong feeling of understanding. Just as hunger wanes with each additional bite of food, feelings of understanding should grow with each additional piece of information.
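The stopping logic sketched above can be made explicit. In this toy model (all quantities invented), each hour of practice closes a fixed fraction of the gap between current and maximal knowledge, and the learner quits once an hour's marginal gain falls below its opportunity cost:

```python
# A toy stopping-rule model; every quantity here is invented. Each hour of
# practice closes a fixed fraction of the gap between current and maximal
# knowledge, and the learner quits once an hour's marginal gain falls
# below the opportunity cost of spending that hour elsewhere.

def practice_until_stopping(max_knowledge=100.0, rate=0.2, hourly_cost=1.0):
    knowledge = 0.0
    hours = 0
    while True:
        gain = rate * (max_knowledge - knowledge)  # marginal gain this hour
        if gain < hourly_cost:  # stopping rule: not worth another hour
            return hours, knowledge
        knowledge += gain
        hours += 1

hours, knowledge = practice_until_stopping()
print(hours, round(knowledge, 1))  # 14 95.6
```

With these made-up parameters the learner stops after 14 hours, having acquired about 96% of the attainable knowledge. Diminishing returns make chasing the remainder not worth the time, which is exactly the point at which a strong feeling of understanding should kick in.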

Also like hunger, some people tend a touch more towards the gluttonous side.

This brings us to the final question: what can we predict about people’s behavior on the basis of their feelings of understanding? Aside from the above-mentioned disinclination to learn about some specific topic further, we might also predict that repeated exposure to information we feel we already understand would be downright aversive (again, in much the same way that eating food after you feel full is painful). We might, for instance, expect people to react with boredom and diverted attention in classes that cover material too slowly. We might also expect people to react with anger when someone tries to explain something to them that they feel they already understand. In fact, there is a word for that latter act: condescension. Not only does condescension waste an individual’s time with redundant information, it could also serve as an implicit or explicit challenge to their social status via a challenge to their understanding per se (i.e. “You think you understand this topic, but you really don’t. Let me explain it to you again…nice…and…slowly…”). While this list is quite modest, I feel it represents a good starting point for understanding understanding. Of course, since I feel that way, there’s a good chance I’ll stop looking for other starting points, so I may never know.

Why Hang Them Separately When We Can Hang Them Together?

For those of you lucky enough not to have encountered it, there is a concept known as privilege that floats around in predominantly feminist-leaning groups. The basic idea of the concept of privilege is that some groups of people have unearned social status or economic benefits provided to them strictly on the basis of their group membership. White people are supposed to be privileged over non-whites; men are supposed to be privileged over women; heterosexuals are supposed to be privileged over homosexuals. An official method for determining which groups are privileged over others appears to largely be absent, so the exercise tends to lean towards the infamous “I know it when I see it” method of classification. That said, the unofficial method seems to be some combination of the ecological and apex fallacies. One curious facet of the idea of privilege is that it’s commonly used as a springboard for various types of moral condemnation. For instance, there are many who assert that sexism = power + prejudice, with power being equated with privilege. Accordingly, if you’re not privileged (i.e. not a male), you can’t be sexist. You can be discriminatory on the basis of sex if you’re a woman, but that is apparently something entirely different and worthy of a distinction (presumably because some people feel one ought to be more punishable than the other).

This sex-based discrimination was so accepted the first time they made a sequel.

Last Easter I discussed the curious case of morally punishing baseball batters for the misdeeds of baseball pitchers on the same team. Some similar underlying psychology seems to be at play in the case of privilege: people seem to perceive the moral culpability or welfare of all group members as being connected. In the case of privilege, if the top of the social hierarchy is predominantly male, males at the bottom of the hierarchy can be viewed as being similarly benefited, even if those men are obviously disadvantaged. In another case, white people might be viewed as being collectively complicit in harms done to non-whites, even if any contemporary white person clearly had no hand in the act, either directly or indirectly. For a final example, harms done to some specific women might be viewed as harms done to all women, with the suffering being co-opted by women who were never victimized by the act in question. Any plausible theory of morality that seeks to explain why people morally condemn others ought to be able to convincingly explain this idea of collective moral responsibility. Today I would like to examine what I consider to be the two major models for understanding moral judgments and see how they fare against a curious case of collective punishment: third-party punishment of genetic relatives of the perpetrator.

I’ll take those two matters in reverse order. A paper by Uhlmann et al (2012) sought to examine whether moral blameworthiness can spill over from the perpetrator to the blood relatives of the perpetrator, even if the perpetrator and their relative never knew one another. The first of the three studies in the paper looked at the misdeeds of someone’s grandfather in past generations. The 106 subjects read a story about Sal, whose grandfather owned a factory during the great depression and was exploitative of the workers. However, Sal received no direct benefit from this act (in that there was no inheritance left to him). Further, Sal’s grandfather was described as being either a biological relative or a non-biological one (only being related by marriage). Sal ended up winning the lottery and wanted to donate some of his winnings to a charity: either the descendants of some of the exploited workers (the purpose of which was to help them go to college) or a hungry children’s fund. Subjects were more likely to recommend that Sal donate money to the college fund of the exploited workers when his grandfather was a blood relative (M = 4.15) compared to when his grandfather was not (M = 5.28, where the scale was 1 to 9, with 1 representing donating to the college fund and a 9 representing donating to the hungry children). Obligations to try and right past wrongs appeared to transfer across generations to some extent.

The second study involved a case of a robbery/murder. A group of 191 subjects read a story about a man who killed a store clerk during a robbery. A video camera had managed to get a clear view of the perpetrator, but this was the only evidence to go on. Two possible perpetrators had been arrested for the crime on account of them looking identical to the person in the video. Neither of these suspects knew the other, but in one case they were described as twins, whereas in the other they were described as not being related despite their similar appearance. The subjects were asked whether the two should be held in captivity while the police looked for more evidence or whether they should be let go until the matter was resolved (on a scale of 1 to 7, with 1 representing being held in custody and 7 representing being let go until more evidence came in). The results showed that subjects were more willing to hold both in custody when they were twins (M = 3.03) relative to when they were not (M = 4.21). On top of transferring obligations, then, people also seemed somewhat willing to inflict costs on innocent relatives of a perpetrator.

Better kill them all, just to be on the safe side

Of course, it’s not enough to just point out that moral judgments seem to have the capacity to be collective; one also needs to explain why this is the case. Collective punishment would seem to require that moral judgments make use of an actor’s identity, rather than an actor’s actions. Such an outcome appears to run directly counter to what is known as the dynamic coordination model (DeScioli & Kurzban, 2013). In the dynamic coordination model, third-party moral condemners choose sides in a moral debate on the basis of an individual’s actions as a means to avoid discoordination with other condemners (simply put, you want to be on the side that most other people are on, so people opt to use an individual’s actions to judge which side to take. In much the same way, drivers want to avoid hitting other cars, so they decide when to stop and go on the basis of a traffic light). In the case of collective punishment, however, there is no potentially condemnable action on the part of the person being punished. The dynamic coordination model would have to require the mere act of being associated in any way with a perpetrator to be morally condemnable as well for this kind of punishment to make sense. While there might be laws against certain acts – like killing and stealing – being related to someone who committed a crime is typically not against the law (at least not to the best of my knowledge. I’ll check with my legal team and get back to you about that).

While the dynamic coordination model would seem to have a good deal of trouble accounting for collective moral judgments, an alliance model would not. As Uhlmann et al (2012) note, the threat of punishment directed at one’s social allies can serve as a powerful deterrent. This is a point I brought up previously when considering why reputations matter: if I were to harm anyone who associated with person X, regardless of whether the person I was harming actually did anything wrong themselves, any associations with person X naturally become costlier. If people are disinclined to associate with person X, then person X is all the worse off for it and the punishment has successfully reached its ultimate target. If social ties are cut, person X will find it increasingly difficult to engage in many behaviors that might ultimately be detrimental to others. This raises a concern to be dealt with, though: in the Uhlmann et al (2012) stories, the kin of the perpetrator were not described as being social allies (just as white males are not all allies, despite being lumped together in the same group by the privilege term). If they weren’t allies, how can an alliance model account for the collective punishment?

My answer to this concern would be as follows: existing social alliances might be only one proximate cue that moral systems use. The primary targets of collective punishment would seem to be those with whom the perpetrator is perceived to share welfare, and not all welfare connections are going to be worth targeting, given the costs involved in punishment. My welfare is, all else being equal, more dependent on kin than non-kin. Accordingly, collective punishment directed at kin is likely to be even costlier for that perpetrator, making kin punishment particularly appealing for any moral condemners. This would leave us with the following prediction: the degree to which collective punishment is enacted ought to be mediated by the perception of the degree of shared welfare between the perpetrator and the person being punished. Kin should be punished more than non-kin; close allies should be punished more than distant ones; allies that offer substantial benefits to the perpetrator ought to be punished more than allies who offer more meager benefits. Further, this punishment presents the social allies of a perpetrator with new adaptive problems to solve, specifically: how do they trade off distancing themselves enough from the perpetrator to avoid being condemned against the loss of benefits that such distancing can bring?
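To make the shape of that prediction concrete, here is a minimal toy sketch in Python. Everything in it (the function name, the additive welfare index, the particular weights) is a hypothetical illustration of the predicted ordering, not anything estimated from Uhlmann et al's data.

```python
# Toy model of the prediction above: punishment directed at a perpetrator's
# associates should scale with the associate's perceived shared welfare with
# the perpetrator. The additive index and all numbers are hypothetical.

def predicted_punishment(kinship_r, alliance_benefit, max_punishment=10.0):
    """Return an illustrative punishment severity for an associate.

    kinship_r: coefficient of relatedness to the perpetrator
        (0.0 stranger, 0.25 grandparent, 0.5 full sibling, 1.0 identical twin).
    alliance_benefit: rating in [0, 1] of how much the associate's
        support benefits the perpetrator.
    """
    shared_welfare = kinship_r + alliance_benefit  # crude additive index
    return min(max_punishment, max_punishment * shared_welfare / 2.0)

# The orderings the prediction implies:
assert predicted_punishment(1.0, 0.0) > predicted_punishment(0.25, 0.0)  # close kin > distant kin
assert predicted_punishment(0.25, 0.0) > predicted_punishment(0.0, 0.0)  # distant kin > stranger
assert predicted_punishment(0.0, 0.8) > predicted_punishment(0.0, 0.3)   # valuable ally > minor ally
```

The point of the sketch is only the monotonic relationship: any cue that raises perceived shared welfare should raise the expected severity of collective punishment.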

Crude, yet effective.

This brings me to one final question: are moral judgments ever impartial? My sense is that no, moral judgments are in fact never impartial. This point requires some clarification. The first point of clarification is that moral judgments can have the appearance of impartiality without actually being generated by mechanisms designed to bring that state of affairs about. In fact, we ought not expect any cognitive mechanisms to be designed to generate impartiality because impartiality per se – much like feeling good – doesn’t do anything useful. One useful outcome that being impartial might bring would be, as DeScioli & Kurzban (2013) suggest, being better able to coordinate with other third-party condemners. Of course, if the target behavior is being on the winning side of a dispute, then we ought to expect mechanisms designed to take sides contingent on which side already has a majority of the support. Those mechanisms, though, should rightly be considered partial, in that they are judging the identity of who is on whose side, rather than neutrally on the basis of who did what to whom. This should be expected, in that the latter is only important insomuch as it predicts the former; being impartial is only useful insomuch as it leads to one being partial.

Oh; I would also like to add that providing you with this analysis of collective punishment was my privilege.

References: DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139 (2), 477-496. DOI: 10.1037/a0029065

Uhlmann, E., Zhu, L., Pizarro, D., & Bloom, P. (2012). Blood is thicker: Moral spillover effects based on kinship. Cognition, 124 (2), 239-243. DOI: 10.1016/j.cognition.2012.04.010

An Implausible Function For Depression

Recently, I was involved in a discussion about experimenter-induced expectation biases in performance, also known as demand characteristics. The basic premise of the idea runs along the following lines: some subjects in your experiment are interested in pleasing the experimenter or, more generally, trying to do “well” on the task (others might be trying to undermine your task – the “screw you” effect – but we’ll ignore them for now). Accordingly, if the researchers conducting an experiment are too explicit about the task, or drop hints as to what the purpose is or what results they are expecting, even hints that might seem subtle, they might actually create the effect they are looking for, rather than just observe it. However, the interesting portion of the discussion I was having is that some people seemed to think you could get something for nothing from demand characteristics. That is to say, some people seem to think that, for instance, if the experimenter thinks a subject will do well on a math problem, that subject will actually get better at doing math.

Hypothesis 1: Subjects will now be significantly more bullet-proof than they previously were.

This raises the obvious question: if certain demand characteristics can influence subjects to perform better or worse at some tasks, how would such an effect be achieved? (I might add that it’s a valuable first step to ensure that the effect exists in the first place which, in the case of stereotype threat with regard to math abilities, it might well not) It’s not as if these expectations are teaching subjects any new skills, so whatever information is being made use of (or not being made use of, in some cases) by the subject must have already been potentially accessible. No matter how much they might try, I highly doubt that researchers are able to simply expect subjects into suddenly knowing calculus or lifting twice as much weight as they normally can. The question of interest, then, would seem to become: given that subjects could perform better at some important task, why would they ever perform worse at it? Whatever specific answer one gives for that question, it will inevitably include the mention of trade-offs, where being better at some task (say, lifting weights) carries costs in other domains (such as risks of injury or the expenditure of energy that could be used for other tasks). Subjects might perform better on math problems after exercise, for instance, not because the exercise makes them better at math, but because there are fewer cognitive systems currently distracting the math one.

This brings us to depression. In attempting to explain why so many people get depressed, there are plenty of people who have suggested that there is a specific function to depression: people who are depressed are thought to be more accurate in some of their perceptions, relative to those who are not depressed. Perhaps, as Neel Burton and, curiously, Steven Pinker suggest, depressed individuals might do better at assessing the value of social relationships with others, or at figuring out when to stop persisting at a task that’s unlikely to yield benefits. The official title for this hypothesis is depressive realism. I do appreciate such thinking insomuch as researchers appear to be trying to explain some psychological phenomenon functionally. Depressed people are more accurate in certain judgments, being more accurate in said judgments leads to some better social outcomes, so there are some adaptive benefits to being depressed. Neat. Unfortunately, such a line of thinking misses the aforementioned critical mention of trade-offs: specifically, if depressed people are supposed to perform better at such tasks (if people have the ability to better assess social relationships and their control over them), why would people ever be worse at those tasks?

If people hold unrealistically positive cognitive biases about their performance, and these biases cause people to, on the whole, do worse than they would without them, then the widespread existence of those positive biases needs to be explained. The biases can’t simply exist because they make us feel good. Not only would such an explanation be uninformative (in that it doesn’t explain why we’d feel bad without them), but it would also be useless, as “feeling good” doesn’t do anything evolutionarily useful. Notwithstanding those issues, however, the depressive realism hypothesis doesn’t even seem to be able to explain the nature of depression very well; not on the face of it, anyway. Why should increasing one’s perceptual accuracy in certain domains go hand-in-hand with low energy levels or loss of appetite? Why should women be more likely to be depressed than men? Why should increases in perceptual accuracy similarly increase an individual’s risk of suicidal behavior? None of those symptoms seem like the hallmark of good, adaptive design when considered in the context of overcoming other, unexplained, and apparently maladaptive positive biases.

“We’ve managed to fix that noise the car made when it started by making it unable to start”

So, while the depressive realism hypothesis manages to think about functions, it would appear to fail to consider other relevant matters. As a result, it ends up positing a seemingly-implausible function for depression; it tries to get something (better accuracy) for nothing, all without explaining why other people don’t get that something as well. This might mean that depressive realism identifies an outcome of being depressed instead of explaining depression, but even that much is questionable. This returns to the initial point I made, in that one wants to be sure that the effect in question even exists in the first place. A meta-analysis of 75 studies of depressive realism conducted by Moore & Fresco (2012) did not yield a great deal of support for the effect being all that significant or theoretically interesting. While they found evidence of some depressive realism, the effect size of that realism was typically around or less than a tenth of a standard deviation in favor of the depressed individuals; an effect size that the authors repeatedly mentioned was “below [the] convention for a small effect” in psychology. In many cases, the effect sizes were so close to zero that they might as well have been zero for all practical purposes; in other cases it was the non-depressed individuals who performed better. It would seem that depressed people aren’t terribly more realistic; certainly not relative to the costs that being depressed brings. More worryingly for the depressive realism hypothesis, the effect size appeared to be substantially larger in studies using poor methods of assessing depression, relative to studies using better methods. Yikes.
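For readers unfamiliar with effect sizes, the arithmetic behind "a tenth of a standard deviation" is simple. The sketch below computes Cohen's d from summary statistics; the example numbers are made up purely to illustrate the scale, not taken from Moore & Fresco's data.

```python
# Cohen's d: the difference between two group means, expressed in units of
# the pooled standard deviation. By convention, d = 0.2 counts as a "small"
# effect, 0.5 as "medium", and 0.8 as "large" (Cohen, 1988).

def cohens_d(mean_a, mean_b, pooled_sd):
    return (mean_a - mean_b) / pooled_sd

# Hypothetical accuracy scores: depressed group averages 5.1, controls 5.0,
# with a pooled standard deviation of 1.0 on the task.
d = cohens_d(5.1, 5.0, 1.0)
print(round(d, 2))  # 0.1, below even the conventional "small" threshold
```

Put another way: with a pooled standard deviation of one scale point, a d of 0.1 amounts to a mean difference of just a tenth of a point between groups, which is why the authors treated it as practically negligible.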

So, just to summarize, what we’re left with is an effect that might not exist and a hypothesis purporting to explain that possible effect which makes little conceptual sense. To continue to pile on, since we’re already here, the depressive realism hypothesis seems to generate few, if any, additional testable predictions. Though there might well be plenty of novel predictions that flow from the suggestion that depressed people are more realistic than non-depressed individuals, there aren’t any that immediately come to my mind. Now I know this might all seem pretty bad, but let’s not forget that we’re still in the field of psychology, making this outcome sort of par for the course in many respects, unfortunate as that might seem.

The curious part of the depressive realism hypothesis, to me, anyway, is why it appears to have generated as much interest as it did. The meta-analysis found over 120 research papers on the topic, which is (a) probably not exhaustive and (b) does not capture any unpublished research on the topic, so there has clearly been a great deal of research done on the idea. Perhaps it has something to do with the idea that there’s a bright side to depression; some distinct benefit that ought to make people more sympathetic towards those suffering from depression. I have no data that speaks to that idea one way or the other though, so I remain confused as to why the realism hypothesis has drawn so much attention. It wouldn’t be the first piece of pop psychology to confuse me in such a manner.

And if it confuses you too, feel free to stop by this site for more updates.

As a final note, I’m sure there are some people out there who might be thinking that though the depressive realism idea is, admittedly, lacking in many regards, it’s currently the best explanation for depression on offer. While such conceptual flaws are, in my mind, reason enough to discard the idea even in the event there isn’t an alternative on offer, there is, in fact, a much better alternative theory. It’s called the bargaining model of depression, and the paper is available for free here. Despite not being an expert on depression myself, the bargaining model seems to make substantially more conceptual sense while simultaneously being able to account for the existing facts about depression. Arguably, it doesn’t paint the strategy of depression in the most flattering light, but it’s at least more realistic.

References: Moore, M., & Fresco, D. (2012). Depressive realism: A meta-analytic review. Clinical Psychology Review, 32 (6), 496-509. DOI: 10.1016/j.cpr.2012.05.004

Mothers And Others (With Benefits)

Understanding the existence and persistence of homosexuality in the face of its apparent reproductive fitness costs has left many evolutionary researchers scratching their heads. Though research into homosexuality has not been left wanting for hypotheses, every known hypothesis to date but one has had several major problems when it comes to accounting for the available data (and making conceptual sense). Some of them lack a developmental story; some fail to account for the twin studies; others posit benefits that just don’t seem to be there. What most of the aforementioned research shares in common, however, is its focus: male homosexuality. Female homosexuality has inspired considerably less hypothesizing, perhaps owing to the assumption, valid or not, that female sexual preferences played less of a role in determining fitness outcomes, relative to men’s. More precisely, physical arousal is required for men in order for them to engage in intercourse, whereas it is not necessarily required for women.

Not that lack of female arousal has ever been an issue for this fine specimen.

A new paper out in Evolutionary Psychology by Kuhle & Radtke (2013) takes a functional stab at attempting to explain some female homosexual behavior. Not the homosexual orientations, mind you; just some of the same-sex behavior. On this point, I would like to note that homosexual behavior isn’t what poses an evolutionary mystery any more than other, likely nonadaptive behaviors, such as masturbation. The mystery is why an individual would be actively averse to intercourse with members of the opposite sex; their only path to reproduction. Nevertheless, the suggestion that Kuhle & Radtke (2013) put forth is that some female homosexual sexual behavior evolved in order to recruit female alloparent support. An alloparent is an individual who provides support for an infant but is not one of that infant’s parents. A grandmother helping to raise a grandchild, then, would represent a case of alloparenting. On the subject of grandmothers, some have suggested that the reason human females reach menopause so early in their lifespan – relative to other species who go on with the potential to reproduce until right around the point they die – is that grandmother alloparenting, specifically maternal grandmother alloparenting, was a more valuable resource at that point, relative to direct reproduction. On the whole, alloparenting seems pretty important, so getting a hold of good resources for the task would be adaptive.

The suggestion that women might use same-sex sexual behavior to recruit female alloparental support is good, conceptually, on at least three fronts: first, it pays some mind to what is at least a potential function for a behavior. Most psychological research fails to think about function at all, much less plausible functions, and is all the worse because of it. The second positive part of this hypothesis is that it has some developmental story to go with it, making predictions about what specific events are likely to trigger the proposed adaptation and, to some extent, anyway, why they might. Finally, it is consistent with – or at least not outright falsified by – the existing data, which is more than you can say for almost all the current theories purporting to explain male homosexuality. On these conceptual grounds, I would praise the lesbian-sex-for-alloparenting model. On other grounds, both conceptual and empirical, however, I have very serious reservations.

The first of these reservations comes in the form of the source of alloparental investment. While, admittedly, I have no hard data to bring to bear on this point (as my search for information didn’t turn up any results), I would wager it’s a good guess that a substantial share of the world’s alloparental resources come from the mother’s kin: grandparents, cousins, aunts, uncles, siblings, or even other older children. As mentioned previously, some have hypothesized that grandmothers stop reproducing, at least in part, for that end. When alloparenting is coming from the female’s relatives, it’s unlikely that much, if any, sexual behavior, same-sex or otherwise, is involved or required. Genetic relatedness is likely providing a good deal of the motivation for the altruism in these cases, so sex would be fairly unnecessary. That thought brings me neatly to my next point, and it’s one raised briefly by the authors themselves: why would the lesbian sex even be necessary in the first place?

“I’ll help mother your child so hard…”

It’s unclear to me what the same-sex behavior adds to the alloparenting equation here. This concern comes in a number of forms. The first is that it seems adaptations designed for reciprocal altruism would work here just fine: you watch my kids and I’ll watch yours. There are plenty of such relationships between same-sex individuals, regardless of whether they involve childcare or not, and those relationships seem to get on just fine without sex being involved. Sure, sexual encounters might deepen that commitment in some cases, but that’s a fact that needs explaining; not the explanation itself. How we explain it will likely have a bearing on further theoretical analysis. Sex between men and women might deepen that commitment on account of it possibly resulting in conception and all the shared responsibilities that brings. Homosexual intercourse, however, does not carry that conception risk. This means that any deepening of the social connections homosexual intercourse might bring would most likely be a byproduct of the heterosexual counterpart. In much the same way, masturbation probably feels good because the stimulation sexual intercourse provides can be successfully mimicked by one’s hand (or whatever other device the more creative among us make use of). Alternatively, it could be possible that the deepening of an emotional bond between two women as the result of a sexual encounter was directly selected for because of its role in recruiting alloparent support, but I don’t find the notion particularly likely.

A quick example should make it clear why: for a woman who currently does not have dependent children, the same-sex encounters don’t seem to offer her any real benefit. Despite this, there are many women who continue to engage in frequent to semi-frequent same-sex sexual behaviors and form deep relationships with other women (who are themselves frequently childless as well). If the deepening of the bond between two women was directly selected for in the case of homosexual sexual behavior due to the benefits that alloparents can bring, such facts would seem to be indicative of very poor design. That is to say we should predict that women without children would be relatively uninterested in homosexual intercourse, and the experience would not deepen their social commitment to their partner. So sure, homosexual intercourse might deepen emotional bonds between the people engaging in it, which might in turn affect how the pair behave towards one another in a number of ways. That effect, however, is likely a byproduct of mechanisms designed for heterosexual intercourse; not something that was directly selected for itself. Kuhle & Radtke (2013) do say that they’re only attempting to explain some homosexual behavior, so perhaps they might grant that some increases in emotional closeness are the byproduct of mechanisms designed for heterosexual intercourse while other increases in closeness are due to selection for alloparental concerns. While possible, such a line of reasoning can set up a scenario where the hits for the theory can be counted as supportive and the misses (such as childless women engaging in same-sex sexual behaviors) dismissed as being the product of some other factor.

On top of that concern, the entire analysis rests on the assumption that women who have engaged in sexual behavior with the mother in question ought to be more likely to provide substantially better alloparental care than women who did not. This seems to be an absolutely vital prediction of the model. Curiously, that prediction is not represented in any of the 14 predictions listed in the paper. The paper also offers no empirical data bearing on this point, so whether homosexual behavior actually causes an increase in alloparental investment is in doubt. Even if we assume this point was confirmed, however, it raises another pressing question: if same-sex intercourse raises the probability or quality of alloparental investment, why would we expect, as the authors predict, that women should only adopt this homosexual behavior as a secondary strategy? More precisely, I don’t see any particularly large fitness costs to women when it comes to engaging in same-sex sexual behavior but, under this model, there would be substantial benefits. If the costs to same-sex behavior are low and the benefits high, we should see it all the time, not just when a woman is having trouble finding male investment.

“It’s been real, but men are here now so…we can still be friends?”

On the topic of male investment, the model would also seem to predict that women should be relatively inclined to abandon their female partners for male ones (as, in this theory, women’s sexual interest in other women is triggered by lack of male interest). This is anecdotal, of course, but a fairly-frequent complaint I’ve heard from lesbians or bisexual women currently involved in a relationship with a woman is that men won’t leave them alone. They don’t seem to be wanting for male romantic attention. Now maybe these women are, more or less, universally assessing these men as being unlikely or unable to invest on some level, but I have my doubts as to whether this is the case.

Finally, given these sizable hypothesized benefits and negligible costs, we ought to expect to see women competing with other women frequently in the realm of attracting same-sex sexual interest. Same-sex sexual behavior should be expected to be not only a cross-cultural universal, but fairly common as well, in much the same way that same-sex friendship is (as they’re hypothesized to serve much the same function, really). Why same-sex sexual interest would be relatively confined to a minority of the population is entirely unclear to me in terms of what is outlined in the paper. This model also doesn’t address why any women, let alone the vast majority of them, would appear to feel averse to homosexual intercourse. Such aversions would only cause a woman to lose out on the hypothesized alloparental benefits which, if the model is true, ought to have been substantial. Women who were not averse would have had more consistent alloparental support historically, allowing whatever genes made such attractions more likely to spread at the expense of women who eschewed it. Again, such aversions would appear to be evidence of remarkably poor design; if the lesbian-alloparents-with-benefits idea is true, that is…

References: Kuhle, B., & Radtke, S. (2013). Born both ways: The alloparenting hypothesis for sexual fluidity in women. Evolutionary Psychology, 11 (2), 304-323. PMID: 23563096

Should Psychological Neuroscience Research Be Funded?

In my last post, when discussing some research by Singer et al (2006), I mentioned as an aside that their use of fMRI data didn’t seem to add a whole lot to their experiment. Yes, they found that brain regions associated with empathy appear to be less active in men watching a confederate who behaved unfairly towards them receive pain; they also found that areas associated with reward seemed slightly more active. Neat; but what did that add beyond what a pencil and paper or behavioral measure might? That is, let’s say the authors (all six of them) had subjects interact with a confederate who behaved unfairly towards them. This confederate then received a healthy dose of pain. Afterwards, the subjects were asked two questions: (1) how bad do you feel for the confederate and (2) how happy are you about what happened to them? This sounds fairly simple, likely because, well, it is fairly simple. It’s also incredibly cheap, and pretty much a replication of what the authors did. The only difference is the lack of a brain scan. The question becomes, without the fMRI, how much worse is this study?

“No fMRI data? Why not just insult psychology directly and get it over with?”

Two crucial questions come to mind here. The first is a matter of new information: how much new and useful information has the neuroscience data given us? The second is a matter of bang-for-your-buck: how much did that neuroscience information cost? Putting the two questions together, we have the following: how much additional information (in whatever unit information comes in) did we get from this study per dollar spent? As an initial caveat before I give my answer to the question, I will point out that I am by no means an expert in the field of neuroscience. Though some might feel this automatically disqualifies me from having an opinion about the field, I would follow that up by noting that there are reasons I’m not an expert in the field of neuroscience. As far as I can tell, some of the major reasons include that I have found almost all of it that I have been exposed to either incredibly dull, lacking in perceived value, or both in many cases.

Now that my neuroscience credentials and biases have been laid bare, let’s move onto the question of the day. As with most questions, I’ll begin my answer to it with a thought experiment: let’s say you ran the same initial study as Singer et al did, and in addition to your short questionnaire you put people into an fMRI machine and got brain scans. In the first imaginary world, we obtained results identical to what Singer et al reported: areas thought to be related to empathy decrease in activation, areas thought to be related to pleasure increase in activation. The interpretation of these results seems fairly straightforward – that is, until one considers the second imaginary world. In this second world, we see the results of the brain scan show the reverse pattern: specifically, areas thought to be related to empathy show an increase in activation and areas associated with reward show a decrease. The trick to this thought experiment, however, is that the survey responses remain the same; the only differences between the two worlds are the brain pictures.

This makes interpreting our results rather difficult. In the second world, do we conclude that the survey responses are, in some sense, wrong? That the subjects "really" feel bad about the confederates being hurt, but are unaware of it? This strikes me as a bit off, as far as conclusions go. Another route might be to suggest that our knowledge of which areas of the brain are associated with empathy and pleasure is somehow off: maybe increased activation means less empathy, maybe empathy is processed elsewhere in the brain, or maybe some other cognitive process is interfering. Hell; it's even possible that the technology employed by fMRIs just isn't sensitive to what you're trying to look at. Though the brain scan might have highlighted our ignorance as to how the brain is working in that case, it didn't help us to resolve it. Further, while the second interpretive route seems like a more reasonable one than the first, it also brings to our attention a perhaps under-appreciated fact: we would be privileging the results of the survey measure above the results of the brain scan.

So make sure to check your survey privilege.

The fact that the survey measures are privileged in this case raises the possibility of another hypothetical world: imagine you had done the experiment and the brain scan as before, but not the survey. In that case, interpretation of the fMRI data doesn't even seem possible; description of the brain activation is, sure, but not a profitable understanding of what we would be seeing. This leads to an interesting perspective on the relative contribution of each experimental tool: the majority of the useful information in this study – its academic value – does not appear to be derived from the brain imaging. The only thing the brain imaging adds is a description of the activation. So yes, the brain scans are technically adding something, but their primary contributions are descriptions of themselves, rather than new interpretations or insights. While such a thought experiment does not definitively answer the question of how much value is added by neuroscience information in psychology, it provides a tentative starting position: not the majority. The bulk of the valuable information in the study came from the survey, and all the subsequent brain information was interpreted in light of it.

Let's move on to the second question, then: how much did this information cost to obtain? Admittedly, objective information on this question isn't the easiest to find. The estimates I have come across, however, range from about $400 to over $1,000 – perhaps even closer to $2,000 – per subject (the latter article estimates that 20 subjects would cost approximately $40,000). For the sake of comparison, I'd like to discuss how much a recent study I ran cost. The study involved getting subjects to read a hypothetical moral dilemma and answer approximately 5 questions; it was short and approximately as complicated as the non-neuroscience part of the Singer et al paper. Using Mturk (an Amazon site where you can pay people to take your surveys), I was able to pay subjects around $0.10 each (rounding up) for their responses. My sample of approximately 350 subjects cost me well under $50, but let's say it cost $50 to make the math easy. If I wanted to run that same survey and also collect fMRI data, I would have been looking at a bill somewhere in the neighborhood of $350,000. On top of the cost, there's also the matter of time: it takes far longer to get a subject set up in the fMRI and collect the data (which means you need to pay the subjects and researchers more for their time), and it also takes far longer to analyze the data you do collect. So there are unaccounted-for opportunity costs here as well that we'll ignore for now.
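For the curious, the arithmetic behind those figures can be laid out explicitly. Here is a minimal sketch using the ballpark numbers above; the $1,000-per-subject fMRI figure is a mid-range assumption from the estimates quoted, not a number from any particular price list:

```python
# Back-of-the-envelope comparison: survey-only study vs. the same study
# with fMRI scans added. All figures are ballpark estimates from the text.

n_subjects = 350                  # approximate Mturk sample size
survey_total = 50.0               # actual cost was under $50; rounded up "to make the math easy"
fmri_cost_per_subject = 1000.0    # assumed mid-range estimate ($400 to ~$2,000 per subject)

fmri_total = n_subjects * fmri_cost_per_subject  # ~$350,000

# How many times more expensive is the neuroscience version?
cost_ratio = fmri_total / survey_total  # 7,000x

# For the two studies to deliver equal information-per-dollar, the share of
# the study's total information that would have to come from the added
# neuroscience data is the share of the budget the scans consume:
required_share = (fmri_total - survey_total) / fmri_total  # ~0.9999, i.e. roughly 99.99%
```

Note that this break-even share is what a pure value-per-dollar comparison demands; any argument for the scans has to claim they supply nearly all of the study's informational value.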

So now we have a tentative answer to our second question: the neuroscience version of my study would likely have cost well over 7,000 times as much as the non-neuroscience one. Thus, in order to justify the cost of the additional neuroscience, we would want roughly 99.99% of the information gain of our research to come from the neuroscience information we gathered, and that estimate is actually fairly charitable towards the neuroscience end of things. However, as I previously estimated, we would be hard-pressed to say that even half of the information value of a study could be attributed to the addition of neuroscience information; in fact, the actual value is likely well below half. In other words, we're not even anywhere close to justifying the money invested in neuroscience in psychology. Accordingly, I find the justification for the use of neuroscience in psychology to be wanting, and I would advocate that the money being dumped into the field (however much that is) be diverted to areas where it could do more good. Of course, the US could also consider investing $100,000,000 into mapping the brain, I suppose.

Or let me conduct research with a combined sample size of twice the US population. Please?

Is all this to say that no useful information or positive outcomes would be derived from large investments in neuroscience? Well, that depends on two things: (a) what the investment is in and (b) what else the investment might have been in. I can't speak to how much benefit we might observe from investing the money directly into neuroscience technology itself in the hopes of improving it and/or bringing the cost of its use down. I would also be very hesitant to speak to what other investments might be more profitable. What I do feel comfortable saying, however, is that if we're talking about basic, run-of-the-mill psychological research, there is no feasible way that neuroscience is capable of justifying the monstrous costs involved in producing it. The value added by a single neuroscience paper on 30 subjects is not greater than the value added by dozens, hundreds, or thousands of non-neuroscience papers (the precise number of which depends, obviously, on how much you pay your participants). What people and top journals see in psychological neuroscience, I don't really understand. Then again, I'm no expert in it, so there's that, I suppose…

References: Singer, T., Seymour, B., O'Doherty, J., Stephan, K., Dolan, R., & Frith, C. (2006). Empathic neural responses are modulated by the perceived fairness of others. Nature, 439 (7075), 466-469. DOI: 10.1038/nature04271