Two Fallacies From Feminists

Being that it’s summer, I’ve decided to pretend I’m going to kick back once more from working for a bit and write about a more leisurely subject. The last time I took a break for some philosophical play, the topic was Tucker Max’s failed donation to Planned Parenthood. To recap that debacle, many people were so put off by Tucker’s behavior and views that they suggested that Planned Parenthood accepting his money ($500,000) and putting his name on a clinic would be too terrible to contemplate. Today, I’ll be examining two fallacies that likely come from a largely-overlapping set of people: those who consider themselves feminists. While I have no idea how common these views are among the general population or even among feminists themselves, they’ve crossed my field of vision enough times to warrant a discussion. It’s worth noting up front that these lines of reasoning are by no means limited strictly to feminists; they just come to us from feminists in these instances. Also, I like the alliteration that singling out that group brings in this case. So, without any further ado, let’s dive right in with our first fallacy.

Exhibit A: Colorful backgrounds do not a good argument make.

For those of you not in the know, the above meme is known as the “Critical Feminist Corgi”. The sentiment expressed by it – if you believe in equal rights, then you’re a feminist – has been routinely expressed by many others. Perhaps the most notable instance of the expression is the ever-quotable “feminism is the radical notion that women are people“, but it comes in more than one flavor. The first clear issue with the view expressed here is reality. One doesn’t have to look very far to find people who do not think men can be feminists. Feminist allies, maybe, but not true feminists; that label is reserved strictly for women, since it is a “woman’s movement”. If feminism were simply a synonym for a belief in equal rights or the notion that women are people, then the very existence of this disagreement seems rather strange. In fact, were feminism a synonym for a belief in equal rights, then one would need to come to the conclusion that anyone who doesn’t think men can be feminists cannot be a feminist themselves (in much the same way that someone who believes in a god cannot also be an atheist; it’s simply definitional). If those who feel men cannot be feminists can themselves still be considered feminists (perhaps some off-brand feminists, but feminists nonetheless), then it would seem clear that the equal-rights definition can’t be right.

A second issue with this line of reasoning is more philosophical in nature. Let’s use the context of the corgi quote, but replace the specifics: if you believe in personal freedom, then you are a Republican. Here, the problems become apparent more readily. First, a belief in freedom is neither necessary nor sufficient for calling oneself a Republican (unlike the previous atheist example, where a lack of belief is both necessary and sufficient). Second, the belief itself is massively underspecified. The boundary conditions on what “freedom” refers to are so vague that they make the statement all but meaningless. The same notions can be said to apply well to the feminism meme: a belief in equal rights is apparently neither necessary nor sufficient, and what “equal rights” means depends on who you ask and what you ask about. Finally, and most importantly, the labels “Republican” and “Feminist” appear to represent approximate group identifications, not a single belief or goal, let alone a number of them. The meme attempts to blur the line between a belief (like atheism) and a group identification (some atheist movement; perhaps the Atheism+ people, who routinely try to blur such lines).

That does certainly raise the question as to why people would try to blur that line, as well as why people would resist the blurring. I feel the answer to the former can be explained in a similar manner to a cat’s threat display, with its puffed-up fur and arched back: it’s an attempt to look larger and more intimidating than one actually is. All else being equal, aggressing against a larger or more powerful individual is costlier than the same aggression directed towards a less-intimidating one. Accordingly, it would seem to also follow that aggressing against larger alliances is costlier than aggressing against smaller ones. So, being able to suggest that approximately 62% of people are feminists makes a big difference, relative to suggesting that only 19% of people independently adopt the label. Of course, the 43% of people who didn’t initially identify as feminists might take some issue with their social support being co-opted: it forces an association upon them that may be detrimental to their interests. Further still, some of those within the feminist camp might also wish that others would not adopt the label for related reasons. The more feminists there are, the less social status can be derived from the label. If, for instance, feminism were defined as the belief that women are people, then pretty much every single person would be a feminist, and being a feminist wouldn’t tell you much about that person. The signal value of the label gets weakened, and the specific goals of certain feminists might become harder to achieve amongst the sea of new voices. This interaction between relative status within a group and signal value may well help us understand the contexts in which this blurring behavior should be expected to be deployed and resisted.

Exhibit B: Humor does not a good argument make either.

The second fallacy comes to us from Saturday Night Live, but they were hardly the innovators of this line of thought. The underlying idea here seems to be that men and women have different and relatively non-overlapping sets of best interests, and that men are only willing to support things that personally benefit them. Abortion falls on the female side of those best interests, naturally. Again, this argument falters on both the fronts of reality and philosophy, but I’ll take them in reverse order this time. The philosophical fallacy being committed here is known as the Ecological Fallacy. In this fallacy, essentially, each individual is viewed as being a small representative of the larger group to which they belong. An easy example is the classic one about height: just because men are taller than women on average, it does not mean that any given male you pull from the population will be taller than any given female. Another, more complicated, example could involve IQ. Let’s say you tested a number of men and women on an IQ test and found that men, on average, performed better. However, that gap may be due to some particularly well-performing outlier males. If that’s the case, the “average” man may actually score worse than the “average” woman by and large, even though the skewed group distributions tell a different story.
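To make the outlier point concrete, here is a minimal sketch in Python. The scores are entirely made-up, hypothetical numbers chosen for illustration (no real data are implied); the point is only that a few extreme values can pull one group’s mean above another’s even while the typical (median) member of that group scores lower:

```python
import statistics

# Hypothetical IQ scores, for illustration only.
# Most "men" here score slightly below most "women",
# but two extreme outliers inflate the male group mean.
men = [95, 96, 97, 98, 99, 100, 101, 160, 165]
women = [100, 101, 102, 103, 104, 105, 106, 107, 108]

print(statistics.mean(men))     # 112.33... - pulled up by the two outliers
print(statistics.mean(women))   # 104.0
print(statistics.median(men))   # 99 - the "typical" man...
print(statistics.median(women)) # 104 - ...scores below the "typical" woman
```

The group-level comparison (men’s mean > women’s mean) says nothing reliable about the typical individual, which is exactly the inference the ecological fallacy warns against.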

Now, onto the reality issues. When it comes to the question of whether gender is, metaphorically, the horse pulling the cart of abortion views, the answer is “no”. In terms of explaining the variance in support for abortion, gender has very little to do with it, with approximately equal numbers of men and women supporting and opposing it. A variable that seems to do a much better job of explaining the variance in views towards abortion is actually sexual strategy: whether one is more interested in short-term or long-term sexual relationships. Those who take the more short-term strategy are less interested in investing in relationships and their associated costs – like the burdens of pregnancy – and accordingly tend to favor policies and practices that reduce said costs, like available contraceptives and abortions. However, those playing a more long-term strategy are faced with a problem: if the costs to sex are sufficiently low and people are more promiscuous because of that, the value of long-term relationships declines. This leads those attempting to invest in long-term strategies to support policies and practices that make promiscuity costlier, such as outlawing abortion and making contraceptives difficult to obtain. To the extent that gender can predict views on abortion (which is not very well to begin with), that connection is likely driven predominantly by other variables not exclusive to gender.

We are again posed with the matter of why these fallacies are committed here. My feeling is that the tactic being used here is, as before, the manipulation of association values. By attempting to turn abortion into a gendered issue – one which benefits women, no less – the message being sent is that if you oppose abortion, you also oppose most women. In essence, it attempts to make opposition to abortion appear to be a more powerfully negative signal. It’s not just that you don’t favor abortion; it’s that you also hate women. The often-unappreciated irony of this tactic is that it serves to, at least in part, discredit the idea that we live in a deeply misogynistic society that is biased against women. If the message here is that being a misogynist is bad for your reputation – which it would seem to be – that state of affairs would only hold in a society where the majority of people are, in fact, opposed to misogyny. Were we to use a sports analogy, being a Yankees fan is generally tolerated or celebrated in New York. If that same fan travels to Boston, however, their fandom might become a distinct cost, as not only are most people there not Yankees fans, but many actively despise the team as a baseball rival. The appropriateness and value of an attitude depend heavily on one’s social context. So, if the implication that one is a misogynist is negative, that tells you something important about the values of the wider culture in which the accusation is made.

Unlike that degree in women’s studies.

I suppose the positive message to take from all this is that attitudes towards women aren’t nearly as negative as some feminists make them out to be. People tend to believe in equality – in the vague sense, anyway – whether or not they consider themselves feminists, and misogyny – again, in the vague sense – is considered a bad thing. However, if the perceptions about those things are open to manipulation, and if those perceptions can be used to persuade people to help you achieve your personal goals, we ought to expect people – feminist and non-feminist alike – to try to take advantage of that state of affairs. The point of these arguments, so to speak, is to be persuasive, not to be accurate (Mercier & Sperber, 2011). Accuracy only helps insomuch as it’s easier to persuade people of true things, relative to false ones.

References: Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57-74. DOI: 10.1017/S0140525X10000968

It’s (Sometimes) Good To Be The King

Given my wealth of anecdata, I would feel confident saying that, on the whole, people high in status (whether because of their wealth, their social connections, or both) tend not to garner much in the way of sympathy from third parties. It’s why we end up with popular expressions like “First World Problems” – frustrations deemed to be minor, experienced by people who are relatively well off in life. The idea that people so well-off can be bothered by such trivial annoyances serves as some good comedic fodder. There are a number of interesting topics surrounding the issue, though: first, there’s the matter of why First World Problems exist in the first place. That is, why do people not simply remain content with their life once they reach a certain level of comfort? Did people really need to bother developing high-speed wireless internet when we already had dial-up (which is pretty good, compared to no internet at all)? Why would it feel so frustrating for those of us with high-speed wireless if we had to switch back? A second issue would be the hypocrisy that frequently surrounds people who use the First World Problems term in jest, rather than sympathy. There are those who will mock others for what they perceive to be First World Problems, then turn around and complain about something trivial themselves (like, say, the burden of having to deal with annoying people on their high-speed wireless internet). The topic of the day will concern a third topic: considering contexts in which sympathy is strategically deployed (or not deployed) in moral judgments on the basis of the social status of a target individual.

But first a quick First World Problem: I really dislike earbud headphones.

A new paper by Polman, Pettit, & Wiesenfeld (2013) sought to examine the phenomenon of moral licensing with respect to the status of an actor. Roughly speaking, moral licensing represents the extent to which one is not morally condemned or punished for an immoral action, relative to someone without that licensing. Were persons A and B to both commit the same immoral act (like adultery), if one was punished less, or otherwise suffered fewer social costs associated with the act, all else being equal, we would say that person’s actions were morally licensed by the condemners to some degree. The authors, in this case, predicted that both high- and low-status individuals ought to be expected to receive some degree of moral licensing, but for different reasons: high-status individuals were posited to receive this license because of a “credential bias…[which leads] people to perceive dubious behavior as less dubious”, whereas low-status individuals were posited to receive moral licensing through moral “credits…[which offer] counterbalancing [moral] capital”, allowing low-status individuals to engage in immoral behavior to the extent that their transgressions do not outweigh their capital. If immoral behavior is viewed as creating a metaphorical debt, the generated debt would be lower for those high in status, but able to be paid off more readily by those low in status.

So the authors predicted that high-status individuals will have their behavior reinterpreted to be more morally positive when there is some ambiguity allowing for reinterpretation, whereas low-status individuals won’t be morally condemned as strongly because “they’ve already suffered enough”. Now I know I’ve said it before, but these types of things can’t be said enough as far as I’m concerned: these predictions seem to be drawn from intuitions – not from theory. The biases that the authors are positing are, essentially, circular restatements of a pattern of results they hoped to find (i.e. high-status people will be given moral license because of a bias that causes high-status people to be given moral license). Which is not to say they’re necessarily wrong, mind you (in fact, since this is psychology, I don’t think I’m spoiling anything by telling you they found the results they predicted); it’s just that this paper doesn’t further theory in moral licensing, as the authors suggest it does. Rather, what the paper does instead is present us with a new island of findings within the moral licensing research. In any case, let’s first take a look at what the paper reported.

In the first study, the authors presented subjects with a case of potential racial discrimination (where 5 white and 2 black candidates for a job were interviewed and only 2 white ones were hired). The name of the person doing the hiring was manipulated to make them sound high in status (Winston Rivington), low in status (Billy-Bob), or neutral (James). The subjects were subsequently asked whether the person doing the hiring should be condemned in various ways, whether socially or legally. The results showed that, as predicted, both high- and low-status individuals were condemned less (M = 3.22 and 3.14, respectively, on a 1-to-9 scale) than the control (M = 3.78). While there was an effect, it was a relatively weak one, perhaps as ought to be expected from such a weak manipulation. The manipulation was stronger in the next study. In study two, subjects were also asked about someone making a hiring decision, but this person was now either the executive of a Fortune 500 company or a janitor. Further, the racism in the hiring decision was either clear (the person doing the hiring admitted to it) or ambiguous (the person doing the hiring referenced performance for their decision). The results of the second study showed that the high-status individual was condemned more when the moral infraction was unambiguous (M = 7.81), relative to when it was ambiguous (M = 5.42). By contrast, whether the infraction was committed ambiguously or unambiguously by the lower-status individual, the condemnation remained the same (M = 6.42 and 6.48, respectively). Further, individuals with more dispositional sympathy tended to be the ones punishing the low-status individuals less. The effect of that sympathy, however, did not transfer to the high-status individuals. While the high-status individual’s condemnation varied with the ambiguity of the act, the low-status individual seemed to get the same level of sympathy regardless of whether they transgressed ambiguously or not.

If only he was poorer; then the racism would be more acceptable.

In the final study, subjects were again presented with a story that contained an ambiguous, potential moral wrong: someone taking money off a table at the end of a cafeteria-style lunch and putting it in their pocket. The person taking the money was either just “someone”, “the janitor”, or “the residence director” at the cafeteria. Again, the low- and high-status individuals received less condemnation on average (M = 3.11 and 3.17, respectively) than the control (M = 4.33). However, only high-status individuals had their behavior perceived as less wrong (M = 5.03); the behavior was rated as being equally wrong in both the control (M = 6.21) and low-status (M = 6.05) conditions. Conversely, it was only in the low-status condition that the person taking the money was given more sympathy (M = 5.75); both the high-status (M = 3.60) and control (M = 3.70) conditions received equal and lesser amounts of sympathy.

Now for the more interesting part. This study hints at the perhaps-unsurprising conclusion that people who differ in some regard – in this case, status – are treated differently. Sympathy is reserved for certain groups of people (in this case, typically not for people like Winston Rivington), whereas the benefit of the doubt can be reserved for others (typically not for the Billy-Bobs of the world). The matter which is not dealt with in this paper is the more interesting one, in my mind: why should we expect that to be the case? That is, what is the adaptive function of sympathy and, given that function, in what situations ought we expect it to be strategically deployed? For instance, the authors offer up the following suggestion:

Moreover, we contend that high or low status may sometimes deprive wrongdoers of a license. When wrongdoers’ high status is viewed as undeserved, illegitimate or exploitative, they may pay a particularly high cost for their transgressions.

It seems as if they’re heading in the right direction – thinking about different variables which might have some effects on how much moral condemnation an individual might suffer as the result of an immoral act – but they don’t quite know how to get there. Presumably, what the authors are suggesting in their above example has something to do with the social value of an actor to other potential third-party condemners. Someone who merely inherited their high status may be seen as a bad investment, as their ability to maintain that position and the benefits it may bring – and thus their future social value – is in question. If their high status is derived from exploitative means, their social value may be questioned on the grounds that the benefits they might provide come at too great a cost: the cost of drawing condemnation from the enemies the high-status individual has made while rising to power. Conversely, individuals who are low in status as a result of behaviors that make them bad investments – like, say, excessive drug use – may well not see the benefits of sympathy-based moral licensing. It might be less useful to feel sympathy for someone who repeatedly made poor choices and shows no signs of altering that pattern. The larger point is that, in order to generate good theory and good predictions, you’d be well-served by thinking about adaptive costs and benefits in cases like this. Intuitions will only get you so far.

In this case, intuitions only netted them a publication in a high impact factor journal. So good show on that front.

What I find particularly interesting about this study, though, is that the results run (sort of) counter to some data I recently collected, despite my predicting similar kinds of effects. With respect to at least one kind of ambiguously-immoral behavior and two personal characteristics (neither of which was status), moral judgments and condemnation appeared to be stubbornly impartial. While my results aren’t ready for prime time yet (and I do hope the lack of significant results doesn’t cause issues when it comes to publication; another one of my First World Problems), I merely want to note (as the authors also suggest) that such moral licensing effects do appear to come with boundary conditions, and teasing those out will certainly take time. Whatever the shape of those boundary conditions, redescribing them in terms of a “bias” doesn’t cut it as an explanation, nor does it assist in future theorizing about the subject. In order to move research along, it’s long past time our intuitions were granted a more solid foundation.

References: Polman, E., Pettit, N., & Wiesenfeld, B. (2013). Effects of wrongdoer status on moral licensing. Journal of Experimental Social Psychology, 49(4), 614-623. DOI: 10.1016/j.jesp.2013.03.012

Why Psychology 101 Should Be Evolutionary Psychology

In two recent posts, I have referenced a relatively-average psychologist (again, this psychologist need not bear any resemblance to any particular person, living or dead). I found this relatively-average psychologist to be severely handicapped in their ability to think about psychology – human and non-human psychology alike – because they lacked a theoretical framework for doing so. What this psychologist knows about one topic, such as self-esteem, doesn’t help this psychologist think about any other topic which is not self-esteem, by and large. Even if this psychologist managed to be an expert on the voluminous literature on the subject, it would probably not tell them much about, say, learning, or sexual behavior (save the few times where those topics directly overlapped as measured or correlated variables). The problem became magnified when topics shifted outside of humans into other species. Accordingly, I find the idea of teaching students about an evolutionary framework to be more important than teaching them about any particular topic within psychology. Today, I want to consider a paper from one of my favorite side-interests: Darwinian Medicine – the application of evolutionary theory to understanding diseases. I feel this paper will serve as a fine example for driving the point home.

As opposed to continuing to drive with psychology as it usually does.

The paper, by Smallegange et al (2013), was examining malarial transmission between humans and mosquitoes. Malaria is a vector-borne parasite, meaning that it travels from host to host by means of an intermediate source. The source by which the disease is spread is known as a vector, and in this case, that vector is mosquito bites. Humans are infected with malaria by mosquitoes and the malaria reproduces in its human host. That host is subsequently bitten by other mosquitoes who transmit some of the new parasites to future hosts. One nasty side-effect of vector-borne diseases is that they don’t require the hosts to be mobile to spread. In the case of other parasites like, say, HIV, the host needs to be active in order to spread the disease to others, so the parasites have a vested interest in not killing or debilitating their hosts too rapidly. On the other hand, if the disease is spread through mosquito bites, the host doesn’t need to be moving to spread it. In fact, it might even be better – from the point of view of the parasite – if the host was relatively disabled; it’s harder to defend against mosquitoes if one is unable to swat them away. Accordingly, malaria (along with other vector-borne diseases) ends up being a rather nasty killer.

Since malaria is transmitted from human to human by way of mosquito bites, it would stand to reason that the malaria parasites would prefer, so to speak, that mosquitoes preferentially target humans as food sources: more bites equals more chances to spread. The problem, from the malaria’s perspective, is that mosquitoes might not be as inclined to preferentially feed from humans as the malaria would like. So, if the malaria parasite could alter the mosquitoes’ behavior in some way, so as to assist in its spread by making the mosquitoes preferentially target humans, this would be highly adaptive from the malaria’s point of view. In order to test whether the malaria parasites did so, Smallegange et al (2013) collected some human odor samples using a nylon matrix. This matrix, along with a control matrix, was presented to caged mosquitoes, and the researchers measured how frequently the mosquitoes – either infected with malaria or not – landed on each. The results showed that mosquitoes, whether infected or uninfected, didn’t seem particularly interested in the control matrix. When it came to the human odor matrix, however, the mosquitoes infected with malaria were substantially more likely to land on it and attempt to probe it than the non-infected ones (the human odor matrix received about four times the attention from infected mosquitoes that it did from the uninfected).

While this result is pretty neat, what can it tell us about the field of psychology? For starters, in order to alter mosquito behavior, the malaria parasite would need to do so via some aspect of the mosquitoes’ psychology. One could imagine a mosquito infected with malaria suddenly feeling overcome with the urge to have human for dinner (if it is proper to talk about mosquitoes having similar experiences, that is) without having the faintest idea why. A mosquito psychologist, unaware of the infection-behavior link, might posit that preferences for food sources naturally vary along a continuum in mosquitoes, and that there’s nothing particularly strange about mosquitoes that seem to favor humans excessively; it’s just part of normal mosquito variation. (The parallels to human sexual orientation seem apparent, in some respects.) This mosquito psychologist might also suggest that there was something present in mosquito culture that made some mosquitoes more likely to seek out humans. Maybe the mosquitoes that prefer humans were insecurely attached to their mothers. Maybe they have particularly high self-esteem. While we know such explanations are likely wrong – it seems to be the malaria driving the behavior here – without reference to evolutionary theory and an understanding of pathogen-host relationships, our mosquito psychologists would be relatively at a loss to understand what’s going on.

Perhaps mosquitoes are just deficient in empathy towards human hosts and should go vegan.

What this example boils down to (for my purposes here, anyway) is that thinking about the function(s) of behavior – and of psychology by extension – helps us understand it immensely. Imagine mosquito psychologists who insisted on not “limiting themselves” to evolutionary theory for understanding what they’re studying. They might have a hard time understanding food preferences and aversions (like, say, pregnancy-related ones) in general, much less the variations of them. The same would seem probable to hold for sexual behavior and preferences. Mosquito doctors who failed to try to understand function might occasionally (or frequently) try to “treat” natural bodily defense mechanisms against infections and toxins (like, say, reducing fever or pregnancy sickness, respectively) and end up causing harm to their patients inadvertently. Mosquito-human-preference advocates might suggest that the malaria hypothesis purporting to explain their behavior is insulting, morally offensive, and not worthy of consideration. After all, if it were true, preferences might be alterable by treating some infection, resulting in a loss of some part of their rich and varied culture.

If, however, doctors and psychologists were trained to think about evolved functions from day one, some of these issues might be avoidable. Someone versed in evolutionary theory could quickly understand the relevance of findings across the two fields. The doctors would be able to consider findings from psychology, and the psychologists findings from medicine, because they were both working within the same conceptual framework; playing by the same rules. On top of that, the psychologists would be better able to communicate with each other, picking out possible errors or strengths in each others’ research projects, as well as making additions, without having to be experts in the relevant fields first (though it certainly wouldn’t hurt). A perspective that offers satisfactory explanations within a discipline and between disciplines, tying them all together, is far more valuable than any set of findings within those fields. It’s more interesting too, especially when considered against the islands-of-findings model that currently seems to predominate in the teaching of psychology. At this point, I feel those who would make a case for not starting with evolutionary theory ought to be burdened by, well, making that case and making it forcefully. That we currently don’t start teaching psychology with evolution is, in my mind, no argument to continue not doing so.

References: Smallegange, R., van Gemert, G., van de Vegte-Bolmer, M., Gezan, S., Takken, W., Sauerwein, R., & Logan, J. (2013). Malaria infected mosquitoes express enhanced attraction to human odor. PLoS ONE, 8(5). DOI: 10.1371/journal.pone.0063602

Moral Outrage At Disney World

A few months ago, I took a trip down to Florida. I happen to know people who work for both Disney and Universal and, as a result, ended up getting to experience the parks for free. Prior to my visit to Disney, however, my future benefactor for that day, who worked for the company, had been injured during a performance. Because of the injury to his leg, his ability to walk around the park and stand in the long lines was understandably compromised. However, Disney happened to have a policy that sends disabled people – along with their parties – to the front of the line in recognition of that issue. While the leg injury was no doubt a cost to the person who got me into the park, it ended up being a bonus for our visit. Rather than needing to wait on lines for upwards of two hours, we were able to basically stroll to the front of every ride we wanted, and saw most of the park’s attractions in no time at all. Overall, having a disabled person along made the experience all the better – minus pushing around the unwieldy wheelchair, that is. A recent article in the NY Post pointed out that such a policy is, apparently, open to exploitation: some wealthy individuals are “renting” the services of disabled people who act as “tour guides” for families at Disney. For a respectable $130 an hour, disabled individuals will join families to help them cut to the front of the lines.

“How much an hour? Alright; I’m in. Just aim for the left leg…”

This article has garnered a significant amount of attention, but something about the reactions to it seems a bit strange. The reactions take one of three basic forms: (1) “I wish I had thought of that”, (2) “I don’t see the big deal”, and (3) “The rich people are morally condemnable for doing this”. Those reactions themselves, however, are not the strange part. The first response appears to be an acknowledgement that people would want to gain the benefits of skipping the long lines at Disney by exploiting some loophole in the rules. I found the experience of avoiding the long lines to be rather refreshing, and I imagine the majority of people would prefer not having to wait to having to wait. Cheating pays off, and when people can be cheaters safely, many seem to prefer it. The second typical response acknowledges that the disabled “tour guides” are making a nice amount of money in exchange for their services, with both the rich buyers and disabled sellers ending up better off than they were before. The rich people save some money relative to buying the VIP passes that allow for a similar type of line-skipping ability, get to skip the lines more efficiently, and the disabled people are much better off at the end of the day after being paid over $1,000 to go to Disney.

However, not everyone is better off from that trade: those who now have to wait in line several extra seconds per disabled party would be worse off. Understandably, many people experience moral outrage at the thought of the rich line-jumpers’ rule exploitation. The curious part is the moral outrage I did not see much of: outrage directed at the disabled people similarly exploiting the system for their own benefit. It would seem the disabled people selling their services are fully aware of what they’re doing and are engaging in the exploitative behavior intentionally, so why is there not much (if any) moral anger directed their way? By way of analogy, let’s say I wanted to kill someone, but I didn’t want to take the risks of being the one who pulled the trigger myself. So, instead of killing the person, I hired a contract killer to do the job instead. In the event the plot was uncovered, not only would the killer go to jail, but I would likely share their fate as well on a charge of conspiracy (as the vocalist for As I Lay Dying is all too familiar with at the moment). As I’ve discussed before, moral judgments are by no means restricted to the person who committed an act themselves; friends or allies may also suffer as a result of their mere perceived association. Assisting someone in committing a crime is oftentimes considered morally blameworthy, so why not here?

This raises the question as to why we see these patterns of inconsistent condemnation. Solving that problem requires both the identification of one or more factors that differ between the contract-killer cases and the disabled-Disney cases, and an explanation as to why that difference is relevant. One possible difference could be the income disparity between the parties. There seems to be a fair share of anger leveled at the richer among us, sometimes giving the impression that distaste for the rich can be built on the basis of their wealth alone. To the extent that the disabled individuals are perceived to be poor or in need of the money, this might soften any condemnation they would face. This factor is unlikely to get us all the way to where we want to go on its own, though: I don’t think people would be terribly forgiving of a contract killer who just happened to be poor and was only killing because they really needed (or really wanted) the money. Further, there’s no evidence presented in the coverage of the article to suggest that the disabled people serving as tour guides were distinctly poor.

Especially when they can make more in a day than I do in two weeks.

Wealth, in the form of money, however, only serves as a proxy measure of some other characteristic that people might wish to assess: that characteristic is likely to be the perception of need. Why might people care about the neediness of others? All else being equal, people who are perceived as being in need might make more valuable social investments than people whose needs appear relatively satiated. For instance, someone who hasn’t eaten in a day will value the same amount of food more than someone who just finished a large meal and, accordingly, the hungry individual may well be more grateful to the person who provides them with a meal; the meal has a higher marginal value to the hungry individual, so the investment value of it is likely higher as well. So, in this case, disabled individuals may be viewed as more needy than the rich, making them the more valuable potential social investment of the two. While this explanation has a certain degree of plausibility to it, there are some complicating factors. One of those issues is that one needs to not only be grateful for the help, but also capable of returning the favor at a later time in order for the investment to pay off. Though disabled people might be viewed as particularly needy, my intuition is that they’re also viewed as being less likely to be able to reciprocate the assistance for the same reasons. Similarly, while the rich people may be judged as less needy, they’d also be viewed as more likely to be able to return on an investment given. The extent to which the need and ability issues trade off with each other and affect the judgments, I can’t definitively say.

Another possible difference between contract killers and disabled guides concerns the nature of the rules themselves. Perhaps since killing for money is generally frowned upon morally, but cutting the line if you’re disabled isn’t, people don’t register the disabled person as breaking a moral rule; only the rich one. Again, this explanation might hold some degree of plausibility, but it only gets us so far. After all, the disabled people are most certainly willing accomplices in helping the rich break the moral rule. Without the help of the disabled, the rich individuals would be unable to exploit Disney’s policy and bypass the lines. Further still, Disney’s policy does allow for the disabled individuals to bring up to six guests with them. No part of the rule seems to say that those guests need to be family members, or even people the disabled individual likes; just up to six guests. While such an act may feel like it is breaking some part of the rule, it’s difficult to say precisely what part is being broken. Was I breaking the rule when my friend took me to Disney with him and we skipped the lines because he was injured? How do those cases differ? In any case, while this rule-based explanation might explain why people are more morally upset with the rich people than the disabled people, it would not explain why people, by and large, don’t seem upset with the disabled tour guides to any real extent. One could well be more upset with the people who hire killers than the contract killers themselves, but still morally condemn both parties substantially.

There is also the matter of how to deal with the issue if one wishes to condemn it. If one thinks the rule allowing disabled people to skip to the front of the line should be done away with in the first place, that would certainly stem the issue of the rich hiring the disabled, but it would also come at a cost to the disabled people who aren’t giving the tours. Alternatively, one might wish to keep the rule helping disabled people around while stopping the rich from exploiting it. If some method could successfully stop the rich from hiring the disabled, one also needs to realize that it would come at a distinct cost to the disabled tour guides as well: now, instead of having the option to earn a fantastic salary for a cushy job, that possibility will be foreclosed on. Rich tourists would instead have to spend more money on the inferior VIP tours offered by Disney, allowing them to still cut the line but without also directing the money towards the disabled. Since the policy at Disney seemed to have been put in place to benefit the disabled in the first place, combating the issue seems like something of a double-edged sword for them.

Incidentally, double-edged swords are also a leading cause of disability.

The tour guide issue, in some important ways, seems to parallel the moral rules surrounding sex: one is free to give away sex to whomever one wants, but one is often morally condemned for selling it. Similarly, disabled people can go to the parks with whomever they want and get them to the front of the line (even if those friends or family members are exploiting them for that benefit), but if they go to the parks with someone buying their services, it then becomes morally unacceptable. I find that very interesting. Unfortunately, I don’t have more than speculations as to this curious pattern of moral condemnation at the current time. What one can say about the judgments and their justifications is that, at the least, they tend towards being relatively inconsistent. This ought to be expected on the basis of morality being deployed strategically to achieve useful outcomes, rather than consistently to achieve impartiality. Thinking about the possible functions of moral judgments – that is, what useful outcomes they might be designed to bring about – can help us begin to think about what factors the cognitive mechanisms that generate them are using as inputs. It can also help us figure out why it’s morally unacceptable to sell some things that it’s acceptable to give away.

Welcome To Introduction To Psychology

In my last post, I mentioned a hypothetical relatively-average psychologist (caveat: the term doesn’t necessarily apply to any specific person, living or dead). I found him to be a bit strange, since he tended to come up with hypotheses that were relatively theory-free; there was no underlying conceptual framework he was using to draw his hypotheses. Instead, most of his research was based on some hunch or personal experience. Perhaps this relatively-average psychologist might also have made predictions on the basis of what previous research had found. For instance, if one relatively-average psychologist found that priming people to think about the elderly made them walk marginally slower, another relatively-average psychologist might predict that priming people to think of a professor would make them marginally smarter. I posited that these relatively-average psychologists might run into some issues when it comes to evaluating published research because, without a theoretical framework with which to understand the findings, all one can really consider are the statistics; without a framework, relatively-average psychologists have a harder time thinking about why some finding might make sense or not.

If you’re not willing to properly frame something, it’s probably not wall-worthy.

So, if a population of these relatively-average psychologists are looking to evaluate research, what are they supposed to evaluate it against? I suppose they could check and see if the results of some paper jibe with their set of personal experiences, hunches, or knowledge of previous research, but that seems a bit unsatisfying. Those kinds of practices would seem to make evaluations of research look more like Justice Stewart trying to define pornography: “I know [good research] when I see it”. Perhaps good research would involve projects that delivered results highly consistent with people’s general personal experiences; perhaps good research would be a project that found highly counter-intuitive or surprising results; perhaps good research would be something else still. In any case, such a practice – if widespread enough – would make the field of psychology look like a grab bag of seemingly scattered and random findings. Learning how to think about one topic in psychology (say, priming) wouldn’t be very helpful when it came to learning how to think about another topic (say, learning). That’s not to say that the relatively-average psychologists have nothing helpful at all to add, mind you; just that their additions aren’t being driven by anything other than those same initial considerations, such as hunches or personal experience. Sometimes people have good guesses; in the land of psychology, however, it can be difficult to differentiate between good and bad ones a priori in many cases.

It seems like topic-to-topic issues would be hard enough for our relatively-average psychologists to deal with, but that problem becomes magnified once the topics shift outside of what’s typical for one’s local culture, and even further when topics shift outside of one’s species. Sure; maybe male birds will abandon a female partner after a mating season if the pair is unable to produce any eggs because the male birds feel a threat to their masculinity that they defend against by reasserting their virility elsewhere. On the flip side, maybe female birds leave the pair because their sense of intrinsic motivation was undermined by the extrinsic reward of a clutch of eggs. Maybe male ducks force copulations on seemingly unwilling female ducks because male ducks use rape as a tactic to keep female ducks socially subordinate and afraid. Maybe female elephant seals aren’t as combative as their male counterparts because of sexist elephant seal culture. Then again, maybe female elephant seals don’t fight as much as males because of their locus of control or stereotype threat. Maybe all of that is true, but my prior on such ideas is that they’re unlikely to end up carrying much explanatory weight. Applied to non-human species, their conceptual issues seem to pop out a bit better. Your relatively-average psychologist, then, ends up being rather human-centric, if not a little culture- and topic-centric as well. Their focus is on what’s familiar to them, largely because what they know doesn’t help them think much about what they do not.

So let’s say that our relatively-average psychologist has been tasked with designing a college-level introduction to psychology course. This course will be the first time many of the students are being formally exposed to psychology; for the non-psychology majors in the class, it may also be their last time. This limits what the course is capable of doing, in several regards, as there isn’t much information you can take for granted. The problems don’t end there, however: the students, having a less-than-perfect memory, will generally forget many, if not the majority, of the specifics they will be taught. Further, students may never again in their life encounter the topics they learned about in the intro course, even if they do retain the knowledge about them. If you’re like most of the population, knowing the structure of a neuron or who William James was will probably never come up in any meaningful way unless you find yourself at a trivia night (and even then it’s pretty iffy). Given these constraints, how is our relatively-average psychologist supposed to give their students an education of value? Our relatively-average psychologist could just keep pouring information out, hoping some of it sticks and is relevant later. They could also focus on some specific topics, boosting retention, but at the cost of breadth and, accordingly, chance of possible relevance. They could even try to focus on a series of counter-intuitive findings in the hopes of totally blowing their students’ minds (to encourage students’ motivation to show up and stay awake), or perhaps some intended to push a certain social agenda – they might not learn much about psychology, but at least they’ll have some talking points for the next debate they find themselves in. Our relatively-average psychologist could do all that, but what they can’t seem to do well is help students learn how to think about psychology; even if the information is retained, relevant, and interesting, it might not be applicable to any other topics not directly addressed.

“Excuse me, professor: how will classical conditioning help me get laid?”

I happen to feel that we can do better than our relatively-average psychologists when designing psychology courses – especially introductory-level ones. If we can successfully provide students with a framework to think about psychology with, we don’t have to necessarily concern ourselves with whether one topic or another was covered or whether they remember some specific list of research findings, as such a framework can be applied to any topic the students may subsequently encounter. Seeing how findings “fit” into something bigger will also make the class seem that much more interesting. Granted, covering more topics in the same amount of depth is generally preferable to covering fewer, but there are very real time constraints to consider. With that limited time, I feel that giving students tools for thinking about psychological material is more valuable than providing them findings within various areas of psychology. Specific topics or findings within psychology should be used predominantly as vehicles for getting students to understand that framework; trying to do things the other way around simply isn’t viable. This will not come as a surprise to any regular reader, but the framework that I feel we ought to be teaching students is the functionalist perspective guided by an understanding of evolution by natural selection. Teaching students how to ask and evaluate questions of “what is this designed to do” is a far more valuable skill than teaching them about who Freud was or some finding that failed to replicate but is still found in the introductory textbooks.

On that front, there is both reason to be optimistic and disappointed. According to a fairly exhaustive review of introductory psychology textbooks available from 1975 to 2004 (Cornwell et al, 2005), evolutionary psychology has been gaining greater and more accurate representation: whereas the topic was almost non-existent in 1975, in the 2000s, approximately 80% of all introductory texts discussed the subject at some point. Further, the tone that the books take towards the subject has become more neutral or positive as well, with approximately 70% of textbooks treating the topic as such. My enthusiasm about the evolutionary perspective’s representation is dampened somewhat by a few other complicating factors, however. First, many of the textbooks analyzed contained inaccurate information when the topic was covered (approximately half of them overall, and the vast majority of the more recent texts that were considered, even if those inaccuracies might appear to have become more subtle over the years). Another concern is that, even when representations of evolutionary psychology were present within the textbooks, the discussion of the topic appeared relatively confined. Specifically, it didn’t appear that many important concepts (like kin selection or parental investment theory) received more than one or two paragraphs on average, if they even got that much space. In fact, the only topic that received much coverage seemed to be David Buss’s work on mating strategies; his citation count alone was greater than that of all other authors within evolutionary psychology combined. As Cornwell et al (2005) put it:

These data are troubling when one considers undergraduates might conclude that EP is mainly a science of mating strategies studied by David Buss. (p. 366)

So, the good news is that introductory psychology books are acknowledging that evolutionary psychology exists in greater and greater numbers. The field is also less likely to be harshly criticized for being something it isn’t (like genetic determinism). That’s progress. The bad news is that this information is, like many topics in introductory books appear to be, cursory, often inaccurate in at least some regards, and largely restricted to the work of one researcher within the field. Though Cornwell et al (2005) don’t specifically mention it, another factor to consider is where the information is presented within the texts. Though I have no data on hand beyond my personal sample of introductory books I’ve seen in recent years (I’d put that number around a dozen or so), evolutionary psychology is generally found somewhere in the middle of the book when it is found at all (remember, approximately 1-in-5 texts didn’t seem to even acknowledge the topic). Rather than being presented as a framework that can help students understand any topic within psychology, it seems to be presented more as just another island within psychology. In other words, it doesn’t tend to stand out.

So not exactly the portrayal I had hoped for…

Now I have heard some people who aren’t exactly fans (though not necessarily opponents, either) of evolutionary psychology suggest that we wouldn’t want to prematurely close off any alternative avenues of theoretical understanding in favor of evolutionary psychology. The sentiment seems to suggest that we really ought to be treating evolutionary psychology as just another lonely island in the ocean of psychology. Of course, I would agree in the abstract: we wouldn’t want to prematurely foreclose on any alternative theoretical frameworks. If a perspective existed that was demonstrably better than evolution by natural selection and the functionalist view in some regards – perhaps for accounting for the data, understanding it, and generating predictions – I’d be happy to make use of it. I’m trying to further my academic career as much as the next person, and good theory can go a long way. However, psychology, as a field, has had about 150 years with which to come up with anything resembling a viable alternative theoretical framework – or really, a framework at all that goes beyond description – and seems to have resoundingly failed at that task. Perhaps that shouldn’t be surprising, since evolution is currently the only good theory we have for explaining complex biological design, and psychology is biology. So, sure, I’m on board with not foreclosing on alternative ideas, just as soon as those alternatives can be said to exist.

References: Cornwell, R., Palmer, C., Guinther, P., & Davis, H. (2005). Introductory psychology texts as a view of sociobiology/evolutionary psychology’s role in psychology. Evolutionary Psychology, 3, 355-374.

I Find Your Lack Of Theory (And Replications) Disturbing

Let’s say you find yourself in charge of a group of children. Since you’re a relatively-average psychologist, you have a relatively strange hypothesis you want to test: you want to see whether wearing a red shirt will make children better at dodge ball. You happen to think that it will. I say this hypothesis is strange because you derived it from, basically, nothing; it’s just a hunch. Little more than a “wouldn’t it be cool if it were true?” idea. In any case, you want to run a test of your hypothesis. You begin by lining the students up, then you walk past them and count aloud: “1, 2, 1, 2, 1…”. All the children with a “1” go and put on a red shirt and are on a team together; all the children with a “2” go and pick a new shirt to put on from a pile of non-red shirts. They serve as your control group. The two teams then play each other in a round of dodge ball. The team wearing the red shirts comes out victorious. In fact, they win by a substantial margin. This must mean that wearing the red shirts made students better at dodge ball, right? Well, since you’re a relatively-average psychologist, you would probably conclude that, yes, the red shirts clearly have some effect. Sure, your conclusion is, at the very least, hasty and likely wrong, but you are only an average psychologist: we can’t set the bar too high.

“Jump was successful (p < 0.05)”

A critical evaluation of the research could note that just because the children were randomly assigned to groups, it doesn’t mean that both groups were equally matched to begin with. If the children in the red shirt group were just better beforehand, that could drive the effect. It’s also likely that the red shirts might have had very little to do with which team ended up winning. The pressing question here would seem to be why would we expect red shirts to have any effect? It’s not as if a red shirt makes a child quicker, stronger, or better able to catch or throw than before; at least not for any theoretical reason that comes to mind. Again, this hypothesis is a strange one when you consider its basis. Let’s assume, however, that wearing red shirts actually did make children perform better, because it helped children tap into some preexisting skill set. This raises the somewhat obvious question: why would children require a red shirt to tap into that previously-untapped resource? If being good at the game is important socially – after all, you don’t want to get teased by the other children for your poor performance – and children could do better, it seems, well, odd that they would ever do worse. One would need to posit some kind of trade-off effected by shirt color, which sounds like kind of an odd variable for some cognitive mechanism to take into account.
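The point that a lopsided win proves very little on its own can be made concrete with a quick simulation. This is only an illustrative sketch with invented numbers: two teams of equally skilled children play repeated games in which each throw eliminates a player from a randomly chosen side, so shirt color has no effect whatsoever. One team still always "wins", and fairly often by a wide margin.

```python
import random

def play_game(team_size=10):
    """Return the red team's margin of victory (negative if they lost).

    Both teams are identical by construction: each throw eliminates a
    player from either side with equal probability, so any 'effect' of
    the red shirts is pure chance.
    """
    red, control = team_size, team_size
    while red > 0 and control > 0:
        if random.random() < 0.5:
            red -= 1
        else:
            control -= 1
    return red - control

random.seed(42)  # arbitrary seed, for reproducibility
results = [play_game() for _ in range(10_000)]
red_wins = sum(r > 0 for r in results)   # games the red team won
big_wins = sum(r >= 5 for r in results)  # wins by a "substantial margin"

print(f"Red team won {red_wins / 100:.1f}% of games despite zero true effect")
print(f"...and won by 5 or more players in {big_wins / 100:.1f}% of them")
```

As expected under the null, the red team wins about half the time, and roughly one game in seven ends with red still having half its players standing. A single "substantial" victory is exactly the kind of outcome one should expect from chance alone.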

Nevertheless, like any psychologist hoping to further their academic career, you publish your results in the Journal of Inexplicable Findings. The “Red Shirt Effect” becomes something of a classic, reported in Intro to Psychology textbooks. Published reports start cropping up from different people who have had other children wear red shirts and perform various athletic tasks relatively better. While none of these papers are direct replications of your initial study, they also have children wearing red shirts outperforming their peers, so they get labeled “conceptual replications”. After all, since the concepts seem to be in order, they’re likely tapping the same underlying mechanism. Of course, these replications still don’t deal with the theoretical concerns discussed previously, so some other researchers begin to get somewhat suspicious about whether the “Red Shirt Effect” is all it’s made out to be. Part of these concerns is based around an odd facet of how publication works: positive results – those that find effects – tend to be favored for publication over studies that don’t find effects. This means that there may well be other researchers who attempted to make use of the Red Shirt Effect, failed to find anything and, because of their null or contradictory results, also failed to publish anything.

Eventually, word reaches you of a research team that attempted to replicate the Red Shirt Effect a dozen times in the same paper and failed to find anything. More troubling still, for your academic career, anyway, their results saw publication. Naturally, you feel pretty upset by this. Clearly the research team was doing something wrong: maybe they didn’t use the proper shade of red shirt; maybe they used a different brand of dodge balls in their study; maybe the experimenters behaved in some subtle way that was enough to counteract the Red Shirt Effect entirely. Then again, maybe the journal the results were published in doesn’t have good enough standards for their reviewers. Something must be wrong here; you know as much because your Red Shirt Effect was conceptually replicated many times by other labs. The Red Shirt Effect just must be there; you’ve been counting the hits in the literature faithfully. Of course, you also haven’t been counting the misses which were never published. Further, you were counting the slightly-altered hits as “conceptual replications” but not the slightly-altered misses as “conceptual disconfirmations”. You still haven’t managed to explain, theoretically, why we should expect to see the Red Shirt Effect anyway, either. Then again, why would any of that matter to you? Part of your reputation is at stake.

And these colors don’t run!  (p < 0.05)

In somewhat-related news, there have been some salty comments from social psychologist Ap Dijksterhuis aimed at a recent study (and coverage of the study, and the journal it was published in) concerning nine failures to replicate some work Ap did on intelligence priming, as well as work done by others on intelligence priming (Shanks et al, 2013). The initial idea of intelligence priming, apparently, was that priming subjects with professor-related cues made them better at answering multiple-choice, general-knowledge questions, whereas priming subjects with soccer-hooligan-related cues made them perform worse (and no; I’m not kidding. It really was that odd). Intelligence itself is a rather fuzzy concept, and it seems that priming people to think about professors – people typically considered higher in some domains of that fuzzy concept – is a poor way to make them better at multiple-choice questions. As far as I can tell, there was no theory surrounding why primes should work that way or, more precisely, why people should lack access to such knowledge in the absence of some vague, unrelated prime. At the very least, none was discussed.

It wasn’t just that the failures to replicate reported by Shanks et al (2013) were non-significant but in the right direction, mind you; they often seemed to go in the wrong direction. Shanks et al (2013) even looked for demand characteristics explicitly, but couldn’t find them either. Nine consecutive failures are surprising in light of the fact that the intelligence priming effects were previously reported as being rather large. It seems rather peculiar that large effects can disappear so quickly; they should have had a very good chance of replicating, were they real. Shanks et al (2013) rightly suggest that many of the confirmatory studies of intelligence priming, then, might represent publication bias, researcher degrees of freedom in analyzing data, or both. Thankfully, the salty comments of Ap reminded readers that: “the finding that one can prime intelligence has been obtained in 25 studies in 10 different labs”. Sure; and when a batter in the MLB only counts the times he hit the ball while at bat, his batting average would be a staggering 1.000. Counting only the hits and not the misses will surely make it seem like hits are common, no matter how rare they are. Perhaps Ap should have thought about professors more before writing his comments (though I’m told thinking about primes ruins them as well, so maybe he’s out of luck).
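The counting-only-the-hits problem is easy to demonstrate numerically. The sketch below uses invented numbers: suppose many labs each test an effect that does not exist. Under a true null, p-values are uniformly distributed, so a conventional 5% false-positive rate still produces a steady trickle of "significant" results; if journals print only those, the published record looks like unanimous support.

```python
import random

random.seed(0)   # arbitrary seed, for reproducibility
N_LABS = 500     # hypothetical number of labs testing a non-existent effect
ALPHA = 0.05     # conventional significance threshold

# When the null hypothesis is true, each study's p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(N_LABS)]

published = [p for p in p_values if p < ALPHA]     # journals favor the "hits"
file_drawer = [p for p in p_values if p >= ALPHA]  # the misses go unpublished

print(f"{len(published)} 'successful' studies published")
print(f"{len(file_drawer)} failures sitting in the file drawer")
# The published literature, read on its own, confirms the effect every time:
print(f"Published 'success rate': {len(published) / len(published):.0%}")
```

With 500 labs and a 5% false-positive rate, roughly 25 "confirmations" emerge from nothing at all, which is the same order as the "25 studies in 10 different labs" being cited; the hundreds of misses never enter the count.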

I would like to add that there were similarly salty comments leveled by another social psychologist, John Bargh, when his work on priming elderly stereotypes and walking speed failed to replicate (though John has since deleted his posts). The two cases bear some striking similarities: claims of other “conceptual replications”, but no claims of “conceptual failures to replicate”; personal attacks on the credibility of the journal publishing the results; personal attacks on the researchers who failed to replicate the finding; even personal attacks on the people reporting about the failures to replicate. More interestingly, John also suggested that the priming effect was apparently so fragile that even minor deviations from the initial experiment could throw the entire thing into disarray. Now it seems to me that if your “effect” is so fleeting that even minor tweaks to the research protocol can cancel it out completely, then you’re really not dealing with anything of much importance, even were the effect real. That’s precisely the kind of shooting-yourself-in-the-foot a “smarter” person might have considered leaving out of their otherwise persuasive tantrum.

“I handled the failure to replicate well (p < 0.05)”

I would also add, for the sake of completeness, that priming effects of stereotype threat haven’t replicated well either. Oh, and the effects of depressive realism don’t show much promise. This brings me to my final point on the matter: given the risks posed by researcher degrees of freedom and publication bias, it would be wise to enact better safeguards against this kind of problem. Replications, however, require researchers willing to do them (and they can be low-reward, discouraged activities) and journals willing to publish them with sufficient frequency (which many currently do not). Accordingly, I feel replications can only take us so far in fixing the problem. A simple – though only partial – remedy for the issue is, I feel, to require the inclusion of actual theory in psychological research; evolutionary theory in particular. While it does not stop false positives from being published, it at least allows other researchers and reviewers to more thoroughly assess the claims being made in papers. This allows poor assumptions to be better weeded out and better research projects crafted to address them directly. Further, updating old theory and providing new material is a personally-valuable enterprise. Without theory, all you have is a grab bag of findings, some positive, some negative, and no idea what to do with them or how they are to be understood. Without theory, things like intelligence priming – or Red Shirt Effects – sound valid.

References: Shanks, D., Newell, B., Lee, E., Balakrishnan, D., Ekelund, L., Cenac, Z., Kavvadia, F., & Moore, C. (2013). Priming intelligent behavior: An elusive phenomenon. PLoS ONE, 8(4). DOI: 10.1371/journal.pone.0056515