Smart People Are Good At Being Dumb In Politics

While I do my best to keep politics out of my life – usually by selectively blocking people who engage in too much proselytizing via link spamming on social media – I will never truly be rid of it. I do my best to cull my exposure to politics, not because I am lazy and looking to stay uninformed about the issues, but rather because I don't particularly trust most of the sources of information I receive to leave me better informed than when I began. To put the idea in a simple phrase: people are biased. In these socially-contentious domains, we tend to look for evidence that supports our favored conclusions first, and only stop to evaluate it later, if we do at all. If I can't trust the conclusions of such pieces to be accurate, I would rather not waste my time with them at all, as I'm not looking to impress a particular partisan group with my agreeable beliefs. Naturally, since I find myself uninterested in politics – perhaps even going so far as to say I'm biased against such matters – this should mean I am more likely to approve of research that concludes people engaged with political issues aren't especially good at reaching empirically-correct conclusions. Speaking of which…

“Holy coincidences, Batman; let’s hit them with some knowledge!”

A recent paper by Kahan et al (2013) examined how people’s political beliefs affected their ability to reach empirically-sound conclusions in the face of relevant evidence. Specifically, the authors were testing two competing theories for explaining why people tended to get certain issues wrong. The first of these is referred to as the Science Comprehension Thesis (SCT), which proposes that people tend to get different answers to questions like, “Is global warming affected by human behavior?” or “Are GMOs safe to eat?” simply because they lack sufficient education on such topics or possess poor reasoning skills. Put in more blunt terms, we might (and frequently do) say that people get the answers to such questions wrong because they’re stupid or ignorant. The competing theory the authors propose is called the Identity-Protective Cognition Thesis (ICT) which suggests that these debates are driven more by people’s desire to not be ostracized by their in-group, effectively shutting off their ability to reach accurate conclusions. Again, putting this in more blunt terms, we might (and I did) say that people get the answers to such questions wrong because they’re biased. They have a conclusion they want to support first, and evidence is only useful inasmuch as it helps them do that.

Before getting to the matter of politics, though, let’s first consider skin cream. Sometimes people develop unpleasant rashes on their skin and, when that happens, people will create a variety of creams and lotions designed to help heal the rash and remove its associated discomfort. However, we want to know if these treatments actually work; after all, some rashes will go away on their own, and some rashes might even get worse following the treatment. So we do what any good scientist does: we conduct an experiment. Some people will use the cream while others will not, and we track who gets better and who gets worse. Imagine, then, that you are faced with the following results from your research: of the people who did use the skin cream, 223 of them got better, while 75 got worse; of the people who did not use the cream, 107 got better, while 21 got worse. From this, can we conclude that the skin cream works?

A little bit of division tells us that, among those who used the cream, about 3 people got better for each 1 who got worse; among those not using the cream, roughly 5 people got better for each 1 who got worse. Comparing the two ratios, we can conclude that the skin cream is not effective; if anything, it's having exactly the opposite effect. If you haven't guessed by now, this is precisely the problem that Kahan et al (2013) posed to 1,111 US adults (though they also flipped the numbers between the conditions so that sometimes the treatment was effective). As it turns out, this problem is by no means easy for a lot of people to solve: only about half the sample was able to reach the correct conclusion. As one might expect, though, the participants' numeracy – their ability to use quantitative skills – did predict their ability to get the right answer: the highly-numerate participants got the answer right about 75% of the time; those in the low-to-moderate range of numeracy got it right only about 50% of the time.
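For readers who want the arithmetic spelled out, here is a minimal sketch of the comparison the problem requires, using the counts reported above (the function and variable names here are purely illustrative):

```python
# Ratio comparison for the hypothetical skin-cream experiment described above.
def improvement_ratio(got_better, got_worse):
    """Return how many patients improved for each one who got worse."""
    return got_better / got_worse

treated = improvement_ratio(got_better=223, got_worse=75)     # ~2.97 improved per 1 worsened
untreated = improvement_ratio(got_better=107, got_worse=21)   # ~5.10 improved per 1 worsened

print(f"Used the cream:    {treated:.2f} improved per 1 who worsened")
print(f"Skipped the cream: {untreated:.2f} improved per 1 who worsened")
print("Cream looks helpful" if treated > untreated else "Cream looks unhelpful")
```

The trap, of course, is that the raw counts favor the cream (223 improved versus 107), and only the ratios reveal that the untreated group actually fared better.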

“I need it for a rash. That’s my story and I’m sticking to it”

Kahan et al (2013) then switched up the story. Instead of participants reading about a skin cream, they instead read about gun legislation that banned citizens from carrying handguns concealed in public; instead of looking at whether a rash went away, they examined whether crime in the cities that enacted such bans went up or down, relative to those cities that did not. Beyond the change in variables, all the numbers remained exactly the same. Participants were asked whether the gun ban was effective at reducing crime.  Again, people were not particularly good at solving this problem either – as we would expect – but an interesting result emerged: the most numerate subjects were now only solving the problem correctly 57% of the time, as compared with 75% in the skin-cream group. The change of topic seemed to make people’s ability to reason about these numbers quite a bit worse.

Breaking the data down by political affiliations made it clear what was going on. The more numerate subjects were, again, more likely to get the answer to the question correct, but only when it accorded with their political views. The most numerate liberal democrats, for instance, got the answer right when the data showed that concealed carry bans resulted in decreased crime; when crime increased, however, they were not appreciably better at reaching that conclusion relative to the less-numerate democrats. This pattern was reversed in the case of conservative republicans: when the concealed carry bans resulted in increased crime, the more numerate ones got the question right more often; when the ban resulted in decreased crime, performance plummeted.

More interestingly still, the gap in performance was greatest for the more-numerate subjects. The average difference in getting the right answer among the highly-numerate individuals was about 45% between cases in which the conclusion of the experiment did or did not support their view, while it was only 20% in the case of the less-numerate ones. Worth noting is that these differences did not appear when people were thinking about the non-partisan skin-cream issue. In essence, smart people were either not using their numeracy skills regularly  in cases where it meant drawing unpalatable political conclusions, or they were using them and subsequently discarding the “bad” results. This is an empirical validation of my complaints about people ignoring base rates when discussing Islamic terrorism. Highly-intelligent people will often get the answers to these questions wrong because of their partisan biases, not because of their lack of education. They ought to know better – indeed, they do know better – but that knowledge isn’t doing them much good when it comes to being right in cases where that means alienating members of their social group.

That future generations will appreciate your accuracy is only a cold comfort

At the risk of repeating this point, numeracy seemed to increase political polarization, not reduce it. These abilities are being used more to metaphorically high-five in-group members than to be accurate. Kahan et al (2013) try to explain this effect in two ways, one of which I think is more plausible than the other. On the implausible front, the authors suggest that using these numeracy abilities is a taxing, high-effort activity that people try to avoid whenever possible. As such, people with this numeracy ability only engage in effortful reasoning when their initial beliefs are threatened by some portion of the data. I find this idea strange because I don't think that – metabolically – these kinds of tasks are particularly costly or effortful. On the more plausible front, Kahan et al (2013) suggest that these conclusions have a certain kind of rationality behind them: if drawing an unpalatable conclusion would alienate important social relations that one depends on for one's own well-being, then an immediate cost/benefit analysis can favor being wrong. If you are wrong about whether GMOs are harmful, the immediate effects on you are likely quite small (unless you're starving); on the other hand, if your opinion about them puts off your friends, the immediate social effects are quite large.

In other words, I think people sometimes interpret data in incorrect ways to suit their social goals, but I don’t think they avoid interpreting it properly because doing so is difficult.

References: Kahan, D., Peters, E., Dawson, E., & Slovic, P. (2013). Motivated numeracy and enlightened self-government. Yale Law School, Public Law Working Paper No. 307.

The Politics Of Fear

There's an apparent order of operations frequently observed in human reasoning: politics first, facts second. People appear perfectly willing to accept flawed arguments or incorrect statistics they would otherwise immediately reject, just so long as they support the reasoner's point of view; Greg Cochran documented a few such cases (in his simple and eloquent style) a few days ago on his blog. Such a bias in our reasoning ability is not only useful – inasmuch as persuading people to join your side of a dispute tends to carry benefits, regardless of whether you're right or wrong – but it's also common: we can see evidence of it in every group of people, from the uneducated to those with PhDs and decades of experience in their field. In my case, the most typical contexts in which I encounter examples of this facet of our psychology – like many of you, I would suspect – are posts shared or liked by others on social media. Recently, these links have been cropping up concerning the topic of fear. More precisely, there are a number of writers who think that people (or at least those who disagree with them) are behaving irrationally regarding their fears of Islamic terrorism and the threat it poses to their lives. My goal here is not to say that people are being rational or irrational about such things – I happen to have a hard time finding substance in such terms – but rather to provide a different perspective than the ones offered by the authors; one that is likely in the minority among my professional and social peers.

You can’t make an omelette without alienating important social relations 

The first article on the chopping block was published on the New York Times website in June of last year. The article is entitled, "Homegrown extremists tied to deadlier toll than Jihadists in U.S. since 9/11," and it attempts to persuade the reader that we, as a nation, are all too worried about the threat Islamic terrorism poses. In other words, American fears of terrorism are wildly out of proportion to the actual threat it presents. This article attempted to highlight the fact that, in terms of the number of bodies, right-wing, anti-government violence was twice as dangerous as Jihadist attacks in the US since 9/11 (48 deaths from non-Muslims; 26 by Jihadists). Since we seem to dedicate more psychological worry to Islam, something was wrong there. There are three important parts of that claim to be considered: first, a very important word in that last sentence is "was," as the body count evened out by early December in that year (currently at 48 to 45). This updated statistic yields some interesting questions: were those people who feared both types of attacks equally (if they existed) being rational or not on December 1st? Were those who feared right-wing attacks more than Muslim ones suddenly being irrational on the 2nd? The idea these questions are targeting is whether or not fears can only be viewed as proportionate (or rational) with the aid of hindsight. If that's the case, rather than saying that some fears are overblown or irrational, a more accurate statement would be that such fears "have not yet been founded." Unless those fears have a specific cut-off date (e.g., the fear of being killed in a terrorist attack during a given time period), making claims about their validity is something that one cannot do particularly well.

The  second important point of the article to consider is that the count begins one day after a Muslim attack that killed over 3,000 people (immediately; that doesn’t count those who were injured or later died as a consequence of the events). Accordingly, if that count is set back just slightly, the fear of being killed by a Muslim terrorist attack would be much more statistically founded, at least in a very general sense. This naturally raises the question of why the count starts when it does. The first explanation that comes to mind is that the people doing the counting (and reporting about the counting) are interested in presenting a rather selective and limited view of the facts that support their case. They want to denigrate the viewpoints of their political rivals first, and so they select the information that helps them do that while subtly brushing aside the information that does not. That seems like a fairly straightforward case of motivated reasoning, but I’m open to someone presenting a viable alternative point of view as to why the count needs to start when it does (such as, “their primary interest is actually in ignoring outliers across the board”).    

Saving the largest for last, the final important point of the article to consider is that it appears to neglect the matter of base rates entirely. The attacks labeled as "right-wing" left a greater absolute number of bodies (at least at the time it was written), but that does not mean we learned that right-wing attacks (or individuals) are more dangerous. To see why, we need to consider another question: how many bodies should we have expected? The answer to that question is by no means simple, but we can do a (very) rough calculation. In the US, approximately 42% of the population self-identifies as Republican (our right-wing population), while about 1% identifies as Muslim. If both groups were equally likely to kill others, then we should expect that the right-wing terrorist groups leave 42 bodies for every 1 that the Muslim group does. That ratio would reflect a genuine parity in threat. Given a count suggesting that this ratio was 2-to-1 at the time the article was written, and 1-to-1 later that same year, we might reasonably conclude that the Muslim population, per individual member, is actually quite a bit more prone to killing others in terrorist attacks; if we factor in the 9/11 number, that ratio becomes something closer to 0.01-to-1, which is a far cry from demographic expectations.
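To make that base-rate logic concrete, here is a rough back-of-the-envelope version of the calculation, in the same sketch style used earlier; the population shares and death counts are the approximate figures cited in the text, and the 3,000 figure is a rounded stand-in for the "over 3,000" mentioned above:

```python
# Back-of-the-envelope base-rate check using the approximate figures cited above.
right_wing_share = 0.42   # share of US population identifying as Republican
muslim_share = 0.01       # share of US population identifying as Muslim

# If both groups were equally likely to kill, death counts should scale with group size.
expected_ratio = right_wing_share / muslim_share   # 42 right-wing deaths per Jihadist death

counts = [
    ("At publication", 48, 26),
    ("By December of that year", 48, 45),
    ("Counting 9/11 (rounded)", 48, 45 + 3000),
]

for label, right_wing_deaths, jihadist_deaths in counts:
    observed_ratio = right_wing_deaths / jihadist_deaths
    print(f"{label}: observed {observed_ratio:.3f}-to-1 vs. expected {expected_ratio:.0f}-to-1")
```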

Thankfully, you don’t have to report inconvenient numbers

Another example comes from The New Yorker, published just the other day (perhaps it is something about New York that makes people publish these pieces), entitled, "Thinking rationally about terror." The insinuation, as before, is that people's fears about these issues do not correspond well to the reality. In order to make the case that people's fears are wrongheaded, Lawrence Krauss leans on a few examples. One of these concerns the recent shootings in Paris. According to Lawrence, these attacks represented an effective doubling of the overall murder rate in Paris from the previous year (2.6 murders per 100,000 residents), but that's really not too big of a deal because that just makes Paris as dangerous as New York City, and people aren't that worried about being killed in NYC (or are they? No data on that point is mentioned). In fact, Lawrence goes on to say, the average Paris resident is about as likely to have been killed in a car accident during any given year as to have been killed during the mass shooting. This point is raised, presumably, to highlight an irrationality: people aren't concerned about being killed by cars for the most part, so they should be just as unconcerned about being killed by a terrorist if they want to be rational.

This point about cars is yet another fine example of an author failing to account for base rates. Looking at the raw body count is not enough, as people in Paris likely interact with hundreds (or perhaps even thousands; I don't have any real sense for that number) of cars every day for extended periods of time. By contrast, I would imagine Paris residents interact markedly less frequently with Muslim extremists. Per unit of exposure time, then, cars likely pose a much, much lower threat of death than Muslim extremists do. Further, people do fear the harm caused by cars (we look both ways before crossing a street, we restrict licenses to individuals who demonstrate their competence to handle the equipment, we set speed limits, and so on), and it is likely that the harm they inflict would be much greater if such fears were not present. In much the same way, it is also possible that the harms caused by terrorist groups would be much higher if people decided that such things were not worth getting worked up about and took no steps to assure their safety early on. Do considerations of these base rates and future risks fall under the umbrella of "rational" thinking? I would like to think so, and yet they seemed so easily overlooked by someone chiding others for being irrational: Lawrence at least acknowledges that future terror risks might increase for places like Paris, but notes that that kind of life is pretty much normal for Israel; the base-rate problem is not even mentioned.
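One way to see the per-exposure point is with a toy calculation. Every number below is a made-up placeholder (the text itself offers no real figures), chosen only to show how equal raw body counts can imply very different risks once deaths are normalized by time spent exposed to each hazard:

```python
# Entirely hypothetical numbers, used only to illustrate normalizing risk by exposure time.
population = 1_000_000
car_deaths_per_year = 100              # annual traffic deaths (hypothetical)
attack_deaths_per_year = 100           # annual terror deaths (hypothetical; same raw count)

hours_near_cars_per_person = 700       # hours per year a resident spends around traffic (hypothetical)
hours_near_attackers_per_person = 0.5  # hours per year of exposure to would-be attackers (hypothetical)

def deaths_per_million_exposure_hours(deaths, hours_per_person):
    total_exposure_hours = hours_per_person * population
    return deaths / total_exposure_hours * 1_000_000

print("Cars:     ", round(deaths_per_million_exposure_hours(car_deaths_per_year, hours_near_cars_per_person), 3))
print("Attackers:", round(deaths_per_million_exposure_hours(attack_deaths_per_year, hours_near_attackers_per_person), 3))
```

Identical body counts, wildly different per-hour risks; that is the comparison the raw-count framing hides.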

While there's more I could say on these topics, the major point I hope to get across is this: if you want to know why people experience fear about certain topics, it's probably best to not start your analysis with the assumption that these people are wrong to feel the way they do. Letting one's politics do the thinking is not a reliable way to get at a solid understanding of anything, even if it might help further your social goals. If we were interested in understanding the "why" behind such fears, we might begin, for instance, with the prospect that many people likely fear historically-relevant, proximate cues of danger, including groups of young, violent males making threats against one's life based on group membership, and cases where those threats are followed through on and made credible. Even if such individuals currently reside many miles away, and even if only a few such threats have been acted upon, and even if the dangerous ones represent a small minority of the population, fearing them for one's own safety does not – by default – seem to be an unreasonable thing to do; neither does fearing them for the safety of one's relatives, social relations, or wider group members.

“My odds of getting hurt were low, so this isn’t worth getting worked up over”

Now, as I mentioned, all of this is not to say that people ought to fear some particular group or not; my current interests do not reside in directing your fears or their scope. I have no desire to tell you that your fears are well founded or completely off base (in no small part because I earnestly don't know if they are). My interests are much more general than that, as this kind of thinking is present in all kinds of different contexts. There's a real problem in beginning with the assumption that your perspective is true and starting your search for evidence only after the fact. The problem can run so deep that I actually find myself surprised to see someone take up the position that they were wrong after an earnest dig through the available evidence. Such an occurrence should be commonplace if rationality or truth were the goal in these debates, as people get things wrong (at least to some extent) all the time, especially when such opinions are formed in advance of the relevant knowledge. Admitting to incorrect thinking does require, however, that one is willing to, at least occasionally, sacrifice a belief that used to be held quite dear; it requires looking like a fool publicly now and again; it even requires working against your own interests sometimes. These are things you will have to do; not just things that the opposition will. As such, I suspect these kinds of inadequate lines of reasoning will continue to pervade such discussions, which is a bit of a problem when the lives of others literally hang in the balance.

Truth And Non-Consequences

A topic I’ve been giving some thought to lately concerns the following question: are our moral judgments consequentialist or nonconsequentialist? As the words might suggest, the question concerns to what extent our moral judgments are based in the consequences that result from an action or the behavior per se that people engage in. We frequently see a healthy degree of inconsistency around the issue. Today I’d like to highlight a case I came across while rereading The Blank Slate, by Steven Pinker. Here’s part of what Steven had to say about whether any biological differences between groups could justify racism or sexism:

“So could discoveries in biology turn out to justify racism and sexism? Absolutely not! The case against bigotry is not a factual claim that humans are biologically indistinguishable. It is a moral stance that condemns judging an individual according to the average traits of certain groups to which the individual belongs.”

This seems like a reasonable statement, on the face of it. Differences between groups, on the whole, do not necessarily mean differences on the same trait between any two given individuals. If a job calls for a certain height, in other words, we should not discriminate against women just because men tend to be taller. That average difference does not mean that men and women are never the same height, or that the reverse relationship never holds.

Even if it generally does…

Nevertheless, there is something not entirely satisfying about Steven's position, namely that people are not generally content to say "discrimination is just wrong". People like to try and justify their stance that it is wrong, lest the proposition be taken to simply be an arbitrary statement with no more intrinsic appeal than "trimming your beard is just wrong". Steven, like the rest of us, thus tries to justify his moral stance on the issue of discrimination:

"Regardless of IQ or physical strength or any other trait that can vary, all humans can be assumed to have certain traits in common. No one likes being enslaved. No one likes being humiliated. No one likes being treated unfairly, that is, according to traits that the person cannot control. The revulsion we feel toward discrimination and slavery comes from a conviction that however much people vary on some traits, they do not vary on these."

Here, Steven seems to be trying to have his nonconsequentialist cake and eat it too*. If the case against bigotry is "absolutely not" based on discoveries in biology or a claim that people are biologically indistinguishable, then it seems peculiar to reference biological facts concerning some universal traits to try and justify one's stance. Would the discovery that certain people might dislike being treated unfairly to different degrees justify doing so, all else being equal? If it would, the first quoted idea is wrong; if it would not, the second statement doesn't make much sense. What is also notable about these two quotes is that they are not cherry-picked from different sections of the book; the second quote comes from the paragraph immediately following the first. I found their juxtaposition rather striking.

With respect to the consequentialism debate, the fact that people try to justify their moral stances in the first place seems strange from a nonconsequentialist perspective: if a behavior is just wrong, regardless of the consequences, then it needs no explanation or justification. Stealing, in that view, should be just wrong; it shouldn't matter who stole from whom, or what the value of the stolen goods was. A child stealing a piece of candy from a corner store should be just as wrong as an adult stealing a TV from Best Buy; it shouldn't matter that Robin Hood stole from the rich and gave to the poor, because stealing is wrong no matter the consequences and he should be condemned for it. Many people would, I imagine, agree that not all acts of theft are created equal though. On the topic of severity, many people would also agree that murder is generally worse than theft. Again, from a nonconsequentialist perspective, this should only be the case for arbitrary reasons, or at least reasons that have nothing at all to do with the fact that murder and theft have different consequences. I have tried to think of what those other, nonconsequentialist reasons might be, but I appear to suffer from a failure of imagination in that respect.

Might there be some findings that ostensibly support the notion that moral judgments are, at least in certain respects, nonconsequentialist? Yes; in fact, there are. The first of these is a pair of related dilemmas known as the trolley and footbridge dilemmas. In both contexts one life can be sacrificed so that five lives are saved. In the former dilemma, a train heading towards five hikers can be diverted to a side track where there is only a single hiker; in the latter, a train heading towards five hikers can be stopped by pushing a person in front of it. In both cases the welfare outcomes are identical (one dead; five not), so it seems that if moral judgments only track welfare outcomes, there should be no difference between these scenarios. Yet there are: about 90% of people will support diverting the train, and only 10% tend to support pushing (Mikhail, 2007). This would certainly be a problem for any theory of morality that claimed the function of moral judgments more broadly is to make people better off on the whole. Moral judgments that fail to maximize welfare would be indicative of poor design for such a function.

Like how this bathroom was poorly optimized for personal comfort.

There are concerns with the idea that this finding supports moral nonconsequentialism, however: namely, the judgments of moral wrongness for pushing or redirecting are not definitively nonconsequentialist. People oppose pushing others in front of trains, I would imagine, because of the costs that pushing inflicts on the individual being pushed. If the dilemma were reworded to one in which acting on a person would not harm them but save the lives of others, you'd likely find very little opposition to it (i.e. pushing someone in front of a train in order to send a signal to the driver, but with enough time so the pushed individual can exit the track and escape harm safely). This relationship holds in the trolley dilemma: when an empty side track is available, redirection to said track is almost universally preferred, as might be expected (Huebner & Hauser, 2011). One who favors the nonconsequentialist account might suggest that such a manipulation is missing the point: after all, it's not that pushing someone in front of a train is immoral, but rather that killing someone is immoral. This rejoinder would seem to blur the issue, as it suggests, somewhat confusingly, that people might judge certain consequences non-consequentially. Intentionally shooting someone in the head, in this line of reasoning, would be wrong not because it results in death, but because killing is wrong; death just so happens to be a necessary consequence of killing. Either I'm missing some crucial detail or the distinction is unhelpful, so I won't spend any more time on it.

Another line of research touted as evidence of moral nonconsequentialism is the work done on moral dumbfounding (Haidt et al, 2000). In brief, research has found that when presented with cases where objective harms are absent, many people continue to insist that certain acts are wrong. The most well-known of these involves a brother-sister case of consensual incest on a single occasion. The sister is using birth control and the brother wears a condom; they keep their behavior a secret and feel closer because of it. Many subjects (about 80%) insisted that the act was wrong. When pressed for an explanation, many initially referenced harms that might occur as a result, though these harms were always countered by the context (no pregnancy, no emotional harm, no social stigma, etc). From this, it was concluded that conscious concerns for harm appear to represent post hoc justifications for an intuitive moral judgment.

One needs to be cautious in interpreting these results as evidence of moral nonconsequentialism, though, and a simple example would explain why. Imagine that, in that experiment, what was being asked was not whether the incest itself was wrong, but instead why the brother and sister had sex in the first place. Due to the dual contraceptive use, there was essentially no probability of conception. Therefore, a similar interpretation might say, this shows that people are not consciously motivated to have sex because of children. While true enough that most acts of intercourse might not be motivated by the conscious desire for children, and while the part of the brain that's talking might not have access to information concerning how other cognitive decision rules are enacted, it doesn't mean the probability of conception plays no role in shaping the decision to engage in intercourse; despite what others have suggested, sexual pleasure per se is not adaptive. In fact, I would go so far as to say that the moral dumbfounding results are only particularly interesting because, most of the time, harm is expected to play a major role in our moral judgments. Pornography manages to "trick" our evolved sexual motivation systems by providing them with inputs similar to those that reliably correlate with the potential for conception; perhaps certain experimental designs – like the case of brother-sister incest – manage to similarly "trick" our evolved moral systems by providing them with inputs similar to those that reliably correlate with harm.

Or illusions; whatever your preferred term is.

In terms of making progress in the consequentialism debate, it seems useful to do away with the idea that moral condemnation functions to increase welfare in general: not only are such claims clearly empirically falsified, they could only even be plausible in the realm of group selection, which is a topic we should have all stopped bothering with long ago. That moral judgments fail the test of group welfare improvement, however, does not suddenly make the nonconsequentialist position tenable. There are more ways of being consequentialist than with respect to the total amount of welfare increase. It would be beneficial to turn our eye towards considering strategic welfare consequences that are likely to accrue to actors, second parties, and third parties as a result of these behaviors. In fact, we should be able to use such considerations to predict contexts under which people should flip back and forth between consciously favoring consequentialist and nonconsequentialist kinds of moral reasoning. Evolution is a consequentialist process, and we should expect it to produce consequentialist mechanisms. To the extent we are not finding them, the problem might owe itself more to a failure of our expectations for the shape of these consequences than to an actual nonconsequentialist mechanism.

References: Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript.

Huebner, B. & Hauser, M. (2011). Moral judgments about altruistic self-sacrifice: When philosophical and folk intuitions clash. Philosophical Psychology, 24, 73-94.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Science, 11, 143-151.

 

*Later, Steven writes:

“Acknowledging the naturalistic fallacy does not mean that facts about human nature are irrelevant to our choices…Acknowledging the naturalistic fallacy implies only that discoveries about human nature do not, by themselves, dictate our choices…”

I am certainly sympathetic to such arguments and, as usual, Steven's views on the topic are more nuanced than these quotes alone are capable of displaying. Steven does, in fact, suggest that all good justifications for moral stances concern harms and benefits. Those two particular quotes are only used to highlight the frequent inconsistencies between people's stated views.

Classic Research In Evolutionary Psychology: Reasoning

I've consistently argued that evolutionary psychology, as a framework, is a substantial and, in many ways, vital remedy to some widespread problems: it allows us to connect seemingly disparate findings under a common understanding, and, while the framework is by itself no guarantee of good research, it forces researchers to be more precise in their hypotheses, allowing for conceptual problems with hypotheses and theories to be more transparently observed and addressed. In some regards the framework is quite a bit like the practice of explaining something in writing: while you may intuitively feel as if you understand a subject, it is often not until you try to express your thoughts in actual words that you find your estimation of your understanding has been a bit overstated. Evolutionary psychology forces our intuitive assumptions about the world to be made explicit, often to our own embarrassment.

“Now that you mention it, I’m surprised I didn’t notice that sooner…”

As I've recently been discussing one of the criticisms of evolutionary psychology – that the field is overly focused on domain-specific cognitive mechanisms – I feel that now would be a good time to review some classic research that speaks directly to the topic. Though the research to be discussed here is itself of recent vintage (Cosmides, Barrett, & Tooby, 2010), the topic – whether our logical reasoning abilities are best conceived of as domain-general or domain-specific (that is, whether they work equally well regardless of content, or whether content area is important to their proper functioning) – has been examined for some time. We ought to expect domain specificity in our cognitive functioning for two primary reasons (though these are not the only reasons): the first is that specialization yields efficiency. The demands of solving a specific task are often different from the demands of solving a different one, and to the extent that those demands do not overlap, it becomes difficult to design a tool that solves both problems readily. Imagining a tool that can both open wine bottles and cut tomatoes is hard enough; now imagine adding on the requirement that it also needs to function as a credit card and the problem becomes exceedingly clear. The second reason is outlined well by Cosmides, Barrett, & Tooby (2010) and, as usual, they express it more eloquently than I would:

The computational problems our ancestors faced were not drawn randomly from the universe of all possible problems; instead, they were densely clustered in particular recurrent families.

Putting the two together, we end up with the following: humans tend to face a non-random set of adaptive problems in which the solution to any particular one tends to differ from the solution to any other. As domain-specific mechanisms solve problems more efficiently than domain-general ones, we ought to expect the mind to contain a large number of cognitive mechanisms designed to solve these specific and consistently-faced problems, rather than only a few general-purpose mechanisms more capable of solving many problems we do not face, but poorly-suited to the specific problems we do. While such theorizing sounds entirely plausible and, indeed, quite reasonable, without empirical support for the notion of domain-specificity, it’s all so much bark and no bite.

Thankfully, empirical research abounds in the realm of logical reasoning. The classic tool used to assess people's ability to reason logically is the Wason selection task. In this task, people are presented with a logical rule taking the form of "if P, then Q", and a number of cards representing P, Q, ~P, and ~Q (i.e. "If a card has a vowel on one side, then it has an even number on the other", with cards showing A, B, 1 & 2). They are asked to point out the minimum set of cards that would need to be checked to test the initial "if P, then Q" statement. People's performance on the task is generally poor, with only around 5-30% of people getting it right on their first attempt. That said, performance on the task can become remarkably good – up to around 65-80% of subjects getting the correct answer – when the task is phrased as a social contract ("If someone [gets a benefit], then they need to [pay a cost]", the most well known being "If someone is drinking, then they need to be at least 21"). Despite the underlying logical form not being altered, the content of the Wason task matters greatly in terms of performance. This is a difficult finding to account for if one holds to the idea of a domain-general logical reasoning mechanism that functions the same way in all tasks involving formal logic. Noting that content matters is one thing, though; figuring out how and why content matters becomes something of a more difficult task.
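To make the task's logic explicit, here is a minimal sketch (my own illustration, not code from any of the studies discussed) of which cards actually need to be turned over in the abstract vowel/even-number version above; only the P card and the not-Q card can falsify the rule:

```python
# Which cards can falsify "if a card has a vowel on one side, it has an even number on the other"?
def cards_to_check(cards):
    """Return the cards that must be turned over to test the rule."""
    def shows_p(card):       # card shows a vowel (P)
        return card.isalpha() and card.upper() in "AEIOU"

    def shows_not_q(card):   # card shows an odd number (not-Q)
        return card.isdigit() and int(card) % 2 == 1

    # Only P and not-Q cards can reveal a violation; not-P and Q cards are uninformative.
    return [card for card in cards if shows_p(card) or shows_not_q(card)]

print(cards_to_check(["A", "B", "1", "2"]))   # ['A', '1'] -- the answer most subjects miss
```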

While some might suggest that content simply matters as a function of familiarity – as people clearly have more experience with age restrictions on drinking and other social situations than vaguer stimuli – familiarity doesn’t help: people will fail the task when it is framed in terms of familiar stimuli and people will succeed at the task for unfamiliar social contracts. Accordingly, criticisms of the domain-specific social contract (or cheater-detection) mechanism shifted to suggest that the mechanism at work is indeed content-specific, but perhaps not specific to social contracts. Instead, the contention was that people are good at reasoning about social contracts, but only because they’re good at reasoning about deontic categories – like permissions and obligations – more generally. Assuming such an account were accurate, it remains debatable as to whether that mechanism would be counted as a domain-general or domain-specific one. Such a debate need not be had yet, though, as the more general account turns out to be unsupported by the empirical evidence.

We’re just waiting for critics to look down and figure it out.

While all social contracts involve deontic logic, not all deontic logic involves social contracts. If the more general account of deontic reasoning were true, we ought not to expect performance differences between the former and latter types of problems. In order to test whether such differences exist, Cosmides, Barrett, & Tooby's (2010) first experiment involved presenting subjects with a permission rule – "If you do P, you must do Q first" – varying whether P was a benefit (going out at night), neutral (staying in), or a chore (taking out the trash; Q, in this case, involved tying a rock around your ankle). When the rule was a social contract (the benefit), performance was high on the Wason task, with 80% of subjects answering correctly. However, when the rule involved staying in, only 52% of subjects got it right; that number was even lower in the garbage condition, with only 44% accuracy among subjects. Further, this same pattern of results was subsequently replicated in a new context involving filing/signing forms as well. These results are quite difficult to account for with a more-general permission schema, as all the conditions involve reasoning about permissions; they are, however, consistent with the predictions from social contract theory, as only the contexts involving some form of social contract ended up eliciting the highest levels of performance.

Permission schemas, in their general form, also appear unconcerned with whether one violates a rule intentionally or accidentally. By contrast, social contract theory is concerned with the intentionality of the violation, as accidental violations do not imply the presence of a cheater the way intentional violations do. To continue to test the distinction between the two models, subjects were presented with the Wason task in contexts where the violations of the rule were likely intentional (with or without a benefit for the actor) or accidental. When the violation was intentional and benefited the actor, subjects performed accurately 68% of the time; when it was intentional but did not benefit the actor, that percentage dropped to 45%; when the violation was likely unintentional, performance bottomed-out at 27%. These results make good sense if one is trying to find evidence of a cheater; they do not if one is trying to find evidence of a rule violation more generally.

In a final experiment, the Wason task was again presented to subjects, this time varying three factors: whether one was intending to violate a rule or not; whether it would benefit the actor or not; and whether the ability to violate was present or absent. The pattern of results mimicked those above: when benefit, intention, and ability were all present, 64% of subjects determined the correct answer to the task; when only 2 factors were present, 46% of subjects got the correct answer; and when only 1 factor was present, subjects did worse still, with only 26% getting the correct answer, which is approximately the same performance level as when there were no factors present. Taken together, these three experiments provide powerful evidence that people aren’t just good at reasoning about the behavior of other people in general, but rather that they are good at reasoning about social contracts in particular. In the now-immortal words of Bill O’Reilly, “[domain-general accounts] can’t explain that“.

“Now cut their mic and let’s call it a day!”

Now, of course, logical reasoning is just one possible example for demonstrating domain specificity, and these experiments certainly don't prove that the entire structure of the mind is domain specific; there are other realms of life – such as, say, mate selection, or learning – where domain general mechanisms might work. The possibility of domain-general mechanisms remains just that – possible; perhaps not often well-reasoned on a theoretical level or well-demonstrated at an empirical one, but possible all the same. Differentiating between these accounts may not always be easy in practice, as they are often thought to generate some, or even many, of the same predictions, but in principle the solution remains simple: we need to place the two accounts in experimental contexts in which they generate opposing predictions. In the next post, we'll examine some experiments in which we pit a more domain-general account of learning against some more domain-specific ones.

References: Cosmides, L., Barrett, H.C., & Tooby, J. (2010). Adaptive specializations, social exchange, and the evolution of human intelligence. Proceedings of the National Academy of Sciences of the United States of America, 107 (Suppl. 2), 9007-9014.

Simple Rules Do Useful Things, But Which Ones?

Depending on who you ask – and their mood at the moment – you might come away with the impression that humans are a uniquely intelligent species, good at all manner of tasks, or a profoundly irrational and, well, stupid one, prone to frequent and severe errors in judgment. The topic often penetrates into lay discussions of psychology, and has been the subject of many popular books, such as the Predictably Irrational series. Part of the reason that people might give these conflicting views of human intelligence – either in terms of behavior or reasoning – is the popularity of explaining human behavior through cognitive heuristics. Heuristics are essentially rules of thumb which focus only on limited sets of information when making decisions. A simple, perhaps hypothetical example of a heuristic might be something like a "beauty heuristic". This heuristic might go something along the lines of: when deciding whom to get into a relationship with, pick the most physically attractive available option; other information – such as the wealth, personality traits, and intelligence of the prospective mates – would be ignored by the heuristic.

Which works well when you can’t notice someone’s personality at first glance.

While ignoring potential sources of information might seem perverse at first glance, given that one's goal is to make the best possible choice, it has the potential to be a useful strategy. One reason is that the world is a rather large place, and gathering information is a costly process. Past a certain point, the benefits of collecting additional bits of information are outweighed by the costs of doing so, and there are many, many potential sources of information to choose from. Even when additional information would help one make a better choice, making the objectively best choice is often a practical impossibility. In this view, heuristics trade off accuracy with effort, leading to 'good-enough' decisions. A related, but somewhat more nuanced benefit of heuristics comes from the sampling-error problem: whenever you draw samples from a population, there is generally some degree of error in your sample. In other words, your small sample is often not entirely representative of the population from which it's drawn. For instance, if men are, on average, 5 inches taller than women the world over, and you select 20 random men and women from your block to measure, your estimate of that difference will likely not be precisely 5 inches; it might be lower or higher, and the degree of that error might be substantial or negligible.

Of note, however, is the fact that the fewer people from the population you sample, the greater your error is likely to be: if you’re only sampling 2 men and women, your estimate is likely to be further from 5 inches (in one direction or the other) relative to when you’re sampling 20, relative to 50, relative to a million. Importantly, the issue of sampling error crops up for each source of information you’re using. So unless you’re sampling large enough quantities of information capable of balancing that error out across all the information sources you’re using, heuristics that ignore certain sources of information can actually lead to better choices at times. This is because the bias introduced by the heuristics might well be less predictively-troublesome than the degree of error variance introduced by insufficient sampling (Gigerenzer, 2010). So while the use of heuristics might at times seem like a second-best option, there appear to be contexts where it is, in fact, the best option, relative to an optimization strategy (where all available information is used).
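As a hedged illustration of that claim, the toy simulation below (my own construction, not an analysis from Gigerenzer's paper) pits a "use all five cues" least-squares model against a heuristic that attends only to the single strongest cue, both fit on small samples from the same environment; with few observations, the heuristic's bias is usually cheaper than the full model's estimation error:

```python
# Toy simulation: with small samples, ignoring weak cues can beat using everything.
# My own illustration of the bias/variance point, not Gigerenzer's analysis.
import numpy as np

rng = np.random.default_rng(0)
true_weights = np.array([1.0, 0.1, 0.1, 0.1, 0.1])  # one strong cue, four weak ones

def heuristic_win_rate(n_train=15, n_test=1000, noise=1.0, trials=2000):
    wins = 0
    for _ in range(trials):
        X_train = rng.normal(size=(n_train, 5))
        y_train = X_train @ true_weights + rng.normal(scale=noise, size=n_train)
        X_test = rng.normal(size=(n_test, 5))
        y_test = X_test @ true_weights + rng.normal(scale=noise, size=n_test)

        # "Optimizer": least-squares fit using all five cues.
        w_full = np.linalg.lstsq(X_train, y_train, rcond=None)[0]
        err_full = np.mean((X_test @ w_full - y_test) ** 2)

        # Heuristic: attend only to the strongest cue and ignore the rest.
        w_one = np.linalg.lstsq(X_train[:, :1], y_train, rcond=None)[0]
        err_heuristic = np.mean((X_test[:, :1] @ w_one - y_test) ** 2)

        wins += err_heuristic < err_full
    return wins / trials

print(f"Heuristic beats the full model in {heuristic_win_rate():.0%} of small-sample worlds")
```

Increasing the training sample (to, say, n_train=500) flips the result in favor of using everything, which is exactly the boundary condition the next paragraph turns to.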

While that seems to be all well and good, the acute reader will have noticed the boundary conditions required for heuristics to be of value: they need to know how much of which sources of information to pay attention to. Consider a simple case where you have five potential sources of information to attend to in order to predict some outcome: one of these sources is strongly predictive, while the other four are only weakly predictive. If you play an optimization strategy and have sufficient amounts of information about each source, you'll make the best possible prediction. In the face of limited information, a heuristic strategy can do better provided you know you don't have enough information and you know which sources of information to ignore. If you picked which source of information to heuristically-attend to at random, though, you'd end up making a worse prediction than the optimizer 80% of the time. Further, if you used a heuristic because you mistakenly believed you didn't have sufficient amounts of information when you actually did, you've also made a worse prediction than the optimizer 100% of the time.

“I like those odds; $10,000 on blue! (The favorite-color heuristic)”

So, while heuristics might lead to better decisions than attempts at optimization at times, the contexts in which they manage that feat are limited. In order for these fast and frugal decision rules to be useful, you need to be aware of how much information you have, as well as which heuristics are appropriate for which situations. If you're trying to understand why people use any specific heuristic, then, you would need to make substantially more textured predictions about the functions responsible for the existence of the heuristic in the first place. Consider the following heuristic, suggested by Gigerenzer (2010): if there is a default, do nothing about it. That heuristic is used to explain, in this case, the radically different rates of being an organ donor between countries: while only 4.3% of Danish people are donors, nearly everyone in Sweden is (approximately 85%). Since the explicit attitudes about the willingness to be a donor don't seem to differ substantially between the two countries, the variance might prove a mystery; that is, until one realizes that the Danes have an 'opt in' policy to be a donor, whereas the Swedes have an 'opt out' one. The default option appears to be responsible for driving most of the variance in rates of organ donor status.

While such a heuristic explanation might seem, at least initially, to be a satisfying one (in that it accounts for a lot of the variance), it does leave one wanting in certain regards. If anything, the heuristic seems more like a description of a phenomenon (the default option matters sometimes) than an explanation of it (why does it matter, and under what circumstances might we expect it to not?). Though I have no data on this, I imagine if you brought subjects into the lab and presented them with an option to give the experimenter $5 or have the experimenter give them $5, but highlighted the first option as default, you would probably find very few people who followed the default. Why, then, might the default heuristic be so persuasive at getting people to be or fail to be organ donors, but profoundly unpersuasive at getting people to give up money? Gigerenzer's hypothesized function for the default heuristic – group coordination – doesn't help us out here, since people could, in principle, coordinate around either giving or getting. Perhaps one might posit that another heuristic – say, when possible, benefit the self over others – is at work in the new decision, but without a clear and suitably textured theory for predicting when one heuristic or another will be at play, we haven't explained these results.

In this regard, then, heuristics (as explanatory variables) share the same theoretical shortcoming as other "one-word explanations" (like 'culture', 'norms', 'learning', 'the situation', or similar such things frequently invoked by psychologists). At best, they seem to describe some common cues picked up on by various cognitive mechanisms, such as authority relations (what Gigerenzer suggested formed the following heuristic: if a person is an authority, follow requests) or peer behavior (the imitate-your-peers heuristic: do as your peers do) without telling us anything more. Such descriptions, it seems, could even drop the word 'heuristic' altogether and be none the worse for it. In fact, given that Gigerenzer (2010) mentions the possibility of multiple heuristics influencing a single decision, it's unclear to me that he is still discussing heuristics at all. This is because heuristics are designed specifically to ignore certain sources of information, as mentioned initially. Multiple heuristics working together, each of which dabbles in a different source of information that the others ignore, seem to resemble an optimization strategy more closely than a heuristic one.

And if you want to retain the term, you need to stay within the lines.

While the language of heuristics might prove to be a fast and frugal way of stating results, it ends up being a poor method of explaining them or yielding much in the way of predictive value. In determining whether some decision rule even is a heuristic in the first place, it would seem to behoove those advocating the heuristic model to demonstrate why some source(s) of information ought to be expected to be ignored prior to some threshold (or whether such a threshold even exists). What, I wonder, might heuristics have to say about the variance in responses to the trolley and footbridge dilemmas, or the variation in moral views towards topics like abortion or recreational drugs (where people are notably not in agreement)? As far as I can tell, focusing on heuristics per se in these cases is unlikely to do much to move us forward. Perhaps, however, there is some heuristic heuristic that might provide us with a good rule of thumb for when we ought to expect heuristics to be valuable…

References: Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2, 528-554.

This Is Water: Making The Familiar Strange

In the fairly-recent past, there was a viral video being shared across various social media sites called "This is Water" by David Foster Wallace. The beginning of the speech tells a story of two fish who are oblivious to the water in which they exist, in much the same way that humans come to take the existence of the air they breathe for granted. The water is so ubiquitous that the fish fail to notice it; it's just the way things are. The larger point of the video – for my present purposes – is that the inferences people make in their day-to-day lives are so automatic as to be taken for granted. David correctly notes that there are many, many different inferences that one could make about people we see in our everyday lives: is the person in the SUV driving it because they fear for their safety or are they selfish for driving that gas-guzzler? Is the person yelling at their kids not usually like that, or are they an abusive parent? There are two key points in all of this. The first is the aforementioned habit of taking for granted the ability we have to draw these kinds of inferences in the first place, which is what Cosmides & Tooby (1994) call instinct blindness. Seeing, for instance, is an incredibly complex and difficult-to-solve task, but the only effort we perceive when it comes to vision involves opening our eyes: the seeing part just happens. The second, related point is the more interesting part to me: it involves the underdetermination of the inferences we draw from the information we're provided. That is to say, no part of the observations we make (the woman yelling at her child) intrinsically provides us with the information needed to make such inferences (what she is like at other times).

Was Leonidas really trying to give them something to drink?

There are many ways of demonstrating underdetermination, but visual illusions – like this one – prove to be remarkably effective in quickly highlighting cases where the automatic assumptions your visual system makes about the world cease to work. Underdetermination isn't just a problem that needs to be solved with respect to vision, though: our minds make all sorts of assumptions about the world that we rarely find ourselves in a position to appreciate or even notice. In this instance, we'll be considering some of the information our mind automatically fills in concerning the actions of other people. Specifically, we perceive our world along a dimension of intentionality. Not only do we perceive that individuals acted "accidentally" or "on purpose", we also perceive that individuals acted to achieve certain goals; that is, we perceive "motives" in the behavior of others.

Knowing why others might act is incredibly useful for predicting and manipulating their future behavior. The problem that our minds need to solve, as you can no doubt guess by this point, is that intentions and motives are not readily observable from actions. This means that we need to do our best to approximate them from other cues, and that entails making certain assumptions about observable actions and the actors who bring them about. Without these assumptions, we would have no way to distinguish between someone killing in self-defense, killing accidentally, or killing just for the good old fashion fun of it. The questions for consideration, then, concern which kinds of assumptions tend to be triggered by which kinds of cues under what circumstances, as well as why they get triggered by that set of cues. Understanding what problems these inferences about intentions and motives were designed to solve can help us more accurately predict the form that these often-unnoticed assumptions will likely take.

While attempting to answer that question about which cues our minds use, one needs to be careful not to lapse into the automatically-generated inferences our minds typically make and remain instinct-blind. The reason one ought to avoid doing this – in regards to inferences about intentions and motives – is made very well by Gawronski (2009):

“…how [do] people know that a given behavior is intentional or unintentional[?] The answer provided…is that a behavior will [be] judged as intentional if the agent (a) desired the outcome, (b) believed that the action would bring about the outcome, (c) planned the action, (d) had the skill to accomplish the action, and (e) was aware of accomplishing the outcome…[T]his conceptualization implies the risk of circularity, as inferences of intentionality provide a precondition for inferences about aims and motives, but at the same time inferences of intentionality depend on a perceiver’s inferences about aims and motives.”

In other words, people often attempt to explain whether someone acted intentionally by referencing motives (“he intended to harm X because he stood to benefit”), and they also often attempt to explain someone’s motives on the basis of whether or not they acted intentionally (“because he stood to benefit by harming X, he intended harm”). On top of that, you might also notice that inferences about motives and intentions are themselves derived, at least in part, from other, non-observable inferences about talents and planning. This circularity keeps us from arriving at anything resembling a more complete explanation for what we perceive.

“It looks three-dimensional because it is, and it is 3-D because it looks like it”

Even if we ignore this circularity problem for the moment and just grant that inferences about motives and intentions can influence each other, there is also the issue of the multiple possible inferences which could be drawn about a behavior. For instance, if you observe a son push his father down the stairs and kill him, one could make several possible inferences about motives and intentions. Perhaps the son wanted money from an inheritance, resulting in his intending to push his father to cause death. However, pushing his father not only kills close kin, but also carries the risk of punishment. Since the son might have wanted to avoid punishment (and might well have loved his father), this would result in his not intending to push his father and cause death (i.e., maybe he tripped, which is what caused him to push). Then again, unlikely as it may sound, perhaps the son actively sought punishment, which is why he intended to push. This could go on for some time. The point is that, in order to reach any one of these conclusions, the mind needs to add information that is not present in the initial observation itself.

This leads us to ask what information is added, and on what basis. The answer to this question, I imagine, would depend on the specific inferential goals of the perceiver. One goal could be accuracy: people wish to infer the “actual” motivations and intentions of others, to the extent it makes sense to talk about such things. If it’s true, for instance, that people are more likely to act in ways that avoid something like their own bodily harm, our cognitive systems could be expected to pick up on that regularity and avoid drawing the inference that someone was intentionally seeking it. Accuracy only gets us so far, however, due to the aforementioned issue of multiple potential motives for acting: there are many different goals one might be intending to achieve and many different costs one might be intending to avoid, and these are not always readily distinguishable from one another. The other complication is that accuracy can sometimes get in the way of other useful goals. Our visual system, for instance, while not always accurate, might well be classified as honest. That is to say, though our visual system might occasionally get things wrong, it doesn’t tend to do so strategically; there would be no benefit to sometimes perceiving a shirt as blue and other times as red under the same lighting conditions.

That logic doesn’t always hold for perceptions of intentions and motives, though: intentionally committed moral infractions tend to receive greater degrees of moral condemnation than unintentional ones, and can make one seem like a better or worse social investment. Given that there are some people we might wish to see receive less punishment (ourselves, our kin, and our allies) and some we might wish to see receive more (those who inflict costs on us or our allies), we ought to expect our intentionality systems to perceive identical sets of actions very differently, contingent on the nature of the actor in question. In other words, if we can persuade others about our intentions and motives, or about the intentions and motives of others, and alter their behavior accordingly, we ought to expect perceptual biases that assist in those goals to start cropping up. This, of course, rests on the idea that other parties can be persuaded to share your sense of these things, which poses related problems, such as: under what circumstances does it benefit other parties to develop one set of perceptions or another?

How fun this party is can be directly correlated to the odds of picking someone up.

I don’t pretend to have all the answers to questions like these, but they should serve as a reminder that our minds need to add a lot of structure to the information they perceive in order to do many of the things of which they are capable. Explanations for how and why we do things like perceive intentionality and motive need to be divorced from the feeling that such perceptions are just “natural” or “intuitive”; what we might consider the experience of the word “duh”. This is an especially large concern when you’re dealing with systems that are not guaranteed to be accurate or honest in their perceptions. The cues our minds use to determine what motives people had when they acted and what they intended to do are by no means always straightforward, so saying that inferences are generated by “the situation” is unlikely to be of much help, on top of just being wrong.

References: Cosmides, L., & Tooby, J. (1994). Beyond intuition and instinct blindness: Toward an evolutionarily rigorous cognitive science. Cognition, 50, 41-77.

Gawronski, B. (2009). The Multiple Inference Model of Social Perception: Two Conceptual Problems and Some Thoughts on How to Resolve Them. Psychological Inquiry, 20, 24-29 DOI: 10.1080/10478400902744261

How Hard Is Psychology?

The scientific method is a pretty useful tool for assisting people in testing hypotheses and discerning truth – or getting as close to it as one can. Like the famous Churchill quote about democracy, the scientific method is the worst system we have for doing so, except for all the others. That said, the scientists who use the method are often not doing so in the single-minded pursuit of truth. Perhaps phrased more aptly, testing hypotheses is generally not done for its own sake: people testing hypotheses are typically doing so for other reasons, such as raising their status and furthering their careers in the process. So, while the scientific method could be used to test any number of hypotheses, scientists tend to try and use it for certain ends and to test certain types of ideas: those perceived to be interesting, novel, or useful. I imagine that none of that is particularly groundbreaking information to most people: science in theory is different from science in practice. A curious question, then, is this: given that we ought to expect scientists from all fields to use the method for similar reasons, why are some topics to which the scientific method is applied viewed as “soft” and others as “hard” (like psychology and physics, respectively)?

Very clever, Chemistry, but you’ll never top Freud jokes.

One potential reason for this impression is that these non-truth-seeking (what some might consider questionable) uses to which people attempt to put the scientific method could simply be more prevalent in some fields, relative to others. The further one strays from science in theory to science in practice, the softer one’s field might be seen as being. If, for instance, psychology were particularly prone to biases that compromise the quality or validity of its data, relative to other fields, then people would be justified in taking a more critical stance towards its findings. One of those possible biases involves tending to report only the data consistent with one hypothesis or another. As the scientific method requires reporting the data that is both consistent and inconsistent with one’s hypothesis, if only the former is being reported, then the validity of the method can be compromised and you’re no longer doing “hard” science. A 2010 paper by Fanelli provides us with some reason to worry on that front. In that paper, Fanelli examined approximately 2,500 papers randomly drawn from various disciplines to determine the extent to which positive results (those which statistically support one or more of the hypotheses being tested) dominate in the published literature. The Psychology/Psychiatry category sat at the top of the list, with 91.5% of all published papers reporting positive results.

While that number may seem high, it is important to put the figure into perspective: the field at the bottom of that list – the one which reported the fewest positive results overall – was the Space Sciences, with 70.2% of all the sampled published work reporting positive results. Other fields ran a relatively smooth line between those upper and lower limits, so the extent to which the fields differ in the dominance of positive results is a matter of degree, not kind. Physics and Chemistry, for instance, both ran at about 85% positive results, despite both being considered “harder” sciences than psychology. Now that the 91% figure might seem a little less worrying, let’s add some more context to reintroduce the concern: those percentages only consider whether any positive results were reported, so papers that tested multiple hypotheses tended to have a better chance of reporting something positive. It also happened that papers within psychology tended to test more hypotheses on average than papers in other fields. When correcting for that issue, positive results in psychology were approximately five times more likely than positive results in the space sciences. By comparison, positive results in physics and chemistry were only about two-and-a-half times more likely. How much cause for concern should this bring us?
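To make the jump from those percentages to “X-times more likely” a little more concrete, here’s a quick back-of-the-envelope sketch in Python. To be clear, this is just my illustration of how odds ratios work and not Fanelli’s actual analysis (which corrects for things like the number of hypotheses tested per paper); the field names and numbers below are simply the rounded figures mentioned above.

```python
# Illustrative only: convert each field's reported positive-result rate into
# odds, then compare those odds against the Space Sciences baseline.

def odds(p):
    """Convert a proportion of positive results into odds (p : 1 - p)."""
    return p / (1.0 - p)

positive_rates = {
    "Psychology/Psychiatry": 0.915,
    "Physics & Chemistry (approx.)": 0.85,
    "Space Sciences": 0.702,
}

baseline = odds(positive_rates["Space Sciences"])

for field, rate in positive_rates.items():
    ratio = odds(rate) / baseline
    print(f"{field}: {rate:.1%} positive; odds ratio vs. Space Sciences ~ {ratio:.1f}")
```

Run on these rounded figures, the uncorrected ratios come out at roughly 4.6 for psychology and roughly 2.4 for physics and chemistry – in the same neighborhood as the corrected estimates quoted above, which is convenient for intuition, though nothing more should be read into the coincidence.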

There are two questions to consider before answering that last one: (1) what are the causes of these different rates of positive results, and (2) are these differences in positive results driving the perception among people that some sciences are “softer” than others? Taking these in order, there are still more reasons to worry about the prevalence of positive results in psychology: according to Fanelli, studies in psychology tend to have lower statistical power than studies in the physical sciences. Lower statistical power means that, all else being equal, psychological research should find smaller – not greater – percentages of positive results overall. If psychological studies tend not to be as statistically powerful, where else might the causes of the high proportion of positive results reside? One possibility is that psychologists are particularly likely to be predicting things that happen to be true. In other words, “predicting” things in psychology tends to be easy because hypotheses tend to be made only after a good deal of anecdata has been “collected” through personal experience (incidentally, personal experience is a not-uncommonly cited source of research hypotheses within psychology). Essentially, then, predictions in psychology are being made once a good deal of data is already in, at least informally, making them less predictions and more restatements of already-known facts.
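To see why lower power should, all else being equal, mean fewer positive results, here’s a toy calculation of my own (not anything from the paper): the expected share of significant results depends on the power of the tests, the false-positive rate, and the proportion of tested hypotheses that happen to be true. Holding that last proportion fixed, dropping the power drops the expected positive rate:

```python
# Toy model (my own sketch, not from Fanelli, 2010): the expected share of
# "positive" (significant) results in a field, given the power of its tests,
# the false-positive rate alpha, and the proportion of tested hypotheses
# that are actually true.

def expected_positive_rate(power, p_true, alpha=0.05):
    """P(significant) = power * P(hypothesis true) + alpha * P(hypothesis false)."""
    return power * p_true + alpha * (1.0 - p_true)

# Holding the proportion of true hypotheses fixed at 50%, lower power alone
# predicts a LOWER expected positive rate, not a higher one.
for power in (0.35, 0.80):
    rate = expected_positive_rate(power, p_true=0.5)
    print(f"power = {power:.2f} -> expected positive rate = {rate:.2f}")
```

So if psychology’s positive rate is high despite lower power, the remaining levers are the kinds of hypotheses being tested (ones that are very likely to be true) and the reporting and analysis practices discussed next.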

“I predict that you would like a psychic reading, on the basis of you asking for one, just now.”

A related possibility is that psychologists might be more likely to engage in outright dishonest tactics, such as actually collecting their data formally first (rather than just informally) and then making up “predictions” that restate their data after the fact. In the event that publishers within different fields are more or less interested in positive results, we ought to expect researchers within those fields to attempt this kind of dishonesty on a greater scale (it should be noted, however, that the data is still the data, regardless of whether it was predicted ahead of time, so the effects on its truth-value ought to be minimal). Though a greater amount of outright dishonesty is a possibility, it is unclear why psychology would be particularly prone to it, relative to any other field, so it might not be worth worrying too much about. Another possibility is that psychologists are particularly prone to using questionable statistical practices that tend to boost their false-positive rates substantially, an issue which I’ve discussed before.

There are two issues above all the others that stand out to me, though, and they might help to answer the second question – why psychology is viewed as “soft” and physics as “hard”. The first issue has to do with what Fanelli refers to as the distinction between the “core” and the “frontier” of a discipline. The core of a field of study represents the agreed-upon theories and concepts on which the field rests; the frontier, by contrast, is where most of the new research is being conducted and new concepts are being minted. Psychology, as it currently stands, is largely frontier-based. This lack of a core can be exemplified by a recent post concerning “101 great insights from psychology 101”. In the list, you’ll find the word “theory” used a collective three times, and two of those mentions concern Freud. If you consider the plural – “theories” – instead, you’ll find five novel uses of the term, four of which mention no specific theory. The extent to which the remaining two uses represent actual theories, as opposed to redescriptions of findings, is another matter entirely. If one is left with only a core-less frontier of research, that could well send the message that the people within the field don’t have a good handle on what it is they’re studying; hence the “soft” reputation.

The second issue involves the subject matter itself. The “soft” sciences – psychology and its variants (like sociology and economics) – seem to dabble in human affairs. This can be troublesome for more than one reason. A first reason might involve the fact that the other humans reading about psychological research are all intuitive psychologists, so to speak. We all have an interest in understanding the psychological factors that motivate other people in order to predict what they’re going to do. This seems to give many people the impression that psychology, as a field, doesn’t have much new information to offer them. If they can already “do” psychology without needing explicit instructions, they might come to view psychology as “soft” precisely because it’s perceived as being easy. I would also note that this suggestion ties neatly into the point about psychologists possibly tending to make many predictions based on personal experience and intuitions. If the findings they are delivering tend to give people the impression that “Why did you need research? I could have told you that”, that ease of inference might cause people to give psychology less credit as a science.

“We go to the moon because it is hard, making physics a real science”

The other standout reason why psychology might strike people as “soft” is that, on top of trying to understand other people’s psychological goings-on, we also try to manipulate them. It’s not just that we want to understand why people support or oppose gay marriage, for instance; it’s that we might also want to change their points of view. Accordingly, findings from psychology tend to speak more directly to issues people care a good deal about (like sex, drugs, and moral goals; most people don’t seem to argue over the latest implications of chemistry research), which might make people either (a) relatively resistant to the findings or (b) relatively accepting of them, contingent more on one’s personal views and less on the scientific quality of the work itself. This means that, in addition to many people having a reaction of “that is obvious” with respect to a good deal of psychological work, many also have the reaction of “that is obviously wrong”, and neither makes psychology look terribly important.

It seems likely to me that many of these issues could be mitigated by the addition of a core to psychology. If results need to fit into theory, various statistical manipulations might become somewhat easier to spot. If students were learning how to think about psychology, rather than memorizing lists of findings which they often feel are trivial or obviously wrong, they might come away with a better impression of the field. Now if only a core could be found…

References: Fanelli, D. (2010). “Positive” results increase down the Hierarchy of the Sciences. PLoS ONE, 5(4). PMID: 20383332

The Fight Over Mankind’s Essence

All traits of biological organisms require some combination and interaction of genetic and non-genetic factors to develop. As Tooby and Cosmides put it in their primer:

Evolutionary psychology is not just another swing of the nature/nurture pendulum. A defining characteristic of the field is the explicit rejection of the usual nature/nurture dichotomies — instinct vs. reasoning, innate vs. learned, biological vs. cultural. What effect the environment will have on an organism depends critically on the details of its evolved cognitive architecture.

The details of that cognitive architecture are, to some extent, what people seem to be referring to when they use the word “innate”, and figuring out the details of that architecture is a monumental task indeed. For some reason, this task of figuring out what’s “innate” also draws a degree of what I feel is unwarranted hostility, and precisely why it does so is a matter of great interest. One might posit that some of this hostility is due to the term itself. “Innate” seems to be a terribly problematic term for the same two reasons that most other contentious terms are: people can’t seem to agree on a clear definition for the word or a context in which to apply it, but they still use it fairly often despite that. Because of this, interpersonal communication can get rather messy, much like two teams trying to play a sport in which each is playing under a different set of rules; a philosophical game of Calvinball. I’m most certainly not going to be able to step into this debate and provide the definition of “innate” that all parties will come to intuitively agree upon and use consistently in the future. Instead, my goal is to review two recent papers that examined the contexts in which people’s views of innateness vary.

“Just add environment!” (Warning: chicken outcome will vary with environment)

Anyone with a passing familiarity with the debates that tend to surround evolutionary psychology will likely have noticed that most of them revolve around issues of sex differences. Further, this pattern tends to hold whether it’s a particular study being criticized or the field more generally; research on sex differences just seems to catch a disproportionate amount of the criticism, relative to most other topics, and that criticism can often get leveled at the entire field by association (even if the research is not published in an evolutionary psychology journal, and even if the research is not conducted by people using an evolutionary framework). While this particular observation of mine is only an anecdote, it seems that I’m not alone in noticing it. The first of the two studies on attitudes towards innateness, conducted by Geher & Gambacorta (2010), addressed just this topic. They sought to determine the extent to which attitudes about sex differences might be driving opposition to evolutionary psychology and, more specifically, the degree to which those attitudes might be correlated with being an academic, being a parent, or being politically liberal.

Towards examining this issue, Geher & Gambacorta (2010) created questions aimed at assessing people’s attitudes in five domains: (1) human sex differences in adulthood, (2) human sex differences in childhood, (3) behavioral sex differences in chickens, (4) non-sex-related human universals, and (5) behavioral differences between dogs and cats. Specifically, the authors asked about the extent to which these differences were due to nature or nurture. As mentioned in the introduction, this nature/nurture dichotomy is explicitly rejected in the conceptual foundations of evolutionary psychology, and it is similarly rejected by the authors as being useful. The dimension was merely used to capture the more common attitudes about biological and environmental causation, where the two are often seen as fighting for explanatory power in some zero-sum struggle.

Of the roughly 270 subjects who began the survey, not all completed every section. Nevertheless, the initial sample included 111 parents and 160 non-parents, 89 people in academic careers and 182 non-academics, and the sample as a whole was roughly 40 years old and mildly politically liberal, on average. The study found that political orientation was correlated with judgments of whether sex differences in humans (children and adults) were due to nature or environment, but not with the other three domains (cats/dogs, chickens, or human universals): specifically, those with more politically liberal leanings were also more likely to endorse environmental explanations for human sex differences. Across the other domains there were some relatively small and somewhat inconsistent effects, so I wouldn’t make much of them just yet (though I will mention that those in women’s studies and sociology fields seemed consistently more inclined to chalk each domain – excepting the differences between cats and dogs – up to nurture, relative to other fields; I’ll also mention their sample was small). There was, however, a clear effect that was not discussed in the paper: subjects were more likely to chalk non-human animal behavior up to nature, relative to human behavior, and this effect seemed more pronounced with regard to sex differences specifically. With these findings in mind, I would echo the conclusion of the paper that there appears to be some political – or, more specifically, moral – dimension to these judgments of the relative roles of nature and nurture. As animal behavior tends to fall outside of the traditional human moral domain, chalking it up to nature seemed less unpalatable to the subjects.

See? Men and women can both do the same thing on the skin of a lesser beast.

The next paper is a new release from Knobe & Samuels (2013). You might remember Knobe from his other work in asking people slightly different questions and getting vastly different responses, and it’s good to see he’s continuing that proud tradition. Knobe & Samuels begin by asking the reader to imagine how they’d react to the following hypothetical proposition:

Suppose that a scientist announced: ‘I have a new theory about the nature of intention. According to this theory, the only way to know whether someone intended to bring about a particular effect is to decide whether this effect truly is morally good or morally bad.’

The authors predict that most people would reject this piece of folk psychology made explicit; value judgments are supposed to be a different matter entirely from tasks like assessing intentionality or innateness, yet these judgments do not appear to truly be independent of each other in practice. Morally negative outcomes are rated as being more intentional than morally positive ones, even if both are brought about as a byproduct of another goal. Knobe & Samuels (2013) sought to extend this line of research into the realm of attitudes about innateness.

In their first experiment, Knobe & Samuels asked subjects to consider an infant born with a rare genetic condition. This condition ensures that if a baby breastfeeds in the first two weeks of life, it will have either extraordinarily good math abilities (condition one) or exceedingly poor math skills (condition two). While the parents could opt to give the infant baby formula that would ensure the baby turned out normal with regard to its math abilities, in all cases the parents were said to have opted to breastfeed, and the child developed accordingly. When asked how “innate” the child’s subsequent math ability was, subjects seemed to feel that the baby’s abilities were more innate (4.7 out of 7) when they were good, relative to when those abilities were poor (3.4). In both cases, the trait depended on the interaction of genes and environment, and for the same reason, yet when the outcome was negative, it was seen as being less of an innate characteristic. This was followed up by a second experiment in which a new group of subjects was presented with a vignette describing a fake finding about human genes: if people experienced decent treatment (condition one) or poor treatment (condition two) from their parents at least sometimes, then a trait would reliably develop. Since almost all people do experience decent or poor treatment from their parents on at least some occasions, just about everyone in the population comes to develop this trait. When asked how innate this trait was, again, the means through which it developed mattered: traits resulting from decent treatment were rated as more innate (4.6) than traits resulting from poor treatment (2.7).

Skipping two other experiments in the paper, the final study presented these cases either individually, with each participant seeing only one vignette as before, or jointly, with some subjects seeing both versions of the questions (good/poor math abilities, decent/poor treatment) one immediately after the other, with the relevant differences highlighted. When subjects saw the conditions independently, the previous effects were pretty much replicated, if a bit weakened. However, even seeing these cases side-by-side did not completely eliminate the effect of morality on innateness judgments: when the breastfeeding resulted in worse math abilities, this was still seen as being less innate (4.3) than the better math abilities (4.6) and, similarly, when poor treatment led to a trait developing, it was viewed as less innate (3.8) than when the trait resulted from better treatment (3.9). Now, these differences only reached significance because of the large sample size in the final study, as they were very, very small, so again I wouldn’t make much of them; I do, however, find it somewhat surprising that there were any differences left to talk about at all.

Remember: if you’re talking small effects, you’re talking psychology.

While these papers are by no means the last word on the subject, they represent an important first step in understanding the way that scientists and laypeople alike represent claims about human nature. Extrapolating these results a bit, it would seem that strong opinions about research in evolutionary psychology are held, at least to some extent, for reasons that have little to do with the field per se. This isn’t terribly surprising, as it’s been frequently noted that many critics of evolutionary psychology have a difficult time correctly articulating the theoretical commitments of the field. Both studies do seem to suggest that moral concerns play some role in the debate, but precisely why the moral dimension seems to find itself represented in the debate over innateness is certainly an interesting matter that neither paper really gets into. My guess is that it has something to do with the perception that innate behaviors are less morally condemnable than non-innate ones (hinting at an argumentative function), but that really just pushes the question back a step without answering it. I look forward to future research on this topic – and research on explanations, more generally – to help fill in the gaps of our understanding of this rather strange phenomenon.

References: Geher, G., & Gambacorta, D. (2010). Evolution is Not Relevant to Sex Differences in Humans Because I Want it That Way! Evidence for the Politicization of Human Evolutionary Psychology. EvoS: The Journal of the Evolutionary Studies Consortium, 2, 32-47.

Knobe, J., & Samuels, R. (2013). Thinking like a scientist: Innateness as a case study. Cognition, 126(1), 72-86. DOI: 10.1016/j.cognition.2012.09.003

Dinner, With A Side Of Moral Stances

One night, let’s say you’re out to dinner with your friends (assuming, of course, that you’re the type with friends). One of these friends decides to order a delightful medium-rare steak with a side of steamed carrots. By the time that the orders arrive, however, some mistake in the kitchen has led said friend to receive the salmon special instead. Now, in the event you’ve ever been out to dinner and this has happened, one of these two things probably followed: (1) your friend doesn’t react, eats the new dish as if they had ordered it, and then goes on about how they made such a good decision to order the salmon, or (2) they grab the waiter and yell a string of profanities at him until he breaks down in tears.

OK; maybe that’s a bit of an exaggeration, but the behavior we typically see in the event of a mixed-up order at a restaurant more closely resembles the latter pattern. Given that most people can recognize that they didn’t receive the order they actually made, what are we to make of the proposition that people seem to have trouble recognizing some moral principles they just endorsed?

“I’ll endorse what she’s endorsing…”

A new study by Hall et al (2012) examined what they’re calling “choice blindness”, which is, apparently, quite a lot like “change blindness”, except with decisions instead of people. In this experiment, a researcher with a survey about general moral principles or moral stances on certain specific issues approached 160 strangers who happened to be walking through a park. Once the subjects had filled out the first page of the survey and flipped the piece of paper over the clipboard to move onto the second, an adhesive on the back of the clipboard held on to and removed the lightly-attached portion of the survey to reveal a new set of questions. The twist is that the new set of questions stated the opposite moral stances, so if a subject had agreed that the government shouldn’t be monitoring emails, the new question would imply they had agreed that the government should be monitoring emails.

Overall, only about a third to a half of the subjects appeared to catch that the questions had been altered, a number very similar to the results found in the change blindness research. Further, many of the subjects who missed the deception also went on to give verbal justifications for their ‘decisions’ that appeared to be in opposition to their initial choice on the survey. That said, only about a third of the subjects who expressed extremely polarized scores (a 1 or a 9) failed to catch the manipulation, and the authors also found that those who rated themselves as more politically involved were similarly more likely to detect the change.

So what are we to make of these findings? The authors suggest there is no straightforward interpretation, but also suggest that choice blindness disqualifies vast swaths of research from being useful, as the results suggest that people don’t have “real” opinions. Though they say they are hesitant to suggest such an interpretation, Hall et al (2012) feel those interpretations need to be taken seriously as well, so perhaps they aren’t so hesitant after all. It might almost seem ironic that Hall et al (2012) appear “blind” to the opinion they had just expressed (they don’t want to suggest such alternatives, but also do want to suggest them), despite that opinion being in print, and both opinions residing within the same sentence.

“Alright, alright; I’ll get the coin…”

It would seem plausible that the authors have no solid explanation of their results because they seem to have gone into the study without any clearly stated theory. Such is the unfortunate state of much of the research in psychology; a dead-horse issue I will continue to beat. Describing an effect as a psychological “blindness” alone does not tell us anything; it merely restates the finding, and restatements of findings without additional explanations are not terribly useful for understanding what we’re seeing.

There are a number of points to consider regarding these results, so let’s start with the obvious: these subjects were not seeking to express their opinions so much as they were approached by a stranger with a survey. It seems plausible that at least some of these subjects really weren’t paying much attention to what they were doing, or weren’t really engaged in the task at hand. I can’t say to what extent this would be a problem, but it’s at least worth keeping in mind. One possible way of remedying it might be to have subjects not only mark their agreement with an issue on the scale, but also briefly justify that opinion. If you got subjects to then try and argue against their previously stated justifications moments later, that might be a touch more interesting.

Given that there’s no strategic context under which these moral stances are being made in this experiment, some random fluctuation in answers might be expected. In fact, the lack of context might be the reason that some subjects weren’t particularly engaged in the task in the first place, as evidenced by people with more extreme scores, or those more involved in politics, being more attentive to the changes. Accordingly, another potential issue here concerns the mere expectation of consistency in responses: research has already shown that people don’t hold universally to one set of moral principles or moral stances (e.g., the results from various versions of the trolley and footbridge dilemmas, among others). Indeed, we should expect moral judgments (and justifications for those judgments) to be made strategically, not universally, for the very simple reason that universal behaviors will not always lead to useful outcomes. For instance, eating when you’re hungry is a good idea; continuing to eat at all points, even when you aren’t hungry, is generally not. What that’s all getting at is that the justification of a moral stance is a different task than the generation of a moral stance, and if memory fails to retain information about what you wrote on a survey some strange researcher just handed you while you were trying to get through the park, you’re perfectly capable of reasoning about why some other moral stance is acceptable.

“I could have sworn I was against gay marriage. Ah well”

Phrased in those terms (“when people don’t remember what stance they just endorsed – after being approached by a stranger asking them to endorse some stance they might not have given any thought to until moments prior – they’re capable of articulating supportive arguments for an opposing stance”), the results of this study are not terribly strange. People often have to reason differently about whether a moral act is acceptable or not, contingent on where they currently stand in any moral interaction. For example, deciding whether an instance of murder was morally acceptable or not will probably depend, in large part, on which side of that murder you happen to stand: did you just kill someone you don’t like, or did someone else just kill someone you did like? An individual who stated murder is always wrong in all contexts might be at something of a disadvantage, relative to one with a bit more flexibility in their moral justifications (to the extent that those justifications will persuade others about whether to punish the act or not, of course).

One could worry about what people’s “real” opinions are, then, but it would seem that doing so fundamentally misstates the question. Saying that something bad is wrong when it happens to you, and that the same something bad is right when it happens to someone you dislike, both represent real opinions; they’re just not universal opinions, they’re context-specific. Asking about “real” universal moral opinions would be like asking about “real” universal emotions or states (“Ah, but how happy is he really? He might be happy now, but he won’t be tomorrow, so he’s not actually happy, is he?”). Now, of course, some opinions might be more stable than others, but that will likely be the case only insofar as the contexts surrounding those judgments don’t tend to change.

References: Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS ONE.

Some Research I Was Going To Do

I have a number of research projects lined up for my upcoming dissertation, and, as anyone familiar with my ideas can tell you, they’re all brilliant. You can imagine my disappointment, then, to find out that not only had one of my experiments been scooped by another author three years prior, but they found the precise patterns of results I had predicted. Adding insult to injury, the theoretical underpinnings of the work are all but non-existent (as is the case in the vast majority of the psychology research literature), meaning I got scooped by someone who doesn’t seem to have a good idea why they found the results they did. My consolation prize is that I get to write about it earlier than expected, so there’s that, I suppose…

What? No, I’m not crying; I just got something in my eye. Or allergies. Whatever.

The experiment itself (Morewedge, 2009) resembles a Turing test. Subjects come into the lab to play a series of ultimatum games. One person has to divide a pot of $3 one of three ways – $2.25/$0.75 (favoring either the divider or the receiver) or evenly ($1.50 for each) – and the receiver can either accept or reject these offers. The variable of interest, however, is not whether the receiver will accept the money; it’s whether the receiver perceives the offer to have been made by a real person or a computer program, as all the subjects were informed beforehand that the proposers they would encounter were drawn randomly from a pool of computer programs or real players. In essence, the experiment was examining whether or not participants perceived they were playing with an intentional agent (a human) or a non-intentional agent, representing chance (a computer), contingent on the outcome. A brilliant experiment that I thought of first, a mere three years after it had already been published.

Overall, the subjects were no more or less likely to suggest they were playing against a person or a computer, and were also no more likely to see a person or a computer as being responsible when they received an even split. However, this was not the case across the other two offers: when subjects received an unusually positive offer ($2.25 for themselves), they tended to attribute this offer to a computer; when the offer was negative ($0.75 for themselves) they overwhelmingly attributed it to a person. When causation was ambiguous, “harm” (used rather loosely here) seemed to trigger the perception of an intentional agent, whereas a benefit tended to have the opposite effect; a decrease in perceptions of intentionality.

This pattern of results could be explained by referencing some kind of self-interested motive: given what we know about people (they seem to prefer more money to less), an unusually altruistic offer is just that – unusual. It would seem more likely that selfish offers come from self-interested agents, rather than an altruistic offer coming from a selfish agent. In an attempt to remove such thinking from (conscious) consideration, a second experiment was run. This time, the person deciding how the pot would be split had no personal stake in the decision; that is to say, subjects were told that their partners would not receive any payment from the games. No matter whether the proposer offered $0.75 or $2.25, no matter whether the offer was accepted or rejected, proposers made no money. However, the results in this case still looked similar: selfish offers were seen as being made predominately by people, and altruistic offers were seen as being made predominately by computers.

But so help me if a computer has been screwing me over again…

In the next two experiments Morewedge (2009) got a bit more explicit about the role of chance. In the first case, subjects came into the experiment and met a confederate before both were taken into separate rooms and seated at computers. On the computer was a circle divided up into five sections with a spinning arrow in the middle. Each of the five sections of the circle was filled with one of two colors, with the numbers of each color varying from one to four across each spin (so the circle could have 4 blue and 1 yellow slots in one round, then 3 blue and 2 yellow in the next, and so on). Subjects chose one color for the experiment beforehand, and if the spinner landed on their color in any given round, they would win the amount of money listed on the screen; if it didn’t land on their color, they would lose the amount on the screen. These amounts varied between zero and one dollar in $0.25 increments.

The subjects were informed of two other things: first, that the confederate was in a similar position (the confederate had picked one of the two colors as well, so they might both have the same color or not), and second, that the confederate could control the outcome of five of the forty rounds that would be played. It was the subject’s job to report the extent to which they felt the confederate had used their influence on each round. When the subjects had lost money, they tended to attribute responsibility for the outcome more to the confederate than to chance, and this tendency was exacerbated as the monetary stakes got higher. This effect is all the more interesting considering that subjects couldn’t be sure whether the confederate had similar or divergent interests. If they had the same interests, the confederate would be hurting both of them if he used his influence to purposely lose. Also of interest is that, when asked to estimate how much money they had ended up with – $5 in all cases; the initial endowment they were given – subjects underestimated, reporting that they had ended up with only $2.89 on average. They were either bad at math, or their memory was distorted towards recalling losses selectively.

Finally, the last experiment used a paradigm similar to one typically used with children: measuring how long participants spend looking at something. In this case, that something was information presented to the participant following a round with the spinner. The setup was similar, except the subjects were told a confederate could control half of the outcomes, and after each round the subject was told whether the confederate had controlled the round or not. Once presented with this information, subjects had to press a button to proceed to the next screen, and the amount of time they waited before pressing that button was used as a measure of how long participants spent processing the information on the screen. Participants ended up spending more time looking at the screen when it was revealed that the confederate was responsible for their win, relative to when the confederate was responsible for their loss, but looked equally long when chance was implicated. This result tentatively suggests that participants found it surprising when the confederate was responsible for their wins, implying that the more automatic process might be one of blaming others for losses.

For example, I’d be surprised if he still has that suit and that prostitute after the dice land.

Now to the theory. I will give credit where credit is due: Morewedge (2009) does at least suggest there might have been evolutionary advantages to such a bias, but promptly fails to elaborate on them in any substantial way. The first possible explanation given is that this bias could be used to defer responsibility for negative outcomes from oneself to others, which is an odd explanation given that the subjects in this experiment had no responsibility to defer. The second possible explanation is that people might attribute negative outcomes to others in order to not feel sad, which is, frankly, silly. The fourth (going out of order) explanation is that such a bias might just represent a common, mild form of “disordered” thinking resembling a persecution complex, which is really no explanation at all. The third, least silly explanation is that:

“By assuming the presence of [an] antagonist, one may better be able to avoid a quick repetition of the unpleasant event one has just experienced” (p. 543)

Here, though, I feel Morewedge is making the mistake of assuming past selection pressures resembled the conditions set up in the experiment. I’m not quite sure how else to read that section, nor do I feel that the experimental paradigm was particularly representative of past selection pressures or relevant environmental contexts.

Now, if Morewedge had placed his findings in some framework concerning how social actions are all partly a result of intentional and chance factors, how perpetrators tend to conceal or downplay their immoral actions or intentions, how victims need to convince third parties to punish others who haven’t wronged them directly, and how certain inputs (such as harm) might better allow victims to persuade others, he’d have a very nice paper indeed. Unfortunately, he misses the strategic, functional element to these biases. When taken out of context, such biases can look “disordered” indeed, in much the same way that, when put underwater, my car seems disordered in its use as a submarine.

References: Morewedge, C. K. (2009). Negativity bias in attribution of external agency. Journal of Experimental Psychology: General, 138(4), 535-545. PMID: 19883135