Never attribute to malice what is adequately explained by stupidity – Hanlon’s Razor
Disagreement and dispute are pervasive parts of human life, arising for a number of reasons. As Hanlon’s razor suggests, the charitable response to disagreement would be to just call someone stupid for disagreeing, rather than evil. Thankfully, these are not either/or types of aspersion we can cast, and we’re free to consider those who disagree with us both stupid and evil if we so desire. Being an occasional participant in discussions – both in the academic and online worlds – I’m no stranger to either of those labels. The question of the accuracy of the aspersions remains, however: calling someone ignorant or evil could serve the function of spreading accurate information; then again, it could also serve the function of persuading others not to listen to what the target has to say.
“The other side doesn’t have the best interests of the Empire in mind like I do”
When persuasion gets involved, we are entering the realm where perceptions can be inaccurate, yet still be adaptive. Usually, being wrong about the world carries costs, as incorrect information yields worse decision making. Believing inaccurately that cigarettes don’t increase the probability of developing lung cancer will not alter the probability of developing a tumor after picking up a pack-a-day habit. If, however, my beliefs can cause other people to behave differently, then holding inaccurate beliefs could do me some good, and being wrong isn’t quite as costly. For instance, even if my motives in a debate are purely and ruthlessly selfish, I might be able to persuade other people to support my side anyway by both (1) suggesting that my point of view is not being driven by my underlying biases – but rather by the facts of the matter and my altruistic tendencies – and (2) suggesting that my opponent’s perspective is not to be trusted (usually for the opposite set of reasons). The explanation for why, in debates, people frequently accuse others of not understanding their perspective, or of sporting particular sets of biases, then, might have little to do with accuracy and more to do with convincing other people not to listen; to the extent that such accusations happen to be accurate, it might be more accidental than anything.
One example I discussed last year concerned the curious case of Kim Kardashian. Kim had donated 10% of some eBay sales to disaster relief, prompting many people to deride Kim’s behavior as selfishly motivated (even evil) and, in turn, also suggest that her donation be refused by aid organizations or the people in need themselves. It seemed to me that people were more interested in condemning Kim because they had something against her in particular, rather than because any of what she did was traditionally wrong or otherwise evil. It also seemed to me that, putting it lightly, Kim’s detractors might have been exaggerating her predilection towards evil by just a little bit. Maybe they were completely accurate – it’s possible, I suppose – it just didn’t seem particularly likely, especially given that many of the people condemning her probably knew very little about Kim on a personal level. If you want to watch other people make uncharitable interpretations of other people’s motives, I would encourage you to go observe a debate between people passionately arguing over an issue you couldn’t care less about. If you do, I suspect you will be struck by a sense that both sides of the dispute are, at least occasionally, being a little less than accurate when it comes to pinning motives and views on the other.
Alternatively, you could just observe the opposite side of a dispute you actually are invested in; chances are you will see your detractors as being dishonest and malicious, at least if the results obtained by Reeder et al (2005) are generalizable. In their paper, the researchers sought to examine whether one’s own stance on an issue tended to color one’s perceptions of the opposition’s motives. In their first study, Reeder et al (2005) presented about 100 American undergraduates with a survey asking them both about their perceptions of the US war in Iraq (concerning matters such as what motivated Bush to undertake the conflict and how likely particular motives were to be part of that reason), as well as whether they supported the war personally and what their political affiliation was. How charitable were the undergraduates when it came to assessing the motives for other people’s behavior?
“Don’t spend it all in one place”
The answer, predictably, tended to hinge on whether or not the participant favored the war themselves. In the open-ended responses, the two most common motives listed for going to war were self-defense and bringing benefits to the Iraqi people, freeing them from a dictatorship; the next two most common were proactive aggression and hidden motives (like trying to take US citizens’ minds off other issues, such as the economy). Among those who favored the war, 73% listed self-defense as a motive for the war, compared to just 39% of those who opposed it; conversely, proactive aggression was listed by 30% of those who supported the war, relative to 73% of those who opposed it. The findings were similar for ratings of self-serving motives: on a 1-7 scale (from being motivated by ethical principles to selfishness), those in favor of the war gave Bush a mean of 2.81; those opposed to the war gave him a 6.07. It’s worth noting at this point that (assuming the scale is, in fact, measuring two opposite ends of a spectrum) both groups cannot be accurate in their perceptions of Bush’s motives. Given that those neither opposed to nor supportive of the war tended to fall in between those two groups in their attributions of motives, it is also possible that both sides could well be wrong.
Interestingly – though not surprisingly – political affiliation per se did not have much predictive value for determining what people thought of Bush’s motives for the war when one’s own support for the war was entered into a regression model alongside it. What predicted people’s motive attributions was largely their own view of the war. In other words, Republicans who opposed the war tended to view Bush largely the same as Democrats opposed to the war, just as Democrats supportive of the war viewed Bush the same as Republicans in favor of it. Reeder et al (2005) subsequently replicated these findings in a sample of Canadian undergraduates who, at the time, were far less supportive of the war on the whole than the American sample. Additionally, this pattern of results was also replicated when asking about the motives of other people who supported or opposed the war, rather than asking about Bush specifically. Further, when issues other than the war (in this case, abortion and gay marriage) were used, the same pattern of results obtained. In general, opposing an issue made those who supported it look more self-serving and biased, and vice versa.
The last set of findings – concerning abortion and gay marriage – was particularly noteworthy because of an addition to the survey: a measure of personal involvement in the issue. In addition to being asked whether they supported or opposed one side of the issue, participants were also asked how important the issue was to them and how likely they were to change their mind about their stance. As one might expect, the tendency to see one’s opposition as selfish, biased, close-minded, and ignorant was magnified by the extent to which one found the issue personally important. Though I can’t say for certain, I would venture a guess that, in general, how important an issue is to me is fairly uncorrelated with how much other people actually know about it. In fact, if these judgments of other people’s motives and knowledge were driven by the facts of the matter, then the authors should not have observed this effect of issue importance. That line of reasoning, again, suggests that these perceptions are probably aimed more at persuasion than accuracy. The extent to which they’re accurate is likely beside the point.
“Damn it all; I was aiming for the man”
While I find this research interesting, I do wish that it had been grounded in the theory I initially mentioned, concerning persuasion and accuracy. Instead, Reeder et al (2005) ground their account in naive realism, the tenets of which seem to be (roughly) that (a) people believe they are objective observers and (b) that other objective observers will see the world as they do, so (c) anyone who doesn’t agree must be ignorant or biased. Naive realism looks more like a description of the results they found, rather than an explanation for them. In the interests of completeness, the authors also ground their research in self-categorization theory, which states that people seek to differentiate their group from other groups in terms of values, with the goal of making their own group look better. Again, this sounds like a description of behavior, rather than an explanation for it. As the authors don’t seem to share my taste for a particular type of theoretical explanation grounded in considerations of evolutionary function here (at least in terms of what they wrote), I am forced to conclude that they’re at least ignorant, if not downright evil.*
References: Reeder, G., Pryor, J., Wohl, M., & Griswell, M. (2005). On attributing negative motives to others who disagree with our opinions. Personality & Social Psychology Bulletin, 31, 1498-1510.