Why Are They Called “Spoilers”?

Imagine you are running experiments with mice. You deprive the mice of food until they get hungry and then you drop them into a maze. Now obviously the hungry mice are pretty invested in the idea of finding the food; you have been starving them and all. You’re not really that evil of a researcher, though: in one group, you color-code the maze so the mice always know where to go to find the reward. The mice, I expect, would not be terribly bothered by your providing them with information and, if they could talk, I doubt many of them would complain about your “spoiling” the adventure of finding the food themselves. In fact, I would expect most people to respond the same way when they were hungry: they would rather you provide them with the information they sought directly instead of having to make their own way through the pain of a maze (or some equally annoying psychological task) before they could eat. We ought to expect this because, in this instance as well as many others, having access to greater quantities of accurate information allows you to do more useful things with your time. Knowing where food is cuts down on your required search time, which allows you to spend that time in other, more fruitful ways (like doing pretty much anything an undergraduate can do that doesn’t involve serving as a participant for psychologists). So what are we to make of cases where people seem to actively avoid such information and claim they find it aversive?

Spoiler warning: If you would rather formulate your own ideas first, stop reading now.

The topic arose for me lately in the context of the upcoming E3 event, where the next generation of video games will be previewed. There happens to be one video game specifically I find myself heavily invested in and, for whatever reason, I find myself wary of tuning in to E3 due to the risk of inadvertently exposing myself to any more content from the game. I don’t want to know what the story is; I don’t want to see any more gameplay; I want to remain as ignorant as possible until I can experience the game firsthand. I’m also far from alone in that experience: of the approximately 40,000 people who have voiced their opinions, a full half reported finding spoilers unpleasant. Indeed, the word that refers to the leaking of crucial plot details itself implies that the experience of learning them can actually ruin the pleasure that finding them out for yourself can bring, in much the same way that microorganisms make food unpalatable or dangerous to ingest. Am I, along with the other 20,000, simply mistaken? That is, do spoilers actually make the experience of reading some book or playing some video game any less pleasant? At least two people think the answer is “yes”.

Leavitt & Christenfeld (2011) suggest that spoilers, in fact, do not make the experience of a story any less pleasant. After all, the authors mention, people are perfectly willing to experience stories again, such as by rereading a book, without any apparent loss of pleasure from the story (curiously, they cite no empirical evidence on this front, making it an untested assumption). Leavitt & Christenfeld also suggested that perceptual fluency (in the form of familiarity) with a story might make it more pleasant because the information subsequently becomes easier to process. Finally, the pair appear all but entirely uninterested in positing any reasons as to why so many people might find spoilers unpleasant. The most they offer up is the possibility that suspense might have something to do with it, but we’ll return to that point later. The authors, like your average person discussing spoilers, didn’t offer anything resembling a compelling reason as to why people might not like them. They simply note that many people think spoilers are unpleasant and move on.

In any case, to test whether spoilers really spoiled things, they recruited approximately 800 subjects to read a series of short stories, some of which came with a spoiler, some of which came without one, and some in which the spoiler was presented as the opening paragraph of the short story itself. These stories were short indeed: between 1,400 and 4,200 words apiece, which works out to somewhere between roughly the length of this post and about three times that. I think this happens to be another important detail to which I’ll return later (as I have no intention of spoiling my ideas fully yet). After the subjects had read each story, they rated how much they enjoyed it on a scale of 1 to 10. Across all three types of stories that were presented – mysteries, ironic twists, and literary ones – subjects actually reported liking the spoiled stories somewhat more than the non-spoiled ones. The difference was slight, but significant, and certainly not in the spoilers-are-ruining-things direction. From this, the authors suggest that people are, in fact, mistaken in their beliefs about whether spoilers have any adverse impact on the pleasure one gets from a story. They also suggest that people might like birthday presents more if they were wrapped in clear cellophane.

Then you can get the disappointment over with much quicker.

Is this widespread avoidance of spoilers just another example of quirky, “irrational” human behavior, then, born from the fact that people tend to not have side-by-side exposure to both spoiled and non-spoiled versions of a story? I think Leavitt & Christenfeld are being rather hasty in their conclusion, to put it mildly. Let’s start with the first issue: when it comes to my concern over watching the E3 coverage, I’m not worried about getting spoilers for any and all games. I’m worried about getting spoilers for one specific game, and it’s a game from a series I already have a deep emotional commitment to (Dark Souls, for the curious reader). When Harry Potter fans were eagerly awaiting the moment they got to crack open the next new book in the series, I doubt they would care much one way or the other if you told them about the plot of the latest Die Hard movie. Similarly, a hardcore Star Wars fan would probably not have enjoyed someone leaving the theater in 1980 blurting out that Darth Vader was Luke’s father; by comparison, someone who didn’t know anything about Star Wars probably wouldn’t have cared. In other words, the subjects likely had absolutely no emotional attachment to the stories they were reading and, as such, the information they were being given was not exactly a spoiler. If the authors weren’t studying what people would typically consider aversive spoilers in the first place, then their conclusions about spoilers more generally are misplaced.

One of the other issues, as I hinted at before, is that the stories themselves were all rather short. It would take no more than a few minutes to read even the longest of them. This lack of time investment could pose a major issue for the study but, as the authors didn’t posit any good reasons for why people might not like spoilers in the first place, they didn’t appear to give the point much, if any, consideration. Those who care about spoilers, though, seem to be those who consider themselves part of some community surrounding the story; people who have made some lasting emotional connection with it, along with at least a moderately deep investment of time and energy. At the very least, people have generally selected for themselves the story to which they’re about to be exposed (which is quite unlike being handed a preselected story by an experimenter).

If the phenomenon we’re considering appears to be a costly act with no apparent compensating benefits – like actively avoiding information that would otherwise require a great deal of temporal investment to obtain – then it seems we’re venturing into the realm of costly signaling theory (Zahavi, 1975). Perhaps people are avoiding the information ahead of time so they can display their dedication to some person or group, or signal something about themselves, by obtaining the information personally. If the signal is too cheap, its information value can be undermined, and that’s certainly something people might be bothered by.

So, given the length of these stories, there didn’t seem to be much that one could actually spoil. If one doesn’t need to invest any real time or energy in obtaining the relevant information, spoilers would not be likely to cause much distress, even in cases where someone was already deeply committed to the story. At worst, the spoilers have ruined what would have been 5 minutes of effort. Further, as I previously mentioned, people don’t seem to dislike receiving all kinds of information (“spoilers” about the location of food or plot details from stories they don’t care about, for instance). In fact, we ought to expect people to crave these “spoilers” with some frequency, as information gained cheaply or for free is, on the whole, generally a good thing. It is only when people are attempting to signal something with their conspicuous ignorance that we ought to expect “spoilers” to actually be spoilers, because it is only then that they have the potential to spoil anything. In this case, they would be ruining an attempt to signal some underlying quality of the person who wants to find out for themselves.

Similar reasoning helps explain why it’s not enough for them to just hate people privately.

In two short pages, then, the paper by Leavitt & Christenfeld (2011) demonstrates a host of problems that can be found in the field of psychological research. In fact, this might be the largest number of problems I’ve seen crammed into such a small space. First, they appear to fundamentally misunderstand the topic they’re ostensibly researching. It seems to me, anyway, as if they’re trying to simply find a new “irrational belief” that people hold, point it out, and say, “isn’t that odd?”. Of course, simply finding a bias or mistaken belief doesn’t explain anything about it, and there’s little to no apparent effort made to understand why people might hold said odd belief. The best the authors offer is that the tension in a story might be heightened by spoilers, but that only comes after they had previously suggested that such suspense might detract from enjoyment by diverting a reader’s attention. While these two claims aren’t necessarily opposed, they seem at least somewhat conflicting and, in any case, neither claim is ever tested.

There’s also a conclusion that vastly over-reaches the scope of the data and is phrased without the necessary cautions. They go from saying that their data “suggest that people are wasting their time avoiding spoilers” to intuitions about spoilers just being flat-out “wrong”. I will agree that people are most definitely wasting their time by avoiding spoilers. I would just also add that, well, that waste is probably the entire point.

References: Leavitt, J. D., & Christenfeld, N. J. (2011). Story spoilers don’t spoil stories. Psychological Science, 22(9), 1152-1154. PMID: 21841150

Zahavi, A. (1975). Mate selection – A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.

Understanding Understanding

“The greatest enemy of knowledge is not ignorance; it is the illusion of knowledge.” – Stephen Hawking

Many researchers in the field of psychology don’t appear to understand that restating a finding is not the same as explaining that finding. For instance, if you found that men are more likely to gamble than women, a typical form of “explanation” of this finding would be to say that men have more of a “risk bias” than women, resulting in them gambling more. Clearly this explanation doesn’t add anything that stating the finding didn’t; all it manages to do is add a label to the finding. Now some psychologists might understand this shortcoming and take the next step: they might say something along the lines of men perceiving gambling to be more fun or more likely to pay off than women do. While that might well be true, it still falls short of a complete explanation. Instead, it merely pushes the explanation back a step, to a question about why men might perceive gambling differently than women do. If the researchers understand this further shortcoming and take the next step, they’ll reference some cause of that feeling. If we’re lucky, that cause will be non-circular and amount to more than the phrase “culture did it”.

The smart money is on betting against that outcome, though…

A good explanation needs to focus on some outcome of a behavior; some plausible function of that outcome that can account for the emotion or feeling itself. This is notably easier in some cases than others: hunger motivates people to seek out and consume food, avoiding starvation; fear motivates people to escape from or avoid threatening situations, avoiding danger; guilt motivates people to make amends and repair relationships with wronged parties, avoiding condemnation and punishment while reaping the benefits of social interaction. Recently, I found myself posing that functional question about a feeling that is not often discussed: understanding. Teasing out the function of understanding is by no means a straightforward task. Before undertaking it, however, I need to make a key distinction concerning precisely what I mean by “understanding”. After all, if Wikipedia has a hard time defining the term, I can’t just assume that we’ll all be on the same page despite using the same word.

The distinction I would like to draw is between understanding per se and the feeling of understanding. The examples given on Wikipedia reflect understanding per se: the ability to draw connections among mental representations. Understanding per se, then, represents the application of knowledge. If a rat has learned to press a bar for food, for instance, we would say that the rat understands something about the connection between bar pressing and receiving food, in that the former seems to cause the latter. The degree of understanding per se can vary in terms of accuracy and completeness. To continue with the rat example, a rat can understand that pressing the bar generally leads to it receiving food without understanding the mechanisms through which the process works. Similarly, a person might understand that taking an allergy pill will result in their allergy symptoms being reduced, but their understanding of how that process works might be substantially less detailed or accurate than the understanding of the researchers responsible for developing the pill.

Understanding per se is to be distinguished from the feeling of understanding. While understanding per se refers to the actual connections among your mental representations, the feeling of understanding refers to your mental representations about the state of those other mental representations. The feeling of understanding, then, is a bit of a metacognitive sensation: thinking about your own thinking. Much like understanding per se, the feeling of understanding comes in varying degrees: one can feel as if they don’t understand something at all, as if they understand it completely, or anything in between. With this distinction made, we can begin to consider some profitable questions: what is the connection between understanding per se and the feeling of understanding? What behaviors are encouraged by the feeling of understanding? What functional outcome(s) are those behaviors aimed at achieving? Given these functional outcomes, what predictions can we draw about how people experiencing various degrees of feeling as if they understand something will react to certain contexts?

Maybe even what Will Smith meant when he wrote “Parents Just Don’t Understand”

To begin to answer these questions, let’s return to the initial quote. The enemy of knowledge is not ignorance, but rather the illusion of knowledge; the feeling of understanding. While a bit on the dramatic and poetic side of things, the quote brings to light an important idea: there is not necessarily a perfect correlation between understanding per se and the feeling of understanding. Sure, understanding per se might tend to trigger feelings of understanding, but we ought to be concerned with matters of degree. It is clear that increased feelings of understanding do not require a tight connection to degrees of understanding per se. In much the same way, one’s judgment of how attractive they are need not perfectly correlate with how attractive they actually are. This is a partial, if relatively underspecified, answer to our first question. Thankfully, that is all my account of understanding requires: a less than perfect correlation between understanding per se and feelings of understanding.

This brings us to the second question: what behaviors are motivated by the feeling of understanding? If you’re a particularly astute reader, you’ll have noticed that the term “understanding” appeared several times in the first paragraph. In each instance, it referred to researchers feeling that their understanding per se was incomplete. What did this feeling motivate researchers to do? Continue trying to build their understanding per se. In cases where researchers lack that feeling of incompleteness, they seem to do one thing: stop. That is to say, reaching a feeling of understanding appears to act as a stopping rule for learning. That people stop investing in learning when they feel they understand is likely what Hawking was hinting at in his quote. The feeling of understanding is the enemy of knowledge because it motivates you to stop acquiring the stuff. It might even motivate you to begin sharing that information with others, opting to speak on a topic rather than defer to someone you perceive to be an expert, but I won’t deal with that point here.

Given that people rarely, if ever, seem to reach complete understanding per se, why should we expect them to stop trying to improve? Part of the reason is that there’s a tradeoff between investing time in one aspect of your life versus investing it in any other. Time spent learning more about one skill is time not spent doing other potentially useful things. Further still, were you to plot a learning curve, charting how much new knowledge is gained per unit of time invested in learning, you’d likely see diminishing returns over time. Let’s say you were trying to learn how to play a song on some musical instrument. The first hour you spend practicing will result in you gaining more information than, say, the thirtieth hour. At some point in your practicing, you’ll reach a point where the value added by each additional hour simply isn’t worth the investment anymore. It is at this point, when some cognitive balance shifts away from investing time in learning one task and toward doing other things, that we should predict people will reach a strong feeling of understanding. Just as hunger wanes with each additional bite of food, feelings of understanding should grow with each additional piece of information.
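
To make that stopping-rule idea a little more concrete, here is a minimal toy sketch (in Python, with entirely invented numbers for both the learning curve and the value of alternative activities): knowledge gained per hour of practice shrinks over time, and once the marginal gain dips below the value of spending that hour elsewhere, the “keep learning” decision flips to “stop”.

```python
def marginal_gain(hour, initial_gain=10.0, decay=0.7):
    """Toy diminishing-returns curve: each hour of practice yields
    less new knowledge than the hour before it."""
    return initial_gain * (decay ** hour)

OPPORTUNITY_COST = 1.0   # hypothetical value of spending the hour on something else

hour = 0
total_knowledge = 0.0
while marginal_gain(hour) > OPPORTUNITY_COST:
    total_knowledge += marginal_gain(hour)
    hour += 1

# On this toy curve the balance tips after a handful of hours, which is the point
# at which we might predict a strong "feeling of understanding" to kick in.
print(f"Stopped after {hour} hours with {total_knowledge:.1f} units of knowledge")
```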

Also like hunger, some people tend a touch more towards the gluttonous side.

This brings us to the final question: what can we predict about people’s behavior on the basis of their feelings of understanding? Aside from the above-mentioned disinclination to learn further about some specific topic, we might also predict that repeated exposure to information we feel we already understand would be downright aversive (again, in much the same way that eating food after you feel full is painful). We might, for instance, expect people to react with boredom and diverted attention in classes that cover material too slowly. We might also expect people to react with anger when someone tries to explain something to them that they feel they already understand. In fact, there is a word for that latter act: condescension. Not only does condescension waste an individual’s time with redundant information, it could also serve as an implicit or explicit challenge to their social status via a challenge to their understanding per se (i.e., “You think you understand this topic, but you really don’t. Let me explain it to you again…nice…and…slowly…”). While this list is quite modest, I feel it represents a good starting point for understanding understanding. Of course, since I feel that way, there’s a good chance I’ll stop looking for other starting points, so I may never know.

Now You’re Just A Moral Rule I Used To Know

At one point in my academic career I found myself facing something resembling an ethical dilemma: I had what I felt was a fantastic idea for a research project, hindered only by the fact that someone had conducted a very similar experiment a few years prior to my insight. For those of you unfamiliar with the way research is conducted, this might not seem like too big of a deal; after all, good science requires replications, so it seems like I should be able to go about my research anyway with no ill effects. Somewhat unfortunately for me – and the scientific method more generally – academic journals are not often very keen on publishing replications, nor are dissertation committees and other institutions that might eventually examine my resume impressed by them. There was, however, a possible “out” for me in this situation: I could try to claim ignorance. Had I not read the offending paper, or if others didn’t know about it (“it” being either the paper’s existence or the knowledge that I had read it), I could have more convincingly presented my work as entirely novel. All I would have had to do is not cite the paper and write as if no work had been conducted on the subject, much like Homer Simpson telling himself, “If I don’t see it, it’s not illegal,” as he runs a red light.

Plan B for making the work novel again is slightly messier…

Since I can’t travel back in time and unread the paper, presenting the idea as completely new (which might mean more credit for me) would require that I convince others that I had not read it. Attempting that, however, comes with a certain degree of risk: if other people found out that I had read the paper and failed to give proper credit, my reputation as a researcher would likely suffer as a result. Further, since I know that I read the paper, that knowledge might unintentionally leak out, resulting in my making an altogether weaker claim of novelty. Thankfully (or not so thankfully, depending on your perspective), there’s another way around this problem that doesn’t involve time travel: my memory for the study could simply “fail”. If I were suddenly no longer aware of the fact that I had read the paper, if those memories no longer existed or existed but could not be accessed, I could honestly claim that my research was new and exciting, making me that much better off.

Some new research by Shu and Gino (2012) asked whether our memories might function in this fashion, much like the Joo Janta 200 Super-Chromatic Peril-Sensitive Sunglasses found in The Hitchhiker’s Guide to the Galaxy series: darkening at the first sign of danger, preventing the wearer from noticing and allowing them to remain blissfully unaware. In this case, however, the researchers asked whether engaging in an immoral action – cheating – might subsequently result in the actor’s inability to remember other moral rules. Across four experiments, when subjects were given an opportunity to act less than honestly, either through commission or omission, they reported remembering fewer previously read moral – but not neutral – rules.

In the first of these experiments, participants read both an honor code and a list of requirements for obtaining a driver’s license, and they were informed that they would be answering questions about the two later. The subjects were then given a series of problems to try to solve in a given period of time, with each correct answer netting a small profit. In one of the conditions, the experimenter tallied the number of correct answers for each participant and paid them accordingly; in the other condition, subjects noted how many answers they got right and paid themselves privately, allowing subjects to misrepresent their performance for financial gain. Following their payment, subjects were then given a memory task for the previously-read information. When given the option for cheating, about a third of the subjects took advantage of the opportunity, reporting that they had solved an additional five of the problems, on average. That some people cheated isn’t terribly noteworthy; what was noteworthy is that when the subjects were tested on their recall of the information they had initially read, those who cheated tended to remember fewer items concerning the honor code than those who did not (2.33 vs 3.71, respectively), but remembered a similar number of items about the license rules (4 vs 3.79). The cheaters’ memories seemed to be, at least temporarily, selectively impaired for moral items.

There goes that semester of business ethics…

Of course, that pattern of results is open to a plausible alternative explanation: people who read the moral information less carefully were also more likely to cheat (or people who are more interested in cheating had less of an interest in moral information). The second experiment sought to rule that explanation out. In the follow-up study, subjects initially read two moral documents: the honor code and the Ten Commandments. The design was otherwise similar, minus one key detail: subjects took two memory tasks, one before they had the opportunity to cheat and another one after the fact. Before there was any option for dishonest behavior, subjects’ performance on their memory for moral items was similar regardless of whether they would later cheat or not (4.33 vs 4.44, respectively). After the problem solving task, however, the subjects who cheated subsequently remembered fewer moral items about the second list they read (3.17), relative to those who did not end up cheating (4.21). The decreased performance on the memory task seemed to be specific to the subjects who cheated, but only after they had acted dishonestly; not before.

The third experiment shifted gears, looking instead at acts of omission rather than outright lying. First, subjects were asked to read the honor code as before, with one group of subjects being informed that the memory task they would later complete would yield an additional $1.50 of payment for each correct answer. This gave the subjects some incentive to remember and accurately report their knowledge of the honor code later (to try to rule out the possibility that, previously, subjects had remembered the same amount of moral information but just neglected to report it). Next, subjects were asked to solve some SAT problems on a computer, and each correct answer would, as before, net the subject some additional payment. However, some subjects were informed that the program they were working with contained a glitch that would cause the correct answer to be displayed on the screen five seconds after the problem appeared unless they hit the space bar. The results showed that, of the subjects who knew the correct answer would pop up on the screen, almost all of them (minus one very moral subject) made use of that glitch at least once during the experiment and, as before, the cheaters recalled fewer moral items than the non-cheating groups (4.53 vs 6.41). Further, while the incentives for accurate recall were effective in the non-cheating group (they remembered more items when they were paid for each correct answer), this was not the case for the cheaters: whether they were being paid to remember or not, the cheaters still remembered about the same amount of information.

Forgetting about the fourth experiment for now, I’d like to consider why we might expect to see this pattern of results. Shu and Gino (2012) suggest that such motivated forgetting might help in “reducing dissonance and regret”, to maintain one’s “self-image”. Such explanations are not even theoretically plausible functions for this kind of behavior, as “feeling good”, in and of itself, doesn’t do anything useful. In fact, forgetting moral rules could be harmful, to the extent that it might make one more likely to commit acts that others would morally condemn, resulting in increased social sanctions or physical aggression. However, if such ignorance was used strategically, it might allow the immoral actor in question to mitigate the extent of that condemnation. That is to say, committing certain immoral acts out of ignorance is seen as being less deserving of punishment than committing them intentionally, so if you can persuade others that you just made a mistake, you’d be better off.

“Oops?”

While such an explanation might be at least plausible, there are some major issues with it, namely that cheating-contingent rule forgetting is, well, contingent on the fact that you cheated. Some cognitive system needs to know that you cheated in the first place in order to start suppressing access to your memory for moral rules, and if that system knows that a moral rule has been violated, it may leak that information into the world (in other words, it might cause the same problem it was hypothesized to solve). Relatedly, suppressing memory accessibility for moral rules more generally, specifically moral rules unrelated to the current situation, probably won’t do you much good when it comes to persuading others that you didn’t know the moral rule you actually broke – the one they’ll likely be condemning you for. If you’re caught stealing, forgetting that adultery is immoral won’t help (and claiming that you didn’t know stealing was immoral is itself not the most believable of excuses).

That said, the function behind the cognitive mechanisms generating this pattern of results likely does involve persuasion at its conceptual core. That people have difficulty accessing moral information after they’ve done something less than moral probably represents some cognitive systems for moral condemnation becoming less active (one side effect of which is that your memory for moral rules isn’t accessed, as one isn’t trying to find a moral violation), while systems for defending against moral condemnation come online. Indeed, as the fourth, unreviewed study found, even moral words appeared to be less accessible, not just rules. However, this was only the case for cheaters who had been exposed to an honor code; when there was less of a need to defend against condemnation (when one didn’t cheat or hadn’t been exposed to an honor code), those systems stayed relatively dormant.

References: Shu, L., & Gino, F. (2012). Sweeping dishonesty under the rug: How unethical actions lead to forgetting of moral rules. Journal of Personality and Social Psychology, 102(6), 1164-1177. DOI: 10.1037/a0028381

My Eyes Are Up Here (But My Experiences Aren’t)

According to the Stanford Encyclopedia of Philosophy, objectification is a central notion in feminist theory. Among the listed features of objectification found there, I’d like to focus on number seven in particular:

Denial of subjectivity: the treatment of a person as something whose experiences and feelings (if any) need not be taken into account.

Non-living objects – and even some living ones – are not typically perceived to have a mind capable of experiences. Cars, for instance, are not often thought of as possessing the capacity to experience things the way sentient beings do, like pain or sound (though that doesn’t always stop drivers from verbally bargaining with their cars or physically hitting them when they “refuse” to start). Accordingly, people find different behaviors more socially justifiable when they’re directed towards an object, as opposed to a being capable of having subjective experiences. For instance, many people might say that it’s morally wrong to hit a person because, in part, being hit hurts; hitting a car, by contrast, while silly or destructive, isn’t as morally condemned, as the car feels no pain. As people generally don’t wish to be treated in the same fashion as objects, their objections to being objectified would seem to follow naturally.

It has often been asserted that focusing on someone’s (typically a woman’s) physical characteristics (typically the sexual ones) results in the objectification of that person; objectification which strips them of their mind, and with it their capacity for experiences. They’re reduced to the status of “tool” for sexual pleasure, rather than “person”. This assertion makes a rather straightforward prediction: increasing the focus on someone’s body ought to diminish perceptions of their capacity to experience things.

Like the pain of his daily steroid injections.

When that prediction was put to the test by Gray et al (2011), however, the researchers found precisely the opposite pattern of results across six experiments: increasing the focus on a person’s physical characteristics resulted in the perception that the person was more capable of subjective experiences. In the first of these experiments, subjects were presented with either a male or female face alone, or those same faces complete with a portion of their exposed upper body, followed by questions about the ability of the pictured person to do (behave morally, control themselves) or feel (hunger, desire) certain things, relative to others. When more of the person’s body was on display, they were rated as slightly less likely to be able to do things (2.90 vs 3.23 out of 5), but slightly more able to experience things (3.65 vs 3.38), relative to the face-alone condition. It’s also worth noting that a score of 3 on this scale denoted being average, relative to others, in either agency or experience, so showing more skin certainly didn’t remove the perception of the person having a mind (they were still about average); it just altered what kind of mind was being perceived.

The basic effect was replicated in the second of these studies. Subjects were asked to assess pictures of two women along either physical or professionalism variables. Subsequently, the subjects were asked which of the two women they thought was more capable of doing or feeling certain things, as before. When a woman in the picture had been assessed along the physical variables, she was rated as being slightly more capable of experience but slightly less agentic; when that same woman was instead assessed along the professionalism variables, the reverse pattern held – more agency and less patiency.

The researchers turned up the sex in the next two studies. In the third experiment, subjects saw one of ten target men or women in a picture, either clothed or naked (with the sexy parts tastefully blurred, of course), and assessed the target along the same agency or experiential dimensions. The naked targets were rated as having a greater capacity for experience, relative to their clothed pictures (3.28 vs 3.18), while also having less agency (2.92 vs 3.26). Further, though I didn’t mention this before, it was the female targets who were ascribed more overall mind in this study, as was also the case in the first study, though this difference was small (just to preemptively counter the notion that women were being universally perceived as having less of a mind). On a possibly related note, there was also a positive correlation between target attractiveness and mind perception: the more attractive the person in the picture was, the more capable of agency and experience they were rated as being.

Taking that last experiment one step further, one of the female targets that had previously been presented was again shown to subjects either clothed or naked, but a third condition was added: that woman also happened to have done an adult film, and the (highly sexualized) picture of her on its cover was rated along the same dimensions as the other two. In terms of her capacity for agency, there was a steady decline over the clothed, naked, and sexualized pictures (2.92, 2.76, and 2.58), whereas there was a steady incline on the experiential dimension (2.91, 3.18, and 3.45). Overall, the results really do make for a very good-looking graph.

Results so explicit they might not be suitable for minors

Skipping the fifth study, the final experiment looked at how a person might be treated, contingent on how much skin is on display. Subjects were presented with a picture of a male confederate who was hooked up to some electrodes and either clothed or shirtless. It was the subject’s job to decide which tasks to give to the confederate (i.e. have the confederate do task X or task Y), and some of those tasks ostensibly involved painful shocks. The subjects were told to only administer as many shock tasks as they thought would be safe, as their goal was to protect the confederate (while still gathering shock data, that is). Of interest was how often the subjects decided to assign the shock task to the confederate out of the 40 opportunities they had to do so. In the shirtless condition, the subjects tended to think of the confederate more in terms of his body rather than his mind, as was hoped; they also liked the confederate just as much, no matter his clothing situation. Also, as predicted, subjects administered fewer shocks to the shirtless confederates (8 times, on average, as compared with almost 14). Focusing on a person’s body seemed to make these subjects less inclined to hurt them, fitting nicely with the increases we just saw in perceptions of capacity for experience.

Just to summarize, focusing on someone’s physical characteristics, whether that someone was a man or a woman, did not lead to diminished attributions of their capacity to have experiences; just their agency. People were perceiving a mind in the “objectified” targets; they were just perceiving different sorts – or focusing on different aspects – of minds. Now perhaps some people might counter that this paper doesn’t tell us much about objectification because there wasn’t any – sexual or otherwise – going on, as “sexual objectification is the viewing of people solely as de-personalised objects of desire instead of as individuals with complex personalities and desires/plans of their own“. Indeed, all the targets in these experiments were viewed as having both experiences and agency, and the ratings of those two dimensions hovered closely around the midpoints of the scales; they clearly weren’t being viewed as mindless objects in any meaningful sense, so maybe there was no objectification going on here. However, the same website that provided the sexual objectification definition goes on to list pornography and the representation of women in media as good examples of sexual objectification, both of which could be considered to have been represented in the current paper. For such a criticism to have any teeth, the use of the term “objectification” would need to be reined in substantially, restricted to cases where depersonalization actually occurs (meaning things like pointing a video camera at someone’s body don’t qualify).

While these results are all pretty neat, one thing this paper seriously wants for is an explanation of them. Gray et al (2011) only redescribe their findings in terms of “common-sense dualism”, which is less than satisfying, and it doesn’t seem to account for the findings on attractiveness very well either. The question they seem to be moving towards involves examining the ways we perceive others more generally: when and why certain aspects of someone’s mind become relatively more salient. Undoubtedly, the ways these perceptions shift will turn out to be quite complex and context-specific. For instance, if I were going in for, say, a major operation, I might be very interested in, to some extent, temporarily “reducing” the person doing my surgery from a complex person with all sorts of unique attributes and desires to simply being a surgeon because, at that moment, their other non-surgery-related traits aren’t particularly relevant.

“A few more people are coming over for dinner; which of you guys are the flattest?”

While it’s not particularly poetic, what’s important in that situation – and many other situations more generally – is whether the person in question can help you do something useful; whether they’re a useful “tool” for the situation at hand (admittedly, they’re rather peculiar kinds of tools that need to be properly motivated to work, but the analogy works well enough). If you need surgery, someone’s value as a mate won’t be particularly relevant there; after you’ve recovered, left the hospital, and found a nice bar, the situation might be reversed. Which of a person’s traits are most worthy of focus will depend on the demands of the task at hand: what goals are being sought, how they might be achieved, and whom they might be most profitably achieved with. Precisely what problem the aforementioned perceptual shifts between agency and experience are supposed to solve – what useful thing they allow the perceivers to do – is certainly a matter worthy of deeper consideration for anyone interested in objectification.

References: Gray, K., Knobe, J., Sheskin, M., Bloom, P., & Barrett, L. (2011). More than a body: Mind perception and the nature of objectification. Journal of Personality and Social Psychology, 101 (6), 1207-1220 DOI: 10.1037/a0025883

Are Associations Attitudes?

If there’s one phrase that people discussing the results of experiments have heard more than any other, a good candidate might be “correlation does not equal causation”. Correlations can often get mistaken for (at least implying) causation, especially if the results are congenial to a preferred conclusion or interpretation. This is a relatively uncontroversial matter which has been discussed to death, so there’s little need to continue on with it. There is, however, a related reasoning error people also tend to make with regard to correlation; one that is less discussed than the former. This mistake is to assume that a lack of correlation (or a very low one) means no causation. Here are two reasons one might find no correlation despite an underlying relationship: in the first case, no correlation could result from something as simple as there being no linear relationship between the two variables. As correlations only capture linear relationships, a relationship shaped like a bell curve – where one variable first rises and then falls as the other increases – would tend to yield a correlation near zero.

For the second case, consider the following example: event A causes event B, but only in the absence of variable C. If variable C randomly varies (it’s present half the time and absent the other half), [EDIT: H/T Jeff Goldberg] you might end up with no correlation, or at least a very reduced one, despite direct causation. This example becomes immediately more understandable if you relabel “A” as heterosexual intercourse, “B” as pregnancy, and “C” as contraceptives (ovulation works too, provided you also replace “absence” with “presence”). That said, even if contraceptives aren’t in the picture, the correlation between sexual intercourse and pregnancy is still pretty low.
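
If you want to see that attenuation for yourself, here is a minimal simulation sketch (in Python, with made-up probabilities rather than anything drawn from real data): A produces B only when C is absent, C shows up half the time, and the resulting A-B correlation comes out well below what it would be without C, despite the direct causal link.

```python
import random

random.seed(1)

def pearson_r(x, y):
    """Plain Pearson correlation, computed by hand to keep things self-contained."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = (sum((xi - mx) ** 2 for xi in x) / n) ** 0.5
    sy = (sum((yi - my) ** 2 for yi in y) / n) ** 0.5
    return cov / (sx * sy)

def simulate(c_can_block, n=100_000):
    """Toy model: A causes B (90% of the time), but only when C is absent."""
    a_vals, b_vals = [], []
    for _ in range(n):
        a = random.random() < 0.5                       # event A, half the time
        c = c_can_block and (random.random() < 0.5)     # blocker C, present half the time
        b = a and not c and (random.random() < 0.9)     # B requires A, no C, and a bit of luck
        a_vals.append(int(a))
        b_vals.append(int(b))
    return pearson_r(a_vals, b_vals)

print("A-B correlation without C:", round(simulate(False), 2))  # around 0.9
print("A-B correlation with C:   ", round(simulate(True), 2))   # noticeably lower, same causal link
```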

And just in case you find that correlation reaching significance, there’s always this.

So why all this talk about correlation and causation? Two reasons: first, this is my website and I find the matter pretty neat. More importantly, though, I’d like to discuss the IAT (implicit association test) today; specifically, I’d like to address the matter of how well the racial IAT correlates (or rather, fails to correlate) with other measures of racial prejudice, and how we ought to interpret that result. While I have touched on this test very briefly before, it was in the context of discussing modularity; not dissecting the test itself. Since the IAT has recently crossed my academic path again on more than one occasion, I feel it’s time for a more complete engagement with it. I’ll start by discussing what the IAT is, what many people seem to think it measures, and finally what I feel it actually assesses.

The IAT was introduced by Greenwald et al. in 1998. As its name suggests, the test was ostensibly designed to do something it would appear to do fairly well: measure the relative strengths of initial, automatic cognitive associations between two concepts. If you’d like to see how this test works firsthand, feel free to follow the link above but, just in case you don’t feel like going through the hassle, here’s the basic design (using the race version of the test): subjects are asked to respond as quickly as possible to a number of stimuli. In the first phase, subjects view pictures of black and white faces flashed on the screen and press one key if the face is black and another if it’s white. In the second phase, subjects do the same task, but this time they press one key if the word that flashes on the screen is positive and another if it’s negative. Finally, these two tasks are combined, with subjects asked to press one key if the face is white or the word is positive, and another key if the face is black or the word is negative (these conditions then flip). Differences in reaction times across these combined conditions are taken to be measures of implicit cognitive associations. So, if you’re faster to categorize black faces with positive words, you’re said to have a more positive association towards black people.
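
For concreteness, here is a rough sketch of that scoring logic in Python. To be clear, this is a deliberate simplification with invented reaction times, not the published scoring algorithm (which involves additional trial screening and transformations); the core idea is just that the “association” boils down to a difference in average response speed between the two combined pairings.

```python
from statistics import mean, stdev

def simple_association_score(pairing_one_rts, pairing_two_rts):
    """Simplified IAT-style score: the difference in mean reaction time between
    the two combined pairings, scaled by the pooled spread of all trials.
    A positive value means faster responding under pairing one."""
    diff = mean(pairing_two_rts) - mean(pairing_one_rts)
    pooled_sd = stdev(pairing_one_rts + pairing_two_rts)
    return diff / pooled_sd

# Hypothetical reaction times (in milliseconds) for one subject
white_or_positive_block = [620, 655, 590, 610, 640, 600]   # key 1 = white face or positive word
black_or_positive_block = [720, 750, 690, 705, 740, 710]   # key 1 = black face or positive word

# A positive score here would be read as a stronger white-positive association
print(round(simple_association_score(white_or_positive_block, black_or_positive_block), 2))
```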

Having demonstrated that many people seem to show a stronger association between white faces and positive concepts, the natural question arises of how to interpret these results. Unfortunately, many psychological researchers and laypeople alike have taken an unwarranted conceptual leap: they assume that these differential association strengths imply implicit racist attitudes. This assumption happens to meet with an unfortunate snag, however, which is that these implicit associations tend to have very weak to no correlations with explicit measures of racial prejudice (even if the measures themselves, like the Modern Racism Scale, are of questionable validity to begin with). Indeed, as reviewed by Arkes & Tetlock (2004), whereas the vast majority of undergraduates tested manifest exceedingly low levels of “modern racism”, almost all of them display a stronger association between white faces and positivity. Faced with this lack of correlation, many people have gone on to make a second assumption to account for it: that the implicit measure is able to tap some “truer” prejudiced attitude that the explicit measures are less able to tease out. I can’t help but wonder, though, what those same people would have had to say if positive correlations had turned up…

“Correlations or no, there’s literally no data that could possibly prove us wrong”

Arkes & Tetlock (2004) put forth three convincing reasons not to make that conceptual jump from implicit associations to implicit attitudes. Since I don’t have the space to cover all their objections, I’ll focus on the key points. The first is one that I feel ought to be fairly obvious: quicker associations between whites and positive concepts are capable of being generated by merely being aware of racial stereotypes, irrespective of whether one endorses them on any level, conscious or not. Indeed, even African American subjects were found to manifest pro-white biases in these tests. One could take those results as indicative of black subjects being implicitly racist against their own ethnic group, though it would seem to make more sense to interpret those results in terms of the black subjects being aware of stereotypes they did not endorse. The latter interpretation also goes a long way towards understanding the small and inconsistent correlations between the explicit and implicit measures; the IAT is measuring a different concept (knowledge of stereotypes) than the explicit measures (endorsement of stereotypes).

In order to appreciate the next criticism of this conceptual leap, there’s an important point worth bearing in mind concerning the IAT: the test doesn’t measure whether two concepts are associated in any absolute sense; it merely measures the relative strengths of those associations (for example, “bread” might be more strongly associated with “butter” than it is with “banana”, though it might be more associated with both than with “wall”). The importance of this point is that the results of the IAT do not tell us whether there is a negative association towards any one group; just whether one group is rated more positively than another. While whites might have a stronger association with positive concepts than blacks, it does not follow that blacks have a negative association overall, nor that whites have a particularly positive one. Both groups could be held in high or low regard overall, with one being slightly favored. In much the same way, I might enjoy eating both pizza and turkey sandwiches, but I would tend to enjoy eating pizza more. Since the IAT does not track whether these response time differentials are due to hostility, these results do not automatically apply well to most definitions of prejudice.

Finally, the authors make the (perhaps politically incorrect) point that noticing behavioral differences between groups – racial or otherwise – and altering behavior accordingly is not, de facto, evidence of an irrational racial bias; it could well represent the proper use of Bayesian inference, passing correspondence benchmarks for rational behavior. If one group, A, happens to perform behavior X more than group B, it would be peculiar to ignore this information if you’re trying to predict the behavior of an individual from one of those groups. In fact, when people fail to do as much in other situations, people tend to call that failure a bias or an error. However, given that race is a touchy political subject, people tend to condemn others for using what Arkes & Tetlock (2004) call “forbidden base rates”. Indeed, the authors report that previous research found subjects were willing to condemn an insurance company for using base rate data for the likelihood of property damage in certain neighborhoods when that base rate also happened to correlate with the racial makeup of those neighborhoods (but not when those racial correlates were absent).
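
As a concrete (and deliberately abstract) illustration of what a base rate buys you, here is a minimal Bayes’ rule sketch with invented numbers: if behavior X is more common in group A than in group B, then observing X should shift your estimate of group membership away from a 50/50 prior.

```python
def posterior_group_a(p_x_given_a, p_x_given_b, prior_a=0.5):
    """Bayes' rule: probability that an individual belongs to group A,
    given that they performed behavior X."""
    prior_b = 1.0 - prior_a
    evidence = p_x_given_a * prior_a + p_x_given_b * prior_b
    return (p_x_given_a * prior_a) / evidence

# Invented base rates: behavior X occurs in 30% of group A and 10% of group B
print(round(posterior_group_a(0.30, 0.10), 2))  # 0.75: the base rate is informative
# Ignoring the differential base rate would leave the estimate stuck at the 0.5 prior
```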

A result which fits nicely with other theory I’ve written about, so subscribe now and don’t miss any more exciting updates!

To end this on a lighter (and possibly less politically charged) note, a final point worth considering is that this test measures the automaticity of activation; not necessarily the pattern of activation which will eventually obtain. While my immediate reaction towards a brownie within the first 200 milliseconds might be “eat that”, that doesn’t mean that I will eventually end up eating said brownie, nor would it make me implicitly opposed to the idea of dieting. It would seem that, in spite of these implicit associations, society as a whole has been getting less overtly racist. The need for researchers to dig this deep to try and study racism could be taken as heartening, given that we “now attempt to gauge prejudice not by what people do, or by what people say, but rather by millisecs of response facilitation or inhibition in implicit association paradigms” (p. 275). While I’m sure there are still many people who will make much of these reaction time differentials for reasons that aren’t entirely free from their personal politics, it’s nice to know just how much successful progress our culture seems to have made towards eliminating racism.

References: Arkes, H. R., & Tetlock, P. E. (2004). Attributions of implicit prejudice, or “Would Jesse Jackson ‘fail’ the Implicit Association Test?” Psychological Inquiry, 15, 257-278.

Greenwald, A.G., McGhee, D.E., & Schwartz, J.L.K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464-1480

No, Really; Domain General Mechanisms Don’t Work (Either)

Let’s entertain a hypothetical situation in which your life path had led you down the road to becoming a plumber. Being a plumber, your livelihood depends on both knowing how to fix certain plumbing-related problems and having the right tools for getting the job done: these tools would include a plunger, a snake, and a pair of clothes you don’t mind never wearing again. Now let’s contrast being a plumber with being an electrician. Being an electrician also involves specific knowledge and the right tools, but those sets do not overlap well with those of the plumber (I think, anyway; I don’t know too much about either profession, but you get the idea). A plumber who shows up for the job with a soldering iron and wire strippers is going to be seriously disadvantaged at getting that job done, just as a plunger and a snake are going to be relatively ineffective at helping you wire up the circuits in a house. The same can be said for your knowledge base as well: knowing how to fix a clogged drain will not tell you much about how to wire a circuit, and vice versa.

Given that these two jobs make very different demands, it would be surprising indeed to find a set of tools and knowledge that worked equally well for both. If you wanted to branch out from being a plumber to also being an electrician, you would subsequently need additional tools and training.

And/Or a very forgiving homeowner’s insurance policy…

Of course, there is not always, or even often, a 1-to-1 relationship between the intended function of a tool and the applications towards which it can be put. For example, if your job involves driving in a screw and you happen to not have a screwdriver handy, you could improvise and use, say, a knife’s blade to turn the screw as well. That a knife can be used in such a fashion, however, does not mean it would be preferable to do away with screwdrivers altogether and just carry knives instead. As anyone who has ever attempted such a stunt can attest, this is because knives often do not make the job very quick or easy; they’re generally inefficient at achieving that goal, given their design features, relative to a more functionally-specific tool. While a knife might work well as a cutting tool and less well as a screwdriver, it would function even worse still if used as a hammer. What we see here is that as tools become more efficient at one type of task, they often become less efficient at others, to the extent that those tasks do not overlap in terms of their demands. This is why it’s basically impossible to design a tool that simply “does useful things”; the request is massively underspecified, and the demands of one task often do not correlate highly with the demands of another. You first need to narrow the request by defining what those useful things are that you’re trying to do, and then figure out ways of effectively achieving your more specific goals.

It should have been apparent well before this point that my interest is not in jobs and tools per se, but rather in how these examples can be used to understand the functional design of the mind. I previously touched briefly on why it would be a mistake to assume that domain-general mechanisms would lead to plasticity in behavior. Today I hope to expand on that point and explain why we should not expect domain-general mechanisms – cognitive tools that are supposed to be jacks-of-all-trades and masters of none – to even exist. This will largely be accomplished by pointing out some of the ways that Chiappe & MacDonald (2005) err in their analysis of domain-general and domain-specific modules. While there is a lot wrong with their paper, I will only focus on certain key conceptual issues, the first of which involves the idea, again, that domain-specific mechanisms are incapable of dealing with novelty (in much the same way that a butter knife is clearly incapable of doing anything that doesn’t involve cutting and spreading butter).

Chiappe & MacDonald claim that a modular design in the mind should imply inflexibility: specifically, that organisms with modular minds should be unable to solve novel problems or solve non-novel problems in novel ways. A major problem that Chiappe & MacDonald’s account encounters is a failure to recognize that all problems organisms face are novel, strictly speaking. To clarify that point, consider a predator/prey relationship: while rabbits might be adapted for avoiding being killed by foxes, generally speaking, no rabbit alive today is adapted to avoid being killed by any contemporary fox. These predator-avoidance systems were all designed by selection pressures on past rabbit populations. Each fox that a rabbit encounters in its life is a novel fox, and each situation that fox is encountered in is a novel situation. However, since there are statistical similarities between past foxes and contemporary ones, as well as between the situations in which they’re encountered, these systems can still respond to novel stimuli effectively. This evaporates the novelty concern rather quickly; domain-specific modules can, in fact, only solve novel problems, since novel problems are the only kinds of problems that an organism will encounter. How well they will solve those problems will depend in large part on how much overlap there is between past and current scenarios.

Swing and a miss, novelty problem…

A second large problem in the account involves Chiappe and MacDonald’s failure to distinguish between the specificity of inputs and the specificity of functions. For example, the authors suggest that our abilities for working memory should be classified as domain-general abilities because many different kinds of information can be stored in working memory. This strikes me as a rather silly argument, as it could be used to classify all cognitive mechanisms as domain-general. Let’s return to our knife example; a knife can be used for cutting all sorts of items: it could cut bread, fabric, wood, bodies, hair, paper, and so on. From this, we could conclude that a knife is a domain-general tool, since its function can be applied to a wide variety of problems that all involve cutting. On the other hand, as mentioned previously, the list of things a knife can do efficiently is far shorter than the list of things it cannot: knives are awful hammers, fire extinguishers, water purifiers, and information-storage devices. The knife has a relatively specific function which can be effectively applied to many problems that all require the same general solution – cutting (provided, of course, the materials are able to be cut by the knife itself. That I might wish to cut through a steel door does not mean my kitchen knife is up to the task). To tie this back to working memory, our cognitive systems that dabble in working memory might be efficient at holding many different sorts of information in short-term memory, but they’d be worthless at doing things like regulating breathing, perceiving the world, or deciphering meaning, along with almost any other task. While the system can accept a certain range of different kinds of inputs, its function remains constant and domain-specific.

Finally, there is the largest issue their model encounters. I’ll let Chiappe & MacDonald spell it out themselves:

A basic problem [with domain-general modules] is that there are no problems that the system was designed to solve. The system has no preset goals and no way to determine when goals are achieved, an example of the frame problem discussed by cognitive scientists…This is the problem of relevance – the problem of determining which problems are relevant and what actions are relevant for solving them. (p.7)

Though they mention this problem in the beginning of their paper, the authors never actually take any steps to address that series of rather large issues. No part of their account deals with how their hypothetical domain-general mechanisms generate solutions to novel problems. As far as I can tell, you could replace the processes by which their domain-general mechanisms identify problems, figure out which information is and isn’t useful in solving said problems, figure out how to use that information to solve the problems, and figure out when the problem has been solved, with the phrase “by magic” and not really affect the quality of their account much. Perhaps “replace” is the wrong word, however, as they don’t actually put forth any specifics as to how these tasks are accomplished under their perspective. The closest they seem to come is when they write things along the lines of “learning happens” or “information is combined and manipulated” or “solutions are generated”. Unfortunately for their model, leaving it at that is not good enough.

A lesson that I thought South Park taught us a long time ago.

In summary, their novelty problem isn’t one, their “domain-general” systems are not general-purpose at the functional level at all, and the ever-present frame problem is ignored rather than addressed. That does not leave much of an account. While, as the authors suggest, being able to adaptively respond to non-recurrent features of our environment would probably be, well, adaptive, so would the ability to allow our lungs to become more “general-purpose” in the event we found ourselves having to breathe underwater. Just because such abilities would be adaptive, however, does not mean that they will exist.

As the classic quote goes, there are far more ways of being dead than there are of being alive. Similarly, there are far more ways of not generating adaptive behavior than there are of behaving adaptively. Domain-general information processors that don’t “know” what to do with the information they receive will tend to get things wrong far more often than they’ll get them right on those simple statistical grounds. Sure, domain-specific information processors won’t always get the right answer either, but the pressing question is, “compared to what?”. If that comparison is made to a general-purpose mechanism, then there wouldn’t appear to be much of a contest.

References: Chiappe, D., & MacDonald, K. (2005). The evolution of domain-general mechanisms in intelligence and learning. The Journal of General Psychology, 132(1), 5-40. DOI: 10.3200/GENP.132.1.5-40

The Salience Of Cute Experiments

In the course of proposing new research and getting input from others, I have had multiple researchers raise the same basic concern to me: the project I’m proposing might be unlikely to eventually get published because, assuming I find the results I predict I will, reviewers might feel the results are not interesting or attention-grabbing enough. While I don’t doubt that the concern is, to some degree, legitimate*, it has me wondering about whether there exists an effect that is essentially the reverse of that issue. That is, how often does bad research get published simply on the grounds that it appears to be interesting, and are reviewers willing to overlook some or all of the flaws of a research project because it is, in a word, cute?

Which is why I always make sure my kitten is an author on all my papers.

The cute experiment of the day is Simons & Levin (1998). If you would like to see a firsthand example of the phenomenon this experiment is looking at before I start discussing it, I’d recommend this video of the color changing card trick. For those of you who just want to skip right to the ending, or have already seen the video, the Simons & Levin (1998) paper sought to examine “change blindness”: the frequent inability of people to detect changes in their visual field from one moment to the next. While the color changing card trick only replaced the colors of people’s shirts, tablecloths, or backdrops, the experiment conducted by Simons & Levin (1998) replaced actual people in the middle of a conversation to see if anyone would notice. The premise of this study would appear to be interesting on the grounds that many people might assume they would notice something like the fact that they were suddenly talking to a different person than they were a moment prior, and the results of this study would seem to suggest otherwise. Sure sounds interesting when you phrase it like that.

So how did the researchers manage to pull off this stunt? The experiment began when a confederate holding a map approached a subject on campus. After approximately 10 or 15 seconds of talking, two men holding a door would pass in between the confederate and the subject. Behind this door was a second confederate who changed places with the first. The second confederate would, in turn, carry on the conversation as if nothing had happened. Of the 15 subjects approached in such a manner, only 7 reported noticing the change of confederate in the following interview. The authors mention that out of the 7 subjects that did notice the change, there seemed to be a bias in age: specifically, the subjects in the 20-30 age range (which was similar to that of the confederates) seemed to notice the change, whereas the older subjects (in the 35-65 range) did not. To explain this effect, Simons & Levin (1998) suggested that younger subjects might have been treating the confederates as their “in-group” because of their age (and accordingly paying more attention to their individual features) whereas the older subjects were treating the confederates as their “out-group”, also because of their age (and accordingly paying less attention to their features).

In order to ostensibly test their explanation, the authors ran a follow-up study. This time the same two confederates were dressed as construction workers (i.e. they wore slightly different construction hats, different outfits, and different tool belts) in order to make them appear as more of an “out-group” member to the younger subjects. The confederates then exclusively approached people in the younger age group. Lo and behold, when the door trick was pulled, this time only 4 of the 12 subjects caught on. So here we have a cute study with a counter-intuitive set of results and possible implications for all sorts of terms that end in -ism.

And the psychology community goes wild!

It seems to have gone unnoticed, however, that the interpretation of the study wasn’t particularly good. The first issue, though perhaps the smallest, is the sample size. Since these studies ran only 15 and 12 subjects respectively, the extent to which this difference in change blindness across groups (roughly 47% noticing the swap versus 33%) is just due to chance is unknown. Let’s say, however, that we give the results the benefit of the doubt and assume that they would remain stable if the sample size were scaled up. Even given that consideration, there are still some very serious problems remaining.
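
To put a rough number on that concern, here is a minimal sketch of the kind of back-of-the-envelope check I have in mind – my own illustration, not anything reported in the paper – running Fisher’s exact test on the counts described above (7 of 15 subjects noticing the swap in the first study versus 4 of 12 in the follow-up):

```python
# Back-of-the-envelope check on the reported counts (7/15 vs 4/12).
# This illustrates the sample-size worry; it is not an analysis the
# original authors performed.
from scipy.stats import fisher_exact

noticed_study1, total_study1 = 7, 15   # original door study
noticed_study2, total_study2 = 4, 12   # construction-worker follow-up

# 2x2 contingency table: rows are studies, columns are noticed / missed
table = [
    [noticed_study1, total_study1 - noticed_study1],
    [noticed_study2, total_study2 - noticed_study2],
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# With samples this small, p comes out around 0.7 -- nowhere near
# conventional significance levels.
```

None of which is to say the group difference isn’t real; it’s just that counts this small can’t distinguish a genuine difference from noise, which is the point being made above.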

The larger problem is that the authors did not actually test their explanation. This issue comes in two parts. First, Simons and Levin (1998) proposed that subjects were using cues of group membership in determining whether or not to pay attention to an individual’s features. In their first study, this cue was assumed to be age; in the second study, the cue was assumed to be occupation (construction worker). Of note, however, is that the same two confederates took part in both experiments, and I doubt their age changed much between the two trials. This means that if Simons and Levin (1998) were right, age only served as an indicator of group membership in the first context; in the second, that cue was overridden by another – construction worker. Why that might be the case is left completely untouched by the authors, and that seems like a major oversight. The second part is that the authors didn’t test whether the assumed “in-group” would be less change blind. In order to do that they would have had to, presumably, pull the same door trick using construction workers as their subjects. Since Simons and Levin (1998) only tested an assumed out-group, they are unable to make a solid case for differences in group membership being responsible for the effect they’re talking about.

Finally, the authors seem to just assume that the subjects were paying attention in the first place. Without that assumption these results are not as counter-intuitive as they might initially seem, just as people might not be terribly impressed by a magician who insisted everyone just turned around while he did his tricks. The subjects had only known the confederates for a matter of seconds before the change took place, and during those seconds they were also focused on another task: giving directions. Further, the confederate (who is still a complete stranger at this point) is swapped out for another very similar one (both are male, both are approximately the same age, race, and height, as well as being dressed very similarly). If the same door trick was pulled with a male and female confederate, or a friend and a stranger, or people of different races, or people of different ages, and so on, one would predict you’d see much less change blindness.

My only change blindness involves being so rich I can’t see bills smaller than $20s

The really interesting questions would then seem to be: what cues do people attend to, why do they attend to them, and in what order are they attended to? None of these questions are really dealt with by the paper. If the results they present are to be taken at face value, we can say the important variables often do not include the color of one’s shirt, the sound of one’s voice (within reason), very slight differences in height, or modestly different hairstyles (when one isn’t wearing a hat), at least when dealing with complete strangers of similar gender and age while also being involved in another task.

So maybe that’s not a terribly surprising result, when phrased in such a manner. Perhaps the surprising part might even be that so many people noticed the apparently not so obvious change. Returning to the initial point, however, I don’t think many researchers would say that an experiment designed to demonstrate that people aren’t always paying attention to and remembering every single facet of their environment would be a publishable paper. Make it cute enough, however, and it can become a classic.

*Note: whether the concerns are legitimate or not, I’m going to do the project anyway.

References: Simons, D. J., & Levin, D. T. (1998). Failure to detect changes to people during a real-world interaction. Psychonomic Bulletin & Review, 5, 644-649. DOI: 10.3758/BF03208840