Some Research I Was Going To Do

I have a number of research projects lined up for my upcoming dissertation, and, as anyone familiar with my ideas can tell you, they’re all brilliant. You can imagine my disappointment, then, to find out that not only had one of my experiments been scooped by another author three years prior, but they found the precise patterns of results I had predicted. Adding insult to injury, the theoretical underpinnings of the work are all but non-existent (as is the case in the vast majority of the psychology research literature), meaning I got scooped by someone who doesn’t seem to have a good idea why they found the results they did. My consolation prize is that I get to write about it earlier than expected, so there’s that, I suppose…

What? No, I’m not crying; I just got something in my eye. Or allergies. Whatever.

The experiment itself (Morewedge, 2009) resembles a Turing test. Subjects come into the lab to play a series of ultimatum games. One person has to divide a pot of $3 in one of three ways – $2.25/$0.75 (favoring either the divider or the receiver) or evenly ($1.50 for each) – and the receiver can either accept or reject these offers. The variable of interest, however, is not whether the receiver will accept the money; it’s whether the receiver perceives the offer to have been made by a real person or a computer program, as all the subjects were informed beforehand that the proposers they would encounter were drawn randomly from a pool of computer programs and real players. In essence, the experiment was examining whether participants perceived they were playing against an intentional agent (a human) or a non-intentional agent representing chance (a computer), contingent on the outcome. A brilliant experiment that I thought of first, a mere three years after it had already been published.

Overall, subjects were no more likely to say they were playing against a person than against a computer, and when they received an even split they were no more likely to hold a person or a computer responsible. However, this was not the case for the other two offers: when subjects received an unusually positive offer ($2.25 for themselves), they tended to attribute the offer to a computer; when the offer was negative ($0.75 for themselves), they overwhelmingly attributed it to a person. When causation was ambiguous, “harm” (used rather loosely here) seemed to trigger the perception of an intentional agent, whereas a benefit tended to have the opposite effect: a decrease in perceptions of intentionality.

This pattern of results could be explained by referencing some kind of self-interested motive: given what we know about people (they seem to prefer more money to less), an unusually altruistic offer is just that – unusual. It seems more likely that a selfish offer comes from a self-interested agent than that an altruistic offer comes from a selfish one. In an attempt to remove such thinking from (conscious) consideration, a second experiment was run. This time, the person deciding how the pot would be split had no personal stake in the decision; that is to say, subjects were told that their partners would not receive any payment from the games. No matter whether the proposer offered $0.75 or $2.25, and no matter whether the offer was accepted or rejected, proposers made no money. The results, however, still looked similar: selfish offers were seen as being made predominantly by people, and altruistic offers were seen as being made predominantly by computers.

But so help me if a computer has been screwing me over again…

In the next two experiments, Morewedge (2009) got a bit more explicit about the role of chance. In the first case, subjects came into the experiment and met a confederate before both were taken into separate rooms and seated at computers. On the computer was a circle divided into five sections with a spinning arrow in the middle. Each of the five sections was filled with one of two colors, with the number of sections of each color varying from one to four from spin to spin (so the circle might have 4 blue and 1 yellow slots in one round, then 3 blue and 2 yellow in the next, and so on). Subjects chose one color for the experiment beforehand; if the spinner landed on their color in a given round, they won the amount of money listed on the screen, and if it didn’t, they lost that amount. These amounts varied between zero and one dollar in $0.25 increments.

The subjects were informed of two other things: first, that the confederate was in a similar position (the confederate had also picked one of the two colors, so the two might or might not share a color), and second, that the confederate could control the outcome of five of the forty rounds that would be played. It was the subject’s job to report the extent to which they felt the confederate had used that influence on each round. When subjects lost money, they tended to attribute responsibility more to the confederate than to chance, and this tendency grew stronger as the monetary stakes got higher. The effect is all the more interesting given that subjects couldn’t be sure whether the confederate had similar or divergent interests: if they shared a color, the confederate would be hurting both of them by using his influence to purposely lose. Also of interest is that, when asked to estimate how much money they had ended up with ($5 in all cases, the initial endowment they had been given), subjects underestimated, reporting that they had ended up with only $2.89 on average. They were either bad at math, or their memory was selectively distorted towards recalling losses.
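To make the base rates concrete, here is a minimal simulation sketch of that spinner task. This is not Morewedge’s procedure or code; the payoff schedule, the seeding, and the session logic are my own assumptions for illustration, built only from the details described above (40 rounds, 5 of them controllable by the confederate, stakes of $0–$1 in $0.25 steps, a 5-slot wheel, a $5 endowment). The point it illustrates: even if the confederate used every one of his five rounds against the subject, the large majority of losses would still be produced by chance alone, which is what makes blaming the confederate for losses a bias rather than a sound inference.

```python
import random

# Illustrative parameters (assumed for this sketch; only the round counts,
# stakes, and endowment come from the description of the experiment).
N_ROUNDS = 40
CONTROLLED_ROUNDS = 5
STAKES = [0.00, 0.25, 0.50, 0.75, 1.00]  # possible amounts at risk each round

def run_session(seed=None):
    rng = random.Random(seed)
    # Rounds the confederate could (hypothetically) control.
    controlled = set(rng.sample(range(N_ROUNDS), CONTROLLED_ROUNDS))
    balance = 5.00  # initial endowment described in the post
    chance_losses = confederate_round_losses = 0

    for round_idx in range(N_ROUNDS):
        stake = rng.choice(STAKES)
        # The wheel has 5 slots; the subject's color covers 1 to 4 of them.
        own_slots = rng.randint(1, 4)
        win = rng.random() < own_slots / 5
        balance += stake if win else -stake
        if not win and stake > 0:
            if round_idx in controlled:
                confederate_round_losses += 1
            else:
                chance_losses += 1

    return balance, chance_losses, confederate_round_losses

if __name__ == "__main__":
    balance, by_chance, on_controlled = run_session(seed=42)
    print(f"Final balance: ${balance:.2f}")
    print(f"Losses on purely chance-driven rounds: {by_chance}")
    print(f"Losses on confederate-controllable rounds (at most {CONTROLLED_ROUNDS}): {on_controlled}")
```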

Finally, the last experiment used a measure similar to the looking-time paradigms typically used with children: how long participants spent looking at something. In this case, that something was information presented to the participant following a round with the spinner. The setup was similar, except that subjects were told the confederate could control half of the outcomes, and after each round the subject was told whether or not the confederate had controlled that round. Once presented with this information, subjects had to press a button to proceed to the next screen, and the time they waited before pressing it served as a measure of how long they were processing the information on the screen. Participants ended up spending more time looking at the screen when it was revealed that the confederate was responsible for their win than when the confederate was responsible for their loss, but looked equally long when chance was implicated. This result tentatively suggests that participants found it surprising that the confederate was responsible for their wins, implying that the more automatic process might be one of blaming others for losses.

For example, I’d be surprised if he still has that suit and that prostitute after the dice land.

Now to the theory. I will give credit where credit is due: Morewedge (2009) does at least suggest there might have been evolutionary advantages to such a bias, but he promptly fails to elaborate on them in any substantial way. The first possible explanation given is that the bias could be used to defer responsibility for negative outcomes from oneself to others, which is an odd explanation given that the subjects in these experiments had no responsibility to defer. The second possible explanation is that people might attribute negative outcomes to others in order to not feel sad, which is, frankly, silly. The fourth (going out of order) explanation is that such a bias might just represent a common, mild form of “disordered” thinking akin to a persecution complex, which is really no explanation at all. The third and least silly explanation is that:

“By assuming the presence of antagonist, one may better be able to avoid a quick repetition of the unpleasant event one has just experienced” (p. 543)

Here, though, I feel Morewedge is making the mistake of assuming past selection pressures resembled the conditions set up in the experiment. I’m not quite sure how else to read that section, nor do I feel the experimental paradigm was particularly representative of past selection pressures or relevant environmental contexts.

Now, if Morewedge had placed his findings in some framework concerning how social outcomes are all partly a result of intentional and chance factors, how perpetrators tend to conceal or downplay their immoral actions or intentions, how victims need to convince third parties to punish perpetrators who have not wronged those third parties directly, and how certain inputs (such as harm) might better allow victims to persuade others, he’d have a very nice paper indeed. Unfortunately, he misses the strategic, functional element of these biases. When taken out of context, such biases can look “disordered” indeed, in much the same way that, when put underwater, my car seems disordered in its use as a submarine.

References: Morewedge, C. K. (2009). Negativity bias in attribution of external agency. Journal of Experimental Psychology: General, 138(4), 535-545. PMID: 19883135

2 comments on “Some Research I Was Going To Do”

  1. It could be the brain in the process of training its loss-aversion response to the situation. I.e., when the student takes a loss, some function kicks in that scans for all possible outcomes that could have caused the loss, and since the subconscious does not understand probability, it seeks to put the blame at the feet of the only other possibility: the supposed human perpetrator. This response would serve the student by keeping him away from situations where resources are at risk, and possibly enrages him sufficiently that he bashes the other student with his club and takes his resources.
