Having one’s research ideas scooped is part of academic life. Today, for instance, I’d like to talk about some research quite similar in spirit to work I intended to do as part of my dissertation (but did not, as it didn’t end up making the cut in the final approved package). Even if my name isn’t on it, it is still pleasing to see the results I had anticipated. The idea itself arose about four years ago, when I was discussing the curious case of Tucker Max’s donation to Planned Parenthood being (eventually) rejected by the organization. To quickly recap, Tucker was attempting to donate half a million dollars to the organization, essentially receiving little more than a plaque in return. However, the donation was rejected, it would seem, out of fear of building an association between the organization and Tucker, as some people perceived Tucker to be a less-than-desirable social asset. This, of course, is rather strange behavior, and we would recognize it as such if it were observed in any other species (e.g., “this cheetah refused a free meal for her and her cubs because the wrong cheetah was offering it”); refusing free benefits is just peculiar.
“Too rich for my blood…”
As it turns out, this pattern of behavior is not unique to the Tucker Max case (or the Kim Kardashian one…); it has recently been empirically demonstrated by Tasimi & Wynn (2016), who examined how children respond to altruistic offers from others, contingent on the moral character of said others. In their first experiment, 160 children between the ages of 5 and 8 were recruited to make an easy decision; they were shown two pictures of people and told that the people in the pictures wanted to give them stickers, and they had to pick which one they wanted to receive the stickers from. In the baseline conditions, one person was offering 1 sticker, while the other was offering either 2, 4, 8, or 16 stickers. As such, it should come as no surprise that the person offering more stickers was almost universally preferred (71 of the 80 children wanted the person offering more, regardless of how many more).
Now that we’ve established that more is better, we can consider what happened in the second condition, where the children received character information about their benefactors. One of the individuals was said to always be mean, having hit someone the other day while playing; the other was said to always be nice, having hugged someone the other day instead. The mean person was always offering more stickers than the nice one. In this condition, the children tended to shun the larger quantity of stickers: when the sticker ratio was 2:1, less than 25% of children accepted the larger offer from the mean person; the 4:1 and 8:1 ratios were accepted about 40% of the time, and the 16:1 ratio 65% of the time. While more is better in general, it is apparently not better enough for children to overlook the character information at times. People appear willing to forgo receiving altruism when it’s coming from the wrong type of person. Fascinating stuff, especially when one considers that such refusals end up leaving the wrongdoers with more resources than they would otherwise have (if you think someone is mean, wouldn’t you be better off taking those resources from them, rather than letting them keep them?).
This pattern was replicated in 64 very young children (approximately one year old). In this experiment, the children observed a puppet show in which two puppets offered them crackers, one offering a single cracker and the other offering either 2 or 8. Again, unsurprisingly, the majority of children accepted the larger offer, regardless of how much larger it was (24 of 32 children). In the character-information condition, one puppet was shown to be a helper, assisting another puppet in retrieving a toy from a chest, whereas the other puppet was a hinderer, preventing another from retrieving a toy. The hindering puppet, as before, now offered the greater number of crackers, whereas the helper offered only one. When the hindering puppet was offering 8 crackers, his offer was accepted about 70% of the time, which did not differ from the baseline group. However, when the hindering puppet was offering only 2, the acceptance rate was a mere 19%. Even young children, it would seem, are willing to refuse altruism from wrongdoers, provided the difference in offers isn’t too large.
“He’s not such a bad guy once you get $10 from him”
While neat, these results call for a deeper explanation as to why we should expect such altruism to be rejected. I believe hints of that explanation are provided by the way Tasimi & Wynn (2016) write about their results:
Taken together, these findings indicate that when the stakes are modest, children show a strong tendency to go against their baseline desire to optimize gain to avoid ‘‘doing business” with a wrongdoer; however, when the stakes are high, children show more willingness to ‘‘deal with the devil…”
What I find strange about that passage is that the children in these experiments were not “doing business” or “making deals” with the altruists; there was no quid pro quo going on. The children were no more doing business with the others than they are doing business with a breastfeeding mother. Nevertheless, there appears to be an implicit assumption being made here: an individual who accepts altruism from another is expected to pay that altruism back in the future. In other words, merely receiving altruism from another generates the perception of a social association between donor and recipient.
This creates an uncomfortable situation for the recipient in cases where the donor has enemies. Those enemies are often interested in inflicting costs on the donor or, at the very least, withholding benefits from him. In the latter case, this makes the social association with the donor less beneficial than it otherwise might be, since the donor will have fewer expected future resources to invest in others if others don’t help him; in the former case, not only does the previous logic hold, but the enemies of your donor might begin to inflict costs on you as well, so as to dissuade you from helping him. Putting this into a quick example: Jon – your friend – goes out and hurts Bob, say, by sleeping with Bob’s wife. Bob and his friends, in response, both withhold altruism from Jon (as punishment) and might even be inclined to attack him for his transgression. If they perceive you as helping Jon – either by providing him with benefits or by preventing them from hurting him – they might be inclined to withhold benefits from you or punish you as well, as a means of indirect punishment, until you stop helping Jon. To turn the classic phrase, the friend of my enemy is also my enemy (just as the enemy of my enemy is my friend).
What cues might they use to determine whether you’re Jon’s ally? Well, one likely useful cue is whether Jon directs altruism towards you. If you are accepting his altruism, this is probably a good indication that you will be inclined to reciprocate it later (else you risk being labeled a social cheater or free rider). If you wish to avoid condemnation and punishment by proxy, then, one route to take is to refuse benefits from questionable sources. This risk can be overcome, however, in cases where the morally questionable donor is providing you a large enough benefit, which, indeed, was precisely the pattern of results observed here. What counts as “large enough” should be expected to vary as a function of a few things, most notably the size and nature of the transgression, as well as the degree of expected reciprocity. For example, receiving large donations from morally questionable donors should be more acceptable to the extent the donation is made anonymously rather than publicly, as anonymity might reduce the perceived social association between donor and recipient.
You might also try only using “morally clean” money
Importantly (as far as I’m concerned), these data fit well within my theory of morality – where morality is hypothesized to function as an association-management mechanism – but not particularly well with other accounts: altruistic accounts of morality should predict that more altruism is still better; dynamic coordination says nothing about accepting altruism, as giving isn’t morally condemned; and self-interest/mutualistic accounts would, I think, also suggest that taking more money would still be preferable, since you’re not trying to dissuade others from giving. While I can’t help but feel some disappointment that I didn’t carry this research out myself, I am both happy with the results that came of it and satisfied with the methods the authors used. Getting research ideas scooped isn’t so bad when they turn out well anyway; I’m just happy to see my main theory supported.
References: Tasimi, A. & Wynn, K. (2016). Costly rejection of wrongdoers by infants and children. Cognition, 151, 76-79.